**arXiv:** [2305.07727v2](http://arxiv.org/abs/2305.07727v2) · **Author:** Nicolas Bouchot · **Published:** 2023-05-12 · listed title: "Scaling limit of a one-dimensional polymer in a repulsive i.i.d. environment"
# Localization of a one-dimensional polymer in a repulsive i.i.d. environment

###### Abstract

The purpose of this paper is to study a one-dimensional polymer penalized by its range and placed in a random environment \(\omega\). The law of the simple symmetric random walk up to time \(n\) is modified by the exponential of the sum of \(\beta\omega_{z}-h\) sitting on its range, with \(h\) and \(\beta\) positive parameters. It is known that, at first order, the polymer folds itself to a segment of optimal size \(c_{h}n^{1/3}\) with \(c_{h}=\pi^{2/3}h^{-1/3}\). Here we study how disorder influences finer quantities. If the random variables \(\omega_{z}\) are i.i.d. with a finite second moment, we prove that the left-most point of the range is located near \(-u_{*}n^{1/3}\), where \(u_{*}\in[0,c_{h}]\) is a constant that only depends on the disorder. This contrasts with the homogeneous model (_i.e._ when \(\beta=0\)), where the left-most point has a random location between \(-c_{h}n^{1/3}\) and \(0\). With an additional moment assumption, we are able to show that the left-most point of the range is at distance \(\mathcal{U}n^{2/9}\) from \(-u_{*}n^{1/3}\) and the right-most point at distance \(\mathcal{V}n^{2/9}\) from \((c_{h}-u_{*})n^{1/3}\). Here again, \(\mathcal{U}\) and \(\mathcal{V}\) are constants that depend only on \(\omega\).

Keywords: random walk, polymer, random media, localization

2020 Mathematics subject classification: 82B44, 60G50, 60G51

## 1 Introduction

We study a simple symmetric random walk \((S_{k})_{k\geq 0}\) on \(\mathbb{Z}\), starting from \(0\), with law \(\mathbf{P}\). Let \(\omega=(\omega_{z})_{z\in\mathbb{Z}}\) be a collection of i.i.d. random variables with law \(\mathbb{P}\), independent of the random walk \(S\), which we will call _environment_ or _field_. We also assume that \(\mathbb{E}[\omega_{0}]=0\) and \(\mathbb{E}[\omega_{0}^{2}]=1\). For \(h>0\), \(\beta>0\) and a given realization of the field \(\omega\), we define the following Gibbs transformation of \(\mathbf{P}\), called the (quenched) _polymer measure_: \[\mathrm{d}\mathbf{P}_{n,h}^{\omega,\beta}(S):=\frac{1}{Z_{n,h}^{\omega,\beta}}\exp\Big{(}\sum_{z\in\mathcal{R}_{n}}\big{(}\beta\omega_{z}-h\big{)}\Big{)}\mathrm{d}\mathbf{P}(S),\] where \(\mathcal{R}_{n}=\mathcal{R}_{n}(S):=\big{\{}S_{0},\dots,S_{n}\big{\}}\) is the range of the random walk up to time \(n\), and \[Z_{n,h}^{\omega,\beta}:=\mathbf{E}\Big{[}\exp\Big{(}\sum_{z\in\mathcal{R}_{n}}\big{(}\beta\omega_{z}-h\big{)}\Big{)}\Big{]}=\mathbf{E}\Big{[}\exp\Big{(}\beta\sum_{z\in\mathcal{R}_{n}}\omega_{z}-h|\mathcal{R}_{n}|\Big{)}\Big{]}\] is the partition function, such that \(\mathbf{P}_{n,h}^{\omega,\beta}\) is a (random) probability measure on the space of trajectories of length \(n\). In other words, the polymer measure \(\mathbf{P}_{n,h}^{\omega,\beta}\) penalizes trajectories by their range and rewards visits to sites where the field \(\omega\) takes greater values. In this setting, the disorder term \(\sum_{z\in\mathcal{R}_{n}}\omega_{z}\) is typically of order \(|\mathcal{R}_{n}|^{1/2}\): one can prove¹ that \(\beta\sum_{z\in\mathcal{R}_{n}}\omega_{z}-h|\mathcal{R}_{n}|\sim-h|\mathcal{R}_{n}|\) for \(\mathbb{P}\)-almost all \(\omega\), see [5]. Thus, at first approximation, disorder does not impact the behavior of the polymer, as seen in Theorem 1.1 below.
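To fix ideas, here is a minimal Python sketch (ours, not part of the paper) that computes \(Z_{n,h}^{\omega,\beta}\) exactly for a very small \(n\) by enumerating all \(2^{n}\) trajectories of the walk; all names are illustrative.

```python
import itertools
import numpy as np

def partition_function(n, h, beta, omega):
    """Exact Z_{n,h}^{omega,beta} for small n, enumerating all 2^n paths.
    `omega[z + n]` stores the field value omega_z for z in [-n, n]."""
    Z = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        path = np.concatenate(([0], np.cumsum(steps)))
        lo, hi = path.min(), path.max()               # R_n = [lo, hi]
        sites = np.arange(lo, hi + 1)
        energy = np.sum(beta * omega[sites + n] - h)  # sum over the range
        Z += 2.0 ** (-n) * np.exp(energy)             # expectation under P
    return Z

rng = np.random.default_rng(0)
n, h, beta = 12, 1.0, 0.5
omega = rng.standard_normal(2 * n + 1)  # i.i.d. N(0,1) environment
print(partition_function(n, h, beta, omega))
```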
We introduce the following notation: \[\xi_{n}\xrightarrow[n\to\infty]{\mathbf{P}^{\omega,\beta}_{n,h}}\xi\quad\Longleftrightarrow\quad\forall\varepsilon>0,\lim_{n\to\infty}\mathbf{P}^{\omega,\beta}_{n,h}\left(|\xi_{n}-\xi|>\varepsilon\right)=0\,.\] We will say that "\(\xi_{n}\) converges in \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability" even if \(\mathbf{P}^{\omega,\beta}_{n,h}\) depends on \(n\).

Footnote 1: In the rest of the paper we shall use the standard Landau notation: as \(x\to a\), we write \(g(x)\sim f(x)\) if \(\lim_{x\to a}\frac{g(x)}{f(x)}=1\), \(g(x)=\bar{o}(f(x))\) if \(\lim_{x\to a}\frac{g(x)}{f(x)}=0\), \(g(x)=\bar{\mathcal{O}}(f(x))\) if \(\limsup_{x\to a}\big{|}\frac{g(x)}{f(x)}\big{|}<+\infty\), and \(f\asymp g\) if \(g(x)=\bar{\mathcal{O}}(f(x))\) and \(f(x)=\bar{\mathcal{O}}(g(x))\).

**Theorem 1.1** ([5, Theorem 3.7]).: _For all \(h>0\), define \(c_{h}:=(\pi^{2}h^{-1})^{1/3}\). Then, for any \(h,\beta>0\), \(\mathbb{P}\)-almost surely we have the following convergence_ \[\lim_{n\to\infty}\frac{1}{n^{1/3}}\log Z^{\omega,\beta}_{n,h}=-\frac{3}{2}(\pi h)^{2/3},\qquad n^{-1/3}|\mathcal{R}_{n}|\xrightarrow[n\to\infty]{\mathbf{P}^{\omega,\beta}_{n,h}}c_{h}\,. \tag{1.1}\]

The main goal of this paper is to extract further information on the polymer, notably on the location of the segment where the random walk is folded and on how \(|\mathcal{R}_{n}|\) fluctuates at lower scales than \(n^{1/3}\).

### About the homogeneous setting

Since we are working in dimension one, we make use of the fact that the range is entirely determined by the position of its edges, meaning that \(\mathcal{R}_{n}\) is exactly the segment \(\llbracket M_{n}^{-},M_{n}^{+}\rrbracket\), where \(M_{n}^{-}:=\min_{0\leq k\leq n}S_{k}\) and \(M_{n}^{+}:=\max_{0\leq k\leq n}S_{k}\). We will also adopt the following notation: \[T_{n}:=M_{n}^{+}-M_{n}^{-}=|\mathcal{R}_{n}|-1\,,\qquad T_{n}^{*}:=\left(\frac{n\pi^{2}}{h}\right)^{1/3}=c_{h}n^{1/3}\,,\qquad\Delta_{n}:=T_{n}-T_{n}^{*}. \tag{1.2}\] Hence, \(T_{n}\) is the size of the range and \(T_{n}^{*}\) is the optimal size of the range at scale \(n^{1/3}\) that appears in (1.1). In the homogeneous setting, that is when \(\beta=0\), it is proven in [7] that the location of the left-most point is random (on the scale \(n^{1/3}\)) with a density proportional to \(\sin(\pi u/c_{h})\). As far as the size of the range \(T_{n}^{*}\) is concerned, it is shown to have Gaussian fluctuations. In fact, [7] treats the case of a parameter \(h=h_{n}\) that may depend on the length of the polymer: in this case, fluctuations vanish when the penalty strength \(h_{n}\) is too high. We state the full result for the sake of completeness.

**Theorem 1.2** ([7, Theorem 1.1]).: _Recall the notations of (1.2) and replace \(h\) by \(h_{n}\) in the definition of \(T_{n}^{*}\). Then for \(\beta=0\), we have the following results:_

* _Assume that_ \(h_{n}\geq n^{-1/2}(\log n)^{3/2}\) _and_ \(\lim_{n\to\infty}n^{-1/4}h_{n}=0\)_. Let_ \(a_{n}:=\frac{1}{\sqrt{3}}\left(\frac{n\pi^{2}}{h_{n}^{4}}\right)^{1/6}\)_, which is such that_ \(\lim_{n\to\infty}a_{n}=+\infty\)_. Then for any_ \(r<s\) _and any_ \(0\leq a<b\leq 1\)_,_ \[\lim_{n\to\infty}\mathbf{P}^{\omega,0}_{n,h_{n}}\left(r\leq\frac{\Delta_{n}}{a_{n}}\leq s\,;\,a\leq\frac{|M_{n}^{-}|}{T_{n}^{*}}\leq b\right)=\frac{\sqrt{\pi}}{2\sqrt{2}}\int_{r}^{s}e^{-\frac{u^{2}}{2}}\mathrm{d}u\int_{a}^{b}\sin(\pi v)\,\mathrm{d}v\,.\]
* _Assume that_ \(\lim_{n\to\infty}n^{-1/4}h_{n}=+\infty\) _and_ \(\lim_{n\to\infty}n^{-1}h_{n}=0\)_.
_Denote by_ \(t_{n}^{o}\) _the decimal part of_ \(T_{n}^{*}\) _and_ \(\tau_{n}^{o}:=\frac{1}{2}-\frac{\pi^{2}}{18T_{n}^{*}}\)_. Define_ \(\mathcal{A}_{n}\) _as_ \(\{0\}\) _if_ \(t_{n}^{o}<\tau_{n}^{o}\)_,_ \(\{1\}\) _if_ \(t_{n}^{o}>\tau_{n}^{o}\) _and_ \(\{0,1\}\) _if_ \(t_{n}^{o}=\tau_{n}^{o}\)_. Then we have for any Borel set_ \(B\subseteq[0,1]\) \[\lim_{n\to\infty}\mathbf{P}_{n,h_{n}}^{\omega,0}\left(T_{n}-\lfloor T_{n}^{*}-2\rfloor\not\in\mathcal{A}_{n}\right)=0,\quad\lim_{n\to\infty}\mathbf{P}_{n,h_{n}}^{\omega,0}\Big{(}\frac{|M_{n}^{-}|}{T_{n}^{*}}\in B\Big{)}=\frac{\pi}{2}\int_{B}\sin(\pi v)\,\mathrm{d}v.\]

We will see that the disordered model displays a very different behavior: the locations of the left-most and right-most points are \(\mathbf{P}\)-deterministic, in the sense that they are completely determined by the disorder field \(\omega\) (at least for the first few orders).

### A rewriting of the partition function

In the disordered setting, we can write \(Z_{n,h}^{\omega,\beta}\) as \[Z_{n,h}^{\omega,\beta}=\sum_{x,y=0}^{+\infty}\exp\Big{(}-h(x+y+1)+\beta\sum_{z=-x}^{y}\omega_{z}\Big{)}\mathbf{P}\big{(}\mathcal{R}_{n}=\llbracket-x,y\rrbracket\big{)}\,. \tag{1.3}\] Gambler's ruin formulae derived from [15, Chap. XIV] can be used to compute sharp asymptotics for \(\mathbf{P}\big{(}\mathcal{R}_{n}=\llbracket-x,y\rrbracket\big{)}\), see [7, Theorem 1.4]. In particular, as \(n,x+y\to\infty\), one gets \(\log\mathbf{P}\big{(}\mathcal{R}_{n}=\llbracket-x,y\rrbracket\big{)}\sim-\frac{n\pi^{2}}{2(x+y)^{2}}\) when \(x+y\ll n\), the exact asymptotics depending on the ratio \(n/(x+y)^{3}\). The optimal range size \(T_{n}^{*}=c_{h}n^{1/3}\) appearing in (1.1) is found as the minimizer of \(\phi_{n}(T):=hT+\frac{n\pi^{2}}{2T^{2}}\). Letting \(\psi_{h}:=\frac{8}{\pi}e^{h}\left[\cosh h-1\right]\), the sharp estimates of \(\mathbf{P}\big{(}\mathcal{R}_{n}=\llbracket-x,y\rrbracket\big{)}\) from [7, Theorem 1.4] give as \(n\to+\infty\) \[Z_{n,h}^{\omega,\beta}=(1+\bar{o}(1))\psi_{h}e^{-\phi_{n}(T_{n}^{*})}\sum_{x,y=0}^{+\infty}\sin\left(\frac{x\pi}{x+y}\right)\exp\left(\beta\sum_{z=-x}^{y}\omega_{z}+\phi_{n}(T_{n}^{*})-\phi_{n}(x+y)\right)\,,\] where the \(\bar{o}(1)\) is deterministic. Note that Theorem 1.1 implies that there exists a vanishing sequence \((\varepsilon_{n})_{n\geq 1}\) such that \(\lim_{n\to\infty}\mathbf{P}_{n,h}^{\omega,\beta}(|\Delta_{n}|>\varepsilon_{n}n^{1/3})=0,\mathbb{P}\)-a.s. In particular, we can restrict the partition function to trajectories such that \(|\Delta_{n}|\leq\varepsilon_{n}n^{1/3}\). Then, as in [7, Lemma 2.1], we can use the Taylor expansion of \(\phi_{n}\) around \(T_{n}^{*}\) to obtain the following: writing \(\Delta_{n}^{x,y}:=x+y-c_{h}n^{1/3}\), we have \[Z_{n,h}^{\omega,\beta}\sim e^{-\frac{3}{2}hT_{n}^{*}}\psi_{h}\sum_{\begin{subarray}{c}x,y\geq 0\\ |\Delta_{n}^{x,y}|\leq\varepsilon_{n}n^{1/3}\end{subarray}}\sin\left(\frac{x\pi}{x+y}\right)\exp\left(\beta\sum_{z=-x}^{y}\omega_{z}-\frac{3\pi^{2}(\Delta_{n}^{x,y})^{2}}{2c_{h}^{4}n^{1/3}}(1+\bar{o}(1))\right)\,, \tag{1.4}\] where the \(\bar{o}(1)\) is deterministic, uniform in \(x,y\), and stems from the Taylor expansion. Equation (1.4) will be the starting point of the proofs of our results.
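As a quick sanity check of the constants (a computation the paper leaves implicit), the minimizer of \(\phi_{n}\) and its value at the minimum are \[\phi_{n}^{\prime}(T)=h-\frac{n\pi^{2}}{T^{3}}=0\;\Longleftrightarrow\;T=\Big{(}\frac{n\pi^{2}}{h}\Big{)}^{1/3}=c_{h}n^{1/3}\,,\qquad\phi_{n}(T_{n}^{*})=\Big{(}hc_{h}+\frac{\pi^{2}}{2c_{h}^{2}}\Big{)}n^{1/3}=\frac{3}{2}hc_{h}n^{1/3}=\frac{3}{2}(\pi h)^{2/3}n^{1/3}\,,\] since \(c_{h}^{3}=\pi^{2}/h\) gives \(\frac{\pi^{2}}{2c_{h}^{2}}=\frac{hc_{h}}{2}\) and \(hc_{h}=(\pi h)^{2/3}\); these are exactly the constants in Theorem 1.1 and in the prefactor \(e^{-\phi_{n}(T_{n}^{*})}=e^{-\frac{3}{2}hT_{n}^{*}}\) of (1.4).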
### First convergence result Akin to [5], we define the following quantities: for any \(j\geq 0\) for which the sum is not empty, \[\Sigma_{j}^{+}(\omega):=\sum_{z=0}^{j}\omega_{z}\,,\qquad\Sigma_{j}^{-}( \omega):=\sum_{z=1}^{j}\omega_{-z}\,.\] Using Skorokhod's embedding theorem (see [24, Chapter 7.2] and Theorem 4.1 below) we can define on the same probability space a coupling \(\hat{\omega}=\hat{\omega}^{(n)}\) of \(\omega\) and two independent standard Brownian motions \(X^{(1)}\) and \(X^{(2)}\) such that for each \(n\), \(\hat{\omega}^{(n)}\) has the same law as the environment \(\omega\) and \[\left(\frac{1}{n^{1/6}}\Sigma_{un^{1/3}}^{-}(\hat{\omega})\right)_{u\geq 0} \xrightarrow[n\to\infty]{a.s.}\left(X^{(1)}_{u}(\hat{\omega})\right)_{u\geq 0 }\,,\quad\left(\frac{1}{n^{1/6}}\Sigma_{vn^{1/3}}^{+}(\hat{\omega})\right)_{v \geq 0}\xrightarrow[n\to\infty]{a.s.}\left(X^{(2)}_{v}(\hat{\omega})\right)_{v\geq 0}\] in the Skorokhod metric on the space of all cadlag real functions. With an abuse of notation, we will still denote by \(\omega\) this coupling, while keeping in mind that the field now depends on \(n\). Our first result improves estimates on the asymptotic behavior of \(Z_{n,h}^{\omega,\beta}\) and \((M_{n}^{-},M_{n}^{+})\). **Theorem 1.3**.: _For any \(h,\beta>0\), we have the following \(\mathbb{P}\)-a.s. convergence_ \[\lim_{n\to\infty}\frac{1}{\beta n^{1/6}}\left(\log Z_{n,h}^{\omega,\beta}+ \frac{3}{2}hc_{h}n^{1/3}\right)=\sup_{0\leq u\leq c_{h}}\left\{X^{(1)}_{u}+X^{ (2)}_{c_{h}-u}\right\}\,, \tag{1.5}\] _where \(X^{(1)}\) and \(X^{(2)}\) are the two independent standard Brownian motions defined above. Furthermore, \(u_{*}:=\arg\max_{u\in[0,c_{h}]}\left\{X^{(1)}_{u}+X^{(2)}_{c_{h}-u}\right\}\) is \(\mathbb{P}\)-a.s. well-defined and_ \[\frac{1}{n^{1/3}}(M_{n}^{-},M_{n}^{+})\xrightarrow[n\to\infty]{\mathbf{P}^{ \omega,\beta}_{n,h}}(-u_{*},c_{h}-u_{*})\qquad\mathbb{P}\text{-a.s.} \tag{1.6}\] _In other words, \(n^{-1/3}\mathcal{R}_{n}\) converges in \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability to the segment \(\llbracket-u_{*},c_{h}-u_{*}\rrbracket\), \(\mathbb{P}\)-a.s._ _Comment_. Theorem 1.3 still holds if \((\omega_{z})\) are i.i.d and in the domain of attraction of an \(\alpha\)-stable law with \(\alpha\in(1,2)\), only replacing the Brownian motions \(X^{(i)}\) by Levy processes as in [5] and \(n^{1/6}\) by \(n^{1/3\alpha}\): we refer to Theorem A.1 and its proof in Appendix A. As most of the work in this paper requires stronger assumptions on the field \(\omega\) we will not dwell further on this possibility and focus on the case where \(\mathbb{E}\left[\omega_{0}^{2}\right]=1\). **Heuristic.** Intuitively, the result of Theorem 1.3 is a consequence of the following reasoning: if we assume that the optimal size is \(T_{n}^{*}\) (at a first approximation), the location of the polymer should be around the points \((x_{n},y_{n})\in\mathbb{N}^{2}\) such that \(x_{n}+y_{n}\approx T_{n}^{*}\) and \(\Sigma_{x_{n}}^{-}+\Sigma_{y_{n}}^{+}\) is maximized. Translating in terms of the processes \(X^{(1)},X^{(2)}\), we want to maximize \(n^{-1/6}(\Sigma_{x_{n}}^{-}+\Sigma_{y_{n}}^{+})\), which is "close" to \(X^{(1)}_{x_{n}n^{-1/3}}+X^{(2)}_{y_{n}n^{-1/3}}\). 
**Heuristic.** Intuitively, the result of Theorem 1.3 is a consequence of the following reasoning: if we assume that the optimal size is \(T_{n}^{*}\) (at a first approximation), the location of the polymer should be around the points \((x_{n},y_{n})\in\mathbb{N}^{2}\) such that \(x_{n}+y_{n}\approx T_{n}^{*}\) and \(\Sigma_{x_{n}}^{-}+\Sigma_{y_{n}}^{+}\) is maximized. Translating in terms of the processes \(X^{(1)},X^{(2)}\), we want to maximize \(n^{-1/6}(\Sigma_{x_{n}}^{-}+\Sigma_{y_{n}}^{+})\), which is "close" to \(X^{(1)}_{x_{n}n^{-1/3}}+X^{(2)}_{y_{n}n^{-1/3}}\). Since \(x_{n}+y_{n}\sim T_{n}^{*}\) we have \(y_{n}n^{-1/3}\sim c_{h}-x_{n}n^{-1/3}\), and we want to pick \(x_{n}n^{-1/3}\) to maximize \(u\mapsto X^{(1)}_{u}+X^{(2)}_{c_{h}-u}\).

Figure 1: A typical trajectory under the polymer measure for a given \(u_{*}\) and large \(n\).

### Second order convergence result

To ease the notation we will denote \(X_{u}:=X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\). Note that \(X\) has the distribution of \(\sqrt{2}W+X_{c_{h}}^{(2)}\), where \(W\) is a standard Brownian motion. Hence, the supremum on \([0,c_{h}]\) of \(X_{u}\) is almost surely positive and finite, attained at a unique \(u_{*}\) which follows the arcsine law on \([0,c_{h}]\). In order to extract more information on the typical behavior of the polymer, we need to go deeper into the expansion of \(\log Z_{n,h}^{\omega,\beta}\). To do so, we factorize \(Z_{n,h}^{\omega,\beta}\) by \(e^{\beta n^{1/6}X_{u_{*}}}\) and we study the behavior of \(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}-\beta n^{1/6}X_{u_{*}}\), which is related to the behavior of \(X\) near \(u_{*}\). Studying Wiener processes near their maximum leads to studying both the three-dimensional Bessel process and the Brownian meander, as outlined by the following classical result.

**Proposition 1.4** ([17, Theorem 5]).: _Let \(W\) be a Brownian motion on \([0,1]\) and let \(\sigma\) be the time at which \(W\) reaches its maximum on \([0,1]\). On the event \(\{\sigma=u\}\), the processes \((W_{u+s}-W_{u},0\leq s\leq 1-u)\) and \((W_{u-s}-W_{u},0\leq s\leq u)\) are Brownian meanders of respective duration \(1-u\) and \(u\)._

Some other technical results about the meander are presented in Appendix B. We also define the following process, which we call _two-sided three-dimensional Bessel_ (BES\({}_{3}\)) _process_.

**Definition 1.1**.: We call _two-sided_ three-dimensional Bessel process \(\mathbf{B}\) the concatenation of two three-dimensional Bessel processes \(B^{-}\) and \(B^{+}\). Namely, for all \(s\in\mathbb{R}\), \(\mathbf{B}_{s}=B_{-s}^{-}\mathbb{1}_{\mathbb{R}^{-}}(s)+B_{s}^{+}\mathbb{1}_{\mathbb{R}^{+}}(s)\).

Additionally, we will use the following coupling between \((X_{u}^{(1)}+X_{c_{h}-u}^{(2)},X_{u}^{(1)}-X_{c_{h}-u}^{(2)})\) seen from \(u_{*}\) and a two-sided BES\({}_{3}\) process and a Brownian motion. This will allow us to obtain \(\mathbb{P}\)-almost sure results instead of convergences in distribution; in particular we obtain trajectorial results that depend on the realization of the environment. The proof is postponed to Appendix C and relies on the path decomposition of usual Brownian-related processes.
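Definition 1.1 has a concrete realization: a three-dimensional Bessel process is the Euclidean norm of a standard three-dimensional Brownian motion started at the origin. A short discrete-grid sketch of the two-sided process (ours, with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(2)

def bes3(T, m):
    """BES_3 on [0, T] as the Euclidean norm of a 3-d Brownian motion."""
    dt = T / m
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(m, 3)), axis=0)
    return np.concatenate(([0.0], np.linalg.norm(W, axis=1)))

# Two-sided BES_3 of Definition 1.1: B_s = B^-_{-s} for s < 0, B^+_s for s >= 0.
T, m = 1.0, 10_000
B_minus, B_plus = bes3(T, m), bes3(T, m)    # two independent copies
s = np.linspace(-T, T, 2 * m + 1)
B = np.concatenate((B_minus[::-1], B_plus[1:]))
```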
**Proposition 1.5**.: _Let_ \[X_{u}=X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\,,\qquad Y_{u}:=X_{u}^{(1)}-X_{c_{h}-u}^{(2)}\,.\] _Then, conditionally on \(u_{*}\), one can construct a coupling of \((X^{(1)},X^{(2)})\) and \(\mathbf{B}\) a two-sided BES\({}_{3}\), \(\mathbf{Y}\) a two-sided standard Brownian motion such that: almost surely, there is a \(\delta_{0}=\delta_{0}(\omega)>0\) for which on a \(\delta_{0}\)-neighborhood of \(0\),_ \[\frac{1}{\sqrt{2}}\big{(}X_{u_{*}}-X_{u_{*}+u}\big{)}_{u}=\big{(}\chi\mathbf{B}_{u}\big{)}_{u}\,,\qquad\frac{1}{\sqrt{2}}\big{(}Y_{u_{*}+u}-Y_{u_{*}}\big{)}_{u}=(\mathbf{Y}_{u})_{u}\,,\] _where we have set \(\chi=\chi(u,\omega):=\big{(}\sqrt{c_{h}-u_{*}}\mathbb{1}_{\{u\geq 0\}}+\sqrt{u_{*}}\mathbb{1}_{\{u<0\}}\big{)}^{-1}\)._

_Comment_.: It should be noted that \(\chi\) actually only depends on the sign of \(u\), which means that the process \(\chi\mathbf{B}\) has the Brownian scaling invariance property. This will be used in Section 1.5 to get a suitable coupling.

**Theorem 1.6**.: _Suppose \(\mathbb{E}\left[\left|\omega_{0}\right|^{3+\eta}\right]<\infty\) for some \(\eta>0\). With the coupling of Proposition 1.5, we have the \(\mathbb{P}\)-a.s. convergence_ \[\lim_{n\to\infty}\frac{\sqrt{2}}{\beta n^{1/9}}\left(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}-\beta n^{1/6}X_{u_{*}}\right)=\sup_{u,v}\left\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}\big{(}u+v\big{)}^{2}\right\}\,, \tag{1.7}\] _where \(\mathcal{Y}_{u,v}:=\mathbf{Y}_{u}-\mathbf{Y}_{-v}-\chi\big{[}\mathbf{B}_{u}+\mathbf{B}_{v}\big{]}\). Moreover, \((\mathcal{U},\mathcal{V}):=\arg\max_{u,v}\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}(u+v)^{2}\}\) is \(\mathbb{P}\)-a.s. well-defined and we have_ \[\left(\frac{M_{n}^{-}+u_{*}n^{1/3}}{n^{2/9}},\frac{M_{n}^{+}-(c_{h}-u_{*})n^{1/3}}{n^{2/9}}\right)\xrightarrow[n\to\infty]{\mathbf{P}_{n,h}^{\omega,\beta}}(\mathcal{U},\mathcal{V})\quad\mathbb{P}\text{-a.s.} \tag{1.8}\] _In particular, we have \(\frac{T_{n}-c_{h}n^{1/3}}{n^{2/9}}\xrightarrow[n\to\infty]{\mathbf{P}_{n,h}^{\omega,\beta}}\mathcal{U}+\mathcal{V}\) \(\mathbb{P}\)-a.s._

_Comment_.: We should be able to obtain a statement assuming only that \(\mathbb{E}\left[\left|\omega_{0}\right|^{2+\eta}\right]<\infty\) for some positive \(\eta\). The statement is a bit more involved: we need to use a different coupling between \(\omega\) and \(X^{(1)},X^{(2)}\) to get the following convergence: \[\lim_{n\to\infty}\frac{\sqrt{2}}{\beta n^{1/9}}\left(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}-\beta\sum_{z=-u_{*}n^{1/3}}^{(c_{h}-u_{*})n^{1/3}}\omega_{z}\right)=\sup_{u,v}\left\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}(u+v)^{2}\right\}\,. \tag{1.9}\] We refer to Section 4.2 for further details and a partial proof of (1.9), adapting the proof of Theorem 1.6.

### Coupling and construction

Observe that we combine two different couplings that have different uses to prove our results:

* A coupling for a given size \(n\) between the environment \(\omega\) and two Brownian motions \(X^{(1)}\) and \(X^{(2)}\). This coupling allows for the almost sure convergence in Theorem 1.3, and the assumption \(\mathbb{E}\left[|\omega_{0}|^{3+\eta}\right]<+\infty\) is used to have a good enough control on the coupling.
* A coupling between \((X^{(1)},X^{(2)},u_{*})\) and \((\mathbf{B},\mathbf{Y})\) to study the behavior of the Brownian motions \(X^{(1)}\) and \(X^{(2)}\) near \(u_{*}\).
This allows us to get the almost sure convergence of Theorem 1.6, in addition to easier proofs when excluding non-typical configurations for the polymer. Here we explain how these two couplings combine to yield all the desired results. We start by picking \(u_{*}\) according to the arcsine law on \([0,c_{h}]\) and by considering a three-dimensional two-sided Bessel process \(\mathbf{B}\) as well as an independent two-sided Brownian motion \(\mathbf{Y}\), both defined on \(\mathbb{R}\). Since the process \((X_{u_{*}}-X_{u_{*}+u})/\sqrt{2}\) is a two-sided Brownian meander (with left interval \([0,u_{*}]\) and right interval \([0,c_{h}-u_{*}]\)), using Proposition 1.5 we can find a \(\delta_{0}(\omega)\) such that if \(\left|u\right|\leq\delta_{0}\), we have \(X_{u_{*}}-X_{u_{*}+u}=\mathbf{B}_{u}\chi\sqrt{2}\). We are interested in a coupling that will be such that \(n^{1/18}(X_{u_{*}}-X_{u_{*}+\frac{u}{n^{1/9}}})=\mathbf{B}_{u}\chi\sqrt{2}\) for all \(n\) large enough and for any \(un^{-1/9}\) sufficiently close to \(0\). To do so, for each \(n\) we construct from \(\mathbf{B}\) a suitable \(X^{n}\), with the same law as \(X\), that satisfies the desired equality. Consider only the pair \((\delta_{0},{\bf B})\) that was previously defined, and let \(n_{0}\) be such that \(\varepsilon_{n_{0}}\leq\delta_{0}\). Then, for any \(n\geq n_{0}\), we paste the trajectory of \(n^{-1/18}\sqrt{2}\chi{\bf B}_{un^{1/9}}\), which is still a two-sided three-dimensional Bessel process multiplied by \(\sqrt{2}\chi\) (note that \(\chi\) is scale-invariant), until \(|u|=\delta_{0}\). By construction, we have \(X^{n}_{u_{*}}-X^{n}_{u_{*}+u}=n^{-1/18}\sqrt{2}\chi{\bf B}_{un^{1/9}}\) for \(|u|\leq\delta_{0}\). Next, we consider two independent Brownian meanders \(M^{L,n,\delta_{0}},M^{R,n,\delta_{0}}\) of duration \(u_{*}\) (resp. \(c_{h}-u_{*}\)) conditioned on \(M^{L,n,\delta_{0}}_{\delta_{0}}=n^{-1/18}\sqrt{2}\chi{\bf B}_{-\delta_{0}n^{1/9}}\) (resp. \(M^{R,n,\delta_{0}}_{\delta_{0}}=n^{-1/18}\sqrt{2}\chi{\bf B}_{\delta_{0}n^{1/9}}\)), and we plug in their trajectories to complete the process \(X^{n}\). The full definition of \(X^{n}\) is thus given by \[\frac{1}{\sqrt{2}}(X^{n}_{u_{*}}-X^{n}_{u})=n^{-1/18}\chi{\bf B}_{un^{1/9}}\mathbb{1}_{\{|u-u_{*}|<\delta_{0}\}}+M^{L,n,\delta_{0}}_{u_{*}-u}\mathbb{1}_{\{u\in[0,u_{*}-\delta_{0}]\}}+M^{R,n,\delta_{0}}_{u-u_{*}}\mathbb{1}_{\{u\in[u_{*}+\delta_{0},c_{h}]\}}\,.\] We can similarly define \(Y^{n}_{u_{*}+u}-Y^{n}_{u_{*}}:=n^{-1/18}\sqrt{2}{\bf Y}_{un^{1/9}}\), where no particular coupling is needed. From \(X^{n}\) and \(Y^{n}\), we can recover our new Brownian motions \(X^{(1),n},X^{(2),n}\). After this coupling of the processes, we construct the environment \(\omega=\omega^{n}\) from \((X^{(1),n},X^{(2),n})\) using Skorokhod's embedding theorem (see Theorems 4.1 and 4.2). Thus, all of our processes are defined so as to have almost sure convergences, and when \(n\) is large enough (how large only depends on \(\delta_{0}\)) and \(|un^{-1/9}|\leq\varepsilon_{n}\leq\delta_{0}\), we have \[n^{1/18}(X^{n}_{u_{*}}-X^{n}_{u_{*}+\frac{u}{n^{1/9}}})=\sqrt{2}\chi{\bf B}_{u}\,,\quad n^{1/18}(Y^{n}_{u_{*}}-Y^{n}_{u_{*}+\frac{u}{n^{1/9}}})=\sqrt{2}{\bf Y}_{u}. \tag{1.10}\]

### Comments on the results, outline of the paper

**Expansion of the log-partition function.** One may think about our results as an expansion of \(\log Z^{\beta,\omega}_{n,h}\) up to several orders, gaining each time some information on the location of the endpoints of the range.
A way to formulate such a result is, for some real numbers \(\alpha_{1}>\cdots>\alpha_{p}\geq 0\), to define the following sequence of free energies, which we may call the _\(k\)-th order free energy, at scale \(\alpha_{k}\)_: \[\begin{split} f^{(1)}_{\omega}(h,\beta)&=\lim_{n\to\infty}n^{-\alpha_{1}}\log Z^{\beta,\omega}_{n,h}\\ f^{(k+1)}_{\omega}(h,\beta)&=\lim_{n\to\infty}n^{-\alpha_{k+1}}\bigg{(}\log Z^{\beta,\omega}_{n,h}-\sum_{i=1}^{k}n^{\alpha_{i}}f^{(i)}_{\omega}(h,\beta)\bigg{)}\,,\end{split} \tag{1.11}\] when these quantities exist and are in \(\mathbb{R}\setminus\{0\}\). Theorems 1.1, 1.3 and 1.6 can be summarized in the following statement: assuming that \(\mathbb{E}\left[|\omega_{0}|^{3+\eta}\right]<\infty\) for some positive \(\eta\), then letting \(\alpha_{k}=\frac{1}{3k}\) for \(k\in\{1,2,3\}\), we have \(\mathbb{P}\)-a.s. \[\begin{split} f^{(1)}_{\omega}(h,\beta)&=\lim_{n\to\infty}\frac{1}{n^{1/3}}\log Z^{\omega,\beta}_{n,h}=-\frac{3}{2}(\pi h)^{2/3}\,,\\ f^{(2)}_{\omega}(h,\beta)&=\lim_{n\to\infty}\frac{1}{n^{1/6}}\left(\log Z^{\omega,\beta}_{n,h}+\frac{3}{2}hc_{h}n^{1/3}\right)=\beta\sup_{0\leq u\leq c_{h}}\Big{\{}X^{(1)}_{u}+X^{(2)}_{c_{h}-u}\Big{\}}=\beta X_{u_{*}}\,,\\ f^{(3)}_{\omega}(h,\beta)&=\frac{\beta}{\sqrt{2}}\,\sup_{u,v}\bigg{\{}\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}\big{(}u+v\big{)}^{2}\bigg{\}}\,.\end{split}\] In particular, we prove in this paper that \[\log Z^{\omega,\beta}_{n,h}=-\frac{3}{2}hc_{h}n^{1/3}+\beta\sum_{z=-u_{*}n^{1/3}}^{(c_{h}-u_{*})n^{1/3}}\omega_{z}+\frac{\beta}{\sqrt{2}}\left(\mathcal{Y}_{\mathcal{U},\mathcal{V}}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}\big{(}\mathcal{U}+\mathcal{V}\big{)}^{2}\right)n^{1/9}(1+\bar{o}(1))\] holds \(\mathbb{P}\)-a.s. with the \(\bar{o}(1)\) going to \(0\) in \(\mathbf{P}_{n,h}^{\beta,\omega}\)-probability. Note that the first two orders of \(\log Z_{n,h}^{\omega,\beta}\), meaning \(f_{\omega}^{(1)}\) and \(f_{\omega}^{(2)}\), are respectively called the _free energy_ and the _surface energy_. In Section 5 we study a simplified model where the random walk is constrained to be non-negative. By restricting it so, the processes involved are less complex, as they depend on only one variable (which represents the upper edge of the polymer), which simplifies the calculations. The idea is to give some insight into what happens when studying \(\log Z_{n,h}^{\omega,\beta}-\sum_{i=1}^{3}n^{\alpha_{i}}f^{(i)}(h,\beta)\), especially at the scale of the \(4\)-th order free energy. The environment is taken to be Gaussian in order to get the coupling of \(n^{-1/6}\Sigma_{zn^{1/3}}^{\pm}\) with no coupling error (otherwise the result could change). We give in Section 5 a detailed justification for the following conjecture.
**Conjecture 1.7**.: _If \(\omega_{0}\) is Gaussian, there is a positive process \(\mathcal{W}=\{\mathcal{W}_{a,b},(a,b)\in\mathbb{R}^{2}\}\) and a sequence \((u_{n},v_{n})_{n}\) with values in \(\left[0,1\right]^{2}\) such that_ \[\lim_{n\to\infty}\left|Z_{n,h}^{\omega,\beta}\exp\Big{(}-\sum_{i=1}^{3}n^{\alpha_{i}}f^{(i)}(h,\beta)\Big{)}-\sum_{i,j\in\mathbb{Z}}e^{-\mathcal{W}_{i+u_{n},j+v_{n}}}\right|=0\qquad\mathbb{P}\text{-a.s.} \tag{1.12}\] _In particular, for any pair of integers \((i,j)\), we have \(\mathbb{P}\)-a.s._ \[\mathbf{P}_{n,h}^{\omega,\beta}\left(M_{n}^{-}-\lfloor-u_{*}n^{1/3}+\mathcal{U}n^{2/9}\rfloor=i,M_{n}^{+}-\lfloor(c_{h}-u_{*})n^{1/3}+\mathcal{V}n^{2/9}\rfloor=j\right)\sim\frac{e^{-\mathcal{W}_{i+u_{n},j+v_{n}}}}{\theta_{\omega}(n)}\,, \tag{1.13}\] _with \(\theta_{\omega}(n):=\sum_{i,j\in\mathbb{Z}}e^{-\mathcal{W}_{i+u_{n},j+v_{n}}}\) a normalizing constant._

**The case of varying parameters \(\beta,h\).** As mentioned above, the present model has previously been studied in [5], with the difference that the parameters \(\beta,h\) were allowed to depend on \(n\), the size of the polymer. More precisely, the polymer measure considered was given by \[\mathrm{d}\mathbf{P}_{n,h}^{\omega,\beta}(S)=\frac{1}{Z_{n,h}^{\omega,\beta}}\exp\Big{(}\sum_{z\in\mathcal{R}_{n}(S)}\big{(}\beta_{n}\omega_{z}-h_{n}\big{)}\Big{)}\mathrm{d}\mathbf{P}(S),\quad\text{with }h_{n}=\hat{h}n^{-\zeta},\,\beta_{n}=\hat{\beta}n^{-\gamma}\,.\] The authors in [5] obtained \(\mathbb{P}\)-almost sure convergences of \(n^{-\lambda}\log Z_{n,h_{n}}^{\omega,\beta_{n}}\) for some suitable \(\lambda\in\mathbb{R}\), which corresponds to a first-order expansion of the log-partition function. Afterwards, asymptotics for \(\mathbb{E}\mathbf{E}_{n,h_{n}}^{\omega,\beta_{n}}[|\mathcal{R}_{n}|]\) as well as scaling limits for \((M_{n}^{-},M_{n}^{+})\) were established and displayed a wide variety of phases. In addition, the authors also investigated the case where \((\omega_{z})\) are i.i.d. and in the domain of attraction of an \(\alpha\)-stable law with \(\alpha\in(0,1)\cup(1,2)\), unveiling an even richer phase diagram. Theorems 1.3 and 1.6 confirm the conjecture of Comment 4 of [5] that, for a typical configuration \(\omega\), the fluctuations of the log-partition function and \(n^{-1/3}(M_{n}^{-},M_{n}^{+})\) are not \(\mathbf{P}\)-random for fixed \(h,\beta>0\). With our methods, it should be possible to extend our results to account for size-dependent \(h=h_{n},\beta=\beta_{n}\), with similar results for "reasonable" \(h_{n},\beta_{n}\) (meaning with sufficiently slow growth/decay).

**Link with the random walk among Bernoulli obstacles.** Take a Bernoulli site percolation with parameter \(p\), meaning a collection \(\mathcal{O}=\big{\{}z\in\mathbb{Z}^{d},\eta_{z}=1\big{\}}\) where the \(\eta_{z}\) are i.i.d. Bernoulli variables with parameter \(p\), and write \(\mathcal{P}=\mathcal{B}(p)^{\otimes\mathbb{Z}}\) for its law on \(\mathbb{Z}\). Consider the random walk starting at \(0\) and let \(\tau\) denote the time it first encounters \(\mathcal{O}\) (called the set of obstacles): one is interested in the asymptotic behavior of the survival probability \(\mathbf{P}(\tau>n)\) as \(n\to\infty\), and in the behavior of the random walk conditioned on \(\tau>n\), see for example [13] and references therein.
The _annealed_ survival probability \(\mathbb{E}^{\mathcal{P}}\mathbf{P}(\tau>n)\) is given by \[\mathbb{E}^{\mathcal{P}}\otimes\mathbf{E}\left[\mathbb{1}_{\{\mathcal{R}_{n}\cap\mathcal{O}=\varnothing\}}\right]=\mathbf{E}\left[\mathcal{P}\left[\forall z\in\mathcal{R}_{n},\eta_{z}=0\right]\right]=\mathbf{E}\left[(1-p)^{|\mathcal{R}_{n}|}\right]=\mathbf{E}\left[e^{|\mathcal{R}_{n}|\log(1-p)}\right],\] and we observe that this is exactly \(Z_{n,h_{p}}^{\omega,0}\) with \(h_{p}=-\log(1-p)\). Thus, for \(\beta=0\), our model can be seen as an annealed version of the random walk among Bernoulli obstacles with common parameter \(p=1-e^{-h}\). If we push the analogy a bit further and assume \(\beta\omega_{z}-h\leq 0\) for all \(z\in\mathbb{Z}\), we can see \(Z_{n,h}^{\omega,\beta}\) as the annealed survival probability of the random walk among obstacles \(\mathcal{O}^{\omega}=\left\{z\in\mathbb{Z}^{d},\eta_{z}^{\omega}=1\right\}\), where the \(\eta_{z}^{\omega}\) are i.i.d. Bernoulli variables with random parameter \(p_{z}^{\omega}=1-e^{\beta\omega_{z}-h}\). The averaging is done on the random walk (with law \(\mathbf{P}\)) and the Bernoulli variables (with law \(\mathcal{P}^{\omega}=\bigotimes_{z\in\mathbb{Z}}\mathcal{B}(p_{z}^{\omega})\)), while the parameters \(p_{z}^{\omega}=1-e^{\beta\omega_{z}-h}\) (with law \(\mathbb{P}\)) are quenched.
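The identity above is easy to test numerically; the following Monte Carlo sketch (ours, not from the paper) estimates \(\mathbf{E}[(1-p)^{|\mathcal{R}_{n}|}]\), which by the computation above equals \(Z_{n,h_{p}}^{\omega,0}\) with \(h_{p}=-\log(1-p)\).

```python
import numpy as np

def annealed_survival(n, p, trials=100_000, rng=None):
    """Monte Carlo estimate of E[(1-p)^{|R_n|}], the annealed probability
    that the walk avoids Bernoulli(p) obstacles up to time n."""
    rng = rng or np.random.default_rng()
    steps = rng.choice((-1, 1), size=(trials, n))
    paths = np.cumsum(steps, axis=1)
    # |R_n| = max(S_0,...,S_n) - min(S_0,...,S_n) + 1, with S_0 = 0
    ranges = np.maximum(paths.max(axis=1), 0) - np.minimum(paths.min(axis=1), 0) + 1
    return float(np.mean((1.0 - p) ** ranges))

n, p = 50, 0.2
h_p = -np.log(1 - p)  # the identity reads: estimate = Z_{n,h_p}^{omega,0}
print(annealed_survival(n, p, rng=np.random.default_rng(3)), h_p)
```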
**Link with the directed polymer model.** Another famous model is given by considering a doubly indexed field \((\omega_{i,z})_{(i,z)\in\mathbb{N}\times\mathbb{Z}}\) and the polymer measure \[\mathrm{d}\mathbf{P}_{n,h}^{\omega,\beta}(S)=\frac{1}{Z_{n,h}^{\omega,\beta}}\exp\Big{(}\sum_{i=0}^{n}\big{(}\beta\omega_{i,S_{i}}-\lambda(\beta)\big{)}\Big{)}\mathrm{d}\mathbf{P}(S)\,,\quad\lambda(\beta)=\log\mathbb{E}\left[e^{\beta\omega}\right]\,.\] This is known as the _directed polymer model_ (in contrast with our non-directed model) and has been the object of intense activity over the past decades, see [10] for an overview. Let us simply mention that the partition function solves, in a suitable sense, a discretized version of the Stochastic Heat Equation (SHE) with multiplicative space-time noise \(\partial_{t}u=\Delta u+\beta\xi\cdot u\). Hence, the convergence of the partition function under a proper scaling \(\beta=\beta(n)\), dubbed _intermediate disorder_ scaling, has raised particular interest in recent years: see [1, 8] for the case of dimension \(1\) and [9] for the case of dimension \(2\), where this approach enabled the authors to give a notion of solution to the SHE; see also [4] for the case of a heavy-tailed noise. The main difference with our model is how the disorder \(\omega\) plays into the polymer measure. Here, the polymer gets a new reward/penalty \(\omega_{i,z}\) at each step it takes, whereas in our model such an event only happens when reaching a new site of \(\mathbb{Z}\), in some sense "consuming" \(\omega_{z}\) when landing on \(z\) for the first time.

**Outline of the paper.** This paper can be split into three parts. The first part, in Section 2, consists of the proof of Theorem 1.3; the second and main part focuses on the proof of Theorem 1.6. This proof is split between Section 3, where \(\omega\) is assumed to be Gaussian, and Section 4, where we explain how to get the general statement thanks to a coupling. A third part, in Section 5, studies the simplified model where the random walk is constrained to be non-negative. Precise results under some technical assumption help us formulate the conjectures in (1.12) and (1.13). Finally, we prove in Appendix A the generalization of Theorem 1.3 to the case when \(\omega\) does not have a finite second moment, as announced. We also state some useful properties of the Brownian meander that we use in our proofs in Appendix B. In Appendix C we detail a way to couple Brownian meanders with a two-sided three-dimensional Bessel process so that they are equal near \(0\) (_i.e._ we prove Proposition 1.5).

## 2 Second order expansion and optimal position

We extensively use the following notation: for a given event \(\mathcal{A}\) (which may depend on \(\omega\)), we write the partition function restricted to \(\mathcal{A}\) as \[Z^{\omega,\beta}_{n,h}(\mathcal{A}):=\mathbf{E}\Big{[}\exp\Big{(}\sum_{z\in\mathcal{R}_{n}(S)}\big{(}\beta\omega_{z}-h\big{)}\Big{)}\mathbb{1}_{\mathcal{A}}\Big{]},\quad\text{so that}\quad\mathbf{P}^{\omega,\beta}_{n,h}(\mathcal{A})=\frac{1}{Z^{\omega,\beta}_{n,h}}Z^{\omega,\beta}_{n,h}(\mathcal{A}).\] This section consists of the proof of Theorem 1.3 and is divided into two steps:

* We first make use of a coarse-graining approach with a size \(\delta n^{1/3}\) to prove the convergence of the rescaled \(\log Z^{\omega,\beta}_{n,h}+\frac{3}{2}hT^{*}_{n}\). At the same time, we locate the main contribution as coming from trajectories whose left-most point is around \(-u_{*}n^{1/3}\), proving (1.5).
* We then prove that \(\mathbb{P}\)-a.s., \(n^{-1/3}M^{-}_{n}\) converges in \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability to \(-u_{*}\), using the previous step and the fact that \(\mathbf{P}^{\omega,\beta}_{n,h}(\mathcal{A})=Z^{\omega,\beta}_{n,h}(\mathcal{A})/Z^{\omega,\beta}_{n,h}\). Since we also have the result of (1.1), we deduce (1.6) thanks to Slutsky's lemma, as \(M^{-}_{n},M^{+}_{n}\) and \(T_{n}\) are defined on the same probability space.

### Convergence of the log partition function

In order to lighten notation, we always omit integer parts in the following.

_Proof of Theorem 1.3-(1.5)._ Recall (1.4), choose some \(\delta>0\) and split the sum over \(x,y\) depending on \(k_{1}\delta n^{1/3}\leq x<(k_{1}+1)\delta n^{1/3}\) and \(k_{2}\delta n^{1/3}\leq y<(k_{2}+1)\delta n^{1/3}\). By (1.4), we can only consider the pairs \((x,y)\) that satisfy \(|\Delta^{x,y}_{n}|=|x+y-c_{h}n^{1/3}|\leq\varepsilon_{n}n^{1/3}<\delta n^{1/3}\) for \(n\) sufficiently large; note that this implies that \((k_{1}+k_{2})\delta\in\{c_{h}-\delta,c_{h}\}\). We can now rewrite (1.4) as \[\log Z^{\omega,\beta}_{n,h}+\frac{3}{2}hc_{h}n^{1/3}=\bar{o}(1)+\log\psi_{h}+\log\Lambda^{\omega,\beta}_{n,h}(\delta)\,, \tag{2.1}\] in which we defined \[\Lambda^{\omega,\beta}_{n,h}(\delta):=\sum_{k_{1}=0}^{c_{h}/\delta}\sum_{k_{2}=\frac{c_{h}}{\delta}-k_{1}-1}^{\frac{c_{h}}{\delta}-k_{1}}Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\,, \tag{2.2}\] with \[Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta):=\sum_{\begin{subarray}{c}k_{1}\delta n^{1/3}\leq x<(k_{1}+1)\delta n^{1/3}\\ k_{2}\delta n^{1/3}\leq y<(k_{2}+1)\delta n^{1/3}\end{subarray}}\sin\left(\frac{x\pi}{x+y}\right)\exp\left(\beta\sum_{z=-x}^{y}\omega_{z}-\frac{3\pi^{2}(\Delta^{x,y}_{n})^{2}}{2c_{h}^{4}n^{1/3}}(1+\bar{o}(1))\right)\,. \tag{2.3}\] Then let us define \[\mathcal{W}^{\pm}(u,v,\delta):=X^{(1)}_{u}+X^{(2)}_{v}\pm\sup_{u\leq u^{\prime}\leq u+\delta}\big{|}X^{(1)}_{u^{\prime}}-X^{(1)}_{u}\big{|}\pm\sup_{v\leq v^{\prime}\leq v+\delta}\big{|}X^{(2)}_{v^{\prime}}-X^{(2)}_{v}\big{|}. \tag{2.4}\] Theorem 1.3-(1.5) essentially derives from the following lemma.
**Lemma 2.1**.: _For any integers \(k_{1},k_{2}\) and any \(\delta>0\), we have \(\mathbb{P}\)-almost surely_ \[\mathcal{W}^{-}(k_{1}\delta,k_{2}\delta,\delta)\leq\varliminf_{n\to\infty}\frac{\log Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)}{\beta n^{1/6}}\leq\varlimsup_{n\to\infty}\frac{\log Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)}{\beta n^{1/6}}\leq\mathcal{W}^{+}(k_{1}\delta,k_{2}\delta,\delta)\,.\]

Let us use this lemma to conclude the proof of the convergence (1.5). Since the sum in (2.2) has at most \(\frac{2c_{h}}{\delta}\) terms, we easily get that \[0\leq\log\Lambda_{n,h}^{\omega,\beta}(\delta)-\max_{\begin{subarray}{c}0\leq k_{1},k_{2}\leq c_{h}/\delta\\ (k_{1}+k_{2})\delta\in\{c_{h}-\delta,c_{h}\}\end{subarray}}\log Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\leq\log\frac{2c_{h}}{\delta}\,.\] Dividing by \(\beta n^{1/6}\) and taking the limit \(n\to\infty\), Lemma 2.1 yields \[\varlimsup_{n\to\infty}\frac{1}{\beta n^{1/6}}\log\Lambda_{n,h}^{\omega,\beta}(\delta)\leq\max_{\begin{subarray}{c}0\leq k_{1},k_{2}\leq c_{h}/\delta\\ (k_{1}+k_{2})\delta\in\{c_{h}-\delta,c_{h}\}\end{subarray}}\mathcal{W}^{+}(k_{1}\delta,k_{2}\delta,\delta).\] We write \(u=k_{1}\delta\) and \(v=k_{2}\delta\), belonging to the finite set \(U_{\delta}\) defined as \[U_{\delta}:=\Big{\{}(u,v)\in(\mathbb{R}_{+})^{2}\,:\,u\in\big{\{}\delta,2\delta,\ldots,\lfloor\frac{c_{h}}{\delta}\rfloor\delta\big{\}},u+v\in\{c_{h},c_{h}-\delta\}\Big{\}}\, \tag{2.5}\] so \[\varlimsup_{\delta\to 0}\varlimsup_{n\to\infty}\frac{\log\Lambda_{n,h}^{\omega,\beta}(\delta)}{\beta n^{1/6}}\leq\varlimsup_{\delta\to 0}\max_{\begin{subarray}{c}0\leq u,v\leq c_{h}\\ u+v\in\{c_{h}-\delta,c_{h}\}\end{subarray}}\mathcal{W}^{+}(u,v,\delta)=\sup_{\begin{subarray}{c}0\leq u,v\leq c_{h}\\ u+v=c_{h}\end{subarray}}\big{\{}X_{u}^{(1)}+X_{v}^{(2)}\big{\}}\quad\mathbb{P}\text{-a.s.}\,,\] where for the last identity, we have used the continuity of \(X^{(1)}\) and \(X^{(2)}\). The same goes for \(\varliminf_{n\to\infty}\frac{1}{\beta n^{1/6}}\log\Lambda_{n,h}^{\omega,\beta}(\delta)\), with the lower bound \(\mathcal{W}^{-}(u,v,\delta)\), which concludes the proof.

Proof of Lemma 2.1.: The proof is inspired by that of Lemma 5.1 in [5].
Recall the definition (2.3) of \(Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\) and write for the disorder term: \[\Big{(}\Sigma_{k_{2}\delta n^{1/3}}^{+}+\Sigma_{k_{1}\delta n^{1/3}}^{-}\Big{)}-R_{n}^{\delta}(k_{1}\delta,k_{2}\delta)\leq\sum_{z=-x}^{y}\omega_{z}\leq\Big{(}\Sigma_{k_{2}\delta n^{1/3}}^{+}+\Sigma_{k_{1}\delta n^{1/3}}^{-}\Big{)}+R_{n}^{\delta}(k_{1}\delta,k_{2}\delta) \tag{2.6}\] where the error term \(R_{n}^{\delta}\) is defined for \(u,v\geq 0\) by \[R_{n}^{\delta}(u,v):=\max_{un^{1/3}+1\leq j\leq(u+\delta)n^{1/3}-1}\left|\Sigma_{j}^{-}-\Sigma_{un^{1/3}}^{-}\right|+\max_{vn^{1/3}+1\leq j\leq(v+\delta)n^{1/3}-1}\left|\Sigma_{j}^{+}-\Sigma_{vn^{1/3}}^{+}\right|.\] Using the coupling \(\hat{\omega}\) and Lemma A.5 of [5] (for Lévy processes), \(\mathbb{P}\)-a.s., for all \(\varepsilon>0\), for all \(n\) large enough (how large depends on \(\varepsilon,\delta,\omega\)), \[\frac{1}{n^{1/6}}R_{n}^{\delta}(u,v)\leq\varepsilon+\sup_{u\leq u^{\prime}\leq u+\varepsilon+\delta}\left|X_{u^{\prime}}^{(1)}-X_{u}^{(1)}\right|+\sup_{v\leq v^{\prime}\leq v+\varepsilon+\delta}\left|X_{v^{\prime}}^{(2)}-X_{v}^{(2)}\right|\] and \[\Big{(}\Big{|}\frac{1}{n^{1/6}}\Sigma_{vn^{1/3}}^{+}-X_{v}^{(2)}\Big{|}\lor\Big{|}\frac{1}{n^{1/6}}\Sigma_{un^{1/3}}^{-}-X_{u}^{(1)}\Big{|}\Big{)}\leq\varepsilon\,,\] uniformly in \(u\) and \(v\), since \(U_{\delta}\) is a finite set. Thus, letting \(n\to\infty\) then \(\varepsilon\to 0\) we obtain that \(\mathbb{P}\)-almost surely, \[\varlimsup_{n\to\infty}\frac{1}{\beta n^{1/6}}\log Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\leq\sup_{\begin{subarray}{c}k_{1}\delta n^{1/3}\leq x<(k_{1}+1)\delta n^{1/3}\\ k_{2}\delta n^{1/3}\leq y<(k_{2}+1)\delta n^{1/3}\end{subarray}}\varlimsup_{n\to\infty}\frac{1}{n^{1/6}}\sum_{z=-x}^{y}\omega_{z}\leq\mathcal{W}^{+}(u,v,\delta)\,,\] in which we recall the definition (2.4) of \(\mathcal{W}^{\pm}(u,v,\delta)\). On the other hand, since \(Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\) is a sum of non-negative terms, we get a simple lower bound by restricting to configurations with almost no fluctuation around \(T_{n}^{*}\): \[\frac{\log Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)}{n^{1/6}}\geq\sup_{|\Delta_{n}^{x,y}|\leq 1}\left\{\frac{\beta}{n^{1/6}}\sum_{z=-x}^{y}\omega_{z}-\frac{3\pi^{2}}{2c_{h}^{4}}\frac{(\Delta_{n}^{x,y})^{2}}{n^{1/2}}\right\}-\bar{o}(1)=\frac{\beta}{n^{1/6}}\sup_{|\Delta_{n}^{x,y}|\leq 1}\sum_{z=-x}^{y}\omega_{z}-\frac{3\pi^{2}}{2c_{h}^{4}\sqrt{n}}-\bar{o}(1),\] in which the supremum is taken over the \((x,y)\) that satisfy the criteria of \(Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\), see (2.3). In the above, the \(\bar{o}(1)\) is deterministic and comes from the contribution of \(n^{-1/6}\log\sin(\frac{x\pi}{x+y})\); in the case where \(k_{1}=0\), we restrict the supremum to additionally having \(x\neq 0\), so that we always have \(\sin(\frac{x\pi}{x+y})\geq\frac{c}{x+y}\sim\frac{c}{n^{1/3}}\). After the exact same calculations as above, we get the lower bound \[\liminf_{n\to\infty}\frac{1}{\beta n^{1/6}}\log Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\geq\mathcal{W}^{-}(u,v,\delta).\qed\]

**Lemma 2.2**.: _The quantity \(\sup_{u+v=c_{h}}\left\{X_{u}^{(1)}+X_{v}^{(2)}\right\}=\sup_{u\in[0,c_{h}]}X_{u}\) is almost surely positive and finite, and attained at a unique point \(u_{*}\) of \([0,c_{h}]\)._

Proof.: Recall that \(X\) has the same law as \(\sqrt{2}W+X_{c_{h}}^{(2)}\) where \(W_{u}=\frac{1}{\sqrt{2}}(X_{u}^{(1)}+X_{u}^{(2)})\) is a standard Brownian motion independent from \(X_{c_{h}}^{(2)}\).
Thus it is a classical result, see for example [20, Lemma 2.6].

### Path properties under the polymer measure

Proof of Theorem 1.3-(1.6).: The proof essentially reduces to the following lemma.

**Lemma 2.3**.: _For any \(h,\beta>0\), recall \(u_{*}:=\arg\max_{u\in[0,c_{h}]}\left\{X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\right\}\). Then, \(\mathbb{P}\)-a.s._ \[\frac{1}{n^{1/3}}M_{n}^{-}\xrightarrow[n\to\infty]{\mathbf{P}^{\omega,\beta}_{n,h}}-u_{*}\,.\]

By Slutsky's lemma (for a fixed \(\omega\) in the set of \(\omega\)'s for which both convergences are true), Lemma 2.3 combined with (1.1) readily implies that \(\mathbb{P}\)-a.s. \(n^{-1/3}M_{n}^{+}\) converges to \(c_{h}-u_{*}\) in \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability. Note that Slutsky's lemma can be used on \(M_{n}^{+},M_{n}^{-},T_{n}\) since they are all defined on the same probability space.

Proof of Lemma 2.3.: The proof is analogous to what is done in [5]. Define the following set \[\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}:=\left\{u\in[0,c_{h}]\,:\,\sup_{s,|s-u|<\varepsilon}\left\{X_{s}^{(1)}+X_{c_{h}-s}^{(2)}\right\}\geq X_{u_{*}}-\varepsilon^{\prime}>0\right\}.\] We shall prove that for almost all \(\omega\), we have \(\mathbf{P}^{\omega,\beta}_{n,h}\big{(}\frac{1}{n^{1/3}}|M_{n}^{-}|\not\in\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}\big{)}\to 0\). For this, we denote by \(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n}\) the event \(\big{\{}\frac{1}{n^{1/3}}|M_{n}^{-}|\not\in\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}\big{\}}\). As \[\log\mathbf{P}^{\omega,\beta}_{n,h}\left(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n}\right)=\log Z^{\omega,\beta}_{n,h}(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n})-\log Z^{\omega,\beta}_{n,h}=\left(\log Z^{\omega,\beta}_{n,h}(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n})+\frac{3}{2}hc_{h}n^{1/3}\right)-\left(\log Z^{\omega,\beta}_{n,h}+\frac{3}{2}hc_{h}n^{1/3}\right),\] we only need to prove that \(\varlimsup\limits_{n\to\infty}\frac{1}{\beta n^{1/6}}\big{[}\log Z^{\omega,\beta}_{n,h}(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n})+\frac{3}{2}hc_{h}n^{1/3}\big{]}<X_{u_{*}}\). Indeed, using the convergence (1.5) in Theorem 1.3, we then get that \(\varlimsup\limits_{n\to\infty}\frac{1}{\beta n^{1/6}}\log\mathbf{P}^{\omega,\beta}_{n,h}\left(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n}\right)<0\). We apply the same decomposition as in the proof of Theorem 1.3, over indices \(k_{1}\) such that \([k_{1}\delta,(k_{1}+1)\delta)\not\subset\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}\). Thus, \[\varlimsup\limits_{\delta\downarrow 0}\varlimsup\limits_{n\to\infty}\frac{1}{\beta n^{1/6}}\left[\log Z^{\omega,\beta}_{n,h}(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n})+\frac{3}{2}hT_{n}^{*}\right]\leq\sup\limits_{u\not\in\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}}\big{\{}X^{(1)}_{u}+X^{(2)}_{c_{h}-u}\big{\}}\leq X_{u_{*}}-\varepsilon^{\prime}\,,\] so we indeed have \(\mathbf{P}^{\omega,\beta}_{n,h}(\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{n})\to 0\). Using that \(\bigcap_{\varepsilon^{\prime}>0}\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}\subset B_{2\varepsilon}(u_{*})\) by uniqueness of the maximizer, we have thus proved that, \(\mathbb{P}\)-a.s., \(n^{-1/3}M_{n}^{-}\to-u_{*}\) in \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability.

## 3 Proof of Theorem 1.6 for a Gaussian environment

In this section we prove Theorem 1.6 under the assumption that \(\omega_{0}\) has a Gaussian distribution.
We take full advantage of the fact that, in this case, the coupling with the Brownian motions \(X^{(1)},X^{(2)}\) is just an identity: it thus does not create any coupling error and allows us to work directly on these processes. The proof still requires some heavy calculations, as we must first identify the relevant trajectories in the factorized log-partition function. Going forward, we take the following setting: the random variables \(\omega_{z}\) are i.i.d. with normal distribution \(\mathcal{N}(0,1)\) and \(X^{(1)},X^{(2)}\) are standard Brownian motions such that \[\frac{1}{n^{1/6}}\sum_{z=1}^{x}\omega_{-z}=X^{(1)}_{xn^{-1/3}}\;,\qquad\frac{1}{n^{1/6}}\sum_{z=0}^{y}\omega_{z}=X^{(2)}_{yn^{-1/3}}\,. \tag{3.1}\] We will adapt the following proof to a general environment in Section 4 by controlling the error term due to the coupling. We define \[\bar{Z}^{\omega,\beta}_{n,h}:=Z^{\omega,\beta}_{n,h}\,e^{\frac{3}{2}hT_{n}^{*}-\beta n^{1/6}X_{u_{*}}}\;,\quad\bar{Z}^{\omega,\beta}_{n,h}(\mathcal{A}):=Z^{\omega,\beta}_{n,h}(\mathcal{A})\,e^{\frac{3}{2}hT_{n}^{*}-\beta n^{1/6}X_{u_{*}}}\,,\] so that (1.7) can be rewritten as a statement regarding the convergence of \(n^{-1/9}\log\bar{Z}^{\omega,\beta}_{n,h}\). Here are the four steps of the proof:

* We first rewrite \(\bar{Z}^{\omega,\beta}_{n,h}\) to make \(X_{xn^{-1/3}}-X_{u_{*}}\) appear. Having this negative quantity makes it easier to find the relevant trajectories: when \(|X_{|M_{n}^{-}|n^{-1/3}}-X_{u_{*}}|\) is too large for a given trajectory, the relative contribution of this trajectory to the partition function goes exponentially to \(0\), meaning it has a low \(\mathbf{P}^{\omega,\beta}_{n,h}\)-probability.
* We prove the \(\mathbb{P}\)-almost sure convergence of \(n^{-1/9}\log\bar{Z}_{n,h}^{\omega,\beta}\) restricted to the event \(\mathcal{A}_{n,\omega}^{K,L}=\left\{|\Delta_{n}|\leq Kn^{2/9},|M_{n}^{-}+u_{*}n^{1/3}|\leq Ln^{2/9}\right\}\) towards a positive value. It consists again of a coarse-graining approach where each component \(\bar{Z}_{n,h}^{\omega,\beta}(u,v)\) converges to \(\mathcal{Y}_{u,v}-c_{h,\beta}(u+v)^{2}\). This leads to defining \((\mathcal{U},\mathcal{V})\) via a variational problem.
* We prove that \(n^{-1/9}\log\bar{Z}_{n,h}^{\omega,\beta}\) restricted to \((\mathcal{A}_{n,\omega}^{K,L})^{c}\) is almost surely negative as \(n\to\infty\), as soon as \(K\) or \(L\) is sufficiently large. Coupled with the previous convergence towards a positive limit, this proves that all of these trajectories have a negligible contribution.
* Afterwards, the convergences in \(\mathbf{P}_{n,h}^{\omega,\beta}\)-probability are derived in the same way as for Theorem 1.3.

**Corollary 3.1** (of Lemma 2.3).: _There exists a vanishing sequence \((\varepsilon_{n})_{n\geq 1}\) such that_ \[Z_{n,h}^{\omega,\beta}=(1+\bar{o}(1))Z_{n,h}^{\omega,\beta}\big{(}|M_{n}^{-}n^{-1/3}+u_{*}|\leq\varepsilon_{n}\big{)}\qquad\mathbb{P}\text{-a.s. as }n\to\infty\,.\]

Going forward, we will work conditionally on \(u_{*}\). Recall that \(\frac{1}{\sqrt{2}}(X-X_{c_{h}}^{(2)})\) has the law of a standard Brownian motion; thus, according to Proposition 1.4, the processes \((X_{u_{*}}-X_{u_{*}-t},t\geq 0)\) and \((X_{u_{*}}-X_{u_{*}+t},t\geq 0)\) are two Brownian meanders, respectively on \([0,u_{*}]\) and \([0,c_{h}-u_{*}]\). Recall that since \(u_{*}\) follows the arcsine law on \([0,c_{h}]\), these intervals are \(\mathbb{P}\)-almost surely nonempty.
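Before moving on, note why the identification (3.1) is exact in the Gaussian case: the rescaled partial sums of i.i.d. \(\mathcal{N}(0,1)\) variables have independent Gaussian increments of variance \(n^{-1/3}\), i.e. exactly Brownian increments over the grid times, so filling in with Brownian bridges produces a genuine Brownian motion whose grid values are the partial sums. A minimal sketch of this construction (ours, with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10 ** 6
grid = np.arange(1, 4 * int(n ** (1 / 3)))  # sites y = 1, 2, ...
omega = rng.standard_normal(grid.size)      # i.i.d. N(0,1) field
# Increments n^{-1/6} * omega_y are independent N(0, n^{-1/3}) variables, i.e.
# exactly Brownian increments over the times y * n^{-1/3}; bridging between
# grid points yields a true BM X^(2) with X^(2)_{y n^{-1/3}} equal to the
# rescaled partial sum, with no coupling error.
X2_grid = np.cumsum(omega) / n ** (1 / 6)
times = grid / n ** (1 / 3)
```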
### Rewriting the partition function

Thanks to (1.4) and Theorem 1.3, we have shown that \[\bar{Z}_{n,h}^{\omega,\beta}=(1+\bar{o}(1))\psi_{h}\sin\left(\frac{u_{*}\pi}{c_{h}}\right)\sum_{\begin{subarray}{c}|x-u_{*}n^{1/3}|\leq\varepsilon_{n}n^{1/3}\\ |y-(c_{h}-u_{*})n^{1/3}|\leq\varepsilon_{n}n^{1/3}\end{subarray}}\exp\left(\beta n^{1/6}\Omega_{n}^{x,y}-\frac{3\pi^{2}(\Delta_{n}^{x,y})^{2}}{2c_{h}^{4}n^{1/3}}(1+\bar{o}(1))\right), \tag{3.2}\] with \[\Omega_{n}^{x,y}:=\frac{1}{n^{1/6}}\sum_{z=1}^{x}\omega_{-z}+\frac{1}{n^{1/6}}\sum_{z=0}^{y}\omega_{z}-X_{u_{*}}=X_{xn^{-1/3}}^{(1)}+X_{yn^{-1/3}}^{(2)}-X_{u_{*}}, \tag{3.3}\] where for the last identity we have used the relation (3.1) between \(X\) and \(\omega\). Both \(\bar{o}(1)\) are deterministic and the first one includes a term in \(\varepsilon_{n}\). Note that \(X_{u_{*}}-(X_{xn^{-1/3}}^{(1)}+X_{yn^{-1/3}}^{(2)})\) is not necessarily positive since the supremum in (1.5) is taken over non-negative \(u\) and \(v\) such that \(u+v=c_{h}\), whereas \(x+y\neq c_{h}n^{1/3}\) in the general case. However we can write \[X_{xn^{-1/3}}^{(1)}+X_{yn^{-1/3}}^{(2)}=\left(X_{xn^{-1/3}}^{(1)}+X_{c_{h}-xn^{-1/3}}^{(2)}\right)+\left(X_{yn^{-1/3}}^{(2)}-X_{c_{h}-xn^{-1/3}}^{(2)}\right),\] so that \(\Omega_{n}^{x,y}\) can be rewritten as \[\Omega_{n}^{x,y}=-\left(X_{u_{*}}-X_{xn^{-1/3}}\right)+X_{yn^{-1/3}}^{(2)}-X_{c_{h}-xn^{-1/3}}^{(2)}\,. \tag{3.4}\] Note that it is not problematic that \(c_{h}-xn^{-1/3}\) can be negative if \(y\) is small enough, since \(X^{(2)}\) can be defined on the real line. Although (3.4) may seem more complex to study than (3.3), having a term that is always non-positive is useful to isolate the main contributions to the partition function. Recall that \(X_{u_{*}}-X_{xn^{-1/3}}\) can be expressed in terms of Brownian meanders depending on the sign of \(u_{*}-xn^{-1/3}\), see Proposition 3.7. More precisely, there exist \(\mathcal{M}^{+},\mathcal{M}^{-}\) two independent Brownian meanders on \([0,1]\) such that \[X_{u_{*}}-X_{xn^{-1/3}}=\sqrt{u_{*}}\mathcal{M}^{-}_{\frac{u_{*}-xn^{-1/3}}{u_{*}}}\mathbb{1}_{\left\{u_{*}\geq xn^{-1/3}\right\}}+\sqrt{c_{h}-u_{*}}\mathcal{M}^{+}_{\frac{xn^{-1/3}-u_{*}}{c_{h}-u_{*}}}\mathbb{1}_{\left\{u_{*}<xn^{-1/3}\right\}}\,. \tag{3.5}\]

**Heuristic.** In (3.2) and in view of (3.4), the term inside the exponential can be split into three parts. The first part is \(-\beta n^{1/6}\left(X_{u_{*}}-X_{xn^{-1/3}}\right)\), which is negative and of order \(n^{1/6}|u_{*}-xn^{-1/3}|^{1/2}\). The second term is \(\beta n^{1/6}(X_{yn^{-1/3}}^{(2)}-X_{c_{h}-xn^{-1/3}}^{(2)})\), which is of order at most \((\Delta_{n})^{1/2}\). The last term is \(-\tilde{c}_{h}(\Delta_{n})^{2}n^{-1/3}\). We thus can easily compare the second term to the last one: dominant terms in (1.4) are all negative when \((\Delta_{n})^{2}n^{-1/3}\gg(\Delta_{n})^{1/2}\), or in other words if \(\Delta_{n}\gg n^{2/9}\). Thus we will show that the corresponding trajectories have a negligible contribution to \(Z_{n,h}^{\omega,\beta}\), and that we can restrict the partition function to trajectories such that \(\Delta_{n}=\bar{\mathcal{O}}(n^{2/9})\). We can apply the same reasoning to the first term, which must verify \(n^{1/6}(X_{u_{*}}-X_{xn^{-1/3}})=\bar{\mathcal{O}}(n^{1/9})\), from which we will deduce \(|M_{n}^{-}n^{-1/3}+u_{*}|=\bar{\mathcal{O}}(n^{-1/9})\).
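Spelling out the balance of scales in this heuristic (our computation): equating each disorder term with the \(n^{1/9}\) order of the third-order free energy gives \[n^{1/6}\big{|}u_{*}-xn^{-1/3}\big{|}^{1/2}\asymp n^{1/9}\iff\big{|}u_{*}-xn^{-1/3}\big{|}\asymp n^{-1/9}\,,\qquad\frac{(\Delta_{n})^{2}}{n^{1/3}}\asymp(\Delta_{n})^{1/2}\iff\Delta_{n}\asymp n^{2/9}\,,\] so the endpoint fluctuations live at scale \(n^{1/3}\cdot n^{-1/9}=n^{2/9}\), consistent with the normalization of \((\mathcal{U},\mathcal{V})\) in Theorem 1.6.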
### Restricting the trajectories

Our goal is now to characterize the main contribution to the partition function directly in terms of \(M_{n}^{-}\) and \(M_{n}^{+}\) or equivalent quantities, and not in terms of the processes. With this goal in mind, we define, for \(K,L\geq 0\): \[\bar{Z}_{n,\omega}^{>}(K,L):=\bar{Z}_{n,h}^{\omega,\beta}\big{(}|\Delta_{n}|\geq Ln^{2/9},Kn^{2/9}\leq|M_{n}^{-}+u_{*}n^{1/3}|\leq\varepsilon_{n}n^{1/3}\big{)}\] and \[\bar{Z}_{n,\omega}^{<}(K,L):=\bar{Z}_{n,h}^{\omega,\beta}\big{(}|\Delta_{n}|\leq Ln^{2/9},|M_{n}^{-}+u_{*}n^{1/3}|\leq Kn^{2/9}\leq\varepsilon_{n}n^{1/3}\big{)}.\] In the next section, we will prove in particular that \(\mathbb{P}\)-a.s., \(\varliminf_{n\to\infty}n^{-1/9}\log\bar{Z}_{n,\omega}^{<}(K,L)>0\), see Proposition 3.6 and Lemma 3.9. The following proposition shows that \(\varlimsup_{n\to\infty}n^{-1/9}\log\bar{Z}_{n,\omega}^{>}(K,L)<0\) \(\mathbb{P}\)-a.s. for \(K\) _or_ \(L\) large enough, meaning that trajectories in \(\bar{Z}_{n,\omega}^{>}(K,L)\) have a negligible contribution.

**Proposition 3.2**.: _Uniformly in \(n\geq 1\) such that \(\varepsilon_{n}<\frac{1}{2}\), we have_ \[\varlimsup_{K\to\infty}\mathbb{P}\left(\frac{1}{n^{1/9}}\log\bar{Z}_{n,\omega}^{>}(K,0)\geq-1\right)=\varlimsup_{L\to\infty}\mathbb{P}\left(\frac{1}{n^{1/9}}\log\bar{Z}_{n,\omega}^{>}(0,L)\geq-1\right)=0\,.\]

The proof boils down to upper bounds on both probabilities, uniform in \(n\), and a use of the monotone convergence theorem. We first explain a small argument that we will use repeatedly throughout the paper when we do not need exact values for the constants. Take an interval \(I\) and real numbers \(\alpha,\lambda>0\). In the following lemmas we will need to compute probabilities such as \[\mathbb{P}\Big{(}\inf_{|u|\in\lambda I}\{X_{u_{*}}-X_{u_{*}+u}\}\leq\alpha\Big{)}\leq 2\max_{\sigma\in\{-1,+1\}}\mathbb{P}\Big{(}\inf_{u\in\sigma\lambda I}\{X_{u_{*}}-X_{u_{*}+u}\}\leq\alpha\Big{)}.\] Recall that for both values of \(\sigma\) we can express \(X_{u_{*}}-X_{u_{*}+u}\) as a Brownian meander \(\mathcal{M}^{\sigma}\) on \([0,1]\) by using the scaling given in (3.5). Taking for example \(\sigma=+1\), we have \[\mathbb{P}\Big{(}\inf_{u\in\lambda I}\{X_{u_{*}}-X_{u_{*}+u}\}\leq\alpha\Big{)}=\mathbb{P}\Big{(}\inf_{u\in\frac{\lambda}{c_{h}-u_{*}}I}\sqrt{c_{h}-u_{*}}\mathcal{M}_{u}^{+}\leq\alpha\Big{)}.\] Now, if \(\lambda\) and \(\alpha\) can be multiplied by some positive number (typically when we do not need exact values), we can freely replace them by \(\lambda(c_{h}-u_{*})\) and \(\alpha\sqrt{c_{h}-u_{*}}\) in order to ease the notation.

**Lemma 3.3**.: _There is a positive constant \(C=C(h,\beta)\), uniform in \(L,n\geq 1\), such that_ \[\mathbb{P}\left(\frac{1}{n^{1/9}}\log\bar{Z}_{n,\omega}^{>}(0,L)\geq-1\right)\leq e^{-CL^{3}}\xrightarrow[L\to\infty]{}0\,.
\tag{3.6}\]

Proof.: We can write \(\bar{Z}_{n,\omega}^{>}(0,L)=\sum_{k,l\geq 0}\bar{Z}_{n,\omega}^{>}(0,L)_{k,l}\) with \[\bar{Z}_{n,\omega}^{>}(0,L)_{k,l}=\bar{Z}_{n,h}^{\omega,\beta}\big{(}|\Delta_{n}|\in 2^{l}[1,2)Ln^{2/9}\,,\,||M_{n}^{-}|n^{-1/3}-u_{*}|\in[k,k+1)2^{l}Ln^{-1/9}\big{)}\,.\] Using (3.2), we have \[\bar{Z}_{n,\omega}^{>}(0,L)_{k,l}\leq C_{h}(\varepsilon_{n}n^{1/3})^{2}\exp\left(\beta n^{1/6}(\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n})-\frac{3\pi^{2}}{2c_{h}^{4}}(2^{l}Ln^{2/9})^{2}n^{-1/3}\right)\,,\] where we have set \[\mathcal{M}_{k,l}^{n}:=\inf_{|u-u_{*}|\in[k,k+1)2^{l}Ln^{-1/9}}X_{u_{*}}-X_{u}\,,\quad\mathcal{X}_{k,l}^{(2)}:=\sup_{\begin{subarray}{c}|\Delta_{n}^{x,y}|\in 2^{l}[1,2)Ln^{2/9}\\ |u-u_{*}|\in[k,k+1)2^{l}Ln^{-1/9}\end{subarray}}|X_{v}^{(2)}-X_{c_{h}-u}^{(2)}|\] with \(u=xn^{-1/3}\) and \(v=yn^{-1/3}\). Thus, a union bound yields \[\mathbb{P}\left(\bar{Z}_{n,\omega}^{>}(0,L)\geq e^{-n^{1/9}}\right)\leq\sum_{k,l=0}^{+\infty}\mathbb{P}\left(C_{h}(\varepsilon_{n}n^{1/3})^{2}e^{\beta n^{1/6}(\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n})}e^{-\frac{3\pi^{2}}{2c_{h}^{4}}(2^{l}L)^{2}n^{1/9}}\geq\frac{e^{-n^{1/9}}}{2^{l+1}}\right)\leq\sum_{k,l=0}^{+\infty}\mathbb{P}\left(\beta n^{1/6}(\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n})\geq c_{h}^{\prime}n^{1/9}2^{2l}L^{2}\right)\,,\] where we have used that for \(n\) large enough (how large depends only on \(h\)) \[n^{1/9}\Big{(}\frac{3\pi^{2}}{2c_{h}^{4}}2^{2l}L^{2}-1\Big{)}-(l+1)\log 2-2\log(\varepsilon_{n}n^{1/3})-\log C_{h}\geq c_{h}^{\prime}n^{1/9}2^{2l}L^{2}\] for some constant \(c_{h}^{\prime}\), uniformly in \(L\geq c_{h}^{2}/\pi\) and \(l\geq 0\). We now work out an upper bound on \(\mathbb{P}\left(\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n}\geq c_{h}^{\prime}n^{-1/18}2^{2l}L^{2}/\beta\right)\): we first observe that, writing \(u_{k}=u_{*}+2^{l}kLn^{-1/9}\), we have \(|c_{h}-u_{k}-v|\leq|c_{h}-u-v|+|u-u_{k}|\leq 2^{l+1}Ln^{-1/9}\) on the intervals where the suprema are taken in \(\mathcal{X}_{k,l}^{(2)}\). Thus, we have the upper bound \[\mathcal{X}_{k,l}^{(2)}\leq\sup_{|c_{h}-u_{k}-v|\leq 2\frac{2^{l}L}{n^{1/9}}}|X_{v}^{(2)}-X_{c_{h}-u_{k}}^{(2)}|+\sup_{|u-u_{k}|\leq\frac{2^{l}L}{n^{1/9}}}|X_{c_{h}-u_{k}}^{(2)}-X_{c_{h}-u}^{(2)}|\,. \tag{3.7}\] Write \(\mathcal{X}_{k,l}^{(2),v}\) and \(\mathcal{X}_{k,l}^{(2),u}\) for the first and the second term of the right-hand side of (3.7) respectively, as well as \(\alpha_{l}:=c_{h}^{\prime}n^{-1/18}2^{2l}L^{2}/\beta\). We first need to control the term \(k=0\), for which we know that \(\mathcal{M}_{0,l}^{n}=0\); we are then left to bound \[\sum_{l=0}^{+\infty}\mathbb{P}\left(\beta n^{1/6}\mathcal{X}_{0,l}^{(2)}\geq c_{h}^{\prime}n^{1/9}2^{2l}L^{2}\right)\leq\sum_{l=0}^{+\infty}\left[\mathbb{P}\left(\mathcal{X}_{0,l}^{(2),v}\geq\frac{\alpha_{l}}{2}\right)+\mathbb{P}\left(\mathcal{X}_{0,l}^{(2),u}\geq\frac{\alpha_{l}}{2}\right)\right]\,.\] By the reflection principle for Brownian motion, both of these variables are distributed as the modulus of a Gaussian of variance \(2\cdot\frac{2^{l}L}{n^{1/9}}\) and \(\frac{2^{l}L}{n^{1/9}}\), respectively. Thus, we have the upper bound \[\sum_{l=0}^{+\infty}\mathbb{P}\left(\beta n^{1/6}\mathcal{X}_{0,l}^{(2)}\geq c_{h}^{\prime}n^{1/9}2^{2l}L^{2}\right)\leq\sum_{l=0}^{+\infty}c_{0}\left[e^{-c_{1}2^{3l}L^{3}}+e^{-c_{2}2^{3l}L^{3}}\right]\leq(cst.)\sum_{l=0}^{+\infty}e^{-c2^{3l}L^{3}}\,,\] for some constants \(c_{0},c_{1},c_{2},c>0\).
We now focus on \(k\geq 1\) and decompose on whether \(\mathcal{M}_{k,l}^{n}\) is less or greater than \(k^{1/8}(2^{l}L)^{1/2}n^{-1/18}\). On \(\left\{\mathcal{M}_{k,l}^{n}\leq k^{1/8}(2^{l}L)^{1/2}n^{-1/18}\right\}\), we use Holder inequality for \(p>1\): \[\mathbb{P}\left(\mathcal{M}_{k,l}^{n}\leq k^{1/8}\frac{\sqrt{2^{l}L}}{n^{1/18 }},\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n}\geq\alpha_{l}\right)\leq \mathbb{P}\left(\mathcal{M}_{k,l}^{n}\leq k^{1/8}\frac{\sqrt{2^{l}L}}{n^{1/18 }}\right)^{1/p}\mathbb{P}\left(\mathcal{X}_{k,l}^{(2)}\geq\alpha_{l}\right)^{1 -\frac{1}{p}}.\] Since \(k\) can we taken arbitrarily, in the definition of \(\mathcal{M}_{k,l}^{n}\) we can replace \(X_{u_{*}}-X_{u_{*}+u}\) by \(\mathcal{M}_{u}\) for \(\mathcal{M}\) a Brownian meander on \([0,1]\). Thus, with the help of Corollary B.2 with \(\lambda=2\) and the previous argument (\(L\) and \(k\) can be taken up to a positive multiplicative constant), we compute \[\mathbb{P}\left(\mathcal{M}_{k,l}^{n}\leq k^{1/8}\sqrt{2^{l}L}n^{ -1/18}\right) \leq\frac{16k^{1/8}}{\sqrt{\pi k}}\left(1\wedge\frac{(2k^{1/8})^{ 2}}{2k}\right)+(cst.)k^{5/4}\frac{e^{-2k^{1/4}}}{1-e^{-8k^{1/4}}}\] \[\leq(cst.)k^{-9/8}+(cst.)k^{1/4}\frac{e^{-2k^{5/4}}}{1-e^{-8k^{1/ 4}}}\leq(cst.)k^{-9/8}\,.\] And for the other probability, we use \[\mathbb{P}\left(\mathcal{X}_{k,l}^{(2)}\geq\alpha_{l}\right)=\mathbb{P}\left( \mathcal{X}_{0,l}^{(2)}\geq\alpha_{l}\right)\leq(cst.)e^{-c2^{3l}L^{3}}\,.\] Therefore, \[\mathbb{P}\left(\mathcal{M}_{k,l}^{n}\leq k^{1/8}\sqrt{2^{l}L}n^{-1/18}, \mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n}\geq\alpha_{l}\right)\leq(cst.)k ^{-9/8p}e^{-c(1-\frac{1}{p})2^{3l}L^{3}}\] which has a finite sum in \(k\geq 1\) and \(l\geq 0\) that goes to \(0\) when \(L\to+\infty\) when \(p\) is suffineciently close to \(1\). On the other hand, \[\mathbb{P}\left(\mathcal{M}_{k,l}^{n}\geq k^{1/8}\sqrt{2^{l}L}n^{-1/18}, \mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n}\geq\alpha_{l}\right)\leq \mathbb{P}\left(\mathcal{X}_{k,l}^{(2)}\geq\alpha_{l}+k^{1/8}\sqrt{2^{l}L}n^{- 1/18}\right)\,,\] and again by (3.7) and the Brownian reflection principle, \[\mathbb{P}\left(\mathcal{X}_{k,l}^{(2)}\geq\alpha_{l}+k^{1/8}\sqrt{2^{l}L}n^{- 1/18}\right)\leq Ce^{-c\frac{(\alpha_{l}+k^{1/8}\sqrt{2^{l}L}n^{-1/18})^{2}}{ 2^{l}Ln^{-1/9}}}\leq(cst.)e^{-c_{1}2^{3l}L^{3}}e^{-c_{2}k^{1/4}}\,,\] which is again summable in \(k,l\), with a sum that goes to \(0\) when \(L\to+\infty\). In conclusion, we proved that \(\mathbb{P}\left(\bar{Z}_{\widehat{n},\omega}^{>}(0,L)\geq e^{-n^{1/9}}\right)\) is bounded by \(ce^{-CL^{3}}\), uniformly in \(n\) large enough, thus proving the lemma. **Lemma 3.4**.: _There is a positive \(C\) such that for any \(n\geq 1\) such that \(\varepsilon_{n}<\frac{1}{2}\), any \(K\geq 1\)_ \[\mathbb{P}\left(\frac{1}{n^{1/9}}\log\bar{Z}^{>}_{n,\omega}(K,0)\geq-1\right) \leq\frac{C}{K^{1/12}}\xrightarrow[K\to\infty]{}0\,. \tag{3.8}\] Proof.: We will use the same strategy as for Lemma 3.3, meaning controlling both \(M_{n}^{-}+u_{*}n^{1/3}\) and \(\Delta_{n}\), instead of only \(M_{n}^{-}+u_{*}n^{1/3}\). Thus, we consider \[\bar{Z}^{>}_{n,\omega}(K,0)_{k,l}=\bar{Z}^{\omega,\beta}_{n,h}\big{(}|M_{n}^{ -}+u_{*}n^{1/3}|\in 2^{k}[1,2)Kn^{2/9},|\Delta_{n}|\in[l,l+1)n^{2/9}\big{)}\,,\] and when summing on \(l\geq 0\) we get \(\bar{Z}^{>}_{n,\omega}(K,0)_{k}=\bar{Z}^{\omega,\beta}_{n,h}\big{(}|M_{n}^{ -}+u_{*}n^{1/3}|\in 2^{k}[1,2)Kn^{2/9}\big{)}\) similar to the notation in Lemma 3.3. 
Let us introduce \[\xi_{k,l}:=-\inf_{|u|\in 2^{k}[1,2)Kn^{-1/9}}\{X_{u_{*}}-X_{u_{*}+u}\}+\sup_{| \Delta_{n}|\in[l,l+1)Ln^{2/9}}|X_{v}^{(2)}-X_{c_{h}-u}^{(2)}|\,.\] With the same considerations as before, we have by a union bound \[\mathbb{P}\left(\bar{Z}^{>}_{n,\omega}(K,0)\geq e^{-n^{1/9}}\right)\leq\sum_{ \begin{subarray}{c}k,l=0\\ 2^{k}K\leq\varepsilon_{n}n^{1/9}\end{subarray}}^{+\infty}\mathbb{P}\left(C_{ h}(\varepsilon_{n}n^{1/3})^{2}e^{\beta n^{1/6}\xi_{k,l}}e^{-\frac{3\pi^{2}l}{2c_{h} ^{2}}l^{2}n^{1/9}}\geq\frac{e^{-n^{1/9}}}{2^{k+l+1}}\right)\,. \tag{3.9}\] Observe that each probability in the sum is equal to \[\mathbb{P}\left(\beta n^{1/6}\xi_{k,l}\geq\Big{(}\frac{3\pi^{2}}{2c_{h}^{4}} l^{2}-1\Big{)}n^{1/9}-(k+l+1)\log 2-2\log(\varepsilon_{n}n^{1/3})-\log C_{h} \right).\] Since we have the restriction \(2^{k}Kn^{-1/9}\leq\varepsilon_{n}\), assuming \(K\geq 1\) we have \(k\log 2\leq\log(\varepsilon_{n}n^{1/9})\), thus there is a constant \(c>0\) such that for \(n\) large enough (how large depends only on \(h\)), \[\Big{(}\frac{3\pi^{2}}{2c_{h}^{4}}l^{2}-1\Big{)}n^{1/9}-(k+l+1)\log 2-2\log( \varepsilon_{n}n^{1/3})\geq\big{(}cl^{2}-2\big{)}n^{1/9}\,,\] uniformly in \(k,K,l\). Therefore, we get that \[\mathbb{P}\left(\bar{Z}^{>}_{n,\omega}(K,0)\geq e^{-n^{1/9}}\right)\leq\sum_ {\begin{subarray}{c}k,l=0\\ 2^{k}K\leq\varepsilon_{n}n^{1/9}\end{subarray}}^{+\infty}\mathbb{P}\left( \beta n^{1/18}\xi_{k,l}\geq cl^{2}-2\right)\,.\] Let us define \(C_{n,l}:=\sup_{|\Delta_{n}^{x,y}|\in[l,l+1)n^{2/9}}|X_{v}^{(2)}-X_{c_{h}-u}^{( 2)}|-\beta^{-1}n^{-1/18}\,(cl^{2}-2)\), with again \(u=xn^{-1/3}\) and \(v=yn^{-1/3}\). Recalling the definition of \(\xi_{k,l}\) above we have \[\mathbb{P}\left(\beta n^{1/18}\xi_{k,l}\geq cl^{2}-2\right)=\mathbb{P}\Big{(} \inf_{|u|\in 2^{k}[1,2)Kn^{-1/9}}\{X_{u_{*}}-X_{u_{*}+u}\}\leq C_{n,l}\Big{)}\,.\] Let us now decompose over the values of \(C_{n,l}\). Since \(X_{u_{*}}-X_{u}\geq 0\), when \(C_{n,l}<0\) the probability equals \(0\), so we can intersect with \(C_{n,l}\geq 0\). We have \[\mathbb{P}\left(\beta n^{1/18}\xi_{k,l}\geq cl^{2}-2\right)\] \[\quad\leq\sum_{j=1}^{+\infty}\mathbb{P}\left(\inf_{|u|\in 2^{k}[1,2)Kn^{-1/9}}X _{u_{*}}-X_{u_{*}+u}\leq jn^{-1/18},C_{n,l}\in[j-1,j)\,n^{-1/18}\right)\] \[\quad\leq\sum_{j=1}^{+\infty}\mathbb{P}\Big{(}\inf_{|u|\in 2^{k}[1,2)Kn^{-1/9}} X_{u_{*}}-X_{u_{*}+u}\leq jn^{-1/18}\Big{)}^{1/2}\mathbb{P}\Big{(}C_{n,l}\in[j-1,j )\,n^{-1/18}\Big{)}^{1/2}\,,\] where we have used Cauchy-Schwartz inequality. First, let us treat the last probability: using the Brownian scaling, we have \[\mathbb{P}\left(C_{n,l}\in[j-1,j)\,n^{-1/18}\right) =\mathbb{P}\Big{(}\sup_{r\in[l,l+1)}|X_{r}^{(2)}|-\beta^{-1}(cl^{2} -2)\in[j-1,j)\,\Big{)}\] \[\leq\mathbb{P}\Big{(}\sup_{r\in[l,l+1)}|X_{r}^{(2)}|\geq j-1+\beta ^{-1}(cl^{2}-2)\Big{)}\,.\] We can get a bound on this probability using usual Gaussian bounds and the reflection principle: \[\mathbb{P}\Big{(}\sup_{r\in[l,l+1)}|X_{r}^{(2)}|\geq\alpha\Big{)}\leq\mathbb{ P}\Big{(}\sup_{r\in[0,l+1)}|X_{r}^{(2)}|\geq\alpha\Big{)}\leq 2e^{-\frac{\alpha^{2}}{2( l+1)}}\,.\] Then, we substitute \(\alpha\) with \(j-1+\beta^{-1}(cl^{2}-2)\) to get the upper bound \[\mathbb{P}\left(C_{n,l}\in[j-1,j)\,n^{-1/18}\right)\leq 2e^{-c\frac{l^{4}}{ l+1}}e^{-\frac{(j-\epsilon^{\prime})^{2}}{2(l+1)}}e^{-c\frac{l^{2}}{l+1}(j- \epsilon^{\prime})}\,,\] for some constants \(c,c^{\prime}\) (that depend only on \(h,\beta\)). 
For the other probability, with the argument explained previously (since \(K\) and \(j\) can again be taken up to a positive multiplicative constant), we only need to get a bound on \[\mathbb{P}\Big{(}\inf_{u\in 2^{k}[1,2)Kn^{-1/9}}\mathcal{M}_{u}^{+}\leq jn^{-1/ 18}\Big{)}\,. \tag{3.10}\] For \(\sigma=-1\) we can do the same reasoning, thus we only need to get a bound for (3.10). We use Corollary B.2: for any \(\lambda>0\), we have \[\mathbb{P}\Big{(}\inf_{u\in[s,t]}\mathcal{M}_{u}^{+}\leq a\Big{)}\leq\frac{8 \lambda a}{\sqrt{\pi s}}\left(1\wedge\frac{(\lambda a)^{2}}{2s}\right)+(cst.) \frac{a\sqrt{t}}{t-s}\frac{e^{-\frac{2}{t-s}a^{2}(\lambda-1)^{2}}}{1-e^{-\frac {2}{t-s}a^{2}\lambda^{2}}}\,,\] which translates for \(\lambda=(2^{k}K)^{1/3}\) to \[\mathbb{P}\Big{(}\inf_{u\in 2^{k}[1,2)Kn^{-1/9}}\mathcal{M}_{u}^{+} \leq jn^{-1/18}\Big{)} \leq\frac{8j(2^{k}K)^{1/3}}{\sqrt{\pi^{2^{k}}K}}+\frac{j\sqrt{2^{ k}K}}{2^{k}K}\frac{(cst.)}{1-e^{-2(2^{k}K)^{2/3}\frac{j^{2}}{2^{k}K}}}\] \[\leq\frac{16j}{\sqrt{\pi}(2^{k}K)^{1/6}}+(cst.)\frac{j}{2^{k}K}(2 ^{k}K)^{5/6}\leq(cst.)\frac{j}{(2^{k}K)^{1/6}}\] where we used that \(\frac{1}{x(1-e^{-\alpha/x})}\) is bounded for \(x\geq 1\), uniformly in \(\alpha\geq 1\) (note that \(j\geq 1\)). Together with the above, this yields the following upper bound for \(2^{k}Kn^{-1/9}\leq\varepsilon_{n}<\frac{1}{2}\): \[\mathbb{P}\left(\beta n^{1/6}\xi_{k,l}\geq n^{1/9}\left(cl^{2}-2\right)\right) \leq\frac{(cst.)}{(2^{k}K)^{1/12}}\,e^{-d^{3}}\sum_{j=1}^{+\infty}je^{-\frac{ (j-\epsilon^{\prime})^{2}}{2(l+1)}}e^{-d(j-\epsilon^{\prime})}\,. \tag{3.11}\] The sum on \(j\geq 1\) is bounded from above by \(c^{\prime\prime}(l+1)^{3}\) (where the constant \(c^{\prime\prime}\) does not depend on \(l\geq 0\)), so we finally get \[\mathbb{P}\left(\bar{Z}_{n,\omega}^{>}(K,0)\geq e^{-n^{1/9}}\right)\leq\sum_{ k,l=0}^{+\infty}\mathbb{P}\left(\beta n^{1/18}\xi_{k,l}\geq cl^{2}-2\right)\leq \frac{(cst.)}{K^{1/12}}\sum_{k,l=0}^{+\infty}\frac{1}{2^{k/6}}(l+1)^{3}e^{-cl^{ 3}}\,.\] The lemma follows since the last sum is finite. **Corollary 3.5**.: _We have the following convergence_ \[\varlimsup_{K\to\infty}\mathbb{P}\left(\varlimsup_{n\to\infty}\frac{1}{n^{1/9}} \log\bar{Z}^{>}_{n,\omega}(K,0)\geq-1\right)=\varlimsup_{L\to\infty}\mathbb{P} \left(\varlimsup_{n\to\infty}\frac{1}{n^{1/9}}\log\bar{Z}^{>}_{n,\omega}(0,L) \geq-1\right)=0\,.\] _Moreover, \(\mathbb{P}\)-a.s., there exists \(K_{0}=K_{0}(\omega)\) and \(L_{0}=L_{0}(\omega)\) such that, for all \(K\geq K_{0}\) and \(L\geq L_{0}\), \(\bar{Z}^{\omega,\beta}_{n,h}=(1+\bar{o}(1))\bar{Z}^{\omega,\beta}_{n,h}\big{(} |\Delta_{n}|\leq Ln^{2/9}\,;\,|M_{n}^{-}+u_{*}n^{-1/3}|\leq Kn^{-1/9}\big{)}\)._ Proof.: Using Proposition 1.5 and Section 1.5, we can redo the proof of Proposition 3.2 using the processes \(\mathbf{B}\) and \(\mathbf{Y}\) for \(n\geq n_{0}(\omega)\). 
Indeed, take such \(n\geq n_{0}(\omega)\) such that \(\varepsilon_{n}<\delta(\omega)\) (recall its definition from Proposition 1.5), then using the same notation as in the proofs of Lemmas 3.4,3.3, we have \[-\inf_{|u|\in 2^{k}[1,2)K}n^{1/18}\{X_{u_{*}}-X_{u_{*}+un^{-1/9}}\}=-\inf_{|u| \in 2^{k}[1,2)K}\mathbf{B}_{u}\,.\] On the other hand, \[X_{v}^{(2)}-X_{c_{h}-u}^{(2)}=\frac{1}{2}\left(Y_{c_{h}-v}-X_{c_{h}-v}+Y_{u}-X_ {u}\right)\] since \(c_{h}-v=\Delta_{n}^{x,y}n^{-1/3}+u\) and \(u=u_{*}+s\) with \(s\leq\varepsilon_{n}\leq\delta_{0}(\omega)\), this is equal to \(\frac{1}{2}\left(\mathbf{Y}_{c_{h}-v}-\mathbf{X}_{c_{h}-v}+\mathbf{Y}_{u}- \mathbf{X}_{u}\right)\), which means that \[n^{1/18}\sup_{c_{h}-u-v\in n^{-1/9}}|X_{v}^{(2)}-X_{c_{h}-u}^{(2)}|=\frac{1}{ 2}\sup_{r\in[l,l+1),s\leq\delta}\left(\mathbf{Y}_{r+s}-\mathbf{B}_{r+s}+ \mathbf{Y}_{s}-\mathbf{B}_{s}\right)\,.\] Thus, we see that the random quantities \(\xi_{k,l}\) and \(\mathcal{X}_{l}^{(2)}\) do not depend on \(n\) when \(n\) is large enough (meaning \(n\geq n_{0}\)), thus \[\varlimsup_{n\to\infty}\frac{1}{n^{1/9}}\log\bar{Z}^{>}_{n,\omega}(K,0)=\frac{ 1}{n^{1/9}}\log\bar{Z}^{>}_{n,\omega}(K,0)\,,\] the same being true for \(\bar{Z}^{>}_{n,\omega}(0,L)\), which proves the announced convergences. The existence of \(K_{0}\) and \(L_{0}\) follows from Borel-Cantelli lemma and the monotony of \(\bar{Z}^{>}_{n,\omega}(K,L)\) in both of its arguments, coupled with the fact that we prove next section that \(\liminf_{n\to\infty}n^{-1/9}\log\bar{Z}^{<}_{n,\omega}(K,L)>0\). ### Convergence of the log partition function In this section we study the convergence of \(n^{-1/9}\log\bar{Z}^{<}_{n,\omega}(K,L)\) for fixed \(K\) and \(L\) (large), in which we recall that \(\bar{Z}^{<}_{n,\omega}(K,L):=\bar{Z}^{\omega,\beta}_{n,h}\big{(}|\Delta_{n}| \leq Ln^{2/9},|M_{n}^{-}+u_{*}n^{1/3}|\leq Kn^{2/9}\big{)}\). It is a bit more convenient to transform the condition \(|\Delta_{n}|\leq L\) into the condition \(|M_{n}^{+}-(c_{h}-u_{*})n^{1/3}|\leq Ln^{2/9}\), which restricts to the same trajectories after adjusting the value of \(L\). Finally, since we plan to take the limit for \(K,L\to\infty\) it is enough to treat the case where \(K=L\). Thus, we define \[\bar{\mathcal{Z}}^{\leq K}_{n,\omega}:=\bar{Z}^{\omega,\beta}_{n,h}\big{(}|M_ {n}^{-}+u_{*}n^{1/3}|\leq Kn^{2/9},|M_{n}^{+}-(c_{h}-u_{*})n^{1/3}|\leq Kn^{2/9 }\big{)}\,.\] As explained in the beginning of this section, as \(K\to\infty\), \(\bar{\mathcal{Z}}^{\leq K}_{n,\omega}\) contains all the relevant trajectories giving the main contribution to the partition function. 
**Proposition 3.6**.: _For any \(h,\beta>0\) and any \(K>1\), \(\mathbb{P}\)-almost surely,_ \[\lim_{n\to\infty}\frac{\sqrt{2}}{\beta n^{1/9}}\log\bar{\mathcal{Z}}_{n,\omega}^ {\leq K}=\sup_{-K\leq u,v\leq K}\left\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c _{h}^{4}\sqrt{2}}\big{(}u+v\big{)}^{2}\right\}\,,\] _with \(\mathcal{Y}_{u,v}:=\mathbf{Y}_{u}-\mathbf{Y}_{-v}-\chi(\mathbf{B}_{u}+ \mathbf{B}_{v})\) and \((\mathbf{B},\mathbf{Y})\) as in Proposition 1.5._ Proof of Proposition 3.6.: For any \(\delta>0\), we define the following subsets of \(\mathbb{N}\) \[\mathscr{C}_{n,\delta}^{-}(k_{1}):=\left\{x\,:\,\left\lfloor\frac{x-u_{*}n^{1 /3}}{\delta n^{2/9}}\right\rfloor=k_{1}\right\}\,,\quad\mathscr{C}_{n,\delta}^ {+}(k_{2}):=\left\{y\,:\,\left\lfloor\frac{y-(c_{h}-u_{*})n^{1/3}}{\delta n^{2 /9}}\right\rfloor=k_{2}\right\} \tag{3.12}\] as well as \(\mathscr{C}_{n,\delta}(k_{1},k_{2}):=\mathscr{C}_{n,\delta}^{-}(k_{1})\times \mathscr{C}_{n,\delta}^{+}(k_{2})\). Recall (3.2) and the notation \[\Omega_{n}(x,y)=\Omega_{n}^{x,y}=-\left(X_{u_{*}}-X_{xn^{-1/3}}\right)+X_{yn^{ -1/3}}^{(2)}-X_{c_{h}-xn^{-1/3}}^{(2)}\,.\] Similarly to the proof of Theorem 1.3, we define \[\bar{\Lambda}_{n,h}^{\omega,\beta}(K,\delta):=\sum_{k_{1}=-K/\delta}^{K/\delta }\sum_{k_{2}=-K/\delta}^{K/\delta}\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2}, \delta)\,,\] where \[\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta):=\sum_{(x,y)\in\mathscr{C}_{ n,\delta}(k_{1},k_{2})}\exp\left(-\beta n^{1/6}\Omega_{n}^{x,y}-\hat{c}_{h}\frac{( \Delta_{n}^{x,y})^{2}}{n^{1/3}}(1+\bar{o}(1))\right)\,. \tag{3.13}\] with \(\hat{c}_{h}=\frac{3\pi^{2}}{2c_{h}^{4}}\). Then, we can write \[\log\bar{\mathcal{Z}}_{n,\omega}^{\leq K}=\log\big{(}1+\bar{o}(1)\big{)}\psi_ {h}\sin\left(\frac{\pi u_{*}}{c_{h}}\right)+\log\bar{\Lambda}_{n,h}^{\omega, \beta}(K,\delta)\,.\] Note that both \(\bar{o}(1)\to 0\) are the deterministic quantities mentioned in Section 1.2. Again, we only have to get bounds on the maximum of \(\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\), as \[0\leq\log\bar{\Lambda}_{n,h}^{\omega,\beta}(K,\delta)-\max_{-\frac{K}{\delta} \leq k_{1},k_{2}\leq\frac{K}{\delta}}\log\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{ 2},\delta)\leq 2\log\frac{4K}{\delta}\,, \tag{3.14}\] and \(n^{-1/9}2\log\frac{4K}{\delta}\) goes to \(0\) as \(n\to\infty\). 
Now, if we factorize \(\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\) by the contribution of \(x=\hat{x}_{k_{1}}:=u_{*}n^{1/3}+k_{1}\delta n^{2/9}\) and \(y=\hat{y}_{k_{2}}:=(c_{h}-u_{*})n^{1/3}+k_{2}\delta n^{2/9}\) respectively, we have \[\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)=e^{\Xi(k_{1},k_{2},\delta)} \sum_{(x,y)\in\mathscr{C}_{n,\delta}(k_{1},k_{2})}e^{-\beta n^{1/6}\xi_{n, \delta}^{k_{1},k_{2}}(\frac{x}{n^{1/3}},\frac{y}{n^{1/3}})+\frac{c_{h}}{n^{1/3 }}\big{(}(k_{1}+k_{2})^{2}\delta^{2}n^{4/9}-(\Delta_{n}^{x,y})^{2}\big{)}}\] where we have defined \[\Xi(k_{1},k_{2},\delta):=\beta n^{1/6}\Omega_{n}(\hat{x}_{k_{1}},\hat{y}_{k_{ 2}})-n^{1/9}\hat{c}_{h}\big{(}k_{1}\delta+k_{2}\delta\big{)}^{2} \tag{3.15}\] and \[\begin{split}\zeta_{n,\delta}^{k_{1},k_{2}}(u,v)&:= \Omega_{n}(un^{1/3},vn^{1/3})-\Omega_{n}(\hat{x}_{k_{1}},\hat{y}_{k_{2}})\\ &=X_{u}-X_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}-X_{c_{h}-u}^{(2)}+X_ {c_{h}-u_{*}-\frac{k_{1}\delta}{n^{1/9}}}^{(2)}+X_{v}^{(2)}-X_{c_{h}-u_{*}+ \frac{k_{2}\delta}{n^{1/9}}}^{(2)}\,.\end{split} \tag{3.16}\] Finally, define \[\bar{R}_{n}(k_{1},k_{2},\delta):=\sup_{(x,y)\in\mathscr{C}_{n,\delta}(k_{1},k_{2}) }\left\{\beta n^{1/6}\left|\zeta_{n,\delta}^{k_{1},k_{2}}(\frac{x}{n^{1/3}}, \frac{y}{n^{1/3}})\right|+\frac{\hat{c}_{h}}{n^{1/3}}\left|(k_{1}+k_{2})^{2} \delta^{2}n^{4/9}-(\Delta_{n}^{x,y})^{2}\right|\right\}\,,\] then we have \[\left|\log\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)-\Xi(k_{1},k_{2}, \delta)\right|\leq\bar{R}_{n}(k_{1},k_{2},\delta)+\log|\mathscr{C}_{n,\delta}( k_{1},k_{2})|. \tag{3.17}\] Since \(n^{-1/9}\log|\mathscr{C}_{n,\delta}(k_{1},k_{2})|\to 0\) as \(n\to\infty\), in the rest of the proof we have to control \(n^{-1/9}\bar{R}_{n}(k_{1},k_{2},\delta)\) and then prove the convergence of \(n^{-1/9}\Xi(k_{1},k_{2},\delta)\). Afterwards we will plug those convergences in (3.14) and (3.17) to prove Proposition 3.6. _Control of \(\bar{R}_{n}(k_{1},k_{2},\delta)\)._ We now seek a bound on \(\bar{R}_{n}(k_{1},k_{2},\delta)\). First we have \[\left|n^{1/9}\big{(}k_{1}\delta+k_{2}\delta\big{)}^{2}-\frac{( \Delta_{n}^{x,y})^{2}}{n^{1/3}}\right| \leq 2\frac{k_{1}\delta+k_{2}\delta}{n^{1/9}}\left|x+y-c_{h}n^{1/3} -k_{1}\delta n^{2/9}-k_{2}\delta n^{2/9}\right|\] \[\leq 4|k_{1}+k_{2}|\delta^{2}n^{1/9}\leq 4K\delta n^{1/9}\,.\] To control the random part \(\zeta_{n,\delta}^{k_{1},k_{2}}(xn^{-1/3},yn^{-1/3})\), we use the following proposition, that we prove afterwards. **Proposition 3.7**.: _Let \(\delta_{j}=2^{-j},j\in\mathbb{N}\), then, \(\mathbb{P}\)-almost surely, there exists a positive \(C_{\omega}\) such that for any \(n\) and \(j\) large enough, any \(k_{1},k_{2}\in\llbracket-\frac{K}{\delta_{j}},\frac{K}{\delta_{j}}\rrbracket\), we have_ \[n^{1/6}\sup_{(x,y)\in\mathscr{C}_{n,\delta_{j}}(k_{1},k_{2})}\left|\zeta_{n, \delta}^{k_{1},k_{2}}(xn^{-1/3},yn^{-1/3})\right|\leq C_{\omega}\delta_{j}^{1 /4}n^{1/9}\,.\] We will still denote this parameter by \(\delta\) while keeping in mind that \(\delta\to 0\) along a specific sequence. Assembling these results, we see that \[\frac{1}{n^{1/9}}\left|\bar{R}_{n}(k_{1},k_{2},\delta)\right|\leq\beta C_{ \omega}\delta^{1/4}+4\hat{c}_{h}K\delta=:\varepsilon(\omega,\delta)\,. \tag{3.18}\] Thus, \(n^{-1/9}\left|\bar{R}_{n}(k_{1},k_{2},\delta)\right|\) is bounded by a function \(\varepsilon(\omega,\delta)\) that goes to \(0\) as \(\delta\downarrow 0\) uniformly in \(-\frac{K}{\delta}\leq k_{1},k_{2}\leq\frac{K}{\delta}\), for almost all \(\omega\). 
_Convergence of \(n^{-1/9}\Xi(k_{1},k_{2},\delta)\)._ As in the proof of Theorem 1.3 we write \(u=k_{1}\delta\) and \(v=k_{2}\delta\) in (3.15): recalling the definition (3.3) of \(\Omega\), this leads to \[\frac{\Xi(k_{1},k_{2},\delta)}{\beta n^{1/9}}=-n^{1/18}\big{(}X_{u_{*}}-X_{u_{ *}+\frac{u}{n^{1/9}}}\big{)}-c_{h,\beta}\big{(}u+v\big{)}^{2}+n^{1/18}\big{(}X_ {c_{h}-u_{*}+\frac{v}{n^{1/9}}}^{(2)}-X_{c_{h}-u_{*}-\frac{u}{n^{1/9}}}^{(2)} \big{)}, \tag{3.19}\] with \(c_{h,\beta}=\beta^{-1}\hat{c}_{h}\). Recall Proposition 1.5 and its notation. Set \[X_{u}=X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\,,\qquad Y_{u}=X_{u}^{(1)}-X_{c_{h}-u}^{(2 )},\] and denote by \(\mathbf{B}\) a two-sided three-dimensional Bessel process and by \(\mathbf{Y}\) a standard Brownian motion, independent from \(\mathbf{B}\). Then for any \(n\geq n_{0}(\omega)\), \[n^{1/18}\big{(}X_{u_{*}}-X_{u_{*}+\frac{u}{n^{1/9}}}\big{)}=\sqrt{2}\chi\mathbf{ B}_{u}\,,\] with \(\chi=\chi(\omega,u)=\big{(}\sqrt{c_{h}-u_{*}}\mathbb{1}_{\{u\geq 0\}}+\sqrt{u_{*}} \mathbb{1}_{\{u<0\}}\big{)}^{-1}\), and \[n^{1/18}\big{(}X_{c_{h}-u_{*}+\frac{v}{n^{1/9}}}^{(2)}-X_{c_{h}-u_ {*}-\frac{u}{n^{1/9}}}^{(2)}\big{)} =\frac{n^{1/18}}{2}\left(X_{u_{*}-\frac{v}{n^{1/9}}}-Y_{u_{*}-\frac {v}{n^{1/9}}}-X_{u_{*}+\frac{u}{n^{1/9}}}+Y_{u_{*}+\frac{u}{n^{1/9}}}\right)\] \[=\frac{1}{\sqrt{2}}\Big{(}\chi(\mathbf{B}_{u}-\mathbf{B}_{v})+ \mathbf{Y}_{u}-\mathbf{Y}_{-v}\Big{)}\,.\] Assembling those results with (3.19), we established the following convergence (which is an identity for \(n\geq n_{0}(\omega)\)): \[\frac{\Xi(k_{1},k_{2},\delta)}{\beta n^{1/9}}\xrightarrow[n\to\infty]{\mathbb{ P}-a.s.}\frac{1}{\sqrt{2}}\left(-\chi(\mathbf{B}_{u}+\mathbf{B}_{v})+\mathbf{Y}_{u}- \mathbf{Y}_{-v}\right)-c_{h,\beta}\big{(}u+v\big{)}^{2}\,. \tag{3.20}\] Conclusion of the proof. If we define \(\mathcal{Y}_{u,v}:=\mathbf{Y}_{u}-\mathbf{Y}_{-v}-\chi(\mathbf{B}_{u}+\mathbf{ B}_{v})\), combining (3.18) and (3.20) with (3.17) proves \[\operatorname*{\overline{\lim}}_{n\to\infty}\left|\frac{1}{\beta n^{1/9}} \log\bar{Z}_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)-\mathcal{Y}_{k_{1}\delta,k_{2}\delta}+c_{h,\beta}(k_{1}\delta+k_{2}\delta)^{2}\right|\leq\varepsilon( \omega,\delta). \tag{3.21}\] Then, (3.14) and (3.21) lead to \[\operatorname*{\overline{\lim}}_{n\to\infty}\frac{\log\bar{\mathcal{Z}}_{n, \omega}^{\leq K}}{\beta n^{1/9}}\leq\sup_{-K\leq u,v\leq K}\left\{\frac{1}{ \sqrt{2}}\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{2\beta c_{h}^{4}}\big{(}u+v\big{)}^ {2}+\varepsilon(\delta,\omega)\right\}\quad\mathbb{P}\text{-a.s.}\,,\] and, using the uniform continuity of \(\mathcal{Y}_{u,v}\) and of \((u+v)^{2}\) on \([-K,K]^{2}\), we have \[\operatorname*{\underline{\lim}}_{n\to\infty}\frac{\log\bar{\mathcal{Z}}_{n, \omega}^{\leq K}}{\beta n^{1/9}}\geq\sup_{-K\leq u,v\leq K}\left\{\frac{1}{ \sqrt{2}}\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{2\beta c_{h}^{4}}\big{(}u+v\big{)}^ {2}-\varepsilon^{\prime}(\delta,\omega)\right\}\quad\mathbb{P}\text{-a.s.}\] Finally, letting \(\delta\) go to \(0\) proves the convergence of \(n^{-1/9}\log\bar{\mathcal{Z}}_{n,\omega}^{\leq K}\). Proof of Proposition 3.7.: Recall the definition (3.12) of \(\mathscr{C}_{n,\delta}^{-}(k_{1})\) and \(\mathscr{C}_{n,\delta}^{+}(k_{2})\) as well as \(\mathscr{C}_{n,\delta}(k_{1},k_{2})=\mathscr{C}_{n,\delta}^{-}(k_{1})\times \mathscr{C}_{n,\delta}^{+}(k_{2})\) The proof essentially boils down to the following lemma and a use of Borel-Cantelli lemma. 
**Lemma 3.8**.: _There exists some positive constants \(\lambda,\mu\) such that for any \(\delta\in(0,1)\) and any \(C\geq 1\),_ \[\sup_{n\geq 1}\sup_{-\frac{K}{\delta}\leq k_{1},k_{2}\leq\frac{K}{\delta}} \mathbb{P}\left(\sup_{(x,y)\in\mathscr{C}_{n,\delta}(k_{1},k_{2})}\Big{|}\zeta _{n,\delta}^{k_{1},k_{2}}(xn^{-1/3},yn^{-1/3})\Big{|}\geq C\frac{\delta^{1/4}} {n^{1/18}}\right)\leq\mu e^{-\lambda\frac{C}{\delta^{1/4}}}.\] Using Lemma 3.8 and a union bound immediately yields \[\sup_{n\geq 1}\mathbb{P}\left(\sup_{-\frac{K}{\delta}\leq k_{1},k_{2}\leq \frac{K}{\delta}}\sup_{(x,y)\in\mathscr{C}_{n,\delta}(k_{1},k_{2})}\Big{|} \zeta_{n,\delta}^{k_{1},k_{2}}(u,v)\Big{|}\geq C\frac{\delta^{1/4}}{n^{1/18}} \right)\leq\left(\frac{K}{\delta}\right)^{2}\mu e^{-\lambda\frac{C}{\delta^{1/4 }}}.\] Summing over \(\delta_{j}=2^{-j}\) gives a bound which is summable in \(C\): this allows us to use a Borel-Cantelli lemma. This means that with \(\mathbb{P}\)-probability \(1\), there is a positive \(C_{\omega}\) such that for all \(j\geq 0\), for all \(-K2^{j}\leq k_{1},k_{2}\leq K2^{j}\), for all \((x,y)\in\mathscr{C}_{n,\delta}(k_{1},k_{2})\), we have \(\zeta_{n,\delta}^{k_{1},k_{2}}\left(\frac{x}{n^{1/3}},\frac{y}{n^{1/3}}\right) \leq C_{\omega}\delta^{1/4}n^{-1/18}\), thus proving the proposition. Proof of Lemma 3.8.: Recall the definition (3.16) \[\zeta_{n,\delta}^{k_{1},k_{2}}(u,v)=X_{u}-X_{u_{+}+\frac{k_{1}\delta}{n^{1/9}}}-X _{c_{b}-u}^{(2)}+X_{c_{b}-u_{*}-\frac{k_{1}\delta}{n^{1/9}}}^{(2)}+X_{v}^{(2)}-X _{c_{b}-u_{*}+\frac{k_{2}\delta}{n^{1/9}}}^{(2)}\] and that \(2X_{c_{b}-t}^{(2)}=X_{t}-Y_{t}\). Then we can rewrite \(\zeta_{n,\delta}^{k_{1},k_{2}}(u,v)\) as \[\zeta_{n,\delta}^{k_{1},k_{2}}(u,v)=X_{u}-X_{u_{*}+\frac{k_{1} \delta}{n^{1/9}}}-\frac{X_{u}-Y_{u}}{2} +\frac{X_{c_{b}-v}-Y_{c_{b}-v}}{2}\] \[+\frac{X_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}-Y_{u_{*}+\frac{k_{1} \delta}{n^{1/9}}}}{2}-\frac{X_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}}-Y_{u_{*}- \frac{k_{2}\delta}{n^{1/9}}}}{2}\,,\] which simplifies to \[2\zeta_{n,\delta}^{k_{1},k_{2}}(u,v) =X_{u}+Y_{u}-(X_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}+Y_{u_{*}+\frac {k_{1}\delta}{n^{1/9}}})+X_{c_{b}-v}-Y_{c_{b}-v}-(X_{u_{*}-\frac{k_{2}\delta}{ n^{1/9}}}-Y_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}})\] \[=X_{u}-X_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}+X_{c_{b}-v}-X_{u_{*}- \frac{k_{2}\delta}{n^{1/9}}}+Y_{u}-Y_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}+Y_{u_ {*}-\frac{k_{2}\delta}{n^{1/9}}}-Y_{c_{b}-v}\,.\] We split \(\zeta_{n,\delta}^{k_{1},k_{2}}(u,v)\) into four parts corresponding to the terms in \(X\) and those in \(Y\): * \(X_{u}-X_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}\) and \(X_{c_{b}-v}-X_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}}\) which we call "meander parts" because of (3.5) * \(|Y_{u}-Y_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}|\) and \(|Y_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}}-Y_{c_{b}-v}|\) which we call "Brownian parts". We use a union bound to separately control the probability for each increment to be greater than \(\frac{C\mathcal{S}^{1/4}}{8n^{1/18}}\). _Control of the Brownian parts._ First, recall that \(Y\) and \(u_{*}\) are independent (since \(u_{*}\) is \(X\)-measurable). Thus, the Brownian reflection principle yields \[\sup_{un^{1/3}\in\mathscr{C}_{n,\delta}^{-}(k_{1})}\left|Y_{u}-Y_{u_{*}+\frac {k_{1}\delta}{n^{1/9}}}\right|\stackrel{{(d)}}{{=}}\sup_{vn^{1/3 }\in\mathscr{C}_{n,\delta}^{+}(k_{2})}\left|Y_{u_{*}-\frac{k_{2}\delta}{n^{1/9 }}}-Y_{c_{b}-v}\right|\stackrel{{(d)}}{{=}}\left|W_{\frac{ \delta}{n^{1/9}}}\right|,\] where \(W\) is a standard Brownian motion. 
This leads us to \[\mathbb{P}\left(\sup_{(u,v)n^{1/3}\in\mathscr{C}_{n,\delta}(k_{1},k_{2})} \left|Y_{u}-Y_{u_{*}+\frac{k_{1}\delta}{n^{1/9}}}\right|\geq\frac{C\delta^{1/4 }}{8n^{1/18}}\right)\leq\mathbb{P}\left(|W_{\frac{\delta}{n^{1/9}}}|\geq\frac{ C\delta^{1/4}}{8n^{1/18}}\right)\leq e^{-\frac{C^{2}}{128\sqrt{3}}}\,,\] and similarly for \(|Y_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}}-Y_{c_{b}-v}|\). _Control of the meander parts._ We have to bound the following: \[\mathbb{P}\left(\sup_{un^{1/3}\in\mathscr{C}_{n,\delta}^{-}(k_{1})}|X_{u}-X_{ u_{*}+\frac{k_{1}\delta}{n^{1/9}}}|\geq\frac{C\delta^{1/4}}{8n^{1/18}}\right) \tag{3.22}\] and similarly for \(|X_{c_{b}-v}-X_{u_{*}-\frac{k_{2}\delta}{n^{1/9}}}|\); we will focus on bounding (3.22) since the other bound follows from itb. Recall (3.5) to get Footnote b: Observe that if \(vn^{1/3}\in\mathscr{C}_{n,\delta}^{+}(k_{2})\), writing \(\tilde{v}:=c_{b}-v\) and assuming \(v\neq c_{h}-u_{*}+\frac{k_{2}\delta}{n^{1/9}}\), we have \(\left\lfloor\frac{\tilde{v}n^{1/3}-u_{*}n^{1/3}}{\delta n^{2/9}}\right\rfloor= \left\lfloor-\frac{vn^{1/3}-(c_{h}-u_{*})n^{1/3}}{\delta n^{2/9}}\right\rfloor= -1-k_{2}\), thus \((c_{h}-v)n^{1/3}\in\mathscr{C}_{n,\delta}^{+}(-1-k_{2})\). \[\sup_{un^{1/3}\in\mathscr{C}_{n,\delta}^{-}(k_{1})}|X_{u}-X_{u_{*}+\frac{k_{1} \delta}{n^{1/9}}}|=\sup_{un^{1/3}\in\mathscr{C}_{n,\delta}^{-}(k_{1})}\chi \left|\mathcal{M}_{\frac{u-u_{*}}{\chi^{2}}}-\mathcal{M}_{\frac{k_{1}\delta n^{-1/ 9}}{\chi^{2}}}\right|\,.\] Observe that \(\chi=\chi(u,\omega)\) is a constant that only depends on the sign of \(k_{1}\) and that since \(K\) and \(C\) are arbitrary chosen, we only need to get a bound on the probability of \(\sup_{um^{1/3}\in\mathscr{C}^{-}_{n,\delta}(k_{1})}|\mathcal{M}_{u-u_{*}}- \mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}}|\) being greater than \(\frac{C\delta^{1/4}}{8n^{1/38}}\). Without any loss of generality we can suppose that \(k_{1}\geq 0\): to get the case where \(k_{1}\leq 0\) we only need to do the same proof with \(|k_{1}+1|\) instead. Use Lemma B.1 and Markov's inequality to get \[\mathbb{P}\left(\sup_{um^{1/3}\in\mathscr{C}^{-}_{n,\delta}(k_{1} )}\left|\mathcal{M}_{u-u_{*}}-\mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}}\right| \geq C\frac{\delta^{1/4}}{8n^{1/18}}\right) \leq 4\mathbb{P}\left(\mathcal{M}_{\frac{(k_{1}+1)\delta}{n^{1/9 }}}-\mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}}\geq\frac{C\delta^{1/4}}{8n^{1/ 18}}\right)\] \[\leq 4\mathbb{E}\left[e^{\frac{n^{1/18}}{\sqrt{\delta}}(\mathcal{ M}_{\frac{(k_{1}+1)\delta}{n^{1/9}}}-\mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}} }\right]e^{-\frac{C}{8}\delta^{\frac{1}{4}}}.\] We show below that there is a constant \(c=c(K)>0\) such that, for \(n\) large enough \[\mathbb{E}\left[e^{\frac{n^{1/18}}{\sqrt{\delta}}(\mathcal{M}_{\frac{(k_{1}+1 )\delta}{n^{1/9}}}-\mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}})}\right]\leq c \tag{3.23}\] uniformly in \(k_{1}\in\{0,1,\ldots,K/\delta\}\) and \(0<\delta<1\), thus proving Lemma 3.8. Proof of (3.23).: We want to get an upper bound on quantities \(\mathbb{E}\left[e^{\alpha(\mathcal{M}_{v}-\mathcal{M}_{u})}\right]\) for specific \(u<v,\alpha>0\). In order to do so, we first condition on the value of \(\mathcal{M}_{u}\) and use the transition probabilities of the Brownian meander to get an upper bound, which we then integrate with respect to the law of \(\mathcal{M}_{u}\). Let us set \(\kappa^{n}_{\delta}\coloneqq k_{1}\delta n^{-1/9}\) with \(k_{1}\neq 0\) (we treat the case \(k_{1}=0\) at the end) and \(\alpha:=n^{1/18}/\sqrt{\delta}\). 
_Step 1: Meander increment conditioned on \(\mathcal{M}_{\kappa^{n}_{\delta}}\coloneqq:x\)._ We write \(\varphi_{t}(x)\coloneqq\frac{1}{\sqrt{2\pi}t}e^{-\frac{x^{2}}{2t}}\) and \(\Phi_{t}(y)\coloneqq\int_{0}^{y}\varphi_{t}(x)dx\). Using (B.1), the density of an increment between time \(\kappa^{n}_{\delta}\neq 0\) and a time \(u=(k_{1}+1)\delta n^{-1/9}=\kappa^{n}_{\delta}+\alpha^{-2}\), when starting at \(M_{\kappa^{n}_{\delta}}=x\), is given by \[\left[\varphi_{u-\kappa^{n}_{\delta}}(m)-\varphi_{u-\kappa^{n}_{\delta}}(m+2 x)\right]\frac{\Phi_{1-u}(m+x)}{\Phi_{1-\kappa^{n}_{\delta}}(x)}\mathbbm{1}_{\{m \geq-x\}}\,.\] Then we use that (recall that \(u-\kappa^{n}_{\delta}=\alpha^{-2}\)) \[\varphi_{u-\kappa^{n}_{\delta}}(m)-\varphi_{u-\kappa^{n}_{\delta}}(m+2x)= \frac{1}{\sqrt{2\pi(u-\kappa^{n}_{\delta})}}\left(e^{-\frac{m^{2}}{2(u-\kappa ^{n}_{\delta})}}-e^{-\frac{(m+2x)^{2}}{2(u-\kappa^{n}_{\delta})}}\right)\leq \frac{\alpha}{\sqrt{2\pi}}e^{-\frac{\alpha^{2}m^{2}}{2}}\,,\] and since \(u\) is taken close to \(u_{*}\) (recall \(|u-u_{*}|\leq\varepsilon_{n}\)), \[\frac{\Phi_{1-u}(m+x)}{\Phi_{1-\kappa^{n}_{\delta}}(x)}\leq\frac{x+m}{x}e^{ \frac{x^{2}}{2(1-\kappa^{n}_{\delta})}}\sqrt{\frac{1-\kappa^{n}_{\delta}}{1 -u}}\leq(cst.)\Big{(}1+\frac{m}{x}\Big{)}e^{\frac{x^{2}}{2(1-\kappa^{n}_{ \delta})}}\,.\] Thus we have to bound \[\mathbb{E}\left[\exp\left(\alpha(\mathcal{M}_{\frac{(k_{1}+1)\delta}{n^{1/9}} }-\mathcal{M}_{\kappa^{n}_{\delta}})\right)\Big{|}\,\mathcal{M}_{\kappa^{n}_{ \delta}}=x\right]\leq(cst.)\frac{\alpha}{\sqrt{2\pi}}e^{\frac{x^{2}}{2(1-\kappa^ {n}_{\delta})}}\int_{-x}^{\infty}\Big{(}1+\frac{m}{x}\Big{)}e^{\alpha m}e^{- \frac{\alpha^{2}m^{2}}{2}}\,\mathrm{d}m\,.\] Now, setting \(\Psi(m)\coloneqq e^{-\frac{(\alpha m)^{2}}{2}+\alpha m}\), after integrating by parts (writing \(m\Psi(m)=(me^{-\frac{(\alpha m)^{2}}{2}})\times e^{\alpha m}\)) we can rewrite the above as \[(cst.)\frac{\alpha e^{\frac{x^{2}}{2(1-\kappa^{n}_{\delta})}}}{\sqrt{2\pi}} \left[\int_{-x}^{\infty}\Psi(m)\,\mathrm{d}m+\frac{2}{x\alpha^{2}}\left(\Psi(- x)+\alpha\int_{-x}^{\infty}\Psi(m)\,\mathrm{d}m\right)\right]\,.\] Usual bounds for Gaussian integrals (notice that \(\Psi(m)=e^{1/2}e^{-\frac{1}{2}(am-1)^{2}}\)) then yield the upper bound conditioned to \(x=\mathcal{M}_{\kappa_{\delta}^{n}}\) \[\mathbb{E}\left[\exp\left(\alpha(\mathcal{M}_{\frac{(k_{1}+1)\delta}{n^{1/9}}}-x )\right)\left|\mathcal{M}_{\kappa_{\delta}^{n}}=x\right]\leq\frac{\alpha e^{ \frac{x^{2}}{2(1-\kappa_{\delta}^{2})}}}{\sqrt{2\pi}}\left[\left(1+\frac{2}{ \alpha x}\right)\sqrt{2\pi}\frac{1}{\alpha}e^{1/2}+\frac{2\Psi(-x)}{x\alpha^{2 }}\right]\,,\] which we simplify (using that \(\Psi(m)\leq e^{1/2}\) for all \(m\)) as \[\mathbb{E}\left[\exp\left(\alpha(\mathcal{M}_{\frac{(k_{1}+1)\delta}{n^{1/9}}} -x)\right)\left|\mathcal{M}_{\kappa_{\delta}^{n}}=x\right]\leq(cst.)e^{\frac{ x^{2}}{2(1-\kappa_{\delta}^{2})}}\left[1+\frac{1}{x\alpha}\right]\,. \tag{3.24}\] _Step 2: Averaging on \(x=\mathcal{M}_{k_{1}\delta n^{-1/9}}\)._ In order to take the expectation in (3.24), we use the following bounds, given in (B.3) (using that \(\sqrt{2\pi}\geq 2\)): for \(0<a<r/2\), \[\mathbb{E}\left[e^{a\mathcal{M}_{\epsilon}^{2}}\right]\leq(1-2ar)^{-3/2}\,\,, \qquad\mathbb{E}\left[(\mathcal{M}_{r})^{-1}e^{a\mathcal{M}_{\epsilon}^{2}} \right]\leq\frac{\sqrt{2\pi}}{\sqrt{r(1-r)}}\left(1-2ra\right)^{-1}\,. 
\tag{3.25}\] Recalling that \(\kappa_{\delta}^{n}=k_{1}\delta n^{-1/9}\leq\varepsilon_{n}\to 0\), we can use the above to get that for \(n\) sufficiently large, there is a \(C>0\) such that \[\mathbb{E}\left[e^{\frac{1}{2(1-\kappa_{\delta}^{2})}\mathcal{M}_{\kappa_{ \delta}^{2}}^{2}}\Big{(}1+\frac{1}{\alpha\mathcal{M}_{\kappa_{\delta}^{n}}} \Big{)}\right]\leq\left(1-\frac{\kappa_{\delta}^{n}}{1-\kappa_{\delta}^{n}} \right)^{-3/2}+\frac{2\sqrt{\pi}}{\alpha\sqrt{\kappa_{\delta}^{n}}}\left(1- \frac{\kappa_{\delta}^{n}}{1-\kappa_{\delta}^{n}}\right)^{-1}\leq C\,,\] recalling that \(\alpha\sqrt{\kappa_{\delta}^{n}}=\sqrt{k_{1}}\geq 1\). This proves the bound (3.23) in the case \(k_{1}>0\). _Case \(k_{1}=0\)._ When \(k_{1}=0\) we have \[\mathbb{E}\left[\exp\left(\alpha\big{(}\mathcal{M}_{\frac{(k_{1}+1)\delta}{n^ {1/9}}}-\mathcal{M}_{\frac{k_{1}\delta}{n^{1/9}}}\big{)}\right)\right]= \mathbb{E}\left[\exp\left(\alpha\mathcal{M}_{\frac{\delta}{n^{1/9}}}\right) \right]\leq 4\left(1-\frac{2}{\alpha}\right)^{-3/2}\] where we used the previous bound and the fact that \(\delta n^{-1/9}=1/\alpha^{2}\). Since \(\sqrt{\delta}n^{-1/18}=\bar{o}(1)\), this gives the bound (3.23) when \(k_{1}=0\). Combining Proposition 3.2 with the fact that the right-hand side quantity in Proposition 3.6 increases with \(K\) and thus converges almost surely as \(K\to\infty\) yields Theorem 1.6 in the case of a Gaussian environment. We only need to see that the convergence is towards a non trivial quantity, which is the object of the following lemma. **Lemma 3.9**.: \(\mathbb{P}\)_-almost surely, there exists a unique \((\mathcal{U},\mathcal{V})\) such that_ \[\mathcal{W}_{2}:=\sup_{u,v}\left\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{ h}^{4}\sqrt{2}}\big{(}u+v\big{)}^{2}\right\}=\mathcal{Y}_{\mathcal{U},\mathcal{V}}- \frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}\big{(}\mathcal{U}+\mathcal{V}\big{)} ^{2}\in(0,+\infty)\,. \tag{3.26}\] Proof.: Choose \(v=0\) to get \(\mathcal{W}_{2}\geq\sup_{u}\big{\{}\mathbf{Y}_{u}-\mathbf{B}_{u}-c_{h,\beta} \sqrt{2}u^{2}\big{\}}\). We get a positive lower bound since almost surely, there are real numbers \((u_{k})\downarrow 0\) such that \(\mathbf{Y}_{u_{k}}\geq 2\sqrt{u_{k}}\) and \(\mathbf{B}_{u_{k}}\leq\sqrt{u_{k}}\). This leads to \(\mathcal{W}_{2}\geq\sup_{k}\big{\{}\sqrt{u_{k}}-c_{h,\beta}\sqrt{2}u_{k}^{2} \big{\}}>0\) almost surely. In order to show that \(\mathcal{W}_{2}\) is almost surely finite, see \(\mathbf{B}_{u}\) as the modulus of a 3-dimensional standard Brownian motion \(W_{u}^{(3)}\) and consider a one dimensional Wiener process \(W\). We use the fact that \(t^{-1}(|W_{t}^{(3)}|+W_{t})\to 0\) almost surely to get \(\mathcal{Y}_{u,v}/|u+v|\to 0\) as \(|u+v|\to\infty\). Thus, \(\mathcal{Y}_{u,v}-c_{h,\beta}\sqrt{2}(u+v)^{2}\leq 0\)\(\mathbb{P}\)-a.s. when \(|u+v|\) is large enough, meaning that the supremum of this continuous process is almost surely taken on a compact set, thus it is finite. The existence of \((\mathcal{U},\mathcal{V})\) is also a consequence of the continuity of \(\mathcal{Y}_{u,v}-c_{h,\beta}\sqrt{2}(u+v)^{2}\) and of the fact that the supremum is \(\mathbb{P}\)-a.s. taken on a compact set. The uniqueness of the maximum follows from standard methods for Brownian motion with parabolic drift (see [5, Appendix A.3]). _Comment_. We could have taken another form of \(Z^{\omega,\beta}_{n,h}(k_{1},k_{2},\delta)\) given by (3.3), without using the process \(X\) that was only useful to reject trajectories whose minimum is too far from \(-u_{*}n^{1/3}\). 
This would have led us to the alternative form \[\mathcal{W}^{\prime}_{2}:=\sup_{u,v}\Big{\{}\bar{X}^{(1)}_{u}+\bar{X}^{(2)}_{v }-c_{h,\beta}\big{(}u+v\big{)}^{2}\Big{\}}=\mathcal{W}_{2}\] where \(\bar{X}^{(i)}_{u}\) are Brownian-related processes provided by a suitable coupling. However, these limit processes are not independent and their distribution may not be known processes, making \(\mathcal{W}^{\prime}_{2}\) less exploitable. ### Path properties at second order _Proof of Theorem 1.6-(1.8)._ The proof of (1.8) is a repeat of the proof of Lemma 2.3, this time writing \[\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}_{2}:=\left\{(u,v)\in\mathbb{R }^{2}\,:\,\sup_{(s,t)\in B_{\varepsilon}(u,v)}\Big{\{}\mathcal{Y}_{s,t}-c_{ \beta,h}(s+t)^{2}\Big{\}}\geq\mathcal{W}_{2}-\varepsilon^{\prime}>0\right\}\,,\] with \(B_{\varepsilon}(u,v)\) the Euclidean ball of radius \(\varepsilon\) centered at \((u,v)\). Define the event \[\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{2,n}:=\left\{\left(\frac{M_{ n}^{-}+u_{*}n^{1/3}}{n^{2/9}},\frac{M_{n}^{+}-(c_{h}-u_{*})n^{1/3}}{n^{2/9}} \right)\not\in\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}_{2}\right\}\,,\] then \[\log\mathbf{P}^{\omega,\beta}_{n,h}\big{(}\mathcal{A}^{\varepsilon,\varepsilon^ {\prime}}_{2,n}\big{)}=\log\mathcal{Z}^{\omega,\beta}_{n,h}\big{(}\mathcal{A} ^{\varepsilon,\varepsilon^{\prime}}_{2,n}\big{)}-\log\mathcal{Z}^{\omega, \beta}_{n,h}\,.\] Afterwards, using the definition of \(\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}_{2}\) we prove as above that \[\limsup_{n\to\infty}\frac{\sqrt{2}}{\beta n^{1/9}}\log\mathcal{Z}^{\omega, \beta}_{n,h}\big{(}\mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{2,n}\big{)} =\sup_{(u,v)\not\in\mathcal{U}^{\varepsilon,\varepsilon^{\prime}}_{2}}\Big{\{} \mathcal{Y}_{s,t}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}(s+t)^{2}\Big{\}}< \mathcal{W}_{2}-\varepsilon^{\prime}\,,\] and thus \(\limsup_{n\to\infty}n^{-1/9}\log\mathbf{P}^{\omega,\beta}_{n,h}\big{(} \mathcal{A}^{\varepsilon,\varepsilon^{\prime}}_{2,n}\big{)}<0\), proving (1.8) since \(\bigcap\limits_{\varepsilon^{\prime}>0}\mathcal{U}^{\varepsilon,\varepsilon^{ \prime}}_{2}\subset B_{2\varepsilon}((\mathcal{U},\mathcal{V}))\). ## 4 Generalizing with the Skorokhod embedding ### Proof of Theorem 1.6, case of a finite \((3+\eta)\)th moment For now, Theorem 1.6 has only been established for a Gaussian environment \(\omega\), meaning that the variables \((\omega_{z})\) are \(i.i.d\) with a normal distribution. In the following, we will explain how we can generalize those results to any random \(i.i.d.\) field with sufficient moment conditions after doing the work in Section 3. We first expand on the coupling between the random field \(\omega\) and the Brownian motions \(X^{(i)},i=1,2\). Our starting point is the following statement from [24, Chapter 7.2]. **Theorem 4.1** (Skorokhod).: _Let \(\xi_{1},\ldots,\xi_{m}\) be i.i.d. centered variables with finite second moment. 
For a Brownian motion \(W\), there exists independent positive variables \(\tau_{1},\ldots,\tau_{m}\) such that_ \[\big{(}\xi_{1},\ldots,\xi_{m}\big{)}\stackrel{{ d}}{{=}}\left(W( \tau_{1}),W(\tau_{1}+\tau_{2})-W(\tau_{1}),\ldots,W(\sum_{i=1}^{m}\tau_{i})-W( \sum_{i=1}^{m-1}\tau_{i})\right)\,.\] _Moreover, for all \(k\leq m\), we have_ \[\mathbb{E}\left[\tau_{k}\right]=\mathbb{E}\left[\xi_{k}^{2}\right]\quad\text{and} \quad\forall p>1,\exists C_{p}>0,\mathbb{E}\left[(\tau_{k})^{p}\right]\leq C_{p} \mathbb{E}\left[(\xi_{k})^{2p}\right]\,.\] The following theorem gives us asymptotic estimates for the error of this coupling. **Theorem 4.2** ([11, Theorem 2.2.4]).: _Let \((\theta_{i})\) be i.i.d. centered variables, and assume that \(\mathbb{E}\left[|\theta_{1}|^{p}\right]<\infty\) for a real number \(p\in(2,4)\). Then, if the underlying probability space is rich enough, there is a Brownian motion \(W\) such that_ \[\Big{|}\sum_{i=1}^{m}\theta_{i}-W_{m}\Big{|}=\bar{o}\big{(}m^{1/p}(\log m)^{1/ 2}\big{)}\quad a.s.\text{ as }m\to\infty\,.\] We can easily adapt this statement and choose the Wiener processes \(X^{(1)}\) and \(X^{(2)}\) to be independent Brownian motions such that, as \(n\to\infty\), \[\Big{|}\sum_{z=1}^{un^{1/3}}\omega_{-z}-n^{1/6}X^{(1)}_{u}\Big{|}\lor\Big{|} \sum_{z=0}^{un^{1/3}}\omega_{z}-n^{1/6}X^{(2)}_{v}\Big{|}=\bar{o}\big{(}(u\lor v )^{1/p}(n^{1/3})^{1/p}(\log n)^{1/2}\big{)}\] as long as \(\mathbb{E}[|\omega_{z}|^{p}]<+\infty\) for some \(p\in(2,4)\). Since in the partition function we can restrict to trajectories with \(x\) and \(y\) are taken between \(0\) and \((c_{h}+\varepsilon_{n})n^{1/3}\) (recall (1.4)), we can obtain a uniform bound over every \(u,v\) we consider, meaning that \(\mathbb{P}\)-a.s. there is some constant \(C(\omega)\) such that for all \(n\geq 1\), \[\Big{|}\sum_{z=1}^{x}\omega_{-z}-n^{1/6}X^{(1)}_{u}\Big{|}\lor\Big{|}\sum_{z=0 }^{y}\omega_{z}-n^{1/6}X^{(2)}_{v}\Big{|}\leq C(\omega)\;n^{1/3p}(\log n)^{1/ 2}\,, \tag{4.1}\] uniformly for \(|x|,|y|\leq 2c_{h}n^{1/3}\). Let us also recall the notation \(\bar{Z}_{n,h}^{\omega,\beta}=Z_{n,h}^{\omega,\beta}e^{\frac{3}{2}hT_{n}^{*}- \beta n^{1/6}X_{u_{*}}}\). Proof of Theorem 1.6 with \(\mathbb{E}\left[|\omega_{0}|^{3+\eta}\right]<\infty\).: We now repeat the proof for a Gaussian field, but with the introduction of an error term given by Theorem 4.2. Recall (3.3): with a Gaussian environment, we had \(\Sigma_{x}^{-}=X^{(1)}_{xn^{-1/3}}\) and \(\Sigma_{y}^{+}=X^{(2)}_{yn^{-1/3}}\). Now, we must introduce an error term \(E_{n}(x,y):=\Sigma_{x}^{-}-X^{(1)}_{xn^{-1/3}}+\Sigma_{y}^{+}-X^{(2)}_{yn^{-1/3}}\): the equation (3.2) becomes \[\bar{Z}_{n,h}^{\omega,\beta}\sim\psi_{h}\sin\left(\frac{u_{*}\pi}{c_{h}} \right)\sum_{\begin{subarray}{c}|x-u^{*}n^{1/3}|\leq\varepsilon_{n}n^{1/3}\\ |y-(c_{h}-u^{*}|n^{1/3}|\leq\varepsilon_{n}n^{1/3}\end{subarray}}\exp\left( \beta\bar{\Omega}_{n}^{x,y}+E_{n}(x,y)-\frac{3\pi^{2}(\Delta_{n}^{x,y})^{2}}{2 c_{h}^{4}n^{1/3}}(1+\bar{o}(1))\right), \tag{4.2}\] with \(\bar{o}(1)\) deterministic and uniform in \(x,y\), and \[\bar{\Omega}_{n}^{x,y}:=n^{1/6}\left(X^{(1)}_{xn^{-1/3}}+X^{(2)}_{yn^{-1/3}}-X _{u_{*}}\right)\,,\qquad E_{n}(x,y):=\sum_{z=-x}^{y}\omega_{z}-\bar{\Omega}_{n }^{x,y}.\] Take \(p=3+\eta\) with \(\eta\in(0,1)\), and assume that \(\mathbb{E}\left[|\omega_{0}|^{p}\right]<+\infty\). Then, using (4.1), we have \(|E_{n}(x,y)|\leq C(\omega)n^{1/(9+3\eta)}(\log n)^{1/2}\) for all summed \((x,y)\). Therefore, combining with (4.2), we get that \(\mathbb{P}\)-a.s. 
\[\Bigg{|}\log\bar{Z}_{n,h}^{\omega,\beta}-\log\sum_{\begin{subarray}{c}|x-u^{* }n^{1/3}|\leq\varepsilon_{n}n^{1/3}\\ |y-(c_{h}-u^{*})n^{1/3}|\leq\varepsilon_{n}n^{1/3}\end{subarray}}\exp\left( \beta\bar{\Omega}_{n}^{x,y}-\frac{3\pi^{2}(\Delta_{n}^{x,y})^{2}}{2c_{h}^{4} n^{1/3}}\right)\Bigg{|}\leq C(\omega)n^{\frac{1}{9+3\eta}}(\log n)^{1/2}\,.\] Since \(n^{\frac{1}{9+3\eta}}(\log n)^{1/2}=\bar{o}(n^{1/9})\), we can restrict our study to (exactly) the same sum that appeared in the Gaussian case, see (3.2). Then, (1.8) follows identically from the same proof as in Section 3.4. ### Adaptation to the case of a finite \((2+\eta)\)th moment We now explain how we can infer (1.9), _i.e._ a version of Theorem 1.6 where we only assume that \(\mathbb{E}\left[|\omega_{0}|^{2+\eta}\right]<\infty\) for some positive \(\eta\), from adapting the proofs of Section 3. We are able to prove that the relevant trajectories converge to the suspected limit for \(Z^{\omega,\beta}_{n,h}\), however some technicalities prevent us from getting the full theorem. The key observation is the following: when subtracting \(\beta\sum_{u_{*}n^{1/3}}^{(c_{h}-u_{*})n^{1/3}}\omega_{z}\) instead of \(\beta n^{1/6}X_{u_{*}}\) from \(\log Z^{\omega,\beta}_{n,h}+\frac{3}{2}hc_{h}n^{1/3}\), we precisely cancel out the \((\omega_{z})\) present in both \(\Sigma^{-}_{|M^{-}_{n}|}+\Sigma^{+}_{M^{+}_{n}}\) and \(\Sigma^{-}_{u_{*}n^{1/3}}+\Sigma^{+}_{(c_{h}-u_{*})n^{1/3}}\). This leaves us with a smaller sample of the variables \((\omega_{z})\), with size \(|M^{-}_{n}+u_{*}n^{1/3}|+|M^{+}_{n}-(c_{h}-u_{*})n^{1/3}|\) which is at most \(2\varepsilon_{n}n^{1/3}\) (see (3.2)), and of order \(n^{2/9}\) when restricting to trajectories giving the main contribution (see Proposition 3.2). We thus write \(\tilde{Z}^{\omega,\beta}_{n,h}:=Z^{\omega,\beta}_{n,h}\exp(\frac{3}{2}hc_{h}n ^{1/3}-\beta\sum_{z=-u_{*}n^{1/3}}^{(c_{h}-u_{*})n^{1/3}}\omega_{z})\), and for \(\delta^{>}_{a}(x):=\operatorname{sgn}(x-a)\), we define \[\Omega^{u_{*}}_{n,h}(x,y) :=\sum_{z=-x}^{y}\omega_{z}-\sum_{-u_{*}n^{1/3}}^{(c_{h}-u_{*})n^ {1/3}}\omega_{z}\] \[=\delta^{>}_{u_{*}}(\frac{x}{n^{1/3}})\sum_{z=x\wedge u_{*}n^{1/3 }}^{x\lor u_{*}n^{1/3}}\omega_{-z}+\delta^{>}_{c_{h}-u_{*}}(\frac{y}{n^{1/3}} )\sum_{z=y\wedge(c_{h}-u_{*})n^{1/3}}^{y\vee(c_{h}-u_{*})n^{1/3}}\omega_{z},\] to have \[\tilde{Z}^{\omega,\beta}_{n,h}\sim\psi_{h}\sin\left(\frac{u_{*} \pi}{c_{h}}\right)\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
which we formulate as \[\bar{\mathcal{Z}}_{n,\omega}^{\leq K}=(1+\bar{o}(1))\sum_{\begin{subarray}{c}|x-u _{*}n^{1/3}|\leq Kn^{2/9}\\ |y-(c_{h}-u_{*})n^{1/3}|\leq Kn^{2/9}\end{subarray}}\psi_{h}\sin\left(\frac{u_ {*}\pi}{c_{h}}\right)\exp\left(\Omega_{n,h}^{u_{*}}(x,y)-\beta c_{h,\beta}\frac {(\Delta_{n}^{x,y})^{2}}{n^{1/3}}(1+\bar{o}(1))\right)\] where \(\bar{o}(1)\) is deterministic, uniform in \((x,y)\). Afterwards, using Proposition 4.3 with \(p=2+\eta\) leads to \[\left|\log\bar{\mathcal{Z}}_{n,\omega}^{\leq K}-\log\sum_{\begin{subarray}{c }|x-u_{*}n^{1/3}|\leq Kn^{2/9}\\ |y-(c_{h}-u_{*})n^{1/3}|\leq Kn^{2/9}\end{subarray}}\exp\left(\beta\bar{\Omega }_{n}^{x,y}-\frac{3\pi^{2}(\Delta_{n}^{x,y})^{2}}{2c_{h}^{4}n^{1/3}}\right) \right|\leq C(\omega)(Kn^{2/9})^{1/p}(\log n)^{1/2}\,.\] Since we have \((Kn^{2/9})^{1/p}(\log n)^{1/2}=\bar{o}(n^{1/9})\), this shows that Proposition 3.6 still holds (the sum that remains to control is exactly the one treated in Proposition 3.6). Thus, we proved that \[\lim_{n\to+\infty}\lim_{n\to\infty}\frac{\sqrt{2}}{\beta n^{1/9}}\log\bar{ \mathcal{Z}}_{n,\omega}^{\leq K}=\sup_{-K\leq u,v\leq K}\left\{\mathcal{Y}_{u,v}-\frac{3\pi^{2}}{\beta c_{h}^{4}\sqrt{2}}\big{(}u+v\big{)}^{2}\right\}\,,\] what remains is to show that \(n^{-1/9}\log\bar{\mathcal{Z}}_{n,\omega}^{>K}\) has a non-positive limsup as \(K,n\to+\infty\) in the same spirit as Proposition 3.2. In Lemma 3.3,3.4, we used union bounds to prove that \(\mathbb{P}\left(n^{-1/9}\log\bar{\mathcal{Z}}_{n,\omega}^{>K}\geq-1\right)\to 0\) as \(K\to\infty\). If we repeat the same steps, for Lemma 3.3 we would need to compute probabilities such as (recall the notations in the proof) \[\mathbb{P}\left(\beta n^{1/6}(\mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n})+ E_{k,l}^{n}\geq c_{h}^{\prime}n^{1/9}2^{2l}L^{2}\right)\] with \[E_{k,l}^{n}:=\sup_{\begin{subarray}{c}c_{h}-u-v\in\mathcal{Z}^{l}[1,2)Ln^{-1 /9}\\ |u-u_{*}|\in[k,k+1)2^{l}Ln^{-1/9}]\end{subarray}}\left|\Omega_{n,h}^{u_{*}}(x, y)-n^{1/6}\Big{(}X_{xn^{-1/3}}^{(1)}+X_{yn^{-1/3}}^{(2)}-X_{u_{*}}\Big{)}\right|.\] Note that assuming a moment \(2+\eta\), we have \(E_{k,l}^{n}\leq C(\omega)(k2^{l}Ln^{2/9})^{1/2+\eta^{\prime}}\) for some \(0<\eta^{\prime}<\eta\). However, \(\mathcal{X}_{k,l}^{(2)}\) and \(\mathcal{M}_{k,l}^{n}\) are of order \((k2^{l}Ln^{2/9})^{1/2}\) which means that \(E_{k,l}^{n}\) should somewhat be negligible. In fact, we can prove with the exact same calculations as in Lemma 3.3 that there is a \(N\in\mathbb{N}\) such that \[\overline{\lim}_{k,l\to+\infty}\sup_{n\geq N}\mathbb{P}\left(\beta n^{1/6}( \mathcal{X}_{k,l}^{(2)}-\mathcal{M}_{k,l}^{n})+E_{k,l}^{n}\geq c_{h}^{\prime} n^{1/9}2^{2l}L^{2}\right)=0\,.\] However, with this method, the convergence rate is not fast enough, thus the union bound fails to conclude the proof. To do so, we would need another way of proving the result which is beyond the scope of this paper. Simplified model : range with a fixed bottom In this section we shall focus on a somewhat simpler model in which one of the range's edges is fixed at \(0\). The polymer is modeled by a non-negative random walk and the polymer measure is given by \[\tilde{\mathbf{P}}_{n,h}^{\omega,\beta}(S):=\frac{1}{\tilde{Z}_{n,h}^{\omega, \beta}}\exp\Big{(}-hM_{n}^{+}+\beta\sum_{i=0}^{M_{n}^{+}}\omega_{i}\Big{)}1_{ \{\forall k\leq n,S_{k}\geq 0\}}\mathbf{P}(S).\] For now, we will keep studying the case where the field \(\omega\) is composed of i.i.d. Gaussian variables. 
We once again take a Brownian motion \(X\) such that \(\frac{1}{n^{1/6}}\sum_{z=0}^{T}\omega_{z}=X_{Tn^{-1/3}}\). The partition function is given by \[\tilde{Z}_{n,h}^{\omega,\beta}=\sum_{T=1}^{+\infty}e^{-hT+\sum_{i\leq T}\omega _{i}}\mathbf{P}(\mathcal{R}_{n}=\llbracket 0,T\rrbracket)=\sum_{T=1}^{+\infty} \phi_{n}(T)e^{-hT+\sum_{i\leq T}\omega_{i}-g(T)n}\] with \(g(T)=\pi^{2}/2T^{2}\) (see [7], this is analogous to what is done in Section 1.2). It is not difficult to see that our results up to Section 3 still hold, meaning \[\frac{1}{n^{1/3}}\log\tilde{Z}_{n,h}^{\omega,\beta}\xrightarrow[n\to\infty]{ \mathbb{P}-a.s.}-\frac{3}{2}hc_{h}\,,\hskip 28.452756pt\frac{1}{\beta n^{1/6}} \left(\log\tilde{Z}_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}\right) \xrightarrow[n\to\infty]{\mathbb{P}-a.s.}X_{c_{h}}\,.\] Factorizing by \(e^{\beta n^{1/6}X_{c_{h}}}\) yields the following exponential term \[\beta\sum_{z=0}^{T}\omega_{z}-\beta n^{1/6}X_{c_{h}}-\frac{3\pi^{2}(T-T_{n}^{ *})^{2}}{2c_{h}^{4}n^{1/3}}=-\beta n^{1/6}\big{(}X_{c_{h}}-X_{Tn^{-1/3}}\big{)} -\frac{3\pi^{2}(T-T_{n}^{*})^{2}}{2c_{h}^{4}n^{1/3}}.\] **Proposition 5.1**.: _For any \(h,\beta>0\) there is a standard Brownian motion \(W\) such that \(\mathbb{P}\)-a.s.,_ \[\lim_{n\to\infty}\frac{1}{\beta n^{1/9}}\left(\log\tilde{Z}_{n,h}^{\omega, \beta}+\frac{3}{2}hc_{h}n^{1/3}-\beta n^{1/6}X_{c_{h}}\right)=\sup_{s\in \mathbb{R}}\left\{W_{s}-\frac{3\pi^{2}}{2\beta c_{h}^{4}}s^{2}\right\}. \tag{5.1}\] Proof scheme.: Since \(|X_{c_{h}}-X_{Tn^{-1/3}}|\leq Cn^{-1/6}\sqrt{|T_{n}^{*}-T|}\) with probability at least \(1-e^{-C^{2}}\), we can repeat the proof of Proposition 3.2 and restrict the trajectories. This leads to studying \(\tilde{Z}_{n,h}^{\omega,\beta}(|T_{n}^{*}-T|\leq Kn^{2/9})\) which contains all the main contributions for \(K\) large. Split over \(k\delta n^{2/9}\leq T_{n}^{*}-T\leq(k+1)\delta n^{2/9}\) and the main contribution will be given by the supremum over \(k\) of \[-\beta n^{1/6}\big{(}X_{c_{h}}-X_{c_{h}+\frac{k\delta n^{2/9}}{n^{1/3}}}\big{)} -\frac{3\pi^{2}(k\delta n^{2/9})^{2}}{2c_{h}^{4}n^{1/3}}=-\beta n^{1/6}\big{(} X_{c_{h}}-X_{c_{h}+\frac{\delta}{n^{1/9}}}\big{)}-\frac{3\pi^{2}s^{2}}{2c_{h}^{4 }}n^{1/9}\,,\] where we wrote \(s=k\delta\). We can conclude similarly to the proof of Theorem 1.6 by changing the limit process \(\mathcal{Y}_{u,v}\) to \(B\) which is the limit of the processes \(B^{(n)}=n^{1/18}\big{(}X_{c_{h}}-X_{c_{h}+\frac{u}{n^{1/9}}}\big{)}_{u}\) and is a standard Brownian motion. Once again, we can couple the Brownian motion \(X=X^{(n)}\) so that the processes \(B^{(n)}\) are equal to \(B\) when \(n\) is large, in the same fashion as Proposition 1.5. We can prove that the right-hand side of (5.1) is \(\mathbb{P}\)-a.s. positive and finite, attained at a unique point \(s_{*}\). To sum up the results of this simplified model, we write the following statement **Theorem 5.2**.: _Recall the notation of (1.11), this time with \(\tilde{Z}^{\omega,\beta}_{n,h}\). 
Then \(\mathbb{P}\)-almost surely,_ \[\tilde{f}^{(1,\frac{1}{3})}_{\omega}(h,\beta)=-\frac{3}{2}(\pi h)^{2/3}\,,\quad \tilde{f}^{(2,\frac{1}{6})}_{\omega}(h,\beta)=\beta X_{c_{h}}\,,\quad\frac{1}{ \beta}\tilde{f}^{(3,\frac{1}{6})}_{\omega}(h,\beta)=\sup_{s\in\mathbb{R}} \left\{B_{s}-\frac{3\pi^{2}}{2\beta c_{h}^{4}}s^{2}\right\}.\] Recall the following notation of (1.2): \[T_{n}:=M_{n}^{+}-M_{n}^{-}=|\mathcal{R}_{n}|-1\,,\qquad T_{n}^{*}:=\left(\frac {n\pi^{2}}{h}\right)^{1/3}=c_{h}n^{1/3}\,,\qquad\Delta_{n}:=T_{n}-T_{n}^{*}.\] **Corollary 5.3**.: _There is a vanishing sequence \((\varepsilon_{n})\) such that_ \[\limsup_{n\to\infty}\tilde{\mathbf{P}}^{\omega,\beta}_{n,h}\left(|\Delta_{n}- s_{*}n^{2/9}|\geq\varepsilon_{n}n^{2/9}\right)=0\,.\] Our goal is now to find out whether factorizing the partition function by this quantity leads to a bounded logarithm or not; in other words, we are looking for the 4th order free energy, in the spirit of Section 1.11. We develop here some heuristic to justify that the 4th order free energy is at scale \(\alpha_{4}=0\). Going forward we work conditionally to \(s_{*}\). We define \[\tilde{\mathcal{Z}}^{\omega,\beta}_{n,h}:=\tilde{Z}^{\omega,\beta}_{n,h}\exp \left(\frac{3}{2}hc_{h}n^{1/3}-\beta n^{1/6}X_{c_{h}}-\beta n^{1/9}\sup_{u} \left\{W_{u}-\frac{3\pi^{2}}{2\beta c_{h}^{4}}u^{2}\right\}\right)\phi(T_{n}^ {*})^{-1}\,. \tag{5.2}\] We first rewrite the factorized partition function \(\tilde{\mathcal{Z}}^{\omega,\beta}_{n,h}\). If we write \(T_{n}=c_{h}n^{1/3}+\Delta_{n}\) and we recall that thanks to the coupling, for \(u\) in a neighborhood of \(0\), we have \(W_{u}=n^{1/18}\left(X_{c_{h}+\frac{u}{n^{1/9}}}-X_{c_{h}}\right)\) for sufficiently large \(n\), we can rewrite \[\beta n^{1/6}\left(X_{T_{n}n^{-1/3}}-X_{c_{h}}\right)=\beta n^{1/6}\left(X_{c _{h}+\Delta_{n}n^{-1/3}}-X_{c_{h}}\right)=\beta n^{1/9}B_{\Delta_{n}n^{-2/9} }\,.\] Then, we have \[\tilde{\mathcal{Z}}^{\omega,\beta}_{n,h}\sim\sum_{|k-s_{*}n^{2/9}|\leq \varepsilon_{n}n^{2/9}}\exp\left(\beta n^{1/9}\left[\left(W_{kn^{-2/9}}-c_{h, \beta}\frac{k^{2}}{n^{4/9}}\right)-\sup_{s\in\mathbb{R}}\left\{W_{s}-c_{h, \beta}s^{2}\right\}\right]\right)\,. \tag{5.3}\] We define the process \(Y_{s}:=B_{s}-c_{h,\beta}s^{2}\) which is a Brownian motion with quadratic drift, and \(s_{*}\) the point at which it attains its maximum on \(\mathbb{R}\). (5.3) can thus be rewritten as \[\tilde{\mathcal{Z}}^{\omega,\beta}_{n,h}\sim\sum_{|k-s_{*}n^{2/9}|\leq \varepsilon_{n}n^{2/9}}\exp\left(\beta n^{1/9}\left(Y_{kn^{-2/9}}-Y_{s_{*}} \right)\right)\,. \tag{5.4}\] The exponential term is non-positive, which means that the typical trajectories for the polymer are those that minimize the difference in (5.3). _Comment_.: Previous works studied with some extent the laws of \(s_{*}\) and \(Y_{s_{*}}\) (see [19]). In particular \(s_{*}\) follows the so-called Chernov distribution, which is symmetric. Writing Ai for the Airy function, [19, Theorem 1.1] states that \[\mathbb{E}\left[s_{*}^{2}\right]=\frac{2^{-2/3}c_{h,\beta}^{-4/3}}{6i\pi}\int _{\mathbb{R}}\frac{y\,dy}{\mathrm{Ai}(iy)}<\infty\quad\text{and}\quad\forall p \in\mathbb{N},\ \mathbb{E}\left[s_{*}^{p}\right]<+\infty\,.\] In all the following, we use the fact that the distribution of \(s_{*}\) is symmetric to reduce to the case \(s_{*}>0\). We will also work conditionally on the value of \(s_{*}\), meaning on the location of the maximum of \(Y\). 
We write \(\alpha=2c_{h,\beta}s_{*}\), then observe that for any \(s>0\), \[Y_{s_{*}}-Y_{s_{*}\pm s}=B_{s_{*}}-B_{s_{*}\pm s}\pm\alpha s-c_{h,\beta}s^{2} \leq B_{s}\pm\alpha s=:R_{s}^{\pm}\,.\] Thus we have \(e^{\beta n^{1/9}(Y_{kn^{-2/9}}-Y_{s_{*}})}\leq e^{\beta n^{1/9}(R_{kn^{-2/9}}- R_{s_{*}})}\) which means that we can get an upper bound on the contribution of a given trajectory just by studying the processes \(R^{\pm}\) conditioned to be positive, provided the existence of a coupling between these processes and \(Y\). Moreover, since we are interested in the setting \(s\to 0\), we should have a lower bound that reads \(Y_{s_{*}}-Y_{s_{*}\pm s}\geq(1+o(1))R_{s}^{\pm}\) as \(s\to 0\). This motivates our first conjecture, which is an analog of Proposition 1.5. We will write \(R=R^{-}\mathbbm{1}_{\mathbb{R}^{-}}+R^{+}\mathbbm{1}_{\mathbb{R}^{+}}\). **Conjecture 5.4**.: _One can do a coupling of \((R,Y)\) and a two-sided \(\mathrm{BES}_{3}\ \tilde{\mathbf{B}}\) such that almost surely, there exists \(\delta_{1}>0\) and \(n_{1}\in\mathbb{N}\) for which \(\forall n\geq n_{1}\), for all \(|u|<\delta_{1}\)_ \[n^{1/9}R_{un^{-2/9}}=n^{1/9}(Y_{s_{*}}-Y_{s_{*}+\frac{u}{n^{2/9}}})=\tilde{ \mathbf{B}}_{u}. \tag{5.5}\] The fact that the three-dimensional Bessel process appears is mainly due to the following result from San Martin and Ramirez [21]. **Theorem 5.5**.: _Define \(X_{t}^{\alpha}:=x+W_{t}-\alpha t\), with \(\alpha,x>0\). Then the process \(X^{\alpha}\) conditioned to stay positive on \([0,T]\) converges in distribution to the Bessel process as \(T\to\infty\)._ Our second conjecture is a description of the simplified model and the idea should follow along the steps of Section 3, excluding trajectories and using Conjecture 5.4 to get an almost-sure convergence of \(\tilde{Z}_{n,h}^{\omega,\beta}\). **Conjecture 5.6**.: _There exist \(\tilde{\mathbf{B}}\) (given by (5.5)) a two-sided three-dimensional Bessel process such that for \(n\) large enough, writing \(s_{*}^{n}:=s_{*}n^{2/9}-\lfloor s_{*}n^{2/9}\rfloor\) we have_ \[\mathbf{P}_{n,h}^{\omega,\beta}\left(M_{n}^{+}=c_{h}n^{1/3}+\lfloor s_{*}n^{2 /9}\rfloor+k\right)\sim\frac{1}{\theta_{\omega}(n)}e^{-\beta\mathcal{W}_{s_{* }^{n}+k}}\,,\quad\text{with}\quad\theta_{\omega}(n):=\sum_{k\in\mathbb{Z}}e^{- \beta\mathcal{W}_{s_{*}^{n}+k}(\omega)}\,.\] **Heuristic.** To minimize \(n^{1/9}(Y_{s_{*}}-Y_{s_{*}+s})=n^{1/9}R_{s}\), since \(R_{s}\asymp\sqrt{s}-\alpha s\asymp\sqrt{s}\) with high probability when \(s\to 0\) (we are close to \(s_{*}\)) we roughly need to have \(s=\tilde{\mathcal{O}}(n^{-2/9})\). In the definition of \(\tilde{\mathcal{Z}}_{n,h}^{\omega,\beta}\), we take \(s+s_{*}=\Delta n^{-2/9}\), thus we should be able to prove that \(\tilde{\mathcal{Z}}_{n,h}^{\omega,\beta}(|\Delta_{n}-s_{*}n^{2/9}|>K)/\tilde{ \mathcal{Z}}_{n,h}^{\omega,\beta}\to 0\) when \(K\to\infty\) and \(n\) is large enough. On the other hand, for \(\tilde{\mathcal{Z}}_{n,h}^{\omega,\beta}(|\Delta_{n}-s_{*}n^{2/9}|\leq K)\), when \(n\) is large enough, we have \(Y_{kn^{-2/9}}-Y_{s_{*}}=\tilde{\mathbf{B}}_{k}\) for any \(|k|\leq K\). Thus, we should be able to prove that when \(n\to+\infty\), we have \(|\tilde{\mathcal{Z}}_{n,h}^{\omega,\beta}(|\Delta_{n}-s_{*}n^{2/9}|\leq K)- \sum_{|k|\leq K}e^{-\beta\tilde{\mathbf{B}}_{k}}|\to 0\) in similar fashion to the proof of Proposition 3.6. _Comment_. It should be possible to obtain an analog of Conjecture 5.6 in the general model for a Gaussian environment \(\omega\), which supports Conjecture 1.7. 
Obtaining such an analog would require a coupling of \(\mathcal{Y}_{\mathcal{U},\mathcal{V}}-\mathcal{Y}_{u,v}\) (recall the definitions in Theorem 1.6) with some suitable process on a small neighborhood of \((\mathcal{U},\mathcal{V})\).
## Appendix A Disorder in a domain of attraction of a Lévy process
In this section we will extend Theorem 1.3 to the case where \((\omega_{z})_{z\in\mathbb{Z}}\) is in the domain of attraction of an \(\alpha\)-stable law, with \(\alpha\in(1,2)\); we refer to [5] where the case \(\alpha<1\) is shown to have a different behavior. More precisely, we assume that the field \(\omega\) is such that \(\mathbb{E}\left[\omega_{0}\right]=0\) and that there exists \(\alpha\in(1,2)\) such that \[\mathbb{P}\left(\omega_{0}>t\right)\sim pt^{-\alpha}\quad,\quad\mathbb{P}\left(\omega_{0}<-t\right)\sim qt^{-\alpha}\quad\text{as $t\to\infty$ with $p+q=1$}.\] (A.1) This ensures that \(\frac{1}{k^{1/\alpha}}\sum_{z=0}^{k}\omega_{z}\) converges in law to an \(\alpha\)-stable random variable. Note that we treat the case of a pure power tail in (A.1), _i.e._ the normal domain of attraction to an \(\alpha\)-stable law, only for simplicity, to avoid dealing with slowly varying corrections in the tail behavior. As in the case where \(\mathbb{E}\left[\omega_{0}^{2}\right]=1\), one can define a coupling \(\hat{\omega}=\hat{\omega}^{(n)}\) such that \[\left(\frac{1}{n^{1/(3\alpha)}}\Sigma_{un^{1/3}}^{-}(\hat{\omega})\right)_{u\geq 0}\xrightarrow[n\to\infty]{a.s.}\left(X_{u}^{(1)}(\hat{\omega})\right)_{u\geq 0}\,,\quad\left(\frac{1}{n^{1/(3\alpha)}}\Sigma_{vn^{1/3}}^{+}(\hat{\omega})\right)_{v\geq 0}\xrightarrow[n\to\infty]{a.s.}\left(X_{v}^{(2)}(\hat{\omega})\right)_{v\geq 0},\] where \(X^{(1)},X^{(2)}\) are two independent \(\alpha\)-stable Lévy processes, see [5, §1.2]. If the range is of size of order \(n^{\xi}\), then we have that \(\sum_{z\in\mathcal{R}_{n}}\omega_{z}\) is of order \(n^{\xi/\alpha}\), which is negligible compared to \(n^{\xi}\) since \(\alpha>1\). Hence the disorder should be negligible at first order, and this is what is proven in [5, Thm. 1.2]: we have \[\lim_{n\to\infty}\frac{1}{n^{1/3}}\log Z_{n,h}^{\omega,\beta}=-\frac{3}{2}(\pi h)^{2/3},\quad\forall\varepsilon>0,\mathbf{P}_{n,h}^{\omega,\beta}\left(\left|n^{-1/3}|\mathcal{R}_{n}|-c_{h}\right|>\varepsilon\right)\xrightarrow[n\to\infty]{}0\,.\] Our result here is to obtain the second order asymptotic for the convergence of \(\log Z_{n,h}^{\omega,\beta}\); we deduce a result on the position of the range under \(\mathbf{P}_{n,h}^{\omega,\beta}\). **Theorem A.1**.: _Suppose that \((\omega_{z})_{z\in\mathbb{Z}}\) verifies (A.1). Then, for any \(h,\beta>0\), we have the following \(\mathbb{P}\)-a.s. convergence_ \[\lim_{n\to\infty}\frac{1}{\beta n^{1/(3\alpha)}}\left(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}\right)=\sup_{0\leq u\leq c_{h}}\left\{X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\right\}\,,\] _where \(X^{(1)}\) and \(X^{(2)}\) are two independent \(\alpha\)-stable Lévy processes. Furthermore, \(u_{*}:=\arg\max_{u\in[0,c_{h}]}\left\{X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\right\}\) exists \(\mathbb{P}\)-almost surely and_ \[\forall\varepsilon>0,\mathbf{P}_{n,h}^{\omega,\beta}\left(\left|\frac{1}{n^{1/3}}(M_{n}^{-},M_{n}^{+})-(-u_{*},c_{h}-u_{*})\right|>\varepsilon\right)\xrightarrow[n\to\infty]{}0\qquad\mathbb{P}\text{-a.s.}\]
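_Numerical aside_.: Before the proof, here is a Monte Carlo sketch (ours, not used in the argument) of the variational problem in Theorem A.1. It discretizes two independent \(\alpha\)-stable Lévy paths with SciPy's stable sampler and evaluates the supremum on a grid; the parameters are arbitrary, we take the symmetric case for simplicity, and we ignore the centering subtleties of skewed stable parameterizations.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

def stable_variational_sup(alpha, c_h, n_grid=2000, rng=rng):
    """Monte Carlo sketch of sup_{0<=u<=c_h} { X1_u + X2_{c_h-u} } for
    independent symmetric alpha-stable Levy processes X1, X2."""
    du = c_h / n_grid
    def path():
        # stable increments over a mesh du scale like du**(1/alpha)
        incr = levy_stable.rvs(alpha, 0.0, scale=du**(1.0/alpha),
                               size=n_grid, random_state=rng)
        return np.concatenate([[0.0], np.cumsum(incr)])
    X1, X2 = path(), path()
    return float(np.max(X1 + X2[::-1]))   # X2 reversed gives X2_{c_h - u}

print(stable_variational_sup(alpha=1.5, c_h=2.0))   # a.s. positive and finite
```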
Proof.: The proof is essentially the same as the one of Theorem 1.3. As in (2.1), we can write \[\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}=\log\big{(}1+\bar{o}(1)\big{)}\psi_{h}+\log\sum_{k_{1}=0}^{c_{h}/\delta}\sum_{k_{2}=\frac{c_{h}}{\delta}-k_{1}-1}^{\frac{c_{h}}{\delta}-k_{1}}Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\,,\] with \(Z_{n,h}^{\omega,\beta}(k_{1},k_{2},\delta)\) defined as in (2.3). Once again we have \[\Big{|}\sum_{z=-x}^{y}\omega_{z}-\left(\Sigma_{k_{2}\delta n^{1/3}}^{+}+\Sigma_{k_{1}\delta n^{1/3}}^{-}\right)\Big{|}\leq R_{n}^{\delta}(k_{1}\delta,k_{2}\delta)\,,\] (A.2) where the remainder \(R_{n}^{\delta}\) is defined for \(u,v\geq 0\) by \[R_{n}^{\delta}(u,v):=\max_{un^{1/3}+1\leq j\leq(u+\delta)n^{1/3}-1}\left|\Sigma_{j}^{-}-\Sigma_{un^{1/3}}^{-}\right|+\max_{vn^{1/3}+1\leq j\leq(v+\delta)n^{1/3}-1}\left|\Sigma_{j}^{+}-\Sigma_{vn^{1/3}}^{+}\right|.\] Using the coupling \(\hat{\omega}\) and Lemma A.5 of [5], we have \(\mathbb{P}\)-a.s. that \(\forall\varepsilon>0\), \(\exists n_{0}=n_{0}(\varepsilon,\delta,\omega)\) such that \(\forall n\geq n_{0}\), \[\frac{1}{n^{1/(3\alpha)}}R_{n}^{\delta}(u,v)\leq\varepsilon+\sup_{u\leq u^{\prime}\leq u+\varepsilon+\delta}\left|X_{u^{\prime}}^{(1)}-X_{u}^{(1)}\right|+\sup_{v\leq v^{\prime}\leq v+\varepsilon+\delta}\left|X_{v^{\prime}}^{(2)}-X_{v}^{(2)}\right|,\] \[\left(\left|\frac{1}{n^{1/(3\alpha)}}\Sigma_{vn^{1/3}}^{+}-X_{v}^{(2)}\right|\vee\left|\frac{1}{n^{1/(3\alpha)}}\Sigma_{un^{1/3}}^{-}-X_{u}^{(1)}\right|\right)\leq\varepsilon\,,\] uniformly in \(u,v\in U_{\delta}\) as \(U_{\delta}\) is a finite set (recall the definition (2.5) of \(U_{\delta}\)). Letting \(n\to\infty\) then \(\varepsilon\to 0\) we obtain that \(\mathbb{P}\)-almost surely, \[\limsup_{n\to\infty}\frac{1}{\beta n^{1/(3\alpha)}}\left(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}\right)\leq\sup_{\begin{subarray}{c}u,v\in U_{\delta}\\ u+v\in\{c_{h},c_{h}-\delta\}\end{subarray}}\mathcal{W}^{+}(u,v,\delta)\,,\] \[\liminf_{n\to\infty}\frac{1}{\beta n^{1/(3\alpha)}}\left(\log Z_{n,h}^{\omega,\beta}+\frac{3}{2}hc_{h}n^{1/3}\right)\geq\sup_{\begin{subarray}{c}u,v\in U_{\delta}\\ u+v\in\{c_{h},c_{h}-\delta\}\end{subarray}}\mathcal{W}^{-}(u,v,\delta)\] in which we wrote \[\mathcal{W}^{\pm}(u,v,\delta)=X_{u}^{(1)}+X_{v}^{(2)}\pm\sup_{u\leq u^{\prime}\leq u+\delta}\left|X_{u^{\prime}}^{(1)}-X_{u}^{(1)}\right|\pm\sup_{v\leq v^{\prime}\leq v+\delta}\left|X_{v^{\prime}}^{(2)}-X_{v}^{(2)}\right|.\] Using the càdlàg structure of the Lévy processes \(X^{(1)}\) and \(X^{(2)}\) we push \(\delta\) to \(0\) and get the desired convergence. Afterwards, we can use [3, Theorem 2.1] and [22, Section 3] to prove that the variational problem is positive and finite (in the sense that \(\sup_{0\leq u\leq c_{h}}\left\{X_{u}^{(1)}+X_{c_{h}-u}^{(2)}\right\}\) is almost surely positive and finite), which relies on the same reasoning as Lemma 2.2. Then, [5, Proposition 3.1] proves the existence and uniqueness of the maximizer \(u_{*}\). The proof of the second part of Theorem A.1 is exactly the proof of Lemma 2.3.
## Appendix B Technical results for the Brownian meander
Let \(W\) be a standard Brownian motion on \([0,1]\) and denote \(\tau:=\sup\left\{t\in[0,1]\::\:W_{t}=0\right\}\) its last zero. The Brownian meander on \([0,1]\) is the rescaled trajectory of \(W\) between \(\tau\) and \(1\); more precisely, it is the process \(M\) defined on \([0,1]\) by \[M_{t}:=\frac{1}{\sqrt{1-\tau}}|W_{\tau+t(1-\tau)}|\,.\]
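_Numerical aside_.: This definition translates directly into a simulation. The sketch below (ours, purely illustrative) samples a discretized Brownian motion, locates an approximation of its last zero on \([0,1]\), and rescales the remaining piece; the grid size is arbitrary and the last zero is only detected up to the mesh.

```python
import numpy as np

rng = np.random.default_rng(3)

def brownian_meander(n=10_000, rng=rng):
    """Discretized Brownian meander on [0,1], straight from the
    definition: rescale |W| after the last zero tau of W."""
    t = np.linspace(0.0, 1.0, n + 1)
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1.0/n), n))])
    changes = np.where(W[:-1] * W[1:] <= 0)[0]   # grid sign changes of W
    tau = t[changes[-1]]                          # approximate last zero
    M = np.abs(np.interp(tau + t * (1 - tau), t, W)) / np.sqrt(1 - tau)
    return t, M

t, M = brownian_meander()
print("M_1 =", M[-1])   # the endpoint follows the Rayleigh distribution
```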
Note that we could define the meander to be on any interval \([0,T]\) by changing how we rescale the trajectory, leading to a Brownian meander of duration \(T\), defined as the rescaled process \(\sqrt{\frac{T}{1-\tau}}|W_{\tau+\frac{t}{T}(1-\tau)}|\) on \([0,T]\). Recall the notation \(\varphi_{t}(x):=\frac{1}{\sqrt{2\pi t}}e^{-\frac{x^{2}}{2t}}\) and \(\Phi_{t}(y):=\int_{0}^{y}\varphi_{t}(x)dx\). The Brownian meander on \([0,1]\) is a continuous, non-homogeneous Markov process starting at \(0\), with transition kernel given by \[\mathbb{P}\left(M_{t}\in dy\,|\,M_{s}=x\right)=p^{+}(s,x,t,y)dy=\left[\varphi_{t-s}(x-y)-\varphi_{t-s}(x+y)\right]\frac{\Phi_{1-t}(y)}{\Phi_{1-s}(x)}dy\] (B.1) \[\mathbb{P}\left(M_{t}\in dy\right)=p^{+}(0,0,t,y)dy=2yt^{-3/2}e^{-\frac{y^{2}}{2t}}\Phi_{1-t}(y)dy\,.\] (B.2) For the proofs of these facts, we refer to [14] and its references. Using \(\mathbb{P}\left(M_{t}\in dy\right)\leq 2yt^{-3/2}e^{-\frac{y^{2}}{2t}}dy\) and \(\Phi_{1-t}(y)\leq y/\sqrt{2\pi(1-t)}\), we have the following estimates: for any \(a<\frac{1}{2r}\), \[\begin{split}\mathbb{E}\left[e^{aM_{r}^{2}}\right]&\leq\frac{2}{r^{3/2}\sqrt{2\pi}}\int_{0}^{\infty}ye^{-\frac{1-2ar}{2r}y^{2}}\,\mathrm{d}y=\left(1-2ra\right)^{-3/2}\,,\\ \mathbb{E}\left[(M_{r})^{-1}e^{aM_{r}^{2}}\right]&\leq\frac{1}{r^{3/2}\pi\sqrt{(1-r)}}\int_{0}^{\infty}ye^{-\frac{1-2ar}{2r}y^{2}}\,\mathrm{d}y=\frac{\sqrt{2\pi}}{\sqrt{r(1-r)}}\left(1-2ra\right)^{-1}\,.\end{split}\] (B.3) The asymmetry of the meander can be used to prove the following "reflection principle". **Lemma B.1** (Reflection principle for the meander).: _Let \(M\) be a Brownian meander, then for all \(b>0\) and all \(0\leq s<t\leq 1\),_ \[\mathbb{P}\Big{(}\sup_{0\leq r\leq t}M_{r}\geq b\Big{)}\leq 2\mathbb{P}\left(M_{t}\geq b\right)\,,\quad\mathbb{P}\Big{(}\sup_{s\leq r\leq t}|M_{r}-M_{s}|\geq b\Big{)}\leq 4\mathbb{P}\left(M_{t}-M_{s}\geq b\right)\,.\] Proof.: If we denote by \(T_{b}\) the hitting time of \(b\), we have \[\mathbb{P}\left(\sup_{0\leq r\leq t}M_{r}\geq b\right)=\int_{0}^{t}\mathbb{P}\left(T_{b}\in ds\right)=\int_{0}^{t}\mathbb{P}\left(T_{b}\in ds,M_{t}<b\right)+\int_{0}^{t}\mathbb{P}\left(T_{b}\in ds,M_{t}\geq b\right)\,.\] Now, write \(L_{b}\) for the time of the last visit to \(b\) before time \(t\); on \([L_{b},t]\) the process \(M_{r}-b\) is a Brownian bridge conditioned to be above \(-b\). We only need to see that any trajectory of \(M\) from \(b\) to \((0,b]\) which stays above \(0\) can thus be transformed into a trajectory from \(b\) to \([b,2b)\) that stays above \(0\) by reflecting the trajectory between the last visit \(L_{b}\) to \(b\) and \(t\) (see Figure 2). Since these two Brownian bridges have the same probability and \([b,2b)\subset[b,+\infty)\), this operation is injective and thus \(\mathbb{P}\left(T_{b}\in ds,M_{t}<b\right)\leq\mathbb{P}\left(T_{b}\in ds,M_{t}\geq b\right)\) for all \(s\leq t\) (note that this is a consequence of the Brownian reflection principle).
Therefore, we proved \[\mathbb{P}\left(\sup_{0\leq r\leq t}M_{r}\geq b\right)\leq 2\int_{0}^{t}\mathbb{P}\left(T_{b}\in ds,M_{t}\geq b\right)=2\mathbb{P}\left(T_{b}\leq t,M_{t}\geq b\right)=2\mathbb{P}\left(M_{t}\geq b\right)\,.\]
Figure 2: Reflection of the trajectory \(b\to(0,b]\) with respect to the horizontal line at \(b\)
If we study the supremum of an increment \(M_{r}-M_{s}\), \(s\leq r\leq t\), we only need to repeat the proof for a starting point \(M_{s}=x\) and integrate over all positions \(x\). Since the meander is a Markov process, we get \(\mathbb{P}\big{(}\sup_{s\leq r\leq t}M_{r}-M_{s}\geq b\big{)}\leq 2\mathbb{P}\left(M_{t}-M_{s}\geq b\right).\) Finally, using again the asymmetry of \(M\), we have \[\mathbb{P}\Big{(}\sup_{s\leq r\leq t}|M_{r}-M_{s}|\geq b\Big{)}\leq 2\mathbb{P}\Big{(}\sup_{s\leq r\leq t}M_{r}-M_{s}\geq b\Big{)}\,,\] hence the result. **Corollary B.2**.: _For any \(\lambda>1,a>0\) and \(0\leq s<t<\frac{1}{2}\), we have_ \[\mathbb{P}\Big{(}\inf_{s\leq r\leq t}M_{r}\leq a\Big{)}\leq\mathbb{P}\left(M_{s}\leq\lambda a\right)+\mathbb{P}\left(M_{t}\leq\lambda a\right)+\frac{4a\sqrt{2t}}{t-s}\frac{e^{-\frac{2}{t-s}a^{2}(\lambda-1)^{2}}}{1-e^{-\frac{2}{t-s}a^{2}\lambda^{2}}}\,,\] _as well as \(\mathbb{P}\left(M_{t}\leq a\right)\leq\frac{4a}{\sqrt{\pi t}}\left(1\wedge\frac{a^{2}}{2t}\right).\)_ Proof.: We decompose the probability according to whether \(M_{s},M_{t}\leq\lambda a\), meaning we only have to consider \(\mathbb{P}\big{(}\inf_{s\leq r\leq t}M_{r}\leq a,M_{s}>\lambda a,M_{t}>\lambda a\big{)}\). For this, we first use Brownian bridge estimates: see that for any \(z,w,T>0\), we have \[\mathbb{P}_{z}\left(W_{T}\in dw,\inf_{t\in[0,T]}W_{t}>0\right)=\frac{1}{\sqrt{2\pi T}}\left(e^{-\frac{1}{2T}(z-w)^{2}}-e^{-\frac{1}{2T}(z+w)^{2}}\right)dw\] \[\mathbb{P}_{z}\left(W_{T}\in dw\right)=\frac{1}{\sqrt{2\pi T}}e^{-\frac{1}{2T}(z-w)^{2}}dw\] thus we have \[\mathbb{P}\left(\inf_{t\in[0,T]}W_{t}^{z\to w}>0\right)=1-e^{\frac{1}{2T}(z-w)^{2}-\frac{1}{2T}(z+w)^{2}}=1-e^{-\frac{2}{T}zw}\,.\] (B.4) For any \(\alpha>0\) and \(z,w>\alpha\), we define \[P_{T}^{\alpha}(z,w):=\mathbb{P}\left(\inf_{t\in[0,T]}W_{t}^{z\to w}\leq\alpha\,|\,\inf_{t\in[0,T]}W_{t}^{z\to w}>0\right)=1-\frac{\mathbb{P}\left(\inf_{t\in[0,T]}W_{t}^{z\to w}>\alpha\right)}{\mathbb{P}\left(\inf_{t\in[0,T]}W_{t}^{z\to w}>0\right)}\,.\] Then, using (B.4) with \(z,w,z-\alpha,w-\alpha>0\), we can deduce \[P_{T}^{\alpha}(z,w)=1-\frac{1-e^{-\frac{2}{T}(z-\alpha)(w-\alpha)}}{1-e^{-\frac{2}{T}zw}}=\frac{e^{-\frac{2}{T}(z-\alpha)(w-\alpha)}-e^{-\frac{2}{T}zw}}{1-e^{-\frac{2}{T}zw}}\,.\] (B.5) Consider the mapping \(f_{T}:(x,y)\mapsto e^{-\frac{2}{T}xy}\).
Using the mean value theorem, there is a \(c\in[0,1]\) such that \[f_{T}(z,w)-f_{T}(z-\alpha,w-\alpha) =\nabla f_{T}\Big{(}(1-c)\begin{pmatrix}z\\ w\end{pmatrix}+c\begin{pmatrix}z-\alpha\\ w-\alpha\end{pmatrix}\Big{)}\cdot\Big{(}\begin{pmatrix}z\\ w\end{pmatrix}-\begin{pmatrix}z-\alpha\\ w-\alpha\end{pmatrix}\Big{)}\] \[=-\frac{2\alpha}{T}(z+w-2c\alpha)e^{-\frac{2}{T}(z-c\alpha)(w-c \alpha)}\,.\] Injecting in (B.5), this yields \[P_{T}^{\alpha}(z,w)=\frac{2\alpha}{T}(z+w-2c\alpha)\frac{e^{-\frac{2}{T}(z-c \alpha)(w-\alpha)}}{1-e^{-\frac{2}{T}zw}}\leq\frac{2\alpha}{T}(z+w)\frac{e^{- \frac{2}{T}(z-c\alpha)(w-c\alpha)}}{1-e^{-\frac{2}{T}zw}}\,.\] (B.7) In particular, if we assume \(z,w\geq\lambda\alpha\) for some \(\lambda>1\), then \(f_{T}(z,w)\leq f_{T}(\lambda a,\lambda a)\) and we obtain \[P_{T}^{\alpha}(z,w)\leq\frac{2\alpha}{T}(z+w)\frac{e^{-\frac{2}{T}\alpha^{2}( \lambda-c)^{2}}}{1-e^{-\frac{2}{T}\alpha^{2}\lambda^{2}}}\,.\] Therefore, for any \(\lambda>1\) and \(a>0\), \[\begin{split}\mathbb{P}\left(\inf_{s\leq r\leq t}M_{r}\leq a,M_ {s}>\lambda a,M_{t}>\lambda a\right)&=\mathbb{E}\left[P_{t-s}^{ a}(M_{s},M_{t})\mathbbm{1}_{\{M_{s},M_{t}\geq\lambda a\}}\right]\\ &\leq\frac{2a}{t-s}\frac{e^{-\frac{2}{t-s}a^{2}(\lambda-c)^{2}}}{ 1-e^{-\frac{2}{t-s}a^{2}\lambda^{2}}}\mathbb{E}\left[M_{s}+M_{t}\right]\,, \end{split}\] (B.8) and we compute \(\mathbb{E}\left[M_{t}\right]\leq\frac{2\sqrt{2}}{\pi}\sqrt{t}\leq\sqrt{2t}\) for \(t<1/2\) to get the desired result. On the other hand, using (B.2), we write for \(0<t<\frac{1}{2}\) \[\begin{split}\mathbb{P}\left(\mathcal{M}_{t}\leq a\right)& =\frac{2}{t^{3/2}}\int_{0}^{a}ye^{-\frac{y^{2}}{2t}}\int_{0}^{y} \frac{e^{-\frac{u^{2}}{2(1-t)}}du}{\sqrt{2\pi(1-t)}}dy\leq\frac{2}{t^{3/2}} \int_{0}^{a}ye^{-\frac{y^{2}}{2t}}\int_{0}^{y}\frac{du}{\sqrt{2\pi(1-t)}}dy\\ &\leq\frac{2at^{-3/2}}{\sqrt{2\pi(1-t)}}\int_{0}^{a}ye^{-\frac{y^ {2}}{2t}}dy=\frac{4a(1-e^{-\frac{u^{2}}{2t}})}{\sqrt{2\pi t(1-t)}}\leq\frac{4 a}{\sqrt{\pi t}}\left(1\wedge\frac{a^{2}}{2t}\right)\,.\qed\end{split}\] Let us mention that a process related to the meander is the \(3\)-dimensional Bessel process \(B\). It can be defined as the solution of the SDE \(\mathrm{d}B_{t}=\mathrm{d}W_{t}+B_{t}^{-1}\mathrm{d}t\), or as the sum \(B_{t}=|W_{t}|+L_{t}\) where \(L\) is the local time of \(W\) at \(0\); it is a homogeneous Markov process that has the Brownian scaling property \((B_{\alpha t})_{t}\stackrel{{ d}}{{=}}(\sqrt{\alpha}B_{t})_{t}\). We refer to [23] for those results. The link between the Bessel process and the meander is given by the following result. **Proposition B.3**.: _The law \(\mathbb{P}^{+,T}\) of the Brownian meander on \([0,T]\) has a density with respect to \(\mathbb{P}^{B}\) the law of the three-dimensional process: if \(X\) is the canonical process, we have_ \[\mathbb{P}^{+,T}(A,X_{T}\in dx)=\frac{1}{x}\sqrt{\frac{\pi T}{2}}\,\mathbb{P} ^{B}(A,X_{T}\in dx)\,.\] _In particular, \(\forall\alpha>0,\forall s\leq T,\mathbb{P}^{+,\alpha T}(X_{as}\in dx)= \mathbb{P}^{+,T}(\sqrt{\alpha}X_{s}\in dx)\)._ Proof.: The formula for the density can be found in [17, Section 4]. 
Afterwards, for any positive measurable function \(f\) and any \(\alpha>0\), we have \[\mathbb{E}^{+,\alpha T}\left[f\Big{(}\frac{X_{\alpha s}}{\sqrt{\alpha}}\Big{)}\right]=\mathbb{E}^{B}\left[\frac{1}{X_{\alpha T}}\sqrt{\frac{\pi\alpha T}{2}}f\Big{(}\frac{X_{\alpha s}}{\sqrt{\alpha}}\Big{)}\right]=\sqrt{\frac{\pi}{2}}\mathbb{E}^{B}\left[\frac{\sqrt{T}}{X_{T}}f(X_{s})\right]=\mathbb{E}^{+,T}\left[f(X_{s})\right]\,.\]
## Appendix C Coupling of a Brownian meander, a three-dimensional Bessel process and a Brownian excursion
In this section we will expand on the way we can construct our different processes to obtain the almost sure results of Theorems 1.3 and 1.6. In particular we want the following result: \[\frac{1}{n^{1/6}}\sum_{z=-un^{1/3}}^{vn^{1/3}}\omega_{z}\xrightarrow[n\to\infty]{a.s.}X_{u}^{(1)}+X_{v}^{(2)}\quad\text{and}\quad n^{1/18}(X_{u_{*}+\frac{u}{n^{1/9}}}-X_{u_{*}})\xrightarrow[n\to\infty]{a.s.}\mathbf{B}_{u}\,.\] Skorokhod's embedding theorem (Theorem 4.1) allows us to sample the Brownian motions \(X^{(i)},i=1,2\) to get a new environment \(\hat{\omega}^{(n)}\) and obtain the first convergence. Thus we must find how we can couple both processes \(X^{(i)}\) to the processes \(\mathbf{B},\mathbf{Y}\) in Theorem 1.6, that is we need to prove Proposition 1.5. This is based on two intermediate results, Lemmas C.1 and C.2 below, which couple a meander, resp. a three-dimensional Bessel process, to a Brownian excursion. **Lemma C.1** ([6, Theorem 2.3]).: _Let \(\mathbf{e}\) be a standard Brownian excursion and \(U\) a uniform variable on \([0,1]\). Then, the process \(M_{t}=\mathbf{e}_{t}\mathbbm{1}_{\{t\leq U\}}+(\mathbf{e}_{U}+\mathbf{e}_{1-(t-U)})\mathbbm{1}_{\{t>U\}}\) is a Brownian meander on \([0,1]\). In particular, there exists a coupling of the Brownian meander \(M\) and the Brownian excursion \(\mathbf{e}\) on \([0,1]\) such that \(M_{t}=\mathbf{e}_{t}\) if \(t\leq U\)._ **Lemma C.2**.: _For any \(T\in[0,1]\), there exists a coupling of the Brownian excursion \(\mathbf{e}\) on \([0,1]\) and the three-dimensional Bessel process \(\mathbf{B}\) such that there is a positive \(\varepsilon(\omega)\) for which we have \(\mathbf{B}_{t}=\mathbf{e}_{t}\) for any \(t\in[0,\varepsilon(\omega)]\)._ Proof.: It is known (see for example [18, p79]) that the Brownian excursion can be decomposed into two Bessel bridges of duration \(\frac{1}{2}\) joining at a point \(V\) whose law has density \(\frac{16}{\sqrt{2\pi}}v^{2}e^{-2v^{2}}\). Thus we only need to define a coupling between a \(3d\)-Bessel process \(\mathbf{B}\) and a \(3d\)-Bessel bridge \(\mathbf{B}^{\prime}\) with duration \(\frac{1}{2}\) and endpoint \(V\). We use the fact that both processes can be realized by the modulus of a three-dimensional Brownian motion. Consider two independent, three-dimensional Brownian bridges \(X\) and \(Y\) of duration \(1/2\), such that \(X_{0}=x\in\mathbb{R}^{3}\) (resp. \(Y_{0}=y\in\mathbb{R}^{3}\)) and \(X_{\frac{1}{2}}=Y_{\frac{1}{2}}=0\). Denote \(\tau:=\inf\left\{0\leq t\leq\frac{1}{2}\,:\,|X_{t}|=|Y_{t}|\right\}\) the first time \(X\) and \(Y\) have the same modulus. We have the following result. **Lemma C.3**.: _Almost surely, there exists \(\varepsilon(\omega)>0\) such that \(\tau\leq\frac{1}{2}-\varepsilon(\omega)\)._ Using this lemma, we can conclude the construction of the coupling.
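_Numerical aside_.: Lemma C.3 can also be observed numerically. The sketch below (ours; a crude Euler discretization with arbitrary starting points) simulates two independent three-dimensional Brownian bridges of duration \(\frac{1}{2}\) and locates the first time their moduli agree, which should typically be bounded away from \(\frac{1}{2}\), up to discretization effects.

```python
import numpy as np

rng = np.random.default_rng(4)

def bridge3d(start, n, T=0.5, rng=rng):
    """3d Brownian bridge from `start` to 0 of duration T on an Euler grid,
    via B_t = start*(1 - t/T) + W_t - (t/T)*W_T."""
    t = np.linspace(0.0, T, n + 1)
    W = np.vstack([np.zeros(3),
                   np.cumsum(rng.normal(0, np.sqrt(T/n), (n, 3)), axis=0)])
    return start * (1 - t/T)[:, None] + W - (t/T)[:, None] * W[-1]

n = 100_000
X = bridge3d(np.array([1.0, 0.0, 0.0]), n)
Y = bridge3d(np.array([0.0, 0.5, 0.0]), n)
diff = np.linalg.norm(X, axis=1) - np.linalg.norm(Y, axis=1)
cross = np.argmax(diff[:-1] * diff[1:] <= 0)   # first sign change of |X|-|Y|
print("tau ~", cross * 0.5 / n)                # empirically < 1/2
```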
After time \(\tau\), we define a coupling by taking the trajectory of \(X\) between \(\tau\) and \(\frac{1}{2}\) and plugging it at \(Y_{\tau}\) after a rotation: \[\text{write }X_{t}=|X_{t}|e^{i\theta_{t}^{X}},Y_{t}=|Y_{t}|e^{i\theta_{t}^{Y}}\text{ and define }\hat{Y}_{t}=\begin{cases}Y_{t}&\text{if}\quad t\leq\tau\,,\\ |X_{t}|e^{i\theta_{t}^{X}+i(\theta_{\tau}^{Y}-\theta_{\tau}^{X})}&\text{if}\quad\tau<t\leq\frac{1}{2}\,.\end{cases}\] The new process \(\hat{Y}\) is such that for every \(t\in[\tau,\frac{1}{2}]\), we have \(|X_{t}|=|\hat{Y}_{t}|\). Recall that the Brownian bridge is a diffusion process (as the solution to an SDE), thus is Markovian, and \(\tau\) is a stopping time for both processes \(X\) and \(Y\). It follows that \(\hat{Y}\) is a Brownian bridge between \(y\) and \(0\). To create the coupling between the two Bessel processes \(\mathbf{B}\) and \(\mathbf{B}^{\prime}\), we choose the starting points \(x\) and \(y\) so that they respectively correspond to \(W_{\frac{1}{2}}\) (with \(W\) a \(3d\)-Brownian motion) and a uniform variable on the sphere centered at \(0\) of radius \(V\). Then the processes \(\mathbf{B}_{t}=|X_{\frac{1}{2}-t}|\) and \(\mathbf{B}^{\prime}_{t}=|\hat{Y}_{\frac{1}{2}-t}|\) are Bessel processes starting at \(0\) that coincide on \([0,\frac{1}{2}-\tau]\) and such that \(\mathbf{B}^{\prime}_{\frac{1}{2}}=V\). In particular, the Bessel process \(\mathbf{B}\) and the Brownian excursion \(\mathbf{e}\) coincide on \([0,\frac{1}{2}-\tau]\). Proof of Lemma C.3.: On \([0,\frac{1}{2}]\), consider \(\mathbf{B}\) a \(3\)-dimensional Bessel process starting at \(0\) and \(\mathbf{e}\) the Brownian excursion, which is a Bessel bridge of duration \(\frac{1}{2}\) starting at \(0\) and ending at \(V\). We define \(I_{s,t}:=\left\{\forall r\in(s,t),\mathbf{e}_{r}\neq\mathbf{B}_{r}\right\}\) the event on which \(\mathbf{e}\) and \(\mathbf{B}\) do not intersect between \(s\) and \(t\). From [17, (3.1)], we have \(\mathbb{P}_{x}\left(A,\mathbf{B}_{t}\in dz\right)=\frac{z}{x}\mathbb{P}_{x}\left(A,W_{t}\in dz,H_{0}>t\right)\), where \(W\) is a Brownian motion and \(H_{0}\) its first hitting time of \(0\). Then for any \(\varepsilon>0\), conditioning on the values of \((\mathbf{e}_{\varepsilon},\mathbf{B}_{\varepsilon})\) and \((V,\mathbf{B}_{t})\), we can write \[\mathbb{P}\left(I_{0,t}\right)\leq\mathbb{E}\left[\mathbb{P}\left(I_{\varepsilon,t}\,|\,\mathbf{e}_{\varepsilon},\mathbf{B}_{\varepsilon}\right)\right]\leq\mathbb{E}\left[\frac{\mathbf{e}_{t}\mathbf{B}_{t}}{\mathbf{e}_{\varepsilon}\mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon}\to\mathbf{e}_{t}}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon)\right)\right]\,,\] where we have defined \[\mathscr{I}_{x\to y}^{a\to b}(T):=\left\{\forall r\in(0,T),W_{T}^{x\to y}(r)>0,W_{T}^{a\to b}(r)>0,W_{T}^{x\to y}(r)\neq W_{T}^{a\to b}(r)\right\}\,,\] in which \(W_{T}^{a\to b}\) is a Brownian bridge \(a\to b\) of duration \(T\) (resp. for \(x\to y\)). We are interested in taking \(t=1/2\), but this result could be used for any fixed \(t>0\), in the sense that the Bessel process and the Brownian excursion almost surely cross each other on \((0,t]\) for any fixed \(t\). Take a positive \(C>0\) to be chosen later (we will choose \(C=\varepsilon^{-1/8}\)).
Then, we first get a bound using Cauchy-Schwartz inequality twice: \[\mathbb{E}\left[\frac{|\mathbf{e}_{t}\mathbf{B}_{t}|}{\mathbf{e}_{\varepsilon }\mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon }\to V}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon)\right) \mathbb{1}_{\left\{\mathbf{B}_{t}\lor V>C\right\}}\right]\leq\mathbb{E}\left[ \frac{1}{(\mathbf{e}_{\varepsilon}\mathbf{B}_{\varepsilon})^{2}}\right]^{ \frac{1}{2}}\mathbb{E}\left[(\mathbf{e}_{t}\mathbf{B}_{t})^{4}\right]^{\frac{ 1}{4}}\mathbb{P}\left(\mathbf{B}_{t}\lor V>C\right)^{\frac{1}{4}}\,.\] Since \(\varepsilon<t\) and \(\mathbf{e},\mathbf{B}\) are independent, we have \(\mathbb{E}\left[(\mathbf{e}_{t}\mathbf{B}_{t})^{4}\right]\leq c(t)\) and \[\mathbb{E}\left[\frac{1}{(\mathbf{e}_{\varepsilon}\mathbf{B}_{\varepsilon})^{ 2}}\right]\leq\frac{2}{\sqrt{\pi}}\Gamma(\frac{3}{2})(1-\varepsilon)^{-\frac{3 }{2}}\varepsilon^{-3}\int_{\mathbb{R}_{+}^{2}}e^{-\frac{x^{2}}{2c}}e^{-\frac{ y^{2}}{2c(1-c)}}dxdy\leq(1-\varepsilon)^{-1}\varepsilon^{-2}\,,\] where we used the transition probabilities for the Bessel process [23, VI SS3 Prop. 3.1], the Brownian excursion [18, Section 2.9 (3a)] and \(\Gamma(\frac{3}{2})=\sqrt{\pi}/2\). Finally we compute \(\mathbb{P}\left(\mathbf{B}_{t}\lor V>C\right)\leq e^{-C^{2}/t^{1/6}}\) to get \[\mathbb{E}\left[\frac{|\mathbf{e}_{t}\mathbf{B}_{t}|}{\mathbf{e}_{\varepsilon }\mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon }\to V}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon)\right) \mathbb{1}_{\left\{\mathbf{B}_{t}\lor\mathbf{v}\mathbf{r}>C\right\}}\right] \leq c_{t}(1-\varepsilon)^{-\frac{1}{2}}\varepsilon^{-2}e^{-\frac{C^{2}}{t^{1/ 6}}}\,.\] (C.1) On the other hand, \[\mathbb{E}\left[\frac{|\mathbf{e}_{t}\mathbf{B}_{t}|}{\mathbf{e}_{\varepsilon }\mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon }\to\mathbf{e}_{t}}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon) \right)\mathbb{1}_{\left\{\mathbf{B}_{t}\lor\mathbf{v}\mathbf{r}\leq C\right\}} \right]\leq\mathbb{E}\left[\frac{C^{2}}{\mathbf{e}_{\varepsilon}\mathbf{B}_{ \varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon}\to \mathbf{e}_{t}}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon) \right)\right]\,.\] (C.2) We will use the following lemma to get a bound on \(\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon}\to\mathbf{e}_{t}}^{ \mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon)\right)\). 
**Lemma C.4**.: _For any \(T>0\), there is a \(C_{T}>0\) such that for any \(x,y,a,b>0\),_ \[\mathbb{P}\left(\mathscr{I}_{x\to y}^{a\to b}(T)\right)\leq C_{T}(x^{2}+a^{2})^{ 2}(y^{2}+b^{2})^{2}\,.\] (C.3) Thus, using Lemma C.4 in (C.2), we have the upper bound \[\mathbb{E}\left[\frac{|\mathbf{e}_{t}\mathbf{B}_{t}|}{\mathbf{e}_{\varepsilon }\mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon }\to\mathbf{e}_{t}}^{\mathbf{B}_{\varepsilon}\to\mathbf{B}_{t}}(t-\varepsilon) \right)\mathbb{1}_{\left\{\mathbf{B}_{t}\lor\mathbf{v}\mathbf{r}\leq C\right\}} \right]\leq C^{6}C_{t,\varepsilon}\mathbb{E}\left[\frac{1}{\mathbf{e}_{ \varepsilon}\mathbf{B}_{\varepsilon}}\Big{(}(\mathbf{e}_{\varepsilon})^{2}+( \mathbf{B}_{\varepsilon})^{2}\Big{)}^{2}\right]\,,\] and we compute \[\mathbb{E}\left[\frac{\Big{(}(\mathbf{e}_{\varepsilon})^{2}+( \mathbf{B}_{\varepsilon})^{2}\Big{)}^{2}}{\mathbf{e}_{\varepsilon}\mathbf{B}_{ \varepsilon}}\right] =\frac{2}{\sqrt{\pi}}\Gamma(\frac{3}{2})(1-\varepsilon)^{-\frac{3 }{2}}\varepsilon^{-3}\int_{\mathbb{R}_{+}^{2}}(x^{2}+y^{2})^{2}xye^{-\frac{x^{2}}{2 c}}e^{-\frac{y^{2}}{2c(1-c)}}dxdy\] \[\leq(cst.)\varepsilon^{-3}\int_{\mathbb{R}_{+}^{2}}\varepsilon^{ 2}(u^{2}+v^{2})^{2}uv\varepsilon e^{-\frac{y^{2}}{2}}e^{-\frac{v^{2}}{2}} \varepsilon dudv\leq(cst.)\varepsilon\,.\] to get \[\mathbb{E}\left[\frac{|\mathbf{e}_{t}\mathbf{B}_{t}|}{\mathbf{e}_{\varepsilon} \mathbf{B}_{\varepsilon}}\mathbb{P}\left(\mathscr{I}_{\mathbf{e}_{\varepsilon} \rightarrow\mathbf{e}_{t}}^{\mathbf{B}_{\varepsilon}\rightarrow\mathbf{B}_{t }}(t-\varepsilon)\right)\mathbbm{1}_{\{\mathbf{B}_{t}\vee\mathbf{e}_{t} \leq C\}}\right]\leq C^{6}C_{t}\varepsilon\,.\] (C.4) Thus, assembling (C.1) and (C.4) while taking \(C=\varepsilon^{-1/8}\), for any \(t>0\) we then have \[\mathbb{P}\left(I_{0,t}\right)\leq C_{t}\left(\varepsilon^{-1}e^{-\frac{C^{2} }{4(t-\varepsilon)^{1/6}}}+C^{6}\varepsilon\right)\leq C_{t}\left(\varepsilon ^{-1}\exp\Big{(}-\frac{\varepsilon^{-1/4}}{2}\Big{)}+\varepsilon^{1/4}\right) \xrightarrow[\varepsilon\to 0]{}0\,.\] This means that \(\mathbb{P}\left(I_{0,t}\right)=0\) and in particular, taking \(t=\frac{1}{2}\), one can almost-surely find a positive \(\varepsilon\) such that Lemma C.3 is true. Proof of Lemma c.4.: We can assume \(0<x<a\) and \(0<y<b\) (otherwise the probability is zero), then we have \[\mathbb{P}\left(\mathscr{I}_{x\to y}^{a\to b}(T)\right)=\mathbb{P} \left(\forall r\in[0,T],0<W_{T}^{x\to y}(r)<W_{T}^{a\to b}(r)\right).\] (C.5) Observe that (C.5) is exactly the probability for the Brownian bridge \(W_{T}^{x\to y,a\to b}:=(W_{T}^{x\to y},W_{T}^{a\to b})\) to stay in the cone \(\mathscr{C}:=\{(x,y)\in\mathbb{R}^{2}\,:\,0\leq x\leq y\}\) for a time \(T\), meaning \[\mathbb{P}\left(\mathscr{I}_{x\to y}^{a\to b}(T)\right)=\mathbb{P} \left(\forall t\in[0,T],W_{T}^{x\to y,a\to b}(t)\in\mathscr{C}\right).\] The isotropy of Brownian motion allows us to consider instead \(\hat{\mathscr{C}}:=\left\{re^{i\theta},0\leq\theta\leq\frac{\pi}{4}\right\}\). **Lemma C.5**.: _Let \(W^{z\to z^{\prime}}\) be a two dimensional Brownian bridge from \(z\) to \(z^{\prime}\). 
Then, there is a positive \(C_{T}\) such that uniformly as \(|z|\to 0\) we have_ \[\mathbb{P}\left(\forall t\in[0,T],W_{t}^{z\to z^{\prime}}\in\hat{ \mathscr{C}}\right)=(1+\bar{o}(1))C_{T}|z|^{4}|z^{\prime}|^{4}\sin\left(4\arg z \right)\sin\left(4\arg z^{\prime}\right)\,.\] Proof.: Recall that we identify \(\mathbb{R}^{2}\) with \(\mathbb{C}\), by writing \(W\) for a standard two-dimensional Brownian motion, we have \[\mathbb{P}\left(\forall t\in[0,T],W_{t}^{z\to z^{\prime}}\in\hat{ \mathscr{C}}\right) =\lim_{\eta\to 0}\frac{\mathbb{P}_{x}\left(\forall t\in[0,T],W_{t} \in\hat{\mathscr{C}},W_{T}\in B(z^{\prime},\eta)\right)}{\mathbb{P}_{z}\left( W_{T}\in B(z^{\prime},\eta)\right)}\] \[=\lim_{\eta\to 0}\left(C(T)\eta^{2}e^{-|z^{\prime}|^{2}/2T} \right)^{-1}\int_{B(z^{\prime},\eta)}K_{T}^{\hat{\mathscr{C}}}(z,w)dw\,,\] where \(K_{T}^{\hat{\mathscr{C}}}(z,w)\) is the heat kernel killed on exiting \(\hat{\mathscr{C}}\) and \(B(z,r)\) is the ball of radius \(r\) centered at \(z\). The key ingredient is the following statement, which is a consequence of [12, Lemma 18 - (32)]: as \(\delta\to 0\), uniformly in \(|z|\leq\delta\sqrt{T},|w|\leq\sqrt{T/\delta}\), we have \[K_{T}^{\hat{\mathscr{C}}}(z,w)\sim\frac{\chi_{0}}{T^{5}}e^{-|w|^{2}/2T}u(w)u(z) \quad\text{for some }\chi_{0}>0.\] where \(u(re^{i\theta}):=r^{4}\sin(4\theta)\) (this expression is given in [12, (3)]). This result is also stated in [2, Corollary 1]. In particular, as \(|z|\to 0\), \[\mathbb{P}\left(\forall t\in[0,T],W_{t}^{z\to z^{\prime}}\in\hat{ \mathscr{C}}\right) \sim\lim_{\eta\to 0}\left(C(T)\eta^{2}e^{-|z^{\prime}|^{2}/2T} \right)^{-1}\int_{B(z^{\prime},\eta)}\frac{\chi_{0}}{T^{5}}e^{-|w|^{2}/2T}u(w)u( z)dw\] \[=\lim_{\eta\to 0}e^{|z^{\prime}|^{2}/2T}\frac{\chi_{0}}{T^{5}}e^{-|z ^{\prime}|^{2}/2T}u(z^{\prime})u(z)\frac{\operatorname{Vol}(B(z^{\prime},\eta)) }{C(T)\eta^{2}}(1+h(\eta))\,,\] with \(h(\eta)\to 0\) and \(\operatorname{Vol}(B(z^{\prime},\eta))=\pi\eta^{2}\), leading us to the formula of Lemma C.5. _Comment_.: We could also use the fact that \(\mathscr{C}\) is the Weyl chamber \(B_{2}\), thus we can use results from [16, SS5.3] after a time scaling by \(\varepsilon\) to have that the probability in (C.2) is of order \((\varepsilon/t)^{2}\), which is ultimately what we proved. Thus, we proved Lemma C.4 by injecting \(z=(x,a)\) and \(z^{\prime}=(y,b)\). Assembling Lemmas C.1 and C.2 yields that one can do a coupling of the Brownian meander \(\mathcal{M}\) and the three-dimensional Bessel process \(\mathbf{B}\) such that almost surely, there is a positive time \(\sigma\) for which \(\mathcal{M}_{t}=\mathbf{B}_{t}\) on \([0,\sigma]\), thus proving Proposition 1.5 using (3.5). ### Acknowledgements The author would like to thank his PhD advisors Quentin Berger and Julien Poisat for their continual help, as well as Pierre Tarrago for his proof of Lemma C.5.
2305.01356
A Quadtree, a Steiner Spanner, and Approximate Nearest Neighbours in Hyperbolic Space
We propose a data structure in $d$-dimensional hyperbolic space that can be considered a natural counterpart to quadtrees in Euclidean spaces. Based on this data structure we propose a so-called L-order for hyperbolic point sets, which is an extension of the Z-order defined in Euclidean spaces. Using these quadtrees and the L-order we build geometric spanners. Near-linear size $(1+\epsilon)$-spanners do not exist in hyperbolic spaces, but we are able to create a Steiner spanner that achieves a spanning ratio of $1+\epsilon$ with $\mathcal O_{d,\epsilon}(n)$ edges, using a simple construction that can be maintained dynamically. As a corollary we also get a $(2+\epsilon)$-spanner (in the classical sense) of the same size, where the spanning ratio $2+\epsilon$ is almost optimal among spanners of subquadratic size. Finally, we show that our Steiner spanner directly provides a solution to the approximate nearest neighbour problem: given a point set $P$ in $d$-dimensional hyperbolic space we build the data structure in $\mathcal O_{d,\epsilon}(n\log n)$ time, using $\mathcal O_{d,\epsilon}(n)$ space. Then for any query point $q$ we can find a point $p\in P$ that is at most $1+\epsilon$ times farther from $q$ than its nearest neighbour in $P$ in $\mathcal O_{d,\epsilon}(\log n)$ time. Moreover, the data structure is dynamic and can handle point insertions and deletions with update time $\mathcal O_{d,\epsilon}(\log n)$.
Sándor Kisfaludi-Bak, Geert van Wordragen
2023-05-02T12:23:41Z
http://arxiv.org/abs/2305.01356v2
# A Quadtree for Hyperbolic Space ###### Abstract We propose a data structure in \(d\)-dimensional hyperbolic space that can be considered a natural counterpart to quadtrees in Euclidean spaces. Based on this data structure we propose a so-called L-order for hyperbolic point sets, which is an extension of the Z-order defined in Euclidean spaces. We demonstrate the usefulness of our hyperbolic quadtree data structure by giving an algorithm for constant-approximate closest pair and dynamic constant-approximate nearest neighbours in hyperbolic space of constant dimension \(d\). Hyperbolic geometry, quadtree, approximate nearest neighbour + Footnote †: This is an abstract of a presentation given at CG:YRF 2023. It has been made public for the benefit of the community and should be considered a preprint rather than a formally reviewed paper. Thus, this work is expected to appear in a conference with formal proceedings and/or in a journal. ## 1 Introduction Hyperbolic geometry has applications in several fields, including special relativity, topology, visualisation, machine learning, complex network modelling, etc. [14, 15, 16, 17, 19]. With the growing interest in the larger scientific community, there are growing computational and graphical/visualisation needs. It is becoming increasingly important to develop basic data structures and algorithms for hyperbolic spaces. Quadtrees in Euclidean spaces [9] are among the few geometric data structures that have proven to be useful both in practical algorithms and in theory [1, 8, 10]. They form the basis of various data structures by being able to 'zoom in' efficiently. Quadtrees provide a hierarchical structure, as well as a way to think of ordering points of the plane (or higher-dimensional spaces) using the so-called Z-order curve. They can be used as a basis for nearest neighbour algorithms [10]. The central question addressed by this article is as follows. _Is there a natural hyperbolic equivalent to Euclidean quadtrees?_ Given a point set \(P\) in the Euclidean plane (henceforth denoted by \(\mathbb{R}^{2}\)), a quadtree of \(P\) can be defined as follows. Let \(\sigma_{0}\) be a minimal axis-parallel square containing \(P\), and let \(T\) be a tree graph whose root corresponds to \(\sigma_{0}\). Then consider a square \(\sigma\) and the corresponding vertex \(v_{\sigma}\) of \(T\) where \(|\sigma\cap P|\geq 2\) (starting with \(\sigma=\sigma_{0}\)). We subdivide \(\sigma\) into four squares of half the side length of \(\sigma\). Each smaller square is associated with a new vertex that is connected to \(v_{\sigma}\). This procedure is repeated for each square \(\sigma\) where \(\sigma\cap P\geq 2\) exhaustively, until all leaves of \(T\) correspond to squares that contain at most one point from \(P\). The squares are called the _cells_ of the quadtree, and we can speak of parent/child and ancestor/descendant relationships of cells by applying the terms of the corresponding vertices in \(T\). The _level_ of a cell or vertex is its distance to the root of \(T\) along the shortest path in \(T\) (i.e., the root has level 0). Some crucial properties of Euclidean quadtrees are used by several algorithms. 1. If \(C^{\prime}\) is a child cell of \(C\), then \(c_{1}<\mathrm{diam}(C^{\prime})/\mathrm{diam}(C)<c_{2}\) where \(0<c_{1}<c_{2}<1\) are fixed constants, and \(\mathrm{diam}(C)\) denotes the diameter of the cell \(C\). 2. Each cell \(C\) contains a ball that has diameter at least constant times the diameter of \(C\). 
Thus cells are so-called _fat_ objects in \(\mathbb{R}^{2}\). 3. Each cell has at most \(k\) children cells for some fixed constant \(k\). 4. Cells of the same level are isometric, that is, any cell can be obtained from any other cell of the same level by using a distance-preserving transformation. Could the above four properties be replicated by a quadtree in the hyperbolic plane? Unfortunately this is not possible: the volume of a ball in hyperbolic space grows exponentially with its radius (thus hyperbolic spaces are not _doubling spaces_). Consequently, for large cells, a cell of constant times smaller diameter than its parent will only cover a vanishingly small volume of its parent. This rules out having properties 1, 2, and 3 together. Property 4 also poses a unique challenge: while the hyperbolic plane provides many types of _tilings_ one could start with, there is no transformation that would be equivalent to scaling in Euclidean spaces. This is unlike the scaling-invariance exhibited by Euclidean quadtrees. Moreover, in small neighbourhoods hyperbolic spaces are locally Euclidean, meaning that a small ball in hyperbolic space can be embedded into a Euclidean ball of the same radius with distortion infinitesimally close to 1. Thus a hyperbolic quadtree needs to operate at two different scales: at small distances we need to work with an almost-Euclidean metric, while at larger distances we need to work on a non-doubling metric. Our contributionWe propose a hyperbolic quadtree that satisfies properties 1, 2, as well as property 4 in case of cells of super-constant diameter. Moreover, our hyperbolic quadtree resembles a Euclidean quadtree for cells of sub-constant diameter. Based on the quadtree we are able to construct a new order and space-filling curve, named the L-order, which serves as a hyperbolic extension of the Euclidean Z-order. We show that a few hyperbolic quadtrees (and corresponding L-orders) can create a useful cover of \(\mathbb{H}^{d}\) in the following sense. For any \(\Delta\in\mathbb{R}_{+}\), there is a set of at most \(3d+3\) infinite hyperbolic quadtrees such that any two points \(p,q\in\mathbb{H}^{d}\) with \(\mathrm{dist}_{\mathbb{H}^{d}}(p,q)\leq\Delta\) are contained in a cell with diameter \(\mathcal{O}\!\left(d\sqrt{d}\right)\cdot\mathrm{dist}_{\mathbb{H}^{d}}(p,q)\) in one of the quadtrees. The above theorem matches the Euclidean result given by Chan, Har-Peled and Jones [6, Lemma 3.7]. We demonstrate the usefulness of our new data structure by presenting two applications related to approximate nearest neighbours. A pair \(p,p^{\prime}\in P\) of distinct points is a \(c\)_-approximate closest pair_ in \(P\) if \(\mathrm{dist}(p,p^{\prime})\leq c\cdot\min_{a,a^{\prime}\in P,\ a\neq a^{ \prime}}\mathrm{dist}(a,a^{\prime})\). Given the point set \(P\) and some query point \(q\) in the ambient space, we say that \(p\in P\) is a \(c\)_-approximate nearest neighbour_ of \(q\) if \(\mathrm{dist}(q,p)\leq c\cdot\min_{a\in P}\mathrm{dist}(q,a)\). Let \(P\subset\mathbb{H}^{d}\) be a given set of \(n\) points. * We can find an \(\mathcal{O}\!\left(d\sqrt{d}\right)\)-approximate closest pair of \(P\) in \(\mathcal{O}\!\left(d^{2}n\log n\right)\) time. 
* We can construct a data structure in \(\mathcal{O}\!\left(d^{2}n\log n\right)\) time that uses \(\mathcal{O}\!\left(d^{2}n\right)\) space and can answer queries for an \(\mathcal{O}\!\left(d\sqrt{d}\right)\)-approximate nearest neighbour in \(P\) in \(\mathcal{O}\!\left(d^{2}\log n\right)\) time, and perform updates (point insertions and removals) in \(\mathcal{O}\!\left(d^{2}\log n\right)\) time. A natural next step is to extend the above to \((1+\varepsilon)\)-approximate nearest neighbours as in [6]. Unfortunately this fails to give \(o(n)\) query times when \(\mathrm{diam}(P)=\log n\), so the problem is left for future work. Related workApproximate nearest neighbour search in hyperbolic spaces has been studied by Krauthgamer and Lee [13] and by Wu and Charikar [20]. Krauthgamer and Lee describe a data structure of size \(\mathcal{O}\big{(}n^{2}\big{)}\) that allows queries for a \(\mathcal{O}(1)\)-additive approximation of the nearest neighbour in \(\mathcal{O}\big{(}\log^{2}n\big{)}\) time. They note that this can be further improved to a \((1+\varepsilon)\)-approximation using [12]. Wu and Charikar describe various practical methods of using black-box algorithms for Euclidean nearest neighbour search to find exact and \((1+\varepsilon)\)-approximate nearest neighbours. ## 2 Preliminaries We assume that the reader is familiar with the basics of hyperbolic geometry and trigonometry, as well as the Poincare half-space model. For more background on hyperbolic geometry, please see [4] and the textbooks [2, 11, 18]. Note that our results are presented in the half-space model, but are in fact model-independent. Our algorithms are also presented for point sets whose coordinates are given in the half-space model. Apart from numerical challenges --something we will not tackle in this article--, working in the half-space model is not restrictive as conversion between various models of hyperbolic geometry is straightforward. Let \(\mathbb{H}^{d}\) denote the \(d\)-dimensional hyperbolic space of sectional curvature \(-1\). We denote points as \((x,z)\) for \(x\in\mathbb{R}^{d-1}\) and \(z\in\mathbb{R}^{+}\), with the distance \[\operatorname{dist}_{\mathbb{H}^{d}}((x,z),(x^{\prime},z^{\prime}))=2\operatorname {arsinh}\left(\frac{1}{2}\sqrt{\frac{\|x-x^{\prime}\|^{2}+(z-z^{\prime})^{2}}{ zz^{\prime}}}\right),\] where \(\|x\|\) refers to the \((d-1)\)-dimensional Euclidean norm of \(x\). When \(x=x^{\prime}\) this reduces to \(\left|\ln\left(\frac{z}{z^{\prime}}\right)\right|\). For the rest of the paper, we fix a particular half-space model, and describe our results in this model. Note that there are standard ways to convert between different models, allowing the results to be used also for inputs defined in different models. We will think of the \(z\) direction as going "up", and the \(d-1\) other axes are going "sideways". The transformations \(T_{\sigma,\tau}(x,z)=(\sigma x+\tau,\sigma z)\), where we scale all coordinates by \(\sigma\in\mathbb{R}^{+}\) and then translate \(x\) with a vector \(\tau\in\mathbb{R}^{d-1}\), are isometric. This can be verified by applying \(T_{\sigma,\tau}\) to both arguments in the distance formula. In fact, any isometry of \((d-1)\)-dimensional Euclidean space induces a hyperbolic isometry. Consider now the translations \(T_{\sigma,\tau}\) where \(\sigma=2^{k}\) for some integer \(k\) and \(\tau\) is an integer vector. 
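As a quick sanity check (our own illustration, not from the paper), the following sketch evaluates the distance formula above and verifies that \(T_{\sigma,\tau}\) leaves it invariant; the sample points and parameters are arbitrary.

```python
import math
import numpy as np

def dist_half_space(p, q):
    """Hyperbolic distance in the half-space model; p = (x, z) with
    x an array in R^{d-1} and z > 0."""
    (x, z), (xp, zp) = p, q
    r = math.sqrt((np.linalg.norm(x - xp)**2 + (z - zp)**2) / (z * zp))
    return 2.0 * math.asinh(r / 2.0)

def T(sigma, tau, p):
    """The map T_{sigma,tau}(x, z) = (sigma*x + tau, sigma*z)."""
    x, z = p
    return (sigma * x + tau, sigma * z)

p = (np.array([0.3, -1.2]), 0.7)
q = (np.array([2.0, 0.1]), 3.5)
sigma, tau = 2.0**3, np.array([5.0, -4.0])
print(dist_half_space(p, q))
print(dist_half_space(T(sigma, tau, p), T(sigma, tau, q)))  # same value
```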
One can observe that acting upon the Euclidean unit cube with corners \((0,\ldots,0,1)\) and \((1,\ldots,1,2)\) these translations create a tiling of the half-space model with isometric tiles. This tiling has been named the _binary tiling_, and it was introduced by Boroczky [3]. The binary tiling is the basis of our quadtree construction. The \(2\)-dimensional binary tiling is illustrated in Figure 1. Fix a point \(p\) and a hyperplane \(h\) in \(\mathbb{H}^{d}\). (Here hyperplane is understood in the hyperbolic sense, i.e. in the half-space model it is either a \(d-1\)-dimensional hemisphere perpendicular to \(z=0\) or a hyperplane perpendicular to \(z=0\).) The _reflection_ of \(p\) on \(h\) is a point \(p^{\prime}\) such that the geodesic \(pp^{\prime}\) is perpendicular to \(h\), and the midpoint \(t\) of \(pp^{\prime}\) is on \(h\). We call \(t\) the _hyperbolic projection_ of \(p\) onto \(h\). The distance from \((x,z)\in\mathbb{H}^{d}\) to the hyperplane \(x_{1}=0\) is \(\operatorname{arsinh}\frac{|x_{1}|}{z}\). Proof.: Mirroring \((x,z)\) in the hyperplane gives a point \((x^{\prime},z)\) where \(x^{\prime}_{1}=-x_{1}\) but still \(x^{\prime}_{i}=x_{i}\) for all other \(i\). Let \(p\) be the projection of \((x,z)\) onto the \(z\)-axis. This point must be exactly in the middle of the line segment connecting \((x,z)\) and its mirror image: \(\operatorname{dist}_{\mathbb{H}^{d}}((x,z),p)=\frac{1}{2}\operatorname{dist}_{ \mathbb{H}^{d}}((x,z),(x^{\prime},z))=\operatorname{arsinh}\frac{|x_{i}|}{z}\). We will use \(\log\) to denote the base-\(2\) logarithm. ## 3 A hyperbolic quadtree The Euclidean quadtree is a tree whose vertices are associated with axis-parallel hypercubes. Correspondingly, our hyperbolic quadtree will be a tree whose vertices are associated with so-called _cube-based horoboxes_. For this paper, it will be useful to define an _axis-parallel horobox_ as the shape that corresponds to a Euclidean axis-parallel box in a fixed half-space model. Notice that a horobox is bounded by \(2\) horospheres and \(2(d-1)\) hyperplanes. The cube-based horobox is a special axis-parallel horobox, defined as follows. In a fixed half-space model, a cube-based horobox \(R(x,z,w,h)\) is a Euclidean box with corner points \((x,z)\) and \((x+z\cdot(w,\dots,w),z\cdot 2^{h})\). We call \(w\) its width and \(h\) its height. It is worth noting that at \(z=1\), the width corresponds to the Euclidean width of the horobox. On top of that, defining the width and height in this way ensures that horoboxes with the same width and height are congruent to one another: \(T_{\sigma,\tau}\) transforms a horobox \(R(x,z,w,h)\) into \(R(x^{\prime},z^{\prime},w,h)\) when \(\sigma=\frac{z^{\prime}}{z},\tau=x^{\prime}-x\cdot\frac{z^{\prime}}{z}\). Horobox \(C=R(x,z,w,h)\) has diameter \(\operatorname{diam}(C)=2\operatorname{arsinh}\left(\frac{1}{2}w\sqrt{d-1}\right)\) when \(w\geq\sqrt{\frac{2^{h}-1}{d-1}}\) and \(\operatorname{diam}(C)=2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(d- 1)w^{2}+(2^{h}-1)^{2}}{2^{h}}}\right)\) otherwise. Proof.: The distance \(\operatorname{dist}_{\mathbb{H}^{d}}((x,z),(x^{\prime},z^{\prime}))\) is monotone increasing in \(\|x-x^{\prime}\|\). Assuming \(z^{\prime}\leq z\), the distance is also monotone decreasing in \(z^{\prime}\). This means the diameter is w.l.o.g. given by \(\operatorname{dist}_{\mathbb{H}^{d}}\left((0,z),((w,\dots,w),1)\right)=2 \operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(d-1)w^{2}+(z-1)^{2}}{z}}\right)\), for some \(1\leq z\leq 2^{h}\). 
In this interval the function is convex in \(z\), so the maximum is attained at either \(z=1\) or \(z=2^{h}\). Thus it is either \(2\operatorname{arsinh}\left(\frac{1}{2}w\sqrt{d-1}\right)\) or \(2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(d-1)w^{2}+(2^{h}-1)^{2}}{2^{h}}}\right)\). These are equal when \(w=\sqrt{\frac{2^{h}-1}{d-1}}\). Both are continuous, so it suffices to check two combinations of values to see that \(z=1\) gives the largest value if and only if \(w\geq\sqrt{\frac{2^{h}-1}{d-1}}\).
### Hyperbolic quadtree construction.
One property of hyperbolic space that makes quadtrees more complicated is that it behaves differently at different scales: in small enough neighbourhoods the distortion compared to Euclidean space approaches one, but at larger scales the hyperbolic nature becomes more and more pronounced.
Figure 1: A portion of the binary tiling in the half-plane model
This means the quadtree also has to work differently at different scales. For a point set \(P\subset\mathbb{H}^{d}\), the hyperbolic quadtree \(\mathcal{Q}(P)\) is a graph whose vertices are regions of hyperbolic space. We can construct \(\mathcal{Q}(P)\) as follows. First, we find the Euclidean minimum bounding box of \(P\) in the half-space model. From this we can get a minimum bounding horobox \(R(x,z,w,h)\) where \(z=\min_{p\in P}\pi_{z}(p)\) (i.e., we shift the horobox up as much as possible). In case of \(d=2\), our goal is to ensure that quadtree cells of level \(\ell\geq 0\) correspond to horoboxes whose vertices come from a fixed binary tiling. The levels can be constructed starting at level \(\ell=0\), where cells are the tiles of a binary tiling. The binary tiling is closely related to binary trees: we get a binary tree from the tiling by making a graph where each vertex corresponds to a horobox and edges correspond to them being vertically adjacent. When we have a binary tree, we can partition it into a small number of identical subgraphs by 'cutting' at half its depth, see Figure 2. This gives a natural way to split cells of level \(\ell\geq 1\) into \(1+2^{\ell}\) isomorphic cells, one corresponding to the 'top' part of the binary tree, and the rest corresponding to the subtrees defined by the vertices at depth \(\ell\). When splitting cells of level \(\ell\leq 0\), we are already in a setting where the distortion is very small compared to the Euclidean setting; here we simply use Euclidean dissection into four smaller cells, each with the same Euclidean width and hyperbolic height. For general \(d\), we define the quadtree as follows:
* If \(w\leq\frac{1}{\sqrt{d-1}}\) and \(h\leq 1\), we find the smallest integer \(\ell\) such that \(w\leq\frac{2^{\ell}}{\sqrt{d-1}}\) and \(h\leq 2^{\ell}\), then use \(R(x,z,\frac{2^{\ell}}{\sqrt{d-1}},2^{\ell})\) as the root cell of our quadtree.
* Otherwise, we find the smallest integer \(\ell\) such that \(w\leq\frac{2^{2^{\ell}-1}}{\sqrt{d-1}}\) and \(h\leq 2^{\ell}\), then use \(R(x,z,\frac{2^{2^{\ell}-1}}{\sqrt{d-1}},2^{\ell})\) as the root cell.
We then subdivide cells to get their children, but unlike with the Euclidean quadtree this subdivision depends on the size of the cell. If we have a cell \(R(x^{\prime},z^{\prime},w,h)\) with \(h\leq 1\), then we split it into \(2^{d}\) smaller ones using the Euclidean hyperplanes \(x_{i}=x^{\prime}_{i}+\frac{z^{\prime}w}{2}\) and \(z=z^{\prime}\cdot 2^{\frac{h}{2}}\).
Figure 2: Left: A quadtree cell of level 2 is split into 5 cells of level 1. Right: The binary tree of depth 4 is split into 5 isomorphic binary trees of depth 2.
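Before describing the case of larger \(h\), here is a small numerical sketch of Lemma 5 (ours, with arbitrary \(d\) and \(\ell\)). It implements the two cases of the diameter formula and checks that, for the level-\(\ell\) cell dimensions of Lemma 6 below, it reduces to \(2\operatorname{arsinh}(2^{2^{\ell}-2})\).

```python
import math

def horobox_diam(w, h, d):
    """Diameter of a cube-based horobox R(x, z, w, h) in H^d (Lemma 5);
    the two cases meet at w = sqrt((2**h - 1)/(d - 1))."""
    if w >= math.sqrt((2**h - 1) / (d - 1)):
        return 2 * math.asinh(0.5 * w * math.sqrt(d - 1))
    return 2 * math.asinh(0.5 * math.sqrt(((d - 1)*w**2 + (2**h - 1)**2) / 2**h))

d, l = 3, 2   # cells of level l >= 0: w = 2**(2**l - 1)/sqrt(d-1), h = 2**l
w, h = 2**(2**l - 1) / math.sqrt(d - 1), 2**l
print(horobox_diam(w, h, d))            # equals the next line
print(2 * math.asinh(2**(2**l - 2)))
```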
For larger \(h\), we also use the Euclidean hyperplane \(z=z^{\prime}\cdot 2^{\frac{h}{2}}\). This gives two horoboxes with height \(h/2\), where the top one has width \(w/2^{\frac{h}{2}}\) but the bottom one still \(w\). Thus, we also split the bottom horobox into a grid of \(2^{\frac{h}{2}(d-1)}\) horoboxes of width \(w/2^{\frac{h}{2}}\) so that in total we have \(2^{\frac{h}{2}(d-1)}+1\) cells of the same size. **Lemma 6**.: _At any level \(\ell\), cells are cube-based horoboxes with height \(2^{\ell}\). For \(\ell\geq 0\), the width is \(\frac{2^{2^{\ell}-1}}{\sqrt{d-1}}\) and the diameter is \(2\operatorname{arsinh}(2^{2^{\ell}-2})\). For \(\ell<0\), the width is \(\frac{\alpha\cdot 2^{\ell}}{\sqrt{d-1}}\) and the diameter is \(2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}\right)\), where \(\alpha\in\left(\frac{1}{2},1\right]\) is a cell-specific value. Moreover, if a cell \(C\) of level \(\ell\) has corresponding value \(\alpha\) and a child cell \(C^{\prime}\) of \(C\) has corresponding value \(\alpha^{\prime}\), then \(\alpha^{\prime}/\alpha\in\{1,2^{-2^{\ell-1}}\}\)._ Proof.: The statement for \(\ell\geq 0\) follows directly from the construction and Lemma 5. At \(\ell\leq 0\), a cell of width \(w\) and height \(h\) gets split into \(2^{d}\) cells that all have height \(h/2\), where the lower \(2^{d-1}\) have width \(\frac{w}{2}\) and the upper \(2^{d-1}\) have width \(\frac{w}{2}/2^{\frac{h}{2}}\). At level \(0\) the height is \(1\) and the width \(\frac{1}{\sqrt{d-1}}\), so cells at level \(\ell<0\) have height \(2^{\ell}\) and width \(\frac{\alpha\cdot 2^{\ell}}{\sqrt{d-1}}\), for a yet to be determined value \(\alpha\). Its lower \(2^{d-1}\) child cells have width \(\frac{w}{2}=\frac{\alpha\cdot 2^{\ell-1}}{\sqrt{d-1}}\), while the upper \(2^{d-1}\) have width \(\frac{w}{2}/2^{\frac{h}{2}}=\frac{\alpha\cdot 2^{\ell-1}}{2^{2^{\ell-1}}\sqrt{d-1}}\). Thus the width of the child cells follows the same formula, with \(\alpha\) the same or replaced by \(\alpha\cdot 2^{-2^{\ell-1}}\). At \(\ell=0\) we have \(\alpha=1\), thus it is also the highest value it can take for other cells. The lowest possible value of \(\alpha\) at level \(\ell\) is given by \(\prod_{i=\ell+1}^{0}2^{-2^{i-1}}=2^{-\sum_{i=\ell}^{-1}2^{i}}=2^{2^{\ell}-1}>\frac{1}{2}\). The hyperbolic quadtree of \(P\subset\mathbb{H}^{d}\) has the following properties:
1. If \(C^{\prime}\) is a child cell of \(C\), then \(0.42<\operatorname{diam}(C^{\prime})/\operatorname{diam}(C)<0.561\).
2. Cells are \(\Omega(1/\sqrt{d})\)-fat.
3. A quadtree cell \(C\) has at most \(\max(2^{d},2^{\mathcal{O}(d\cdot\operatorname{diam}(C))})\) children; in particular, the root has \(\max(2^{d},d^{\mathcal{O}(d\cdot\operatorname{diam}(P))})\) children.
4. Cells of the same level \(\ell\geq 0\) are isometric, and cells of level \(\ell<0\) are cube-based horoboxes with the same height whose width differs by less than a factor two.
Figure 3: A hyperbolic quadtree where the root (red) is a level \(2\) cell, which is split into five isometric cells of level \(1\) that are separated by blue Euclidean segments.
Proof.: 1. First assume \(C\) is a cell at level \(\ell\geq 4\) with child \(C^{\prime}\). Then by Lemma 6,
\[\frac{\operatorname{diam}(C^{\prime})}{\operatorname{diam}(C)}=\frac{2 \operatorname{arsinh}\left(2^{2^{\ell-1}-2}\right)}{2\operatorname{arsinh}\left(2 ^{2^{\ell-2}}-2\right)}.\] Here we can use \(\ln 2x<\operatorname{arsinh}x<\operatorname{ln}4x\) to get \[\frac{\operatorname{arsinh}\left(2^{2^{\ell-1}-2}\right)}{ \operatorname{arsinh}\left(2^{2^{\ell-2}}\right)}<\frac{\ln\left(4\cdot 2^{2^{\ell-1} -2}\right)}{\operatorname{ln}\left(2\cdot 2^{2^{\ell-2}}\right)}=\frac{2^{\ell-1}}{2^{ \ell}-1}=\frac{1}{2}+\frac{1}{2^{\ell+1}-2}\leq\frac{8}{15}\] \[\frac{\operatorname{arsinh}\left(2^{2^{\ell-1}-2}\right)}{ \operatorname{arsinh}\left(2^{2^{\ell-2}}\right)}>\frac{\ln\left(2\cdot 2^{2^{ \ell-1}-2}\right)}{\operatorname{ln}\left(4\cdot 2^{2^{\ell-2}}\right)}=\frac{2^{ \ell-1}-1}{2^{\ell}}=\frac{1}{2}-2^{-\ell}\geq\frac{7}{16}\] Now assume \(C\) is at level \(\ell\leq-2\). Then by Lemma 6, \[\frac{\operatorname{diam}(C^{\prime})}{\operatorname{diam}(C)}=\frac{2 \operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(\alpha^{\prime})^{2}\cdot 4 ^{\ell-1}+(2^{2^{\ell-1}}-1)^{2}}{2^{2^{\ell-1}}}}\right)}{2\operatorname{arsinh }\left(\frac{1}{2}\sqrt{\frac{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{ \ell}}}}\right)},\] where \(\alpha\) corresponds to \(C\) and \(\alpha^{\prime}\) to \(C^{\prime}\). From the proof of Lemma 6 it follows that either \(\alpha^{\prime}=\alpha\) or \(\alpha^{\prime}=\alpha\cdot 2^{-2^{\ell-1}}\). First, we look for an upper bound. We observe that in the denominator, the argument \(x\) of \(\operatorname{arsinh}x\) is between \(0\) and \(0.15\), thus we may assume \(\operatorname{arsinh}x>0.997x\). For the positive numerator we can use that \(\operatorname{arsinh}x\leq x\). This gives \[\frac{2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(\alpha^{\prime})^{2 }\cdot 4^{\ell-1}+(2^{2^{\ell-1}}-1)^{2}}{2^{2^{\ell-1}}}}\right)}{2 \operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell -1}}-1)^{2}}{2^{2^{\ell}}}}\right)}<\frac{2^{2^{\ell-2}}}{0.997}\sqrt{\frac{( \alpha^{\prime})^{2}\cdot 4^{\ell-1}+(2^{2^{\ell-1}}-1)^{2}}{\alpha^{2}\cdot 4^{ \ell}+(2^{2^{\ell}}-1)^{2}}}.\] We can also use that \(\alpha^{\prime}\leq\alpha\), that \((2^{2^{\ell}}-1)^{2}\geq 0\) and that \((2^{2^{\ell-1}}-1)^{2}<0.07\cdot 4^{\ell}\) for \(\ell\leq-2\), giving \[\frac{2^{2^{\ell-2}}}{0.997}\sqrt{\frac{(\alpha^{\prime})^{2}\cdot 4^{\ell-1}+(2 ^{2^{\ell-1}}-1)^{2}}{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell}}-1)^{2}}}<\frac{2^{2^{ \ell-2}}}{0.997}\sqrt{\frac{1.07\cdot 4^{\ell-1}}{4^{\ell}}}<0.55.\] To prove a lower bound we work similarly. We now additionally use that \(\alpha^{\prime}\geq 2^{-2^{\ell-1}}\alpha\) and that \((2^{2^{\ell}}-1)^{2}<0.15\cdot 4^{\ell}\) for \(\ell\leq-2\), giving \[\frac{2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{(\alpha^ {\prime})^{2}\cdot 4^{\ell-1}+(2^{2^{\ell-1}}-1)^{2}}{2^{2^{\ell-1}}}}\right)}{2 \operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell -1}}-1)^{2}}{2^{2^{\ell}}}}\right)} >0.997\cdot 2^{2^{\ell-2}}\sqrt{\frac{(\alpha^{\prime})^{2}\cdot 4^{\ell-1}+(2 ^{2^{\ell-1}}-1)^{2}}{\alpha^{2}\cdot 4^{\ell}+(2^{2^{\ell}}-1)^{2}}}\] \[\geq 0.997\sqrt{\frac{4^{\ell-1}}{4^{\ell}+(2^{2^{\ell}}-1)^{2}}}\] \[>0.997\sqrt{\frac{4^{\ell-1}}{1.15\cdot 4^{\ell}}}\] \[>0.46.\] This proves the statement for \(\ell\leq-2\) and \(\ell\geq 4\). We check the remaining cases in Table 1. The lower bound comes from \(\ell=2\) and the upper bound from \(\ell=0\) with \(\alpha^{\prime}=1\). 2. 
For a cell \(C=R(x,z,w,h)\), the inscribed ball will touch the bounding horospheres at the top and bottom, or the bounding hyperplanes at the sides; see Figure 4. The hyperbolic diameter of a ball is given by the distance between its highest and lowest point. In the first case this is \(\ln\left(\frac{z\cdot 2^{h}}{z}\right)=h\ln 2\). In the second case, we only know the ball's Euclidean diameter is \(z\cdot w\), but from that we can calculate that the highest point of the ball is at height \(z+z\cdot w\), so the distance is \(\ln\left(\frac{z+z\cdot w}{z}\right)=\ln(w+1)\). In general the diameter of the inscribed ball is the smallest of these two. We will show that for quadtree cells this is always \(\Omega\left(\ln(w+1)\right)\). At level \(\ell\), the first case gives \(h\ln 2=\ln\left(2^{2^{\ell}}\right)\) for every cell. In the second case, for \(\ell\geq 0\) we have \(\ln(w+1)=\ln\left(\frac{2^{2^{\ell}-1}}{\sqrt{d-1}}+1\right)\), which is at most \(\ln\left(2^{2^{\ell}}+1-\frac{1}{2}\cdot 2^{2^{\ell}}\right)\) and thus smaller. For \(\ell<0\) we have \(\ln(w+1)\leq\ln\left(\frac{1\cdot 2^{\ell}}{\sqrt{d-1}}+1\right)\) by Lemma 6, which is at most \(\frac{2^{\ell}}{\sqrt{d-1}}\) and thus \(h\ln 2=\Omega\left(\ln(w+1)\right)\). Next we show that the diameter of \(C\) is always \(\mathcal{O}\!\left(2^{\ell}\right)\). For \(\ell\geq 0\), \[\mathrm{diam}(C)=2\,\mathrm{arsinh}(2^{2^{\ell}-2})=\mathcal{O}\!\left(2^{ \ell}\right)\] and also for \(\ell<0\), \[\mathrm{diam}(C)=2\,\mathrm{arsinh}\left(\frac{1}{2}\sqrt{\frac{\alpha^{2} \cdot 4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}\right)=\mathcal{O}\!\left( \sqrt{\frac{4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}\right)=\mathcal{O} \!\left(2^{\ell}\right)\!.\] We need to divide the diameter of the inscribed ball by the diameter of the circumscribed ball to get the fatness. Notice that \(2\mathrm{diam}(C)\) is an upper bound on the diameter of the circumscribed ball. First assume \(w<1\) and \(\ell<0\). Then the fatness is at least \[\frac{\Omega\left(\ln(w+1)\right)}{\mathcal{O}\!\left(2^{\ell}\right)}=\Omega \left(\frac{w}{2^{\ell}}\right)=\Omega\left(\frac{2^{\ell}}{2^{\ell}\sqrt{d} }\right)=\Omega\left(\frac{1}{\sqrt{d}}\right).\] Now assume \(\ell\geq 0\) instead. \[\frac{\Omega\left(\ln(w+1)\right)}{\mathcal{O}\!\left(2^{\ell}\right)}=\Omega \left(\frac{w}{2^{\ell}}\right)=\Omega\left(\frac{2^{2^{\ell}}}{2^{\ell}\sqrt {d}}\right)=\Omega\left(\frac{1}{\sqrt{d}}\right).\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(\ell\) & \multicolumn{4}{|c|}{-1} & \multicolumn{4}{|c|}{0} & \multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{3} \\ \hline \(\alpha\) & \(1/\sqrt{2}\) & \multicolumn{4}{|c|}{1} & \multicolumn{4}{|c|}{1} & \multicolumn{4}{|c|}{} & \multicolumn{4}{|c|}{} \\ \hline \(\alpha^{\prime}\) & \(1/\sqrt[3]{8}\) & \(1/\sqrt{2}\) & \(1/\sqrt[3]{2}\) & \(1\) & \(1/\sqrt[3]{2}\) & \(1\) & & & & \\ \hline \hline & 0.485 & 0.5218 & 0.4795 & 0.5312 & 0.4718 & 0.5605 & 0.526 & 0.4208 & 0.4317 \\ \hline \end{tabular} \end{table} Table 1: Ratios between the diameter of a child cell and the diameter of its level \(\ell\) parent, up to four decimal places. Figure 4: The two possible situations for the inscribed ball of a cube-based horobox, with last coordinates indicated as well as the Euclidean width of the rectangle in the second case. Finally, assume \(w\geq 1\), which means automatically means \(\ell\geq 0\) and \(2^{2^{\ell}-1}\geq\sqrt{d-1}\). 
\[\frac{\Omega\left(\ln(w+1)\right)}{\mathcal{O}(2^{\ell})}=\Omega\left(\frac{\log w}{2^{\ell}}\right)=\Omega\left(\frac{2^{\ell}+\log\frac{1}{d}}{2^{\ell}}\right)=\Omega\left(1\right).\] 3. Consider a quadtree cell \(C\) of level \(\ell\). If \(\ell\leq 0\) then it has \(2^{d}\) children, so we now consider \(\ell>0\). There we have \(\operatorname{diam}(C)=2\operatorname{arsinh}(2^{2^{\ell}-2})\) by Lemma 6, which means \(2^{2^{\ell}}=4\sinh(\frac{1}{2}\operatorname{diam}(C))\). The number of children of \(C\) is \[2^{2^{\ell-1}(d-1)}+1=\left(4\sinh\left(\frac{1}{2}\operatorname{diam}(C)\right)\right)^{\frac{1}{2}(d-1)}+1=2^{\mathcal{O}(d\cdot\operatorname{diam}(C))}.\] Now we will prove the result for the root of the quadtree. Assume this root lies at level \(\ell>0\), because otherwise it has \(2^{d}\) children. By construction, we know that no quadtree cell \(C\) of level \(\ell-1\) can cover \(P\). In particular this means that the inscribed ball \(B\) of \(C\) cannot cover \(P\), meaning \(\operatorname{diam}(P)>\operatorname{diam}(B)\). In (2) we showed \(\operatorname{diam}(B)=\ln\left(\frac{2^{2^{\ell-1}-1}}{\sqrt{d-1}}+1\right)\), so \(2^{2^{\ell-1}}=2\sqrt{d-1}\left(e^{\operatorname{diam}(B)}-1\right)\). This means the number of children under the root is \[2^{2^{\ell-1}(d-1)}+1=\left(2\sqrt{d-1}\left(e^{\operatorname{diam}(B)}-1\right)\right)^{d-1}+1=d^{\mathcal{O}(d\cdot\operatorname{diam}(P))}.\] 4. This follows from Lemma 6. ### Covering with hyperbolic quadtrees Euclidean quadtrees are useful in computing nearest neighbours and other related problems because of a particular distance property: there is a small collection of quadtrees one can define such that any pair of points at distance \(\delta\) will be contained in a cell of diameter \(\mathcal{O}(\delta)\) in one of the quadtrees. Moreover, the quadtrees can be generated by simply taking one quadtree and translating (_shifting_) it with different vectors. We will prove an analogous property for our hyperbolic quadtrees, though our "shifts" are the transformations \(T_{\sigma,\tau}\) instead of translations. Let us first introduce an infinite quadtree. Consider the binary tiling that contains the Euclidean hypercube with opposite corners \((0,\ldots,0,1)\) and \((1,\ldots,1,2)\). This tiling forms level \(0\) of the infinite quadtree \(\mathcal{Q}_{\infty}^{d}\). Then each level \(\ell<0\) is defined by subdividing these cells according to the construction in Section 3.1. For \(\ell>0\) we define the level \(\ell\) cells by unifying \(2^{(d-1)2^{\ell-1}}+1\) cells of level \(\ell-1\) into a horobox, doing the splitting described in Section 3.1 in reverse. We do the unification in such a way that the Euclidean hyperplanes \(x_{i}=0\) for \(i=1,\ldots,d-1\) as well as \(z=0\) remain cell boundaries at each level \(\ell\). More formally, cells of level \(\ell\) are the horoboxes \[R\left(a\cdot\frac{2^{2^{\ell}-1}}{\sqrt{d-1}},2^{b\cdot 2^{\ell}},\frac{2^{2^{\ell}-1}}{\sqrt{d-1}},2^{\ell}\right)\] for each \((a,b)\) where \(a\in\mathbb{Z}^{d-1}\) and \(b\in\mathbb{Z}\). As a result we get the infinite quadtree \(\mathcal{Q}_{\infty}^{d}\) where for each integer \(\ell\) the cells of the quadtree define a subdivision of \(\mathbb{H}^{d}\). For \(x,y\in\mathbb{R}^{+}\) we define \(x\bmod y=x-y\lfloor x/y\rfloor\). Chan _et al._[6] observed that shifting a quadtree by certain special vectors results in useful shifts also for levels with smaller cells. The following lemma was used to define these useful shifts. 
[Chan _et al._[6]] Let \(n>1\) be a positive odd integer, and consider the set \[X=\{i/n\mid i=0,\ldots,n-1\}.\] _Then, for any \(\alpha=2^{-\ell}\), where \(\ell\geq 0\) is an integer, we have that_ \[X\bmod\alpha=\{i/n\bmod\alpha\mid i=0,\ldots,n-1\}\] _is equal to the set \(\alpha X=\{\alpha i/n\mid i=0,\ldots,n-1\}\)._ We will look at the projection \(\pi_{z}(x,z)=z\), or more precisely \(\log\pi_{z}(x,z)\). Applying \(\log\pi_{z}\) to a cell \(C\) gives an interval in \(\mathbb{R}\), which we will refer to as the _\(z\)-range_ of \(C\). We can also apply \(\log\pi_{z}\) to the quadtree as a whole and merge all nodes that have the same \(z\)-range. A cell \(R(x,z,w,h)\) has \(z\)-range \(\left[\log z,\log z+h\right)\) and by construction its children can only have \(z\)-range \(\left[\log z,\log z+\frac{h}{2}\right)\) or \(\left[\log z+\frac{h}{2},\log z+h\right)\). Thus, we have a tree of intervals where splitting an interval in the middle gives its two children. This is exactly the structure of a one-dimensional Euclidean quadtree. Additionally, \(\log\pi_{z}(T_{\sigma,\tau}(p))=\log\sigma+\log\pi_{z}(p)\), meaning that shifts are an isometry we can apply to this one-dimensional quadtree. This lets us use the following lemma: Let \(\mathcal{T}\) be a one-dimensional Euclidean quadtree whose largest cell is \([0,2)\). For any two points \(p,q\in[0,1)\), there is a shift \(\sigma\in\{0,\frac{1}{3},\frac{2}{3}\}\) such that \(p+\sigma\) and \(q+\sigma\) are contained in a cell of \(\mathcal{T}\) with length \(<3|p-q|\) and one of the points is in the lower \(\frac{1}{3}\) of the cell. Proof.: Let \(\alpha=2^{-\ell}\) for some \(\ell\in\mathbb{N}_{0}\) such that \(\alpha/2<|p-q|\leq\alpha\). We apply Lemma 3 for \(n=3\) and \(\alpha\): as a consequence, a cell of size \(\alpha\) can be thought of as being shifted by one of \(\{0,\frac{\alpha}{3},\frac{2\alpha}{3}\}\). These shifts divide the space into intervals of length \(\frac{\alpha}{3}\) where any three adjacent intervals will make up the cell of the quadtree under some shift. Thus, \(p\) and \(q\) will always be contained in a cell with length \(\alpha\) when \(|p-q|\leq\frac{2\alpha}{3}\). When \(|p-q|>\frac{2\alpha}{3}\), we still know \(|p-q|\leq\alpha\leq\frac{2}{3}\cdot 2\alpha\), so by the same argument they must still be contained in a cell \(C\) with length \(2\alpha\). Thus, in the worst case, \(C\) has length \(2\alpha<3|p-q|\). If \(C\) has \(p\) and \(q\) both in its higher \(\frac{2}{3}\), then the quadtree shifted by \(\frac{\alpha}{3}\) or \(\frac{2\alpha}{3}\) will contain a cell \(C^{\prime}\) of the same level as \(C\) with the desired property. Now we will consider all cells with some given \(z\)-range and apply the transformation \(\tilde{\pi}_{x}(x,z)=x\sqrt{d-1}\). This produces a grid in \(\mathbb{R}^{d-1}\) where the cells have some power of two as the side length. We again have access to shifts as an isometry, because \(\tilde{\pi}_{x}(T_{\sigma,\tau}(p))=\tau\sqrt{d-1}+\sigma\cdot\tilde{\pi}_{x}(p)\). This combination lets us use a lemma due to Chan [5] about _\(\delta\)-centrality_. We say that a point is _\(\delta\)-central_ in an axis-parallel hypercube of side length \(r\) when its Euclidean distance to the cell boundary is at least \(\delta r\). [Chan [5]] Suppose \(d\) is even. Let \(v^{(j)}=(j/(d+1),\ldots,j/(d+1))\in\mathbb{R}^{d}\). 
For any point \(p\in\mathbb{R}^{d}\) and \(r=2^{-\ell}\)\((\ell\in\mathbb{N})\), there exists \(j\in\{0,1,\ldots,d\}\) such that \(p+v^{(j)}\) is \((1/(2d+2))\)-central in its hypercube with side length \(r\). Using the behaviour under projections we can now also say something about the behaviour of the hyperbolic quadtree itself. For any \(\Delta\in\mathbb{R}_{+}\), there is a set of at most \(3d+3\) infinite hyperbolic quadtrees such that any two points \(p,q\in\mathbb{H}^{d}\) with \(\operatorname{dist}_{\mathbb{H}^{d}}(p,q)\leq\Delta\) are contained in a cell with diameter \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\cdot\operatorname{dist}_{\mathbb{H}^{d}}(p,q)\) in one of the quadtrees. Proof.: Let \(L\in\mathbb{N}\) be such that the sphere with radius \(\Delta\) can be covered by a cube-based horobox with width \(W=\frac{2^{2^{L}-1}}{\sqrt{d-1}}\) and height \(H=2^{L}\). Let \(D\) be \(d\) rounded down to the nearest even number, i.e. \(D=2\lfloor d/2\rfloor\). For each combination of \(i\in\{0,1,2\}\) and \(j\in\{0,\ldots,D\}\) we define the quadtree \(\mathcal{Q}_{ij}=T_{\sigma_{i},\tau_{j}}^{-1}(\mathcal{Q}_{\infty}^{d})\), with \(\sigma_{i}=2^{H\cdot i/3}\) and \(\tau_{j}=(W\cdot j/(D+1),\ldots,W\cdot j/(D+1))\). In the proof we will apply \(T_{\sigma_{i},\tau_{j}}\) to all points in \(\mathcal{Q}_{ij}\) instead of transforming the cells of the quadtree itself, but this has the same effect. Let \(p,q\) be arbitrary points with \(\operatorname{dist}_{\mathbb{H}^{d}}(p,q)\leq\Delta\). Lemma 9 gives a \(\sigma_{i}\) such that \(\log\pi_{z}(T_{\sigma_{i},\tau}(p))\) and \(\log\pi_{z}(T_{\sigma_{i},\tau}(q))\) are contained in a cell with length less than \(3|\log\pi_{z}(p)-\log\pi_{z}(q)|\), regardless of the value of \(\tau\). This corresponds to \(T_{\sigma_{i},\tau}(p)\) and \(T_{\sigma_{i},\tau}(q)\) being contained in a \(z\)-range with the same length. One of \(\log\pi_{z}(T_{\sigma_{i},\tau}(p))\) and \(\log\pi_{z}(T_{\sigma_{i},\tau}(q))\) is in the lower \(\frac{1}{3}\) of this \(z\)-range; without loss of generality we assume this is \(p\). We now look at the \(d\)-dimensional cell \(p\) itself lies in. Without loss of generality, we can assume this cell is \(R(0,1,w,h)\) for some \(w,h\). Let \((x,z)=T_{\sigma_{i},\tau_{j}}(p)\) and \(H\) denote the hyperplane \(x_{1}=0\). To get the minimum distance from \((x,z)\) to another cell with the same \(z\)-range, we can now consider the minimum distance between \((x,z)\) and \(H\). By Lemma 3, \(d((x,z),H)=\operatorname{arsinh}\frac{|x_{1}|}{z}\), which is minimised when \(x_{1}\) is minimal and \(z\) is maximal. According to Lemma 10, there is a \(\tau_{j}\) such that \(\tilde{\pi}_{x}(T_{\sigma_{i},\tau_{j}}(p))\) is \(\frac{1}{2D+2}\)-central in its \((d-1)\)-dimensional cell. Thus, \(x_{1}\geq\frac{w}{2D+2}\). Additionally \(\log\pi_{z}(T_{\sigma_{i},\tau_{j}}(p))\) is in the lower \(\frac{1}{3}\) of its one-dimensional cell, thus \(z\leq 2^{h/3}\). This makes the distance at least \(\operatorname{arsinh}\left(\frac{w}{(2D+2)2^{h/3}}\right)\). Let \(\ell\) be the level of the smallest cell containing \(p\) and \(q\). If the level-\(\ell\) cells containing \(p\) and \(q\) have the same \(z\)-range and \(\operatorname{dist}_{\mathbb{H}^{d}}(p,q)\leq\operatorname{arsinh}\left(\frac{w}{(2D+2)2^{h/3}}\right)\) with \(w,h\) given by level \(\ell\), they must be together in a cell of level \(\ell\). We define \(f_{d}(\ell)\) as an upper bound on the ratio between the cell's diameter and \(\operatorname{dist}_{\mathbb{H}^{d}}(p,q)\). 
We first consider \(\ell\geq 0\), where by Lemma 6 the diameter is \(2\operatorname{arsinh}\left(2^{2^{\ell}-2}\right)\) and \(w=\frac{2^{2^{\ell}-1}}{\sqrt{d-1}}\), so this distance is \(\operatorname{arsinh}\left(\frac{\sqrt[3]{2^{2^{\ell+1}}}}{4(D+1)\sqrt{d-1}}\right)\). Thus, \[f_{d}(\ell)=\frac{2\operatorname{arsinh}\left(2^{2^{\ell}-2}\right)}{\operatorname{arsinh}\left(\frac{\sqrt[3]{2^{2^{\ell+1}}}}{4(D+1)\sqrt{d-1}}\right)}\leq\frac{2\ln\left(2^{2^{\ell}-1}\right)+2\operatorname{arsinh}\frac{1}{2}}{\operatorname{arsinh}\left(\frac{\sqrt[3]{2^{2^{\ell+1}}}}{4(D+1)\sqrt{d-1}}\right)}.\] To bound this further, we let \(\alpha=\frac{\sqrt[3]{2^{2^{\ell+1}}}}{4(D+1)\sqrt{d-1}}\) and first assume \(\alpha\geq 2\). This means \(2^{2^{\ell}}=\mathcal{O}\!\left(\alpha^{3}d^{\frac{3}{2}}\right)\). We always have \(\operatorname{arsinh}\alpha\geq\ln 2\alpha>0\), so \[f_{d}(\ell)=\mathcal{O}\!\left(\frac{\log\left(\alpha^{3}d^{\frac{3}{2}}\right)}{\operatorname{arsinh}\alpha}\right)=\mathcal{O}\!\left(\frac{\log\alpha+\log d}{\log\alpha}\right)=\mathcal{O}\!\left(\log d\right).\] When we instead assume \(\alpha<2\), then \(\operatorname{arsinh}\alpha\geq\frac{\operatorname{arsinh}2}{2}\cdot\alpha\) and \[f_{d}(\ell)=\mathcal{O}\!\left(\frac{2\log\left(2^{2^{\ell}-2}\right)}{\frac{\sqrt[3]{2^{2^{\ell+1}}}}{4(D+1)\sqrt{d-1}}}\right)=\mathcal{O}\!\left(\frac{2^{\ell}d\sqrt{d}}{\sqrt[3]{2^{2^{\ell}}}}\right)=\mathcal{O}\!\left(d\sqrt{d}\right)\!.\] For \(\ell<0\), Lemma 5 implies that the width is at least \(\frac{\frac{1}{2}\cdot 2^{\ell}}{\sqrt{d-1}}\), the height is at most \(1\), and the diameter is at most \(2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}\right)\). Finally, we use that \(\operatorname{arsinh}x\leq x\) for any \(x\) and \(\operatorname{arsinh}x=\Omega\left(x\right)\) for bounded \(x\): \[f_{d}(\ell)\leq\frac{2\operatorname{arsinh}\left(\frac{1}{2}\sqrt{\frac{4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}\right)}{\operatorname{arsinh}\left(\frac{\frac{1}{2}\cdot 2^{\ell}}{(2D+2)\sqrt[3]{2}\sqrt{d-1}}\right)}=\mathcal{O}\!\left(\frac{\sqrt{\frac{4^{\ell}+(2^{2^{\ell}}-1)^{2}}{2^{2^{\ell}}}}}{\frac{2^{\ell}}{d\sqrt{d}}}\right)=\mathcal{O}\!\left(d\sqrt{d}\right)\!.\qed\] ### L-order When we have the Euclidean quadtree for a set of points, we can do a depth-first traversal of the tree and note in which order the points are visited. This gives rise to the Z-order. As it turns out, adding or removing points does not change the Z-order and for a pair of points we can determine which comes first without ever constructing a quadtree. The only thing to specify is which infinite quadtree their quadtree would be a subset of, because a differently shifted quadtree can give different results. We can do the same to get the L-order from a hyperbolic quadtree. Here, we first need to define how exactly we do the depth-first traversal. For levels \(\ell>0\), we first visit the top child and then visit the bottom children in Z-order. For lower levels, the split is the same as for Euclidean quadtrees so we visit the children in the same order as the Z-order. For two points \(p,p^{\prime}\in\mathbb{H}^{d}\), we can check which comes first in the L-order for \(\mathcal{Q}^{d}_{\infty}\) by using \(\mathcal{O}(d)\) floor, logarithm, bitwise logical and standard arithmetic operations. 
Proof.: To compare \(p\) and \(p^{\prime}\), we first define \(\tilde{\pi}(p)=(\tilde{\pi}_{x}(p),\log\pi_{z}(p))\), then let \((x,z)=\tilde{\pi}(p)\) and \((x^{\prime},z^{\prime})=\tilde{\pi}(p^{\prime})\) for convenience. We then check if both \(\lfloor x/2^{z}\rfloor=\lfloor x^{\prime}/2^{z^{\prime}}\rfloor\) and \(\lfloor z\rfloor=\lfloor z^{\prime}\rfloor\). If that is the case, the points are in the same level-0 cell. Thus, under \(\tilde{\pi}\) they are in a \(d\)-dimensional Euclidean quadtree and we can use the Z-order of \((x,z)\) and \((x^{\prime},z^{\prime})\) to determine which comes first. Otherwise, we need to look at the situation at the highest level \(\ell\) where \(p\) and \(p^{\prime}\) are in different cells. If one of the points is in the top child cell of its parent, that point comes first. If both are in one of the bottom child cells, then we look at which of the cells comes first in the Z-order. We can distinguish these cases by checking if \(p\) and \(p^{\prime}\) are in cells of the same \(z\)-range at level \(\ell\). Let \(L(a,b)=1+\lfloor\log(a\oplus b)\rfloor\) give the smallest index such that all bits of index at least \(L(a,b)\) in the binary representations of \(a\) and \(b\) match (where \(\oplus\) denotes bitwise exclusive-or). At some level \(\ell\), the points \(p\) and \(p^{\prime}\) are in cells of the same \(z\)-range if \(\lfloor z/2^{\ell}\rfloor=\lfloor z^{\prime}/2^{\ell}\rfloor\). In other words, their binary expansions match for the bits of index at least \(\ell\), meaning \(L(z,z^{\prime})\) is the smallest value of \(\ell\) for which this holds. Under the \(\log\pi_{z}\) projection this \(z\)-range starts at the nearest multiple of \(2^{L(z,z^{\prime})}\) below \(z\), which is \(\lfloor z/2^{L(z,z^{\prime})}\rfloor\cdot 2^{L(z,z^{\prime})}\). Without the projection this becomes \(2^{\lfloor z/2^{L(z,z^{\prime})}\rfloor\cdot 2^{L(z,z^{\prime})}}\). We can now look at the cells with this \(z\)-range. As noted in Section 3.2, under \(\tilde{\pi}_{x}\) these cells form a \((d-1)\)-dimensional Euclidean grid. The side length is \(\sqrt{d-1}\) times the Euclidean width of the original horobox cells, which is by definition their width multiplied by their \(z\)-coordinate and thus \[2^{\lfloor z/2^{L(z,z^{\prime})}\rfloor\cdot 2^{L(z,z^{\prime})}}\cdot\frac{2^{2^{L(z,z^{\prime})}-1}}{\sqrt{d-1}}.\] We can rewrite this as \(\frac{2^{L^{*}}}{\sqrt{d-1}}\) with \(L^{*}=\lfloor z/2^{L(z,z^{\prime})}\rfloor\cdot 2^{L(z,z^{\prime})}+2^{L(z,z^{\prime})}-1\). Thus, \(p\) and \(p^{\prime}\) will be in the same level \(L(z,z^{\prime})\) cell if for all \(i=1,\ldots,d-1\) we have \(\lfloor x_{i}/2^{L^{*}}\rfloor=\lfloor x_{i}^{\prime}/2^{L^{*}}\rfloor\) or equivalently \(L(x_{i},x_{i}^{\prime})\leq L^{*}\). This check lets us know if \(p\) and \(p^{\prime}\) are already in the same cell at level \(L(z,z^{\prime})\). If they are, then they were first split with a horosphere normal to the \(z\) axis. We can thus determine their L-order by simply checking if \(z<z^{\prime}\). Otherwise, they were split first with some hyperplane normal to some \(x_{i}\)-axis for some \(i\in\{1,\ldots,d-1\}\), thus we can find which of \(p\) or \(p^{\prime}\) comes first by determining the \((d-1)\)-dimensional Euclidean Z-order of \(x\) and \(x^{\prime}\). ## 4 Applications Theorem 1 and Lemma 11 are equivalent to statements about the Euclidean quadtrees that Chan _et al._[6] use to find an approximate nearest neighbour and closest pair, so we can do the same in hyperbolic space. 
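The case analysis above bottoms out in two integer primitives: the Euclidean Z-order comparison and the index function \(L\). Below is a minimal Python sketch of these primitives for non-negative integer (fixed-point) coordinates; the hyperbolic wrapper that computes \(L^{*}\) and dispatches between the \(z\)-comparison and the two Z-order cases follows the steps of the proof and is omitted here. The function names are ours, not the paper's.

```python
def less_msb(x: int, y: int) -> bool:
    # True if the most significant set bit of x is strictly below that of y.
    return x < y and x < (x ^ y)

def z_order_less(u, v) -> bool:
    """Euclidean Z-order on tuples of non-negative integers: the dimension
    whose coordinates differ in the highest bit decides the comparison."""
    k, diff = 0, 0
    for i, (a, b) in enumerate(zip(u, v)):
        if less_msb(diff, a ^ b):
            k, diff = i, a ^ b
    return u[k] < v[k]

def L(a: int, b: int) -> int:
    # 1 + floor(log2(a XOR b)): all bits of index >= L(a, b) of a and b match.
    return (a ^ b).bit_length()
```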
For both problems the algorithm is similar. Given a point set \(P\), we first make \(3d+3\) self-balancing binary search trees (e.g. red-black trees [7]) where each sorts the points based on the L-order from one of the infinite quadtrees from Theorem 1. By Lemma 11 this takes \(\mathcal{O}\big{(}d^{2}n\log n\big{)}\) time. Notice that we can add or remove a point in all of these trees in \(\mathcal{O}\big{(}d^{2}\log n\big{)}\) time. To get the nearest neighbour of some point \(p\), we determine where it would end up in the L-order for each of the trees, then return \(q\), the closest of the neighbouring points found this way. This takes \(\mathcal{O}\big{(}d^{2}\log n\big{)}\) time and gives an \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\)-approximate nearest neighbour: the actual nearest neighbour \(q^{\prime}\) of \(p\) will be in the smallest cell \(p\) is in. Because the returned point \(q\) was at least as close to \(p\) in the L-order as \(q^{\prime}\), it must also be in that cell. By Theorem 1 this means it can be at most \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\) times further away, making it an \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\)-approximate nearest neighbour. The same reasoning can be used to get an \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\)-approximate closest pair. To find an approximate closest pair we go through the sorted lists and return the pair of neighbouring points that is closest. This takes \(\mathcal{O}\big{(}d^{2}n\big{)}\) time. The above reasoning yields the following theorem. Let \(P\subset\mathbb{H}^{d}\) be a given set of \(n\) points. * We can find an \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\)-approximate closest pair of \(P\) in \(\mathcal{O}\big{(}d^{2}n\log n\big{)}\) time. * We can construct a data structure in \(\mathcal{O}\big{(}d^{2}n\log n\big{)}\) time that uses \(\mathcal{O}\big{(}d^{2}n\big{)}\) space and can answer queries for an \(\mathcal{O}\Big{(}d\sqrt{d}\Big{)}\)-approximate nearest neighbour in \(P\) in \(\mathcal{O}\big{(}d^{2}\log n\big{)}\) time, and perform updates (point insertions and removals) in \(\mathcal{O}\big{(}d^{2}\log n\big{)}\) time.
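As a rough illustration of these algorithms, the following Python sketch maintains one sorted list per shifted quadtree. It assumes a comparator `l_order_cmp(i, p, q)` realising the L-order of Lemma 11 in the \(i\)-th quadtree and a hyperbolic distance function `dist`; both are placeholders for the routines described above. Plain sorted lists make insertion \(\mathcal{O}(n)\); the self-balancing search trees from the text restore the stated \(\mathcal{O}\big{(}d^{2}\log n\big{)}\) update time. (Requires Python 3.10+ for the `key` argument of `bisect`.)

```python
import functools
from bisect import insort, bisect_left

def l_order_cmp(i, p, q):
    # Placeholder: L-order comparison of Lemma 11 in the i-th shifted quadtree.
    raise NotImplementedError

class ApproxNN:
    """Sketch of the O(d*sqrt(d))-approximate nearest-neighbour structure."""

    def __init__(self, num_trees, dist):
        self.dist = dist  # hyperbolic distance on H^d
        self.lists = [[] for _ in range(num_trees)]
        self.keys = [functools.cmp_to_key(functools.partial(l_order_cmp, i))
                     for i in range(num_trees)]

    def insert(self, p):
        for lst, key in zip(self.lists, self.keys):
            insort(lst, p, key=key)

    def query(self, p):
        best = None
        for lst, key in zip(self.lists, self.keys):
            j = bisect_left(lst, key(p), key=key)
            for q in lst[max(0, j - 1):j + 1]:  # L-order neighbours of p
                if best is None or self.dist(p, q) < self.dist(p, best):
                    best = q
        return best
```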
2302.05197
On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
In this work we consider stochastic gradient descent (SGD) for solving linear inverse problems in Banach spaces. SGD and its variants have been established as one of the most successful optimisation methods in machine learning, imaging and signal processing, etc. At each iteration SGD uses a single datum, or a small subset of data, resulting in highly scalable methods that are very attractive for large-scale inverse problems. Nonetheless, the theoretical analysis of SGD-based approaches for inverse problems has thus far been largely limited to Euclidean and Hilbert spaces. In this work we present a novel convergence analysis of SGD for linear inverse problems in general Banach spaces: we show the almost sure convergence of the iterates to the minimum norm solution and establish the regularising property for suitable a priori stopping criteria. Numerical results are also presented to illustrate features of the approach.
Z. Kereta, B. Jin
2023-02-10T12:00:49Z
http://arxiv.org/abs/2302.05197v1
# On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces ###### Abstract In this work we consider stochastic gradient descent (SGD) for solving linear inverse problems in Banach spaces. SGD and its variants have been established as one of the most successful optimisation methods in machine learning, imaging and signal processing, etc. At each iteration SGD uses a single datum, or a small subset of data, resulting in highly scalable methods that are very attractive for large-scale inverse problems. Nonetheless, the theoretical analysis of SGD-based approaches for inverse problems has thus far been largely limited to Euclidean and Hilbert spaces. In this work we present a novel convergence analysis of SGD for linear inverse problems in general Banach spaces: we show the almost sure convergence of the iterates to the minimum norm solution and establish the regularising property for suitable _a priori_ stopping criteria. Numerical results are also presented to illustrate features of the approach. ## 1 Introduction This work considers (stochastic) iterative solutions for linear operator equations of the form \[\mathbf{A}\mathbf{x}=\mathbf{y} \tag{1}\] where \(\mathbf{A}:\mathcal{X}\rightarrow\mathcal{Y}\) is a bounded linear operator between Banach spaces \(\mathcal{X}\) and \(\mathcal{Y}\) (equipped with the norms \(\|\cdot\|_{\mathcal{X}}\) and \(\|\cdot\|_{\mathcal{Y}}\), respectively), and \(\mathbf{y}\in\operatorname{range}\left(\mathbf{A}\right)\) is the exact data. In practice, we only have access to noisy data \(\mathbf{y}^{\delta}=\mathbf{y}+\boldsymbol{\xi}\), where \(\boldsymbol{\xi}\) denotes the measurement noise with a noise level \(\delta\geq 0\) such that \(\|\mathbf{y}^{\delta}-\mathbf{y}\|_{\mathcal{Y}}\leq\delta\). Linear inverse problems arise naturally in many applications in science and engineering, and also form the basis for studying nonlinear inverse problems. Hence, design and analysis of stable reconstruction methods for linear inverse problems have received much attention. Iterative regularisation is a powerful algorithmic paradigm that has been successfully employed for many inverse problems [14, Chapters 6 and 7][30]. Classical iterative methods for inverse problems include (accelerated) Landweber method, conjugate gradient method, Levenberg-Marquardt method, and Gauss-Newton method, to name a few. The per-iteration computational bottleneck of many iterative methods lies in utilising all the data at each iteration, which can be of a prohibitively large size. For example, this occurs while computing the derivative of an objective. One promising strategy to overcome this challenge is stochastic gradient descent (SGD), due to Robbins and Monro [40]. SGD decomposes the original problem into (finitely many) sub-problems, and then at each iteration uses only a single datum, or a mini-batch of data, typically selected uniformly at random. This greatly reduces the computational complexity per-iteration, and enjoys excellent scalability with respect to data size. In the standard, and best studied setting, \(\mathcal{X}\) and \(\mathcal{Y}\) are finite dimensional Euclidean spaces and the corresponding data fitting objective is the (rescaled) least squares \(\Psi(\mathbf{x})=\frac{1}{2N}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{2}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{2}\). 
In this setting SGD takes the form \[\mathbf{x}_{k+1}=\mathbf{x}_{k}-\mu_{k+1}\mathbf{A}_{i_{k+1}}^{*}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}),\quad k=0,1,\ldots,\] where \(\mu_{k}>0\) is the step-size, \(i_{k+1}\) a randomly selected index, \(\mathbf{A}_{i}\) the \(i\)-th row of a matrix \(\mathbf{A}\), and \(\mathbf{y}_{i}\) the \(i\)-th entry of \(\mathbf{y}\). In the seminal work [40], Robbins and Monro presented SGD as a Markov chain, laying the groundwork for the field of stochastic approximation [32]. SGD has since had a major impact on statistical inference and machine learning, especially for the training of neural networks. SGD has been extensively studied in the Euclidean setting; see [2] for an overview of the convergence theory from the viewpoint of optimisation. SGD has also been a popular method for image reconstruction, especially in medical imaging. For example, the (randomised) Kaczmarz method is a reweighted version of SGD that has been extensively used in computed tomography [18, 37]. Other applications of SGD and its variants include optical tomography [6], phonon transmission coefficient recovery [15], positron emission tomography [31], as well as general sparse recovery [42, 43]. For linear inverse problems in Euclidean spaces, Jin and Lu [24] gave a first proof of convergence of SGD iterates towards the minimum norm solution, and analysed the regularising behaviour in the presence of noise; see [22, 25, 26, 34, 39] for further convergence results, _a posteriori_ stopping rules (discrepancy principle), nonlinear problems, and general step-size schedules, etc. Iterative methods in Euclidean and Hilbert spaces are effective for reconstructing smooth solutions but fail to capture special features of the solutions, such as sparsity and piecewise constancy. In practice, many imaging inverse problems are more adequately described in non-Hilbert settings, including sequence spaces \(\ell^{p}(\mathbb{R})\) and Lebesgue spaces \(\mathcal{L}^{p}(\Omega)\), with \(p\in[1,\infty]\setminus\{2\}\), which requires changing either the solution space, the data space, or both. For example, inverse problems with impulse noise are better modelled by setting the data space \(\mathcal{Y}\) to a Lebesgue space \(\mathcal{L}^{p}(\Omega)\) with \(p\approx 1\)[11], whereas the recovery of sparse solutions is modelled by doing the same to the solution space \(\mathcal{X}\)[4]. Thus, it is of great importance to develop and analyse algorithms for inverse problems in Banach spaces, and this has received much attention [41, 46]. For the Landweber method for linear inverse problems in Banach spaces, Schopfer et al [44] were the first to prove strong convergence of the iterates under a suitable step-size schedule for a smooth and uniformly convex Banach space \(\mathcal{X}\) and an arbitrary Banach space \(\mathcal{Y}\). This has since been extended and refined in various aspects, e.g. regarding acceleration [45, 17, 49, 51], nonlinear forward models [12, 35], and Gauss-Newton methods [29]. In this work, we investigate SGD for inverse problems in Banach spaces, which has thus far lagged behind due to outstanding challenges in extending the analysis of standard Hilbert space approaches to the Banach space setting. The main challenges in analysing SGD-like gradient-based methods in Banach spaces are two-fold: 1. The use of duality maps results in non-linear update rules, which greatly complicates the convergence analysis. 
For example, the (expected) difference between successive updates can no longer be identified as the (sub-)gradient of the objective. 2. Due to geometric characteristics of Banach spaces, it is more common to use the Bregman distance for the convergence analysis, which results in the loss of useful algebraic tools, e.g. the triangle inequality and the bias-variance decomposition, that are typically needed for the analysis. In this work, we develop an SGD approach for the numerical solution of linear inverse problems in Banach spaces, using the sub-gradient approach based on duality maps, and present a novel convergence analysis. We first consider the case of exact data, and show that SGD iterates converge to a minimising solution (first almost surely and then in expectation) under standard assumptions on summability of step-sizes, and geometric properties of the space \(\mathcal{X}\), cf. Theorems 3.8 and 3.10. This solution is identified as the minimum norm solution if the initial guess \(\mathbf{x}_{0}\) satisfies the range condition \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\text{range}(\mathbf{A}^{*})}\). Further, we give a convergence rate in Theorem 3.14 when the forward operator \(\mathbf{A}\) satisfies a conditional stability estimate. In case of noisy observations, we show the regularising property of SGD, for properly chosen stopping indices, cf. Theorem 4.3. The analysis rests on a descent property in Lemma 3.6 and the Robbins-Siegmund theorem for almost super-martingales. In addition, we perform extensive numerical experiments on a model inverse problem (linear integral equation) and computed tomography (with parallel beam geometry) to illustrate distinct features of the proposed Banach space SGD, and examine the influence of various factors, such as the choice of the spaces \(\mathcal{X}\) and \(\mathcal{Y}\), mini-batch size and noise characteristics (Gaussian or impulse). When finalising the paper, we became aware of the independent and simultaneous work [27] on a stochastic mirror descent method for linear inverse problems between a Banach space \(\mathcal{X}\) and a Hilbert space \(\mathcal{Y}\). The method is a randomised version of the well-known Landweber-Kaczmarz method. The authors prove convergence results under _a priori_ stopping rules, and also establish an order-optimal convergence rate when the exact solution \(\mathbf{x}^{\dagger}\) satisfies a benchmark source condition, by interpreting the method as a randomised block gradient method applied to the dual problem. Thus, the current work differs significantly from [27] in terms of problem setting, main results and analysis techniques. The rest of the paper is organised as follows. In Section 2, we recall background materials on the geometry of Banach spaces, e.g. duality maps and Bregman distance. In Section 3, we present the convergence of SGD for exact data and in Section 4, we discuss the regularising property of SGD for noisy observations. Finally, in Section 5, we provide some experimental results on a model inverse problem and computed tomography. In the Appendix we collect several useful inequalities and auxiliary estimates. Throughout, let \(\mathcal{X}\) and \(\mathcal{Y}\) be two real Banach spaces, with their norms denoted by \(\|\cdot\|_{\mathcal{X}}\) and \(\|\cdot\|_{\mathcal{Y}}\), respectively. \(\mathcal{X}^{*}\) and \(\mathcal{Y}^{*}\) are their respective dual spaces, with their norms denoted by \(\|\cdot\|_{\mathcal{X}^{*}}\) and \(\|\cdot\|_{\mathcal{Y}^{*}}\), respectively. 
For \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{x}^{*}\in\mathcal{X}^{*}\), we denote the corresponding duality pairing by \(\langle\mathbf{x}^{*},\mathbf{x}\rangle=\langle\mathbf{x}^{*},\mathbf{x}\rangle_{\mathcal{X}^{*}\times\mathcal{X}}=\mathbf{x}^{*}(\mathbf{x})\). For a continuous linear operator \(\mathbf{A}:\mathcal{X}\rightarrow\mathcal{Y}\), we use \(\|\mathbf{A}\|_{\mathcal{X}\rightarrow\mathcal{Y}}\) to denote the operator norm (often with the subscript omitted). The adjoint of \(\mathbf{A}\) is denoted as \(\mathbf{A}^{*}:\mathcal{Y}^{*}\rightarrow\mathcal{X}^{*}\) and it is a continuous linear operator, with \(\|\mathbf{A}\|_{\mathcal{X}\rightarrow\mathcal{Y}}=\|\mathbf{A}^{*}\|_{\mathcal{Y}^{*}\rightarrow\mathcal{X}^{*}}\). The conjugate exponent of \(p\in(1,\infty)\) is denoted by \(p^{*}\), such that \(1/p+1/p^{*}=1\) holds. The Cauchy-Schwarz inequality of the following form holds for any \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{x}^{*}\in\mathcal{X}^{*}\) \[|\,\langle\mathbf{x}^{*},\mathbf{x}\rangle\,|\leq\|\mathbf{x}^{*}\|_{\mathcal{X}^{*}}\|\mathbf{x}\|_{\mathcal{X}}. \tag{2}\] For reals \(a,b\) we write \(a\wedge b=\min\{a,b\}\) and \(a\lor b=\max\{a,b\}\). By \((\mathcal{F}_{k})_{k\in\mathbb{N}}\), we denote the natural filtration, i.e. a growing sequence of \(\sigma\)-algebras such that \(\mathcal{F}_{k}\subset\mathcal{F}_{k+1}\subset\mathcal{F}\), for all \(k\in\mathbb{N}\) and a \(\sigma\)-algebra \(\mathcal{F}\), and \(\mathcal{F}_{k}\) is generated by the random indices \(i_{j}\), for \(j\leq k\). In the context of SGD, \(k\in\mathbb{N}\) is the iteration number and \(\mathcal{F}_{k}\) denotes the iteration history, that is, information available at time \(k\), and for a given initialisation \(\mathbf{x}_{0}\), we can identify \(\mathcal{F}_{k}=\sigma(\mathbf{x}_{1},\ldots,\mathbf{x}_{k})\). For a filtration \((\mathcal{F}_{k})_{k\in\mathbb{N}}\) we denote by \(\mathbb{E}_{k}[\cdot]=\mathbb{E}[\cdot\mid\mathbf{x}_{1},\ldots,\mathbf{x}_{k}]\) the conditional expectation with respect to \(\mathcal{F}_{k}\). A sequence of random variables \((x_{k})_{k\in\mathbb{N}}\) (adapted to the filtration \((\mathcal{F}_{k})_{k\in\mathbb{N}}\)) is called a super-martingale if \(\mathbb{E}_{k}[x_{k+1}]\leq x_{k}.\) Throughout, the notation a.s. denotes almost sure events. ## 2 Preliminaries on Banach spaces In this section we recall relevant concepts from Banach space theory and the geometry of Banach spaces. ### Duality map In a Hilbert space \(\mathcal{H}\), for every \(\mathbf{x}\in\mathcal{H}\), there exists a unique \(\mathbf{x}^{*}\in\mathcal{H}^{*}\) such that \(\langle\mathbf{x}^{*},\mathbf{x}\rangle=\|\mathbf{x}\|_{\mathcal{H}}\|\mathbf{x}^{*}\|_{\mathcal{H}^{*}}\) and \(\|\mathbf{x}^{*}\|_{\mathcal{H}^{*}}=\|\mathbf{x}\|_{\mathcal{H}}\), by the Riesz representation theorem. For Banach spaces, however, such an \(\mathbf{x}^{*}\) is not necessarily unique, motivating the notion of duality maps. **Definition 2.1** (Duality map).: _For any \(p>1\), a duality map \(\mathcal{J}_{p}^{\mathcal{X}}:\mathcal{X}\to 2^{\mathcal{X}^{*}}\) is the sub-differential of the (convex) functional \(\frac{1}{p}\|\mathbf{x}\|_{\mathcal{X}}^{p}\)_ \[\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x})=\left\{\mathbf{x}^{*}\in\mathcal{X}^{*}:\langle\mathbf{x}^{*},\mathbf{x}\rangle=\|\mathbf{x}\|_{\mathcal{X}}\|\mathbf{x}^{*}\|_{\mathcal{X}^{*}},\text{ and }\|\mathbf{x}\|_{\mathcal{X}}^{p-1}=\|\mathbf{x}^{*}\|_{\mathcal{X}^{*}}\right\}, \tag{3}\] _with gauge function \(t\mapsto t^{p-1}\). 
A single-valued selection of \(\mathcal{J}_{p}^{\mathcal{X}}\) is denoted by \(j_{p}^{\mathcal{X}}\)._ In practice, the choice of the power parameter \(p\) depends on geometric properties of the space \(\mathcal{X}\). For single-valued duality maps, we use \(\mathcal{J}_{p}^{\mathcal{X}}\) and \(j_{p}^{\mathcal{X}}\) interchangeably. Next we recall standard notions of smoothness and convexity of Banach spaces. For an overview of Banach space geometry, we refer an interested reader to the monographs [9, 10, 46]. **Definition 2.2**.: _Let \(\mathcal{X}\) be a Banach space. \(\mathcal{X}\) is said to be reflexive if the canonical map \(\mathbf{x}\mapsto\widehat{\mathbf{x}}\) between \(\mathcal{X}\) and the bidual \(\mathcal{X}^{**}\), defined by \(\widehat{\mathbf{x}}(\mathbf{x}^{*})=\mathbf{x}^{*}(\mathbf{x})\), is surjective. \(\mathcal{X}\) is smooth if for every \(0\neq\mathbf{x}\in\mathcal{X}\) there is a unique \(\mathbf{x}^{*}\in\mathcal{X}^{*}\) such that \(\langle\mathbf{x}^{*},\mathbf{x}\rangle=\|\mathbf{x}\|_{\mathcal{X}}\) and \(\|\mathbf{x}^{*}\|_{\mathcal{X}^{*}}=1\). The function \(\delta_{\mathcal{X}}:(0,2]\rightarrow\mathbb{R}\) defined as_ \[\delta_{\mathcal{X}}(\tau)=\inf\left\{1-\tfrac{1}{2}\|\mathbf{z}+\mathbf{w}\|_{ \mathcal{X}}:\|\mathbf{z}\|_{\mathcal{X}}=\|\mathbf{w}\|_{\mathcal{X}}=1,\| \mathbf{z}-\mathbf{w}\|_{\mathcal{X}}\geq\tau\right\}\] _is the modulus of convexity of \(\mathcal{X}\). \(\mathcal{X}\) is said to be uniformly convex if \(\delta_{\mathcal{X}}(\tau)>0\) for all \(\tau\in(0,2]\), and \(p\)-convex, for \(p>1\), if \(\delta_{\mathcal{X}}(\tau)\geq K_{p}\tau^{p}\) for some \(K_{p}>0\) and all \(\tau\in(0,2]\). The function \(\rho_{\mathcal{X}}:[0,\infty)\to[0,\infty)\) defined as_ \[\rho_{\mathcal{X}}(\tau)=\sup\Big{\{}\frac{\|\mathbf{z}+\tau\mathbf{w}\|_{ \mathcal{X}}+\|\mathbf{z}-\tau\mathbf{w}\|_{\mathcal{X}}}{2}-1:\|\mathbf{z}\| _{\mathcal{X}}=\|\mathbf{w}\|_{\mathcal{X}}=1\Big{\}}\] _is the modulus of smoothness of \(\mathcal{X}\), and is a convex and continuous function such that \(\frac{\rho_{\mathcal{X}}(\tau)}{\tau}\) is a non-decreasing function with \(\rho_{\mathcal{X}}(\tau)\leq\tau\). \(\mathcal{X}\) is said to be uniformly smooth if \(\lim_{\tau\searrow 0}\frac{\rho_{\mathcal{X}}(\tau)}{\tau}=0\), and \(p\)-smooth, for \(p>1\), if \(\rho_{\mathcal{X}}(\tau)\leq K_{p}\tau^{p}\) for some \(K_{p}>0\) and all \(\tau\in(0,\infty)\)._ The following relationships between Banach spaces and duality maps will be used extensively. **Theorem 2.3** ([46, Theorems 2.52 and 2.53, and Lemma 5.16]).: 1. _For every_ \(\mathbf{x}\in\mathcal{X}\)_, the set_ \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x})\) _is non-empty, convex, and weakly-_\(\star\) _closed in_ \(\mathcal{X}^{*}\)_._ 2. \(\mathcal{X}\) _is_ \(p\)_-smooth if and only if_ \(\mathcal{X}^{*}\) _is_ \(p^{*}\)_-convex._ \(\mathcal{X}\) _is_ \(p\)_-convex if and only if_ \(\mathcal{X}^{*}\) _is_ \(p^{*}\)_-smooth._ 3. \(\mathcal{X}\) _is smooth if and only if_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _is single valued. If_ \(\mathcal{X}\) _is convex of power type and smooth, then_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _is invertible and_ \(\big{(}\mathcal{J}_{p}^{\mathcal{X}}\big{)}^{-1}=\mathcal{J}_{p^{*}}^{\mathcal{ X}^{\star}}\)_. If_ \(\mathcal{X}\) _is uniformly smooth and uniformly convex, then_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _and_ \(\mathcal{J}_{p^{*}}^{\mathcal{X}^{\star}}\) _are both uniformly continuous._ 4. 
_Let_ \(\mathcal{X}\) _be a uniformly smooth Banach space with duality map_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _with_ \(p\geq 2\)_. Then, for all_ \(\mathbf{x},\widetilde{\mathbf{x}}\in\mathcal{X}\)_, there holds_ \[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x})-\mathcal{J}_{p}^{\mathcal{X}}(\widetilde{\mathbf{x}})\|_{\mathcal{X}^{*}}^{p^{*}}\leq C\max\{1,\|\mathbf{x}\|_{\mathcal{X}},\|\widetilde{\mathbf{x}}\|_{\mathcal{X}}\}^{p}\,\overline{\rho}_{\mathcal{X}}(\|\mathbf{x}-\widetilde{\mathbf{x}}\|_{\mathcal{X}})^{p^{*}},\] _where_ \(\overline{\rho}_{\mathcal{X}}(\tau)=\rho_{\mathcal{X}}(\tau)/\tau\) _is a modulus of smoothness function such that_ \(\overline{\rho}_{\mathcal{X}}(\tau)\leq 1\)_._ Next we list some common Banach spaces, the corresponding duality maps and convexity and smoothness properties. **Example 2.4**.: 1. _A Hilbert space_ \(\mathcal{X}\) _is_ \(2\)_-smooth and_ \(2\)_-convex, and_ \(\mathcal{J}_{2}^{\mathcal{X}}\) _is the identity._ 2. _If_ \(\mathcal{X}\) _is smooth, then_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _is the Gateaux derivative of the functional_ \(\mathbf{x}\mapsto\frac{1}{p}\|\mathbf{x}\|_{\mathcal{X}}^{p}\)_._ 3. _If_ \(\mathcal{X}=\ell^{r}(\mathbb{R})\) _with_ \(1<r<\infty\)_, then_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _is single-valued, and the duality map is given by_ \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x})=\|\mathbf{x}\|_{r}^{p-r}|\mathbf{x}|^{r-1}\operatorname{sign}(\mathbf{x}).\) _Moreover,_ \(\mathcal{J}_{p}^{\mathcal{X}}=\nabla(\frac{1}{p}\|\cdot\|_{\mathcal{X}}^{p})\) _since_ \(\mathcal{X}\) _is smooth._ 4. _Lebesgue spaces_ \(\mathcal{L}^{p}(\Omega)\)_, Sobolev spaces_ \(W^{s,p}(\Omega)\)_, with_ \(s>0\)_,_ \((\)_for an open bounded domain_ \(\Omega)\)_, and sequence spaces_ \(\ell^{p}(\mathbb{R})\) _are_ \(p\wedge 2\)_-smooth and_ \(p\lor 2\)_-convex, for_ \(1<p<\infty\)_. For_ \(p\in\{1,\infty\}\)_, they are neither smooth nor strictly convex._ ### Bregman distance Due to the geometry of Banach spaces, it is often more convenient to use the Bregman distance than the standard Banach space norm \(\|\cdot\|_{\mathcal{X}}\) in the convergence analysis. **Definition 2.5** (Bregman distance).: _For a smooth Banach space \(\mathcal{X}\), the functional_ \[\mathbf{B}_{p}(\mathbf{z},\mathbf{w})=\frac{1}{p^{*}}\|\mathbf{z}\|_{\mathcal{X}}^{p}+\frac{1}{p}\|\mathbf{w}\|_{\mathcal{X}}^{p}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{z}),\mathbf{w}\right\rangle,\] _is called the Bregman distance, where \(1/p+1/p^{*}=1\)._ Note that the dependence of the Bregman distance \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})\) on the space \(\mathcal{X}\) is omitted, as it is often clear from the context. The Bregman distance does not satisfy the triangle inequality, and is generally non-symmetric. Thus it is not a distance. The next theorem lists useful properties of the Bregman distance, which show the relationship between the geometry of the underlying Banach space and duality maps. **Theorem 2.6** ([46, Theorem 2.60, Lemmas 2.62 and 2.63]).: _The following properties hold._ 1. _If_ \(\mathcal{X}\) _is smooth and reflexive, then_ \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})=\mathbf{B}_{p^{*}}\!\Big{(}\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{w}),\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{z})\!\Big{)}\)_._ 2. 
_Bregman distance satisfies the three-point identity_ \[\mathbf{B}_{p}(\mathbf{z},\mathbf{w})=\mathbf{B}_{p}(\mathbf{z},\mathbf{v})+\mathbf{B}_{p}(\mathbf{v},\mathbf{w})+\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{v})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{z}),\mathbf{w}-\mathbf{v}\right\rangle.\] (4) 3. _If_ \(\mathcal{X}\) _is_ \(p\)_-convex, then it is reflexive,_ \(p\geq 2\) _and there exists_ \(C_{p}>0\) _such that_ \[\mathbf{B}_{p}(\mathbf{z},\mathbf{w})\geq p^{-1}C_{p}\|\mathbf{w}-\mathbf{z}\|_{\mathcal{X}}^{p}.\] (5) 4. _If_ \(\mathcal{X}^{*}\) _is_ \(p^{*}\)_-smooth, then it is reflexive,_ \(p^{*}\leq 2\) _and there exists_ \(G_{p^{*}}>0\) _such that_ \[\mathbf{B}_{p^{*}}\big{(}\mathbf{z}^{*},\mathbf{w}^{*}\big{)}\leq(p^{*})^{-1}G_{p^{*}}\|\mathbf{w}^{*}-\mathbf{z}^{*}\|_{\mathcal{X}^{*}}^{p^{*}}.\] (6) 5. \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})\geq 0\)_, and if_ \(\mathcal{X}\) _is uniformly convex,_ \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})=0\) _if and only if_ \(\mathbf{z}=\mathbf{w}\)_._ 6. \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})\) _is continuous in the second argument. If_ \(\mathcal{X}\) _is smooth and uniformly convex, then_ \(\mathcal{J}_{p}^{\mathcal{X}}\) _is continuous on bounded subsets and_ \(\mathbf{B}_{p}(\mathbf{z},\mathbf{w})\) _is continuous in its first argument._ ## 3 Convergence analysis for exact data Now we develop an SGD type approach for problem (1) and analyse its convergence. Throughout, we make the following assumption on the Banach spaces \(\mathcal{X}\) and \(\mathcal{Y}\), unless indicated otherwise. **Assumption 3.1**.: _The Banach space \(\mathcal{X}\) is \(p\)-convex and smooth, and \(\mathcal{Y}\) is arbitrary._ To recover the solution \(\mathbf{x}^{\dagger}\), we minimise a least-squares type objective \(\operatorname*{argmin}_{\mathbf{x}\in\mathcal{X}}\frac{1}{p}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{p}\), for some \(p>1\). By \(\mathcal{X}_{\min}\), we denote the (non-empty) set of minimisers over \(\mathcal{X}\). Among the elements of \(\mathcal{X}_{\min}\), the regularisation theory focuses on the so-called minimum norm solution. **Definition 3.2**.: _An element_ \(\mathbf{x}^{\dagger}\in\mathcal{X}\) _is called a minimum norm solution (MNS) of (1) if_ \[\mathbf{A}\mathbf{x}^{\dagger}=\mathbf{y}\quad\text{ and }\quad\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}=\inf\{\|\mathbf{x}\|_{\mathcal{X}}:\mathbf{x}\in\mathcal{X},\,\mathbf{A}\mathbf{x}=\mathbf{y}\}.\] The MNS \(\mathbf{x}^{\dagger}\) is not unique for general Banach spaces. The following lemma states sufficient geometric assumptions on \(\mathcal{X}\) for uniqueness. **Lemma 3.3** ([46, Lemma 3.3]).: _Let Assumption 3.1 hold. Then there exists a unique MNS \(\mathbf{x}^{\dagger}\). Furthermore, \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})\in\overline{\operatorname*{range}(\mathbf{A}^{*})}\), for \(1<p<\infty\). If some \(\widehat{\mathbf{x}}\in\mathcal{X}\) satisfies \(\mathcal{J}_{p}^{\mathcal{X}}(\widehat{\mathbf{x}})\in\overline{\operatorname*{range}(\mathbf{A}^{*})}\) and \(\widehat{\mathbf{x}}-\mathbf{x}^{\dagger}\in\operatorname*{null}(\mathbf{A})\), then \(\widehat{\mathbf{x}}=\mathbf{x}^{\dagger}\)._ By Lemma 3.3, the MNS \(\mathbf{x}^{\dagger}\) is unique modulo the null space of \(\mathbf{A}\), under certain smoothness and convexity assumptions on \(\mathcal{X}\). These conditions exclude Lebesgue and sequence spaces \(\mathcal{L}^{1}(\Omega)\) and \(\ell^{1}(\mathbb{R})\), cf. Example 2.4(iv). 
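To make these objects concrete, the following NumPy sketch evaluates the duality map of Example 2.4(iii) and the Bregman distance of Definition 2.5 on \(\mathcal{X}=\ell^{r}(\mathbb{R})\), truncated to finitely many coordinates. It is an illustration of the formulas above under that choice of \(\mathcal{X}\), not code from the paper.

```python
import numpy as np

def duality_map(x, p, r):
    """J_p on l^r (Example 2.4(iii)): J_p(x) = ||x||_r^{p-r} |x|^{r-1} sign(x)."""
    nrm = np.linalg.norm(x, ord=r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (p - r) * np.abs(x) ** (r - 1) * np.sign(x)

def bregman(z, w, p, r):
    """B_p(z, w) of Definition 2.5 on l^r, with 1/p + 1/p* = 1."""
    ps = p / (p - 1.0)  # conjugate exponent p*
    nz, nw = np.linalg.norm(z, ord=r), np.linalg.norm(w, ord=r)
    return nz ** p / ps + nw ** p / p - duality_map(z, p, r) @ w

# Sanity checks: B_p(z, z) = 0 and B_p(z, w) >= 0, cf. Theorem 2.6(v).
rng = np.random.default_rng(0)
z, w = rng.standard_normal(50), rng.standard_normal(50)
p = r = 1.5
assert abs(bregman(z, z, p, r)) < 1e-10
assert bregman(z, w, p, r) >= 0.0
```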
The standard Landweber method [33, 44] constructs an approximation to the MNS \(\mathbf{x}^{\dagger}\) by running the iterations \[\mathbf{x}_{k+1}=\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{A}^{*}\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}\mathbf{x}_{k}-\mathbf{y})\right),\quad k=0,1,\ldots, \tag{7}\] where \(\mu_{k+1}>0\) is the step-size. Asplund's theorem [46, Theorem 2.28] allows characterising the duality map as the sub-differential, \(\mathcal{J}_{p}^{\mathcal{X}}=\partial(\frac{1}{p}\|\cdot\|_{\mathcal{X}}^{p})\) for \(p>1\). This identifies the descent direction \(\mathbf{A}^{*}j_{p}^{\mathcal{Y}}(\mathbf{A}\mathbf{x}_{k}-\mathbf{y})\) as the sub-gradient: \(\mathbf{A}^{*}j_{p}^{\mathcal{Y}}(\mathbf{A}\mathbf{x}_{k}-\mathbf{y})=\partial(\frac{1}{p}\|\mathbf{A}\cdot-\mathbf{y}\|_{\mathcal{Y}}^{p})(\mathbf{x}_{k})\). Note that \(\mathcal{J}_{p}^{\mathcal{X}}\) is single valued by Assumption 3.1 and Theorem 2.3, though \(\mathcal{J}_{p}^{\mathcal{Y}}\) is not. For well selected step-sizes, Landweber iterations (7) converge to an MNS of (1) [44, Theorem 3.3]. The evaluation of the sub-gradient \(\mathbf{A}^{*}j_{p}^{\mathcal{Y}}(\mathbf{A}\mathbf{x}_{k}-\mathbf{y})\) represents the main per-iteration cost of the iteration (7). In this work, we consider the following Kaczmarz type setting: \[\mathbf{A}=\begin{pmatrix}\mathbf{A}_{1}\\ \vdots\\ \mathbf{A}_{N}\end{pmatrix}\quad\text{and}\quad\mathbf{A}\mathbf{x}=\begin{pmatrix}\mathbf{A}_{1}\mathbf{x}\\ \vdots\\ \mathbf{A}_{N}\mathbf{x}\end{pmatrix}=\begin{pmatrix}\mathbf{y}_{1}\\ \vdots\\ \mathbf{y}_{N}\end{pmatrix}, \tag{8}\] where \(\mathbf{A}_{i}:\mathcal{X}\to\mathcal{Y}_{i}\), \(\mathbf{y}_{i}\in\mathcal{Y}_{i}\), for \(i\in[N]=\{1,\ldots,N\}\). Problem (8) is defined on the direct product \((\otimes_{i=1}^{N}\mathcal{Y}_{i},\ell^{r})\), equipped with the \(\ell^{r}\) norm, for \(r\geq 1\) \[\|\mathbf{y}\|_{\mathcal{Y}}:=\|(\mathbf{y}_{1},\ldots,\mathbf{y}_{N})\|_{\mathcal{Y}}=\|(\|\mathbf{y}_{1}\|_{\mathcal{Y}_{1}},\ldots,\|\mathbf{y}_{N}\|_{\mathcal{Y}_{N}})\|_{\ell^{r}}=\Big{(}\sum_{i=1}^{N}\|\mathbf{y}_{i}\|_{\mathcal{Y}_{i}}^{r}\Big{)}^{1/r}. \tag{9}\] Below we identify \(\mathcal{Y}_{i}=\mathcal{Y}\) for notational brevity, and use \(\|\cdot\|_{\mathcal{Y}}\) to denote both the norm of the direct product space and the component spaces, though all the relevant proofs and concepts easily extend to the general case. Then the objective \(\Psi(\mathbf{x})\) is given by \[\Psi(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\Psi_{i}(\mathbf{x}),\quad\text{with }\Psi_{i}(\mathbf{x})=\frac{1}{p}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}.\] Note that for many common imaging problems we use \(\mathcal{Y}=\ell^{p}(\mathbb{R})\), which then naturally gives \(\Psi(\mathbf{x})=\frac{1}{pN}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{p}\). 
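For concreteness, here is a minimal NumPy sketch of iteration (7) with \(\mathcal{X}=\ell^{r_{X}}\) and \(\mathcal{Y}=\ell^{r_{Y}}\) (both with exponents in \((1,\infty)\)), reusing `duality_map` from the previous sketch; a constant step-size is used for simplicity, whereas the convergence theory requires the step-size schedules discussed below. The SGD iteration (10) introduced next differs only in replacing the full residual by a randomly sampled block.

```python
import numpy as np

def landweber_banach(A, y, p, rX, rY, mu, iters, x0=None):
    """Sketch of the dual Landweber iteration (7) for X = l^{rX}, Y = l^{rY}:
    the update acts on the dual iterate J_p^X(x_k) and is mapped back to X."""
    ps = p / (p - 1.0)     # conjugate exponent p*
    rXs = rX / (rX - 1.0)  # conjugate exponent of rX; X* = l^{rX*}
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    xs = duality_map(x, p, rX)  # dual iterate J_p^X(x_k)
    for _ in range(iters):
        residual = A @ x - y
        xs = xs - mu * (A.T @ duality_map(residual, p, rY))
        x = duality_map(xs, ps, rXs)  # (J_p^X)^{-1} = J_{p*}^{X*}, Theorem 2.3(iii)
    return x
```

For \(p=r_{X}=r_{Y}=2\) all duality maps reduce to the identity and the sketch becomes the classical Hilbert space Landweber iteration.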
To reduce the computational cost per-iteration, we exploit the finite-sum structure of the objective \(\Psi(\mathbf{x})\) and adopt SGD iterations of the form \[\mathbf{x}_{k+1}=\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\right), \tag{10}\] where \(\mathbf{g}_{k+1}=g(\mathbf{x}_{k},\mathbf{y},i_{k+1})\) is the stochastic update direction given by \[g(\mathbf{x},\mathbf{y},i)=\mathbf{A}_{i}^{*}j_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})=\partial\Big{(}\tfrac{1}{p}\|\mathbf{A}_{i}\cdot-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}\Big{)}(\mathbf{x}), \tag{11}\] and the random index \(i_{k}\) is sampled uniformly over the index set \([N]\), independent of \(\mathbf{x}_{k}\). Clearly, it is an unbiased estimator of the sub-gradient \(\partial\Psi(\mathbf{x})\), i.e. \(\mathbb{E}[g(\mathbf{x},\mathbf{y},i)]=\partial\Psi(\mathbf{x})\), and the per-iteration cost is reduced by a factor of \(N\). **Remark 3.4**.: _In the model (8), if \(\mathcal{Y}\) admits a complemented sum \(\mathcal{Y}=\sum_{i=1}^{N}\mathcal{Y}_{i}\), we can take the \((\)internal\()\) direct sum \((\oplus_{i=1}^{N}\mathcal{Y}_{i},\ell^{r})\), so that \(\mathbf{y}=\mathbf{y}_{1}+\ldots+\mathbf{y}_{N}\) and the corresponding norm \(\|\mathbf{y}\|=\|(\|\mathrm{Proj}_{\mathcal{Y}_{1}}(\mathbf{y})\|_{\mathcal{Y}_{1}},\ldots,\|\mathrm{Proj}_{\mathcal{Y}_{N}}(\mathbf{y})\|_{\mathcal{Y}_{N}})\|_{r}\). With this identification the spaces \((\otimes_{i=1}^{N}\mathcal{Y}_{i},\ell^{r})\) and \((\oplus_{i=1}^{N}\mathcal{Y}_{i},\ell^{r})\) are isometrically isomorphic [48] and the norms are equivalent for all \(r\geq 1\)._ We now collect some useful properties about the objective \(\Psi\) and the Bregman divergence. Throughout, \(L_{\max}=\max_{i\in[N]}\|\mathbf{A}_{i}\|\). Note that \(C_{N}=1/N\) if \(\mathcal{Y}=\mathcal{L}^{p}(\Omega)\). **Lemma 3.5**.: _For all \(i\in[N]\), \(\mathbf{x}\in\mathcal{X}\), and any \(\widehat{\mathbf{x}}\in\mathcal{X}_{\min}\) (such that \(\mathbf{A}\widehat{\mathbf{x}}=\mathbf{y}\)), we have_ \[\langle\partial\Psi_{i}(\mathbf{x}),\mathbf{x}-\widehat{\mathbf{x}}\rangle=p\Psi_{i}(\mathbf{x})\quad\text{and}\quad\langle\partial\Psi(\mathbf{x}),\mathbf{x}-\widehat{\mathbf{x}}\rangle=p\Psi(\mathbf{x}). \tag{12}\] _Moreover, \(\Psi_{i}(\mathbf{x})\leq\frac{\|\mathbf{A}_{i}\|^{p}}{C_{p}}\mathbf{B}_{p}(\mathbf{x},\widehat{\mathbf{x}})\), \(\Psi(\mathbf{x})\leq\frac{L_{\max}^{p}}{C_{p}}\mathbf{B}_{p}(\mathbf{x},\widehat{\mathbf{x}})\), and for some \(C_{N}>0\) we have \(\Psi(\mathbf{x})\geq\frac{C_{N}}{p}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{p}\)._ Proof.: It follows from the identity \(\mathbf{A}\widehat{\mathbf{x}}=\mathbf{y}\) that \[\left\langle\partial\Psi_{i}(\mathbf{x}),\mathbf{x}-\widehat{\mathbf{x}}\right\rangle=\left\langle\mathbf{A}_{i}^{*}J_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}),\mathbf{x}-\widehat{\mathbf{x}}\right\rangle=\left\langle J_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}),\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\right\rangle=p\Psi_{i}(\mathbf{x}).\] Since \(\partial\Psi(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\partial\Psi_{i}(\mathbf{x})\), the second identity in (12) follows from the linearity of the dual product. 
By the \(p\)-convexity of the space \(\mathcal{X}\) and Theorem 2.6(iii), we get \[\Psi_{i}(\mathbf{x})=\frac{1}{p}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}=\frac{1}{p}\|\mathbf{A}_{i}(\mathbf{x}-\widehat{\mathbf{x}})\|_{\mathcal{Y}}^{p}\leq\frac{\|\mathbf{A}_{i}\|^{p}}{p}\|\mathbf{x}-\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}\leq\frac{\|\mathbf{A}_{i}\|^{p}}{C_{p}}\mathbf{B}_{p}(\mathbf{x},\widehat{\mathbf{x}}).\] The second claim follows since \(\Psi(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\Psi_{i}(\mathbf{x})\). Lastly, by the norm equivalence (9) for \(1<r<\infty\), there exists \(C_{N}>0\) such that \[\Psi(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{p}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}\geq\frac{C_{N}}{p}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{p}.\] We now focus on the convergence study of the iterations (10), without and with noise in the data, and discuss convergence rates under conditional stability. ### Convergence for the Kaczmarz model Below the notation \(\mathbb{E}[\cdot]\) denotes taking expectation with respect to the sampling of the random indices \(i_{k}\) and \(\mathbb{E}_{k}[\cdot]\) denotes taking conditional expectation with respect to \(\mathcal{F}_{k}\). The remaining variables, e.g. \(\mathbf{x}\) and \(\mathbf{y}\), are measurable with respect to the underlying probability measure. To study the convergence of SGD (10), we first establish a descent property in terms of the Bregman distance. **Lemma 3.6**.: _Let Assumption 3.1 hold. For any \(\widehat{\mathbf{x}}\in\mathcal{X}\), the iterates in (10) satisfy_ \[\mathbf{B}_{p}(\mathbf{x}_{k+1},\widehat{\mathbf{x}})\leq\mathbf{B}_{p}(\mathbf{x}_{k},\widehat{\mathbf{x}})-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\widehat{\mathbf{x}}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}. \tag{13}\] Proof.: Let \(\Delta_{k}:=\mathbf{B}_{p}(\mathbf{x}_{k},\widehat{\mathbf{x}})\). By Definition 2.5 and expression (10), we have \[\Delta_{k+1} =\frac{1}{p}\|\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}+\frac{1}{p^{*}}\|\mathbf{x}_{k+1}\|_{\mathcal{X}}^{p}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k+1}),\widehat{\mathbf{x}}\right\rangle\] \[\quad=\frac{1}{p}\|\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}+\frac{1}{p^{*}}\|\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\right)\|_{\mathcal{X}}^{p}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k+1}),\widehat{\mathbf{x}}\right\rangle.\] Using Definition 2.1, the identity \(p(p^{*}-1)=p^{*}\) and Theorem 2.3(iii), we deduce \[\Delta_{k+1} =\frac{1}{p}\|\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}+\frac{1}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p(p^{*}-1)}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1},\widehat{\mathbf{x}}\right\rangle\] \[=\frac{1}{p}\|\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}+\frac{1}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1},\widehat{\mathbf{x}}\right\rangle.\] Since \(\mathcal{X}\) is \(p\)-convex, \(\mathcal{X}^{*}\) is \(p^{*}\)-smooth, cf. Theorem 2.3(ii). 
By [9, Corollary 5.8], this implies \[\frac{1}{p^{*}}\|\mathbf{x}^{*}-\tilde{\mathbf{x}}^{*}\|_{\mathcal{X}^{*}}^{p^{*}}\leq\frac{1}{p^{*}}\|\mathbf{x}^{*}\|_{\mathcal{X}^{*}}^{p^{*}}+\frac{G_{p^{*}}}{p^{*}}\|\tilde{\mathbf{x}}^{*}\|_{\mathcal{X}^{*}}^{p^{*}}-\left\langle\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}(\mathbf{x}^{*}),\tilde{\mathbf{x}}^{*}\right\rangle,\quad\forall\mathbf{x}^{*},\tilde{\mathbf{x}}^{*}\in\mathcal{X}^{*}.\] Using the identities \(p^{*}(p-1)=p\) and \((\mathcal{J}_{p}^{\mathcal{X}})^{-1}=\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\), cf. Theorem 2.3(iii), we get \[\frac{1}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}} \leq\frac{1}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}^{p^{*}}+\frac{G_{p^{*}}}{p^{*}}\|\mu_{k+1}\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}-\left\langle\mu_{k+1}\mathbf{g}_{k+1},\mathbf{x}_{k}\right\rangle\] \[=\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}\right\rangle.\] Combining the preceding estimates gives the desired assertion through \[\Delta_{k+1} \leq\frac{1}{p}\|\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}+\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}-\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}),\widehat{\mathbf{x}}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\widehat{\mathbf{x}}\right\rangle\] \[=\Delta_{k}-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\widehat{\mathbf{x}}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}.\] Lemma 3.6 allows showing that the sequence of Bregman distances \((\mathbf{B}_{p}(\mathbf{x}_{k},\widehat{\mathbf{x}}))_{k\in\mathbb{N}}\) forms an almost super-martingale (in the Robbins-Siegmund sense defined below) for \(\widehat{\mathbf{x}}\in\mathcal{X}_{\min}\) and well-chosen step-sizes \((\mu_{k})_{k\in\mathbb{N}}\). We will show the almost sure convergence of the iterates using Robbins-Siegmund theorem. **Theorem 3.7** (Robbins-Siegmund theorem on the convergence of almost super-martingales, [38, Lemma 11]).: _Consider a filtration \((\mathcal{F}_{k})_{k\in\mathbb{N}}\) and four non-negative, \((\mathcal{F}_{k})_{k\in\mathbb{N}}\) adapted processes \((\alpha_{k})_{k\in\mathbb{N}}\), \((\beta_{k})_{k\in\mathbb{N}}\), \((\gamma_{k})_{k\in\mathbb{N}}\), and \((\delta_{k})_{k\in\mathbb{N}}\). Let the sequence \((\alpha_{k})_{k\in\mathbb{N}}\) be an almost super-martingale, i.e. for all \(k\) we have \(\mathbb{E}_{k}[\alpha_{k+1}]\leq(1+\beta_{k})\alpha_{k}+\gamma_{k}-\delta_{k}\). Then the sequence \((\alpha_{k})_{k\in\mathbb{N}}\) converges a.s. to a random variable \(\alpha_{\infty}\), and \(\sum_{k=1}^{\infty}\delta_{k}<\infty\) a.s. on the set \(\{\sum_{k=1}^{\infty}\beta_{k}<\infty,\)\(\sum_{k=1}^{\infty}\gamma_{k}<\infty\}\)._ Under certain conditions on \(\mathbf{x}_{0}\), the limit is the MNS \(\mathbf{x}^{\dagger}\). Below \(\mathbb{E}_{k}\) denotes the conditional expectation with respect to the filtration \(\mathcal{F}_{k}\). **Theorem 3.8**.: _Let \((\mu_{k})_{k\in\mathbb{N}}\) satisfy \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty,\) Assumption 3.1 hold, and \(\mathbf{x}^{\dagger}\) be the MNS. 
**Theorem 3.8**.: _Let \((\mu_{k})_{k\in\mathbb{N}}\) satisfy \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\), let Assumption 3.1 hold, and let \(\mathbf{x}^{\dagger}\) be the MNS. Then the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) converges a.s. to a solution of (1):_ \[\mathbb{P}\Big(\lim_{k\to\infty}\inf_{\widehat{\mathbf{x}}\in\mathcal{X}_{\min}}\|\mathbf{x}_{k}-\widehat{\mathbf{x}}\|_{\mathcal{X}}=0\Big)=1.\] _Moreover, if \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\), we have \(\lim_{k\to\infty}\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})=0\) a.s._ Proof.: By Lemma 3.5, we have \(\left\langle\partial\Psi(\mathbf{x}_{k}),\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle=p\Psi(\mathbf{x}_{k})\). Moreover, \[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}=\|\mathbf{A}_{i}^{*}\jmath_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})\|_{\mathcal{X}^{*}}\leq\|\mathbf{A}_{i}\|\|\jmath_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})\|_{\mathcal{Y}^{*}}\leq L_{\max}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p-1},\] with \(L_{\max}=\max_{i\in[N]}\|\mathbf{A}_{i}\|\). Thus, since \(p^{*}(p-1)=p\), we have \[\mathbb{E}\big[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}^{p^{*}}\big]\leq pL_{\max}^{p^{*}}\frac{1}{N}\sum_{i=1}^{N}\frac{1}{p}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}=pL_{\max}^{p^{*}}\Psi(\mathbf{x}).\] Upon taking the conditional expectation \(\mathbb{E}_{k}[\cdot]\) of the descent property (13) (with \(\widehat{\mathbf{x}}=\mathbf{x}^{\dagger}\)), and using the measurability of \(\mathbf{x}_{k}\) with respect to \(\mathcal{F}_{k}\), we deduce \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\Delta_{k}-p\mu_{k+1}\Psi(\mathbf{x}_{k})+pL_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\Psi(\mathbf{x}_{k}).\] Using Lemma 3.5 again we have \(\Psi(\mathbf{x}_{k})\leq\frac{L_{\max}^{p}}{C_{p}}\Delta_{k}\), which yields \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\left(1+L_{\max}^{p^{*}+p}\frac{p}{C_{p}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\right)\Delta_{k}-p\mu_{k+1}\Psi(\mathbf{x}_{k}).\] Since \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) by assumption, we can apply Theorem 3.7 and deduce that the sequence \((\Delta_{k})_{k\in\mathbb{N}}\) converges a.s. to a random variable \(\Delta_{\infty}\) and \(\sum_{k=0}^{\infty}\mu_{k+1}\Psi(\mathbf{x}_{k})<\infty\) a.s. Let \(\Omega\) be the measurable set on which \((\Delta_{k})_{k\in\mathbb{N}}\) converges and \(\sum_{k=0}^{\infty}\mu_{k+1}\Psi(\mathbf{x}_{k})<\infty\); then \(\mathbb{P}(\Omega)=1\). Next we show \(\liminf_{k}\Psi(\mathbf{x}_{k})=0\) a.s. Consider an event \(\omega\) on which this is not the case, i.e. where \(\liminf_{k}\Psi(\mathbf{x}_{k})>0\). Then there exist \(\epsilon>0\) and \(k_{\epsilon}\in\mathbb{N}\) such that \(\Psi(\mathbf{x}_{k})\geq\epsilon\) for all \(k\geq k_{\epsilon}\), giving \(\sum_{k\geq k_{\epsilon}}\mu_{k+1}\Psi(\mathbf{x}_{k})\geq\epsilon\sum_{k\geq k_{\epsilon}}\mu_{k+1}\). On \(\Omega\) this leads to a contradiction: the right-hand side diverges (since \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) by assumption), whereas the left-hand side is the tail of a convergent series. We conclude \(\omega\not\in\Omega\). Since \(\mathbb{P}(\Omega^{c})=0\), we have \(\liminf_{k}\Psi(\mathbf{x}_{k})=0\) a.s. For every event in the set where \(\liminf_{k}\Psi(\mathbf{x}_{k})=0\) holds we can then find a sub-sequence \((\mathbf{x}_{n_{k}})_{k\in\mathbb{N}}\) such that \(\lim_{k\to\infty}\Psi(\mathbf{x}_{n_{k}})=0\).
Define also \(\widehat{\Psi}(\mathbf{x})=\sum_{i=1}^{N}\widehat{\Psi}_{i}(\mathbf{x})\), with \(\widehat{\Psi}_{i}(\mathbf{x})=\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}\). We have \(\liminf_{k}\widehat{\Psi}(\mathbf{x}_{k})=0\) and \(\lim_{k\to\infty}\widehat{\Psi}(\mathbf{x}_{n_{k}})=0\) (along the same subsequence), since by the comparison of \(\ell^{p}\)- and \(\ell^{1}\)-norms and Hölder's inequality, \[\Big(\sum_{i=1}^{N}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}\Big)^{1/p}\leq\sum_{i=1}^{N}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}\leq N\Big(\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p}\Big)^{1/p}.\] Moreover, \(\widehat{\Psi}(\mathbf{x})^{p}\leq pN^{p}\Psi(\mathbf{x})\). The following argument is understood pointwise on the a.s. set \(\Omega\) where \((\Delta_{k})_{k\in\mathbb{N}}\) converges, \(\sum_{k=0}^{\infty}\mu_{k+1}\Psi(\mathbf{x}_{k})<\infty\), and \(\liminf_{k}\widehat{\Psi}(\mathbf{x}_{k})=0\). Since \((\Delta_{k})_{k\in\mathbb{N}}\) converges, it is bounded. By the coercivity of the Bregman distance (see Lemma A.3) so are \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) and \((\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}))_{k\in\mathbb{N}}\). By further passing to a subsequence, we can find a subsequence of \((\mathbf{x}_{n_{k}})_{k\in\mathbb{N}}\), denoted again by \((\mathbf{x}_{n_{k}})_{k\in\mathbb{N}}\), such that \((\|\mathbf{x}_{n_{k}}\|_{\mathcal{X}})_{k\in\mathbb{N}}\) is convergent, \((\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n_{k}}))_{k\in\mathbb{N}}\) is weakly convergent, and \[\lim_{k\to\infty}\widehat{\Psi}(\mathbf{x}_{n_{k}})=0\quad\text{and}\quad\widehat{\Psi}(\mathbf{x}_{n_{k}})\leq\widehat{\Psi}(\mathbf{x}_{n})\text{ for all }n<n_{k}. \tag{14}\] The latter can be obtained by setting \(n_{1}=1\), and then recursively defining \(n_{k+1}=\min\{n>n_{k}:\widehat{\Psi}(\mathbf{x}_{n})\leq\widehat{\Psi}(\mathbf{x}_{n_{k}})/2\}\), for \(k\in\mathbb{N}\). Any further subsequence satisfies the same property. Using Theorem 2.6(ii), we have for \(k>\ell\) \[\mathbf{B}_{p}(\mathbf{x}_{n_{\ell}},\mathbf{x}_{n_{k}})=\frac{1}{p^{*}}\Big(\|\mathbf{x}_{n_{\ell}}\|_{\mathcal{X}}^{p}-\|\mathbf{x}_{n_{k}}\|_{\mathcal{X}}^{p}\Big)+\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n_{k}})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n_{\ell}}),\mathbf{x}^{\dagger}\right\rangle+\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n_{k}})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n_{\ell}}),\mathbf{x}_{n_{k}}-\mathbf{x}^{\dagger}\right\rangle.\] Since the first two terms involve convergent (hence Cauchy) sequences, it suffices to treat the last term, denoted by \(\mathrm{I}_{k,\ell}\).
Using a telescoping sum and applying the iterate update rule, we have \[\mathrm{I}_{k,\ell}=\sum_{n=n_{\ell}}^{n_{k}-1}\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n+1})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{n}),\mathbf{x}_{n_{k}}-\mathbf{x}^{\dagger}\right\rangle=-\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\left\langle\mathbf{A}_{i_{n+1}}^{*}\jmath_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{n+1}}\mathbf{x}_{n}-\mathbf{y}_{i_{n+1}}),\mathbf{x}_{n_{k}}-\mathbf{x}^{\dagger}\right\rangle=-\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\left\langle\jmath_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{n+1}}\mathbf{x}_{n}-\mathbf{y}_{i_{n+1}}),\mathbf{A}_{i_{n+1}}\mathbf{x}_{n_{k}}-\mathbf{y}_{i_{n+1}}\right\rangle.\] By the Cauchy-Schwarz inequality and properties of the duality map, we get \[|\mathrm{I}_{k,\ell}|\leq\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\|\mathbf{A}_{i_{n+1}}\mathbf{x}_{n}-\mathbf{y}_{i_{n+1}}\|_{\mathcal{Y}}^{p-1}\|\mathbf{A}_{i_{n+1}}\mathbf{x}_{n_{k}}-\mathbf{y}_{i_{n+1}}\|_{\mathcal{Y}}\leq\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\widehat{\Psi}_{i_{n+1}}(\mathbf{x}_{n})^{p-1}\widehat{\Psi}_{i_{n+1}}(\mathbf{x}_{n_{k}}).\] Since \(\widehat{\Psi}_{i}(\mathbf{x})\leq\widehat{\Psi}(\mathbf{x})\) for all \(i\in[N]\), we use (14) and get \[|\mathrm{I}_{k,\ell}|\leq\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\widehat{\Psi}(\mathbf{x}_{n})^{p-1}\widehat{\Psi}(\mathbf{x}_{n_{k}})\leq\sum_{n=n_{\ell}}^{n_{k}-1}\mu_{n+1}\widehat{\Psi}(\mathbf{x}_{n})^{p}.\] Since \(\widehat{\Psi}(\mathbf{x})^{p}\leq pN^{p}\Psi(\mathbf{x})\), the right hand side of the inequality converges to \(0\) as \(n_{\ell}\to\infty\). Therefore, by [44, Theorem 2.12(e)], it follows that \((\mathbf{x}_{n_{k}})_{k\in\mathbb{N}}\) is a Cauchy sequence, and thus converges strongly to an \(\widehat{\mathbf{x}}\) such that \(\Psi(\widehat{\mathbf{x}})=0\). The above argument showing the a.s. convergence of \((\Delta_{k})_{k\in\mathbb{N}}\) can be applied pointwise to any solution. Namely, on the event where \((\mathbf{x}_{n_{k}})_{k\in\mathbb{N}}\) converges strongly to an \(\widehat{\mathbf{x}}\in\mathcal{X}_{\min}\) (i.e. \(\mathbf{A}\widehat{\mathbf{x}}=\mathbf{y}\)), define \(\widehat{\Delta}_{k}:=\mathbf{B}_{p}(\mathbf{x}_{k},\widehat{\mathbf{x}})\). By repeating the argument using Lemma 3.5, we deduce \[\widehat{\Delta}_{k+1}\leq\left(1+L_{\max}^{p^{*}+p}\frac{p}{C_{p}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\right)\widehat{\Delta}_{k}-p\mu_{k+1}\Psi_{i_{k+1}}(\mathbf{x}_{k}).\] Since \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\), it follows that the (now deterministic) sequence \((\widehat{\Delta}_{k})_{k\in\mathbb{N}}\) converges to a \(\widehat{\Delta}_{\infty}\geq 0\). The continuity of the Bregman distance in the first argument (Theorem 2.6(vi)) gives \(\lim_{k\to\infty}\mathbf{B}_{p}(\mathbf{x}_{n_{k}},\widehat{\mathbf{x}})=\mathbf{B}_{p}(\widehat{\mathbf{x}},\widehat{\mathbf{x}})=0\), and thus \(\widehat{\Delta}_{\infty}=0\). Moreover, by the \(p\)-convexity of \(\mathcal{X}\) (Theorem 2.6(iii)), we have \(0\leq\|\mathbf{x}_{k}-\widehat{\mathbf{x}}\|_{\mathcal{X}}^{p}\leq\frac{p}{C_{p}}\widehat{\Delta}_{k}\). From the squeeze theorem it follows that \(\lim_{k\to\infty}\|\mathbf{x}_{k}-\widehat{\mathbf{x}}\|_{\mathcal{X}}=0\).
Thus, for every event in an almost sure set the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) converges strongly to some minimising solution, that is \[\mathbb{P}\Big(\lim_{k\to\infty}\inf_{\widehat{\mathbf{x}}\in\mathcal{X}_{\min}}\|\mathbf{x}_{k}-\widehat{\mathbf{x}}\|_{\mathcal{X}}=0\Big)=1.\] Next assume \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\). From (10), it follows that \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\) holds for all \(k\geq 1\). By the continuity of \(\mathcal{J}_{p}^{\mathcal{X}}\), we have \(\mathcal{J}_{p}^{\mathcal{X}}(\widehat{\mathbf{x}})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\). Thus, from \(\mathbf{A}(\widehat{\mathbf{x}}-\mathbf{x}^{\dagger})=0\) and Lemma 3.3 it follows that \(\widehat{\mathbf{x}}=\mathbf{x}^{\dagger}\). The assumptions and conclusions of Theorem 3.8 can be broken down into two parts. The step-size conditions \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) are required to show the a.s. convergence of \((\mathbf{B}_{p}(\mathbf{x}_{k},\widehat{\mathbf{x}}))_{k\in\mathbb{N}}\) to \(0\), for some non-deterministic \(\widehat{\mathbf{x}}\in\mathcal{X}_{\min}\). The remaining assumption \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\) is needed to identify this limit as the MNS \(\mathbf{x}^{\dagger}\), as for the Landweber method [44, Remark 3.12]. If \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\not\in\overline{\operatorname{range}(\mathbf{A}^{*})}\), one instead establishes convergence to an MNS relative to \(\mathbf{x}_{0}\), i.e. a solution which minimises \(\|\mathbf{x}-\mathbf{x}_{0}\|_{\mathcal{X}}\), analogous to the Euclidean case [24]. **Remark 3.9**.: _The step-size conditions \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) are satisfied by a polynomially decaying step-size schedule \((\mu_{k})_{k\in\mathbb{N}}=(\mu_{0}k^{-\beta})_{k\in\mathbb{N}}\), with \(\frac{1}{p^{*}}<\beta\leq 1\)._
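As a quick numerical sanity check of Remark 3.9 (the values of \(p^{*}\) and \(\beta\) below are sample choices of ours):

```python
import numpy as np

p_star, beta, mu0 = 1.5, 0.8, 1.0      # note 1/p* = 2/3 < beta = 0.8 <= 1
k = np.arange(1, 10**6 + 1, dtype=float)
mu = mu0 * k ** (-beta)
print(mu.sum())              # partial sums keep growing: sum mu_k diverges
print((mu ** p_star).sum())  # beta * p* = 1.2 > 1, so sum mu_k^{p*} converges
```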
Theorem 3.8 states sufficient conditions ensuring the a.s. convergence of \((\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger}))_{k\in\mathbb{N}}\) to \(0\). To strengthen this to convergence in expectation, we require an additional step-size assumption, which ensures that \((\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger}))_{k\in\mathbb{N}}\) is a uniformly integrable super-martingale, and, for the last claim, that the space \(\mathcal{X}\) is uniformly smooth. Note that removing the assumptions of Theorem 3.8 from Theorem 3.10 would still result in convergence in expectation to some non-negative random variable, but not necessarily to \(0\). Recall that a family \((X_{t})_{t}\) of random variables is uniformly integrable provided \(\lim_{K\to\infty}\sup_{t}\mathbb{E}\big[\|X_{t}\|\mathbf{1}_{\{\|X_{t}\|\geq K\}}\big]=0\), where \(\mathbf{1}_{A}\) denotes the indicator function of the event \(A\). **Theorem 3.10**.: _Let the conditions of Theorem 3.8 hold with \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\) and let \(\mu_{k}^{p^{*}-1}\leq\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) for all \(k\in\mathbb{N}\). Then there holds \(\lim_{k\to\infty}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]=0\). Moreover, for \(1\leq r\leq p\), we have \(\lim_{k\to\infty}\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{r}]=0\) and, if \(\mathcal{X}\) is additionally uniformly smooth, then \(\lim_{k\to\infty}\mathbb{E}[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})\|_{\mathcal{X}^{*}}^{p^{*}}]=0\)._ Proof.: The step-size condition allows applying Lemma A.2, which yields \(\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})\leq\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger})\) for all \(k\). It follows that \((\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger}))_{k\in\mathbb{N}}\) is bounded, and thus uniformly integrable, and by Theorem 3.8, it converges a.s. to \(0\). Then, by Vitali's convergence theorem [1, Theorem 4.5.4], we deduce that \((\Delta_{k})_{k\in\mathbb{N}}:=(\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger}))_{k\in\mathbb{N}}\) converges to \(0\) in expectation as well. Using now the \(p\)-convexity of \(\mathcal{X}\) and the monotonicity of expectation, we have \[0\leq\frac{C_{p}}{p}\lim_{k\to\infty}\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}]\leq\lim_{k\to\infty}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]=0.\] By the continuity of the power function and the Lyapunov inequality, for \(1\leq r\leq p\) we have \[0\leq\lim_{k\to\infty}\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{r}]\leq\lim_{k\to\infty}(\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}])^{r/p}=0.\] To prove the last claim we use the uniform smoothness of \(\mathcal{X}\) and Theorem 2.3(iv), to deduce \[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})\|_{\mathcal{X}^{*}}^{p^{*}}\leq C\max\{1,\|\mathbf{x}_{k}\|_{\mathcal{X}},\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}\}^{p}\,\overline{\rho}_{\mathcal{X}}(\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}})^{p^{*}},\] where \(\overline{\rho}_{\mathcal{X}}(\tau)=\rho_{\mathcal{X}}(\tau)/\tau\) is a modulus of smoothness function such that \(\overline{\rho}_{\mathcal{X}}(\tau)\leq 1\) and \(\lim_{\tau\to 0}\overline{\rho}_{\mathcal{X}}(\tau)=0\), cf. Definition 2.2. By Lemmas A.2 and A.3, the sequence \((\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p})_{k\in\mathbb{N}}\) is (uniformly) bounded, giving that the sequence \((\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})\|_{\mathcal{X}^{*}}^{p^{*}})_{k\in\mathbb{N}}\) is bounded and thus uniformly integrable. Since \(\lim_{k\to\infty}\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}]=0\), it follows that \(\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}\) converges to \(0\) in probability, and thus by the continuous mapping theorem \(\overline{\rho}_{\mathcal{X}}(\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}})^{p^{*}}\) also converges to \(0\) in probability. Applying Vitali's convergence theorem to the uniformly integrable sequence \((\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})\|_{\mathcal{X}^{*}}^{p^{*}})_{k\in\mathbb{N}}\) then yields convergence to \(0\) in expectation, and the claim follows.
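The constant step-size bound of Theorem 3.10 is explicit and can be evaluated directly; a small helper (assuming the smoothness constant \(G_{p^{*}}\) of \(\mathcal{X}^{*}\) is available for the space at hand):

```python
def max_constant_step(L_max, p_star, G_p_star):
    """Largest mu with mu^(p*-1) <= p* / (G_{p*} L_max^{p*}), cf. Theorem 3.10."""
    return (p_star / (G_p_star * L_max ** p_star)) ** (1.0 / (p_star - 1.0))
```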
**Remark 3.11** (continued).: _Landweber iterations converge for uniformly convex and smooth \(\mathcal{X}\), and any Banach space \(\mathcal{Y}\) [44, Theorem 3.3]. In our analysis, we have assumed that \(\mathcal{X}\) is \(p\)-convex in order to simplify the arguments. First, \(p\)-convexity is used in the proof of Lemma 3.6. If \(\mathcal{X}\) were only uniformly convex (and \(\mathcal{X}^{*}\) only uniformly smooth), then we may use the modulus of smoothness function \(\rho_{\mathcal{X}}\), cf. Definition 2.2 and [46, Theorem 2.41], to establish a suitable analogue of the descent property (13). Second, \(p\)-convexity is used in the proof of Theorem 3.8, allowing a more direct application of Robbins-Siegmund theorem by relating the objective values to Bregman distances. Meanwhile, the Landweber method in [44] requires step-sizes that depend on the modulus of smoothness, the current iterate and objective value, which is more restrictive than that in this work._

### Convergence analysis for the generalised Kaczmarz model

Schöpfer et al. [44] studied general powers of the Banach space norm and sub-gradients of the form \(\partial(\frac{1}{q}\|\mathbf{A}\cdot-\mathbf{y}\|_{\mathcal{Y}}^{q})(\mathbf{x})\). Now we take an analogous perspective for the objective \[\Psi(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\Psi_{i}(\mathbf{x}),\quad\text{with }\Psi_{i}(\mathbf{x}):=\frac{1}{q}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q},\] with \(1<q\leq 2\). This model is herein called the generalised Kaczmarz model. (Note that this is different from the randomised extended Kaczmarz method [53].) We shall show the convergence of SGD with stochastic directions \[g(\mathbf{x},\mathbf{y},i)=\mathbf{A}_{i}^{*}\mathcal{J}_{q}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})=\partial(\tfrac{1}{q}\|\mathbf{A}_{i}\cdot-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q})(\mathbf{x}). \tag{15}\] The descent property (13) is unaffected, and a direct computation again yields \[\mathbf{B}_{p}(\mathbf{x}_{k+1},\mathbf{x}^{\dagger})\leq\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}. \tag{16}\] However, Robbins-Siegmund theorem cannot be applied directly. Instead, we pursue a different proof strategy by first establishing the uniform boundedness of iterates. **Lemma 3.12**.: _Let Assumption 3.1 hold. Consider SGD with descent directions (15) for \(1<q\leq 2\), and assume that \(\mu_{k}^{p^{*}-1}<\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) holds for all \(k\in\mathbb{N}\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}=:\Gamma<\infty\). Then \((\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger}))_{k\in\mathbb{N}}\) and \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) are uniformly bounded._ Proof.: Let \(\overline{\Psi}_{i}(\mathbf{x})=\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q}\), and \(\Delta_{k}=\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})\).
Then we have \(\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle=\overline{\Psi}_{i_{k+1}}(\mathbf{x}_{k})\) and \[\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}=\|\mathbf{A}_{i_{k+1}}^{*}\mathcal{J}_{q}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\|_{\mathcal{X}^{*}}^{p^{*}}\leq L_{\max}^{p^{*}}\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{p^{*}(q-1)}=L_{\max}^{p^{*}}\overline{\Psi}_{i_{k+1}}(\mathbf{x}_{k})^{p^{*}\frac{q-1}{q}}=L_{\max}^{p^{*}}\overline{\Psi}_{i_{k+1}}(\mathbf{x}_{k})^{\frac{p^{*}}{q^{*}}},\] where \(q^{*}\geq 2\) is the conjugate exponent of \(q\). Plugging this into (16) gives \[\Delta_{k+1}\leq\Delta_{k}-\mu_{k+1}\overline{\Psi}_{i_{k+1}}(\mathbf{x}_{k})+L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\overline{\Psi}_{i_{k+1}}(\mathbf{x}_{k})^{\frac{p^{*}}{q^{*}}}. \tag{17}\] Since \(1<p^{*}\leq 2\) by Theorem 2.6(iii), and \(q^{*}\geq 2\), we have \(\frac{p^{*}}{q^{*}}\leq 1\). Now we define two sets of indices \[\mathcal{I}=\{j\leq k:\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})\geq 1\}\text{ and }\mathcal{J}=\{j\leq k:\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})<1\},\] so that \(\mathcal{I}\cap\mathcal{J}=\emptyset\), and \(\mathcal{I}\cup\mathcal{J}=[k]\). Note that \(\mathcal{I}\) and \(\mathcal{J}\) actually depend on the current iterate index \(k\). Applying (17) recursively gives \[\Delta_{k+1}\leq\Delta_{0}-\sum_{j=0}^{k}\mu_{j+1}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})+L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j=0}^{k}\mu_{j+1}^{p^{*}}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})^{\frac{p^{*}}{q^{*}}}=\Delta_{0}\underbrace{-\sum_{j\in\mathcal{I}}\mu_{j+1}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})+L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j\in\mathcal{I}}\mu_{j+1}^{p^{*}}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})^{\frac{p^{*}}{q^{*}}}}_{(\star)}-\underbrace{\sum_{j\in\mathcal{J}}\mu_{j+1}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})}_{(\star\star)}+\underbrace{L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j\in\mathcal{J}}\mu_{j+1}^{p^{*}}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})^{\frac{p^{*}}{q^{*}}}}_{(\star\star\star)}.\] Next we analyse these three terms separately. First, for \(j\in\mathcal{I}\), we have \(\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})\geq 1\) and, since \(\frac{p^{*}}{q^{*}}\leq 1\), we have \(\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})^{\frac{p^{*}}{q^{*}}}\leq\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})\), giving \[(\star)\leq-\sum_{j\in\mathcal{I}}\Big(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{j+1}^{p^{*}-1}\Big)\mu_{j+1}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j}).\] Since \(\mu_{j+1}^{p^{*}-1}<\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) holds by assumption, the term \((\star)\) is non-positive. Moreover, \((\star\star)\) enters with a negative sign, and its contribution is trivially non-positive.
Since \(\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})<1\) for \(j\in\mathcal{J}\), the last term \((\star\star\star)\) can be bounded as \[L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j\in\mathcal{J}}\mu_{j+1}^{p^{*}}\overline{\Psi}_{i_{j+1}}(\mathbf{x}_{j})^{\frac{p^{*}}{q^{*}}}\leq L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j\in\mathcal{J}}\mu_{j+1}^{p^{*}}\leq L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{j=1}^{\infty}\mu_{j}^{p^{*}}=L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\Gamma.\] By combining the last three bounds on \((\star)\), \((\star\star)\) and \((\star\star\star)\), we get \[\Delta_{k+1}\leq\Delta_{0}+L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\Gamma,\text{ for all }k\geq 0.\] Thus, \((\Delta_{k})_{k\in\mathbb{N}}\) is uniformly bounded and, by Lemma A.3, so is \((\mathbf{x}_{k})_{k\in\mathbb{N}}\). The proof of Lemma 3.12 exposes the challenge in extending the convergence results to general stochastic directions. Namely, in the proof of Theorem 3.8, we showed the convergence by taking the conditional expectation of (13), recasting the resulting expression as an almost super-martingale, and then relating objective values to Bregman distances via \(\Psi(\mathbf{x}_{k})\leq C\Delta_{k}\), for some \(C>0\). Here, using \(\frac{q}{q^{*}}=q-1\) and \(\frac{p^{*}}{p}=p^{*}-1\), we instead have \[\Psi(\mathbf{x}_{k})^{\frac{p^{*}}{q^{*}}}\leq C\Delta_{k}^{(p^{*}-1)(q-1)},\quad\text{with }C=q^{-\frac{p^{*}}{q^{*}}}L_{\max}^{p^{*}(q-1)}\Big(\frac{p}{C_{p}}\Big)^{(p^{*}-1)(q-1)},\] which gives \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\Delta_{k}+CL_{\max}^{p^{*}}q^{\frac{p^{*}}{q^{*}}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\Delta_{k}^{(p^{*}-1)(q-1)}-q\mu_{k+1}\Psi(\mathbf{x}_{k}).\] Here \(0<(p^{*}-1)(q-1)<1\), provided \(p^{*}\neq 2\) or \(q\neq 2\). Therefore, Robbins-Siegmund theorem cannot be applied directly. Nonetheless, we still have the following analogue of Theorem 3.10. **Theorem 3.13**.: _Consider iterations (10) with descent directions (15) for \(1<q\leq 2\), let Assumption 3.1 hold, and let \(\mathbf{x}^{\dagger}\) be the MNS. Let the step-sizes \((\mu_{k})_{k\in\mathbb{N}}\) satisfy \(\sum_{k=1}^{\infty}\mu_{k}=\infty\), \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\), and \(\mu_{k}^{p^{*}-1}<\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) for all \(k\in\mathbb{N}\). Then the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) converges a.s. to a solution of (1):_ \[\mathbb{P}\Big(\lim_{k\to\infty}\inf_{\widehat{\mathbf{x}}\in\mathcal{X}_{\min}}\|\mathbf{x}_{k}-\widehat{\mathbf{x}}\|_{\mathcal{X}}=0\Big)=1.\] _Moreover, if \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\), we have_ \[\lim_{k\to\infty}\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})=0\ \ \text{a.s.}\ \ \text{and}\ \ \lim_{k\to\infty}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]=0.\] Proof.: To establish the a.s. convergence of iterates, we first take the conditional expectation of the descent property (16) and obtain \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\Delta_{k}-\mu_{k+1}\left\langle\mathbb{E}_{k}[\mathbf{g}_{k+1}],\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\mathbb{E}_{k}\big[\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}\big]. \tag{18}\]
We now have \(\left\langle\mathbb{E}_{k}[\mathbf{g}_{k+1}],\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle=\left\langle\partial\Psi(\mathbf{x}_{k}),\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle=q\Psi(\mathbf{x}_{k})\), and \[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}\leq L_{\max}\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q-1}.\] Then taking the conditional expectation of \(\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}^{p^{*}}\) yields \[\mathbb{E}\big[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}^{p^{*}}\big]\leq L_{\max}^{p^{*}}\mathbb{E}\Big[\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{p^{*}(q-1)}\Big]=L_{\max}^{p^{*}}\mathbb{E}\Big[(\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q})^{\frac{p^{*}}{q^{*}}}\Big].\] We have \(0<\frac{p^{*}}{q^{*}}\leq 1\), with the equality achieved only if \(p^{*}=q^{*}=2\). In the latter case, it trivially follows that \(\mathbb{E}[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}^{p^{*}}]\leq qL_{\max}^{p^{*}}\Psi(\mathbf{x})\). If \(0<\frac{p^{*}}{q^{*}}<1\), by Jensen's inequality, we have \[\mathbb{E}\big[\|g(\mathbf{x},\mathbf{y},i)\|_{\mathcal{X}^{*}}^{p^{*}}\big]\leq L_{\max}^{p^{*}}\mathbb{E}\Big[(\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q})^{\frac{p^{*}}{q^{*}}}\Big]\leq L_{\max}^{p^{*}}(\mathbb{E}[\|\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i}\|_{\mathcal{Y}}^{q}])^{\frac{p^{*}}{q^{*}}}\leq L_{\max}^{p^{*}}q^{\frac{p^{*}}{q^{*}}}\Psi(\mathbf{x})^{\frac{p^{*}}{q^{*}}}.\] Plugging this estimate into the conditional descent property (18) yields \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\Delta_{k}-q\mu_{k+1}\Psi(\mathbf{x}_{k})+L_{\max}^{p^{*}}q^{\frac{p^{*}}{q^{*}}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\Psi(\mathbf{x}_{k})^{\frac{p^{*}}{q^{*}}}.\] Since the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) is uniformly bounded by Lemma 3.12, so is \((\Psi(\mathbf{x}_{k}))_{k\in\mathbb{N}}\), and we thus have \[\sum_{k=0}^{\infty}\mu_{k+1}^{p^{*}}\Psi(\mathbf{x}_{k})^{\frac{p^{*}}{q^{*}}}\leq C\sum_{k=0}^{\infty}\mu_{k+1}^{p^{*}}<\infty.\] Thus, we can apply Robbins-Siegmund theorem for almost super-martingales, and deduce that \((\Delta_{k})_{k\in\mathbb{N}}\) converges a.s. to a non-negative random variable \(\Delta_{\infty}\). Moreover, \(\sum_{k=0}^{\infty}\mu_{k+1}\Psi(\mathbf{x}_{k})<\infty\) holds a.s. By repeating the argument for Theorem 3.8, there exists a subsequence \((\mathbf{x}_{k_{j}})_{j\in\mathbb{N}}\) that a.s. converges to some \(\widehat{\mathbf{x}}\in\mathcal{X}_{\min}\), and hence \(\Delta_{\infty}=0\), as desired. Moreover, by Lemma 3.12, the sequence \((\Delta_{k})_{k\in\mathbb{N}}\) is bounded, and thus uniformly integrable. Since it converges to \(0\) a.s., from Vitali's theorem it follows that \(\lim_{k\to\infty}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]=0\). The results in Theorem 3.13 are similar to those of Theorem 3.10, but the added generality is compensated for by an additional step-size assumption ensuring the boundedness of the iterates \((\mathbf{x}_{k})_{k\in\mathbb{N}}\).
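In code, passing from the Kaczmarz model of Section 3.1 to the generalised model only changes the exponent used in the duality map on \(\mathcal{Y}\); a sketch reusing the hypothetical duality_map helper from the earlier illustration (with \(\mathcal{Y}=\ell^{s}\) for some assumed \(s\)):

```python
def kaczmarz_direction(A_i, y_i, x, s, q):
    """Generalised direction (15): g = A_i^* J_q^Y(A_i x - y_i), with 1 < q <= 2."""
    res = A_i @ x - y_i
    return A_i.T @ duality_map(res, s, q)
```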
### Convergence rates for conditionally stable operators

Theorem 3.10 states the conditions needed for the convergence of Bregman distances in expectation. However, it does not provide convergence rates. In order to obtain convergence rates, one needs additional conditions on the MNS \(\mathbf{x}^{\dagger}\), which are collectively known as source conditions. One approach is via conditional stability: for a locally conditionally stable operator, we can extract convergence in expectation and quantify the convergence speed. Conditional stability is known for many inverse problems for PDEs, and has been used extensively to investigate regularised solutions [8, 13]. It is useful for analysing ill-posed problems that are locally well-posed, and in the case of a (possibly) non-linear forward operator \(F\) it is of the form \[\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{\mathcal{X}}\leq\Phi(\|F(\mathbf{x}_{1})-F(\mathbf{x}_{2})\|_{\mathcal{Y}}),\quad\forall\mathbf{x}_{1},\mathbf{x}_{2}\in\mathcal{M}\subset\mathcal{X}, \tag{19}\] where \(\Phi:[0,\infty)\to[0,\infty)\) with \(\Phi(0)=0\) is a continuous, non-decreasing function, and \(\mathcal{M}\) is typically a ball in the ambient norm [19]. In Banach space settings, the conditional stability needs to be adjusted, by replacing the left hand side of (19) with a non-negative error measure [7]. Since the most relevant error measure for Banach space analysis is the Bregman distance \(\mathbf{B}_{p}(\mathbf{x}_{1},\mathbf{x}_{2})\), a Hölder type stability estimate then reads: for some \(\alpha\geq 1\) and \(C_{\alpha}>0\), \[\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})^{\alpha}\leq C_{\alpha}^{-1}\|\mathbf{A}\mathbf{x}-\mathbf{A}\mathbf{x}^{\dagger}\|_{\mathcal{Y}}^{p}. \tag{20}\] Now we give a convergence rate under the conditional stability bound (20). The constant \(C_{N}\) below is the norm equivalence constant appearing in Lemma 3.5. **Theorem 3.14**.: _Let the forward operator \(\mathbf{A}\) satisfy the conditional stability bound (20) for some \(\alpha\geq 1\) and \(C_{\alpha}>0\). Let \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})\in\overline{\operatorname{range}(\mathbf{A}^{*})}\), and let the step-sizes be such that, with \(C_{k}=C_{N}C_{\alpha}(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k}^{p^{*}-1})>0\), there holds \(\sum_{k=1}^{\infty}\mu_{k}C_{k}=\infty\). Then_ \[\lim_{k\to\infty}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]=0.\] _Moreover,_ \[\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})]\leq\left\{\begin{aligned} &\frac{\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger})}{\Big(1+(\alpha-1)\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger})^{\alpha-1}\sum_{j=1}^{k}\mu_{j}C_{j}\Big)^{\frac{1}{\alpha-1}}},&\text{ if }\alpha>1,\\ &\exp\Big(-\sum_{j=1}^{k}\mu_{j}C_{j}\Big)\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger}),&\text{ if }\alpha=1.\end{aligned}\right.\] Proof.: Let \(\Delta_{k}:=\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})\). The proof of Theorem 3.8 and the conditional stability bound (20) imply \[\mathbb{E}_{k}[\Delta_{k+1}]\leq\Delta_{k}-p\mu_{k+1}\left(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}-1}\right)\Psi(\mathbf{x}_{k})\leq\Delta_{k}-p\mu_{k+1}\frac{C_{N}C_{\alpha}}{p}\Big(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}-1}\Big)\Delta_{k}^{\alpha}, \tag{21}\] since by Lemma 3.5, there exists a \(C_{N}>0\) such that \(\Psi(\mathbf{x})\geq\frac{C_{N}}{p}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{\mathcal{Y}}^{p}\). Taking the full expectation and using Jensen's inequality lead to \[\mathbb{E}[\Delta_{k+1}]\leq\mathbb{E}[\Delta_{k}]-\mu_{k+1}C_{k+1}\mathbb{E}[\Delta_{k}]^{\alpha}.\] Since \(C_{k+1}>0\) by assumption, \((\mathbb{E}[\Delta_{k}])_{k\in\mathbb{N}}\) is a monotonically decreasing sequence.
We claim that for every \(\epsilon>0\), there exists a \(k_{\epsilon}\in\mathbb{N}\) such that \(\mathbb{E}[\Delta_{k}]\leq\epsilon\) for all \(k\geq k_{\epsilon}\). Assume the contrary, i.e. \(\mathbb{E}[\Delta_{k}]\geq\epsilon\) for all \(k\). Since the function \(x\mapsto x^{\alpha}\) (for \(\alpha\geq 1\)) is non-decreasing, this gives \(\mathbb{E}[\Delta_{k}]^{\alpha}\geq\epsilon^{\alpha}\), and hence \[\mathbb{E}[\Delta_{k+1}]\leq\mathbb{E}[\Delta_{k}]-\mu_{k+1}C_{k+1}\mathbb{E}[\Delta_{k}]^{\alpha}\leq\mathbb{E}[\Delta_{k}]-\mu_{k+1}C_{k+1}\epsilon^{\alpha}\leq\Delta_{0}-\epsilon^{\alpha}\sum_{j=1}^{k+1}\mu_{j}C_{j}\to-\infty,\] since \(\sum_{j=1}^{\infty}\mu_{j}C_{j}=\infty\) by assumption, which is a contradiction. Therefore, \(\lim_{k\to\infty}\mathbb{E}[\Delta_{k}]=0\). For \(\alpha>1\), by Polyak's inequality (cf. Lemma A.1), we have \[\mathbb{E}[\Delta_{k+1}]\leq\frac{\Delta_{0}}{\Big(1+(\alpha-1)\Delta_{0}^{\alpha-1}\sum_{j=1}^{k+1}\mu_{j}C_{j}\Big)^{\frac{1}{\alpha-1}}}.\] Meanwhile, for \(\alpha=1\), using the inequality \(1-x\leq e^{-x}\) for \(x\geq 0\), a direct computation yields \[\mathbb{E}[\Delta_{k+1}]\leq(1-\mu_{k+1}C_{k+1})\mathbb{E}[\Delta_{k}]\leq\prod_{j=1}^{k+1}(1-\mu_{j}C_{j})\Delta_{0}\leq\exp\Big(-\sum_{j=1}^{k+1}\mu_{j}C_{j}\Big)\Delta_{0},\] completing the proof of the theorem. **Remark 3.15**.: _We have the following comments on Theorem 3.14._ 1. _The estimates for_ \(\alpha>1\) _and_ \(\alpha=1\) _in Theorem_ 3.14 _are consistent in the sense that_ \[\lim_{\alpha\searrow 1}\frac{\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger})}{\Big(1+(\alpha-1)\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger})^{\alpha-1}\sum_{j=1}^{k}\mu_{j}C_{j}\Big)^{\frac{1}{\alpha-1}}}=\exp\Big(-\sum_{j=1}^{k}\mu_{j}C_{j}\Big)\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{x}^{\dagger}).\] 2. _While it might seem counter-intuitive,_ \(\alpha=1\) _gives a better convergence rate than_ \(\alpha>1\)_, because of the following:_ \[\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})^{\alpha}\geq\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})^{\tilde{\alpha}}\text{ if and only if }\alpha\log\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})\geq\tilde{\alpha}\log\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger}).\] _Hence, whenever_ \(\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})<1\)_, we have_ \(\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})\geq\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})^{\alpha}\) _for_ \(\alpha>1\)_. Plugging this into the conditional stability bound (_20_) yields_ \[\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})^{\alpha}\leq\mathbf{B}_{p}(\mathbf{x},\mathbf{x}^{\dagger})\leq C_{1}^{-1}\|\mathbf{A}\mathbf{x}-\mathbf{A}\mathbf{x}^{\dagger}\|_{\mathcal{Y}}^{p}=C_{1}^{-1}pN\Psi(\mathbf{x}).\] _Meanwhile, the proof of Theorem_ 3.14 _uses the conditional stability bound to establish a relationship between the objective value and the Bregman distance to the MNS_ \(\mathbf{x}^{\dagger}\)_, cf. (_21_). Putting these together gives that_ \(\alpha=1\) _provides a greater decrease of the expected Bregman distance, once we are close enough to the solution._ The conditional stability estimate (20) for a linear operator \(\mathbf{A}\) implies its injectivity. Then the objective \(\Psi(\mathbf{x})\) is strongly convex. Under condition (20), there can indeed be only one solution: if \(\mathbf{A}\tilde{\mathbf{x}}=\mathbf{A}\mathbf{x}\), then \(\mathbf{B}_{p}(\tilde{\mathbf{x}},\mathbf{x})=0\) follows from (20).
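Both bounds of Theorem 3.14 are explicit and cheap to evaluate; the following sketch tabulates them (the constants are sample values of ours, not tied to a concrete operator):

```python
import numpy as np

def rate_bound(delta0, mu_C, alpha):
    """Theorem 3.14 bound on E[B_p(x_k, x^dagger)] from partial sums of mu_j C_j."""
    s = np.cumsum(mu_C)
    if alpha == 1.0:
        return delta0 * np.exp(-s)
    return delta0 / (1.0 + (alpha - 1.0) * delta0 ** (alpha - 1.0) * s) ** (1.0 / (alpha - 1.0))

mu_C = 0.1 * np.ones(1000)              # constant step-size regime: mu_j C_j = const
print(rate_bound(1.0, mu_C, 1.0)[-1])   # exponential decay for alpha = 1
print(rate_bound(1.0, mu_C, 2.0)[-1])   # O(1/k) decay for alpha = 2
```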
The step-size condition \(\sum_{k=1}^{\infty}\mu_{k}C_{k}=\infty\) is weaker than that in Theorem 3.10. Namely, it follows from the step-size conditions in Theorem 3.8, since \[\sum_{k=1}^{\infty}\mu_{k}C_{k}=C_{N}C_{\alpha}\Big(\sum_{k=1}^{\infty}\mu_{k}-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}\Big)=\infty\] holds if \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\). Further, if there exists a \(C>0\) such that \(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k}^{p^{*}-1}>C\) holds for all \(k\in\mathbb{N}\), e.g. if \(\mu_{k}\) is a constant satisfying this condition, then \(\sum_{k=1}^{\infty}\mu_{k}C_{k}=\infty\) is weaker than the conditions in Theorem 3.8, since the condition \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) is no longer needed for convergence, and \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) suffices. Moreover, we can choose constant step-sizes. Indeed, setting \(\mu_{k}=\mu_{0}\), with \(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{0}^{p^{*}-1}=\frac{1}{2}\), we get an exponential convergence rate for \(\alpha=1\): since \(C_{k}=\frac{C_{N}C_{\alpha}}{2}\) and \(\mu_{0}=\big(\frac{p^{*}}{2G_{p^{*}}}\big)^{1/(p^{*}-1)}L_{\max}^{-p^{*}/(p^{*}-1)}\), using the identities \(1/(p^{*}-1)=p/p^{*}\) and \(p^{*}/(p^{*}-1)=p\) we have \[\mathbb{E}[\Delta_{k+1}]\leq(1-\mu_{0}C_{k+1})\mathbb{E}[\Delta_{k}]\leq\bigg(1-2^{-p}L_{\max}^{-p}\Big(\frac{p^{*}}{G_{p^{*}}}\Big)^{p/p^{*}}C_{N}C_{\alpha}\bigg)^{k+1}\Delta_{0}.\] Note that this convergence rate is largely comparable with that in the Hilbert case: the conditional stability bound implies the strict convexity of the quadratic objective \(\Psi(\mathbf{x})\), and SGD is known to converge exponentially fast (see e.g. [16, Theorem 3.1]), with the rate determined by a variant of the condition number. **Remark 3.16**.: _The conditional stability bound (20) is stated globally. However, such conditions are often valid only locally. A local definition could have been employed in (20), with minor modifications of the argument. Indeed, by the argument of Theorem 3.10, we appeal to Lemma A.2, showing that the Bregman distances of the iterates are non-increasing. Thus, it suffices to assume that the initial point \(\mathbf{x}_{0}\) is sufficiently close to the MNS \(\mathbf{x}^{\dagger}\)._ **Remark 3.17**.: _Conditional stability is intimately tied with classical source conditions. For example, as shown in [41], assuming \(\alpha=1\) in (20) allows one to show a variational inequality_ \[\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger}),\mathbf{x}-\mathbf{x}^{\dagger}\right\rangle\leq\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p-1}C_{\alpha}^{-1}(pC_{p}^{-1})^{1/p}\|\mathbf{A}(\mathbf{x}-\mathbf{x}^{\dagger})\|_{\mathcal{Y}}.\] _Then the Hahn-Banach theorem and [41, Lemma 8.21] give the canonical range type condition \(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}^{\dagger})=\mathbf{A}^{*}\mathbf{w}\), for some \(\mathbf{w}\in\mathcal{Y}^{*}\) with \(\|\mathbf{w}\|_{\mathcal{Y}^{*}}\leq 1\). Connections between source conditions and conditional stability estimates have been studied, e.g. for linear operators in Hilbert spaces [47] and in \(\mathcal{L}^{p}\) spaces [5]. Moreover, variational source conditions often imply conditional stability estimates [20], and in the case of bijective and continuous operators they are trivially inferred by a standard source condition (albeit only in a possibly small neighbourhood around the solution)._
_See the book [50] about the connections between source conditions and conditional stability estimates, and [21] for inverse problems for differential equations._

## 4 Regularising property

In practice, we often do not have access to the exact data \(\mathbf{y}\) but only to noisy observations \(\mathbf{y}^{\delta}\), such that \(\|\mathbf{y}^{\delta}-\mathbf{y}\|_{\mathcal{Y}}\leq\delta\). The convergence study in the presence of observational noise requires a different approach, since the sequence of objective values \((\|\mathbf{A}\mathbf{x}_{k}^{\delta}-\mathbf{y}^{\delta}\|_{\mathcal{Y}}^{p})_{k\in\mathbb{N}}\) generally will not converge to \(0\). In this section we show that SGD has a regularising effect, in the sense that the expected error \(\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k(\delta)}^{\delta},\mathbf{x}^{\dagger})]\) converges to \(0\) as the noise level \(\delta\) decays to \(0\), for properly selected stopping indices \(k(\delta)\). Let \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) and \((\mathbf{x}_{k}^{\delta})_{k\in\mathbb{N}}\) be the noiseless and noisy iterates, defined respectively by \[\mathbf{x}_{k+1}=\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\right),\quad\text{with }\mathbf{g}_{k+1}=g(\mathbf{x}_{k},\mathbf{y},i_{k+1}), \tag{22}\] \[\mathbf{x}_{k+1}^{\delta}=\mathcal{J}_{p^{*}}^{\mathcal{X}^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mu_{k+1}\mathbf{g}_{k+1}^{\delta}\right),\quad\text{with }\mathbf{g}_{k+1}^{\delta}=g(\mathbf{x}_{k}^{\delta},\mathbf{y}^{\delta},i_{k+1}). \tag{23}\] The key step in proving the regularising property is to show the stability of SGD iterates with respect to noise. The noise enters into the iterations through the update directions \(\mathbf{g}_{k+1}^{\delta}\), and thus the stability of the iterates requires that of the update directions. This, however, requires imposing suitable assumptions on the observation space \(\mathcal{Y}\), since in general the single-valued duality maps \(\jmath_{p}^{\mathcal{Y}}\) are continuous only at \(0\). If \(\mathcal{Y}\) is uniformly smooth, the corresponding duality maps are uniformly continuous on bounded sets. This assumption is also needed for deterministic iterates, cf. [46, Proposition 6.17] or [35, Lemma 9]. Thus we make the following assumption. **Assumption 4.1**.: _The Banach space \(\mathcal{X}\) is \(p\)-convex and uniformly smooth, and \(\mathcal{Y}\) is uniformly smooth._ We then have the following stability result on the iterates with respect to noise, whose elementary but lengthy proof is deferred to the appendix. **Lemma 4.2**.: _Let Assumption 4.1 hold. Consider the iterations (22) and (23) with the same initialisation \(\mathbf{x}_{0}^{\delta}=\mathbf{x}_{0}\), and following the same path (i.e. using the same random indices \(i_{k}\)). Then, for any fixed \(k\in\mathbb{N}\), we have_ \[\lim_{\delta\searrow 0}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}_{k})]=\lim_{\delta\searrow 0}\mathbb{E}[\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}]=\lim_{\delta\searrow 0}\mathbb{E}[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}]=0.\]
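In experiments, the coupling used in Lemma 4.2 (same initialisation and the same index sequence for the noiseless and noisy runs) can be realised by sharing the random seed; a sketch reusing the hypothetical sgd_banach helper from the earlier illustration:

```python
import numpy as np

def coupled_paths(A_blocks, y_blocks, y_blocks_noisy, r, p, n_iter, mu0, beta, seed=0):
    """Run (22) and (23) along the same path, i.e. with identical indices i_k."""
    rng1 = np.random.default_rng(seed)
    rng2 = np.random.default_rng(seed)   # identical streams => identical i_k
    x_clean = sgd_banach(A_blocks, y_blocks, r, p, n_iter, mu0, beta, rng1)
    x_noisy = sgd_banach(A_blocks, y_blocks_noisy, r, p, n_iter, mu0, beta, rng2)
    return x_clean, x_noisy
```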
Now we show the regularising property of SGD for suitable stopping indices \(k(\delta)\).

**Theorem 4.3**.: _Let Assumption 4.1 hold, and the step-sizes \((\mu_{k})_{k\in\mathbb{N}}\) satisfy \(\sum_{k=1}^{\infty}\mu_{k}=\infty\), \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) and \(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k}^{p^{*}-1}>C>0\). If \(\lim_{\delta\searrow 0}k(\delta)=\infty\) and \(\lim_{\delta\searrow 0}\delta^{p}\sum_{\ell=1}^{k(\delta)}\mu_{\ell}=0\), then_ \[\lim_{\delta\searrow 0}\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k(\delta)}^{\delta},\mathbf{x}^{\dagger})]=0.\] Proof.: Let \(\Delta_{k}=\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})\) and \(\Delta_{k}^{\delta}=\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}^{\dagger})\). Take any \(\delta>0\) and \(k\in\mathbb{N}\). By the three point identity (4), we have \[\Delta_{k}^{\delta}=\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}_{k})+\Delta_{k}+\left\langle\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta}),\mathbf{x}_{k}-\mathbf{x}^{\dagger}\right\rangle\leq\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}_{k})+\Delta_{k}+\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})\|_{\mathcal{X}^{*}}\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}. \tag{24}\] Consider a sequence \((\delta_{j})_{j\in\mathbb{N}}\) decaying to zero. Taking any \(\epsilon>0\), it suffices to find a \(j_{\epsilon}\in\mathbb{N}\) such that for all \(j\geq j_{\epsilon}\) we have \(\mathbb{E}[\Delta_{k(\delta_{j})}^{\delta_{j}}]\leq 4\epsilon\). By Theorem 3.10, there exists a \(k_{\epsilon}\in\mathbb{N}\) such that for all \(k\geq k_{\epsilon}\) we have \[\mathbb{E}[\Delta_{k}]<\epsilon\quad\text{and}\quad\mathbb{E}[\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}]<\epsilon^{1/2}. \tag{25}\] Moreover, for any fixed \(k_{\epsilon}\), by Lemma 4.2, there exists \(j_{1}\in\mathbb{N}\) such that for all \(j\geq j_{1}\) we have \[\mathbb{E}[\mathbf{B}_{p}(\mathbf{x}_{k_{\epsilon}}^{\delta_{j}},\mathbf{x}_{k_{\epsilon}})]<\epsilon\quad\text{and}\quad\mathbb{E}[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k_{\epsilon}})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k_{\epsilon}}^{\delta_{j}})\|_{\mathcal{X}^{*}}]<\epsilon^{1/2}. \tag{26}\] Thus, plugging the estimates (25) and (26) into (24), we have \(\mathbb{E}[\Delta_{k_{\epsilon}}^{\delta_{j}}]<3\epsilon\), for all \(j\geq j_{1}\). Note, however, that the same does not necessarily hold for all \(k\geq k_{\epsilon}\), and thus for a monotonically increasing sequence of stopping indices \(k(\delta_{j})\), since \(\mathbb{E}[\Delta_{k(\delta_{j})}^{\delta_{j}}]\) are not necessarily monotone.
Instead, taking the expectation of the descent property (13) with respect to \(\mathcal{F}_{k}\) yields \[\mathbb{E}_{k}[\Delta_{k+1}^{\delta}]\leq\Delta_{k}^{\delta}-\mu_{k+1}\left\langle\mathbb{E}_{k}[\mathbf{g}_{k+1}^{\delta}],\mathbf{x}_{k}^{\delta}-\mathbf{x}^{\dagger}\right\rangle+pL_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\Psi(\mathbf{x}_{k}^{\delta}),\] where, with a slight abuse of notation, \(\Psi\) and \(\Psi_{i}\) are henceforth evaluated with the noisy data \(\mathbf{y}^{\delta}\) in place of \(\mathbf{y}\). Then we decompose the middle term into \[\left\langle\mathbb{E}_{k}[\mathbf{g}_{k+1}^{\delta}],\mathbf{x}^{\dagger}-\mathbf{x}_{k}^{\delta}\right\rangle=\frac{1}{N}\sum_{i=1}^{N}\left\langle\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}),-(\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta})+\mathbf{y}_{i}-\mathbf{y}_{i}^{\delta}\right\rangle=-p\Psi(\mathbf{x}_{k}^{\delta})+\frac{1}{N}\sum_{i=1}^{N}\left\langle\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}),\mathbf{y}_{i}-\mathbf{y}_{i}^{\delta}\right\rangle\leq-p\Psi(\mathbf{x}_{k}^{\delta})+\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p-1}\|\mathbf{y}_{i}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}\leq-p\Psi(\mathbf{x}_{k}^{\delta})+\delta\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p-1},\] where we have used (3) and the Cauchy-Schwarz inequality. Taking the full expectation gives \[\mathbb{E}[\Delta_{k+1}^{\delta}]\leq\mathbb{E}[\Delta_{k}^{\delta}]-p\mu_{k+1}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+pL_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+\delta\mu_{k+1}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p-1}]=\mathbb{E}[\Delta_{k}^{\delta}]-p\mu_{k+1}C_{k+1}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+\delta\mu_{k+1}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p-1}],\] where \(C_{k}=1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k}^{p^{*}-1}>C>0\). Now using the Lyapunov inequality \[\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}[\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p-1}]\leq\frac{1}{N}\sum_{i=1}^{N}\left(\mathbb{E}[\|\mathbf{A}_{i}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i}^{\delta}\|_{\mathcal{Y}}^{p}]\right)^{(p-1)/p}=p^{1/p^{*}}\frac{1}{N}\sum_{i=1}^{N}\left(\mathbb{E}[\Psi_{i}(\mathbf{x}_{k}^{\delta})]\right)^{1/p^{*}},\] we deduce \[\mathbb{E}[\Delta_{k+1}^{\delta}]\leq\mathbb{E}[\Delta_{k}^{\delta}]-p\mu_{k+1}C_{k+1}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+\delta\mu_{k+1}p^{1/p^{*}}\frac{1}{N}\sum_{i=1}^{N}\Big(\mathbb{E}[\Psi_{i}(\mathbf{x}_{k}^{\delta})]\Big)^{1/p^{*}}. \tag{27}\] Next we remove the exponent in the last term.
Using Young's inequality \(ab\leq\frac{a^{p}}{p}\omega^{-p}+\frac{b^{p^{*}}}{p^{*}}\omega^{p^{*}}\), with \(a=\delta\) and \(b=\mathbb{E}[\Psi_{i}(\mathbf{x}_{k}^{\delta})]^{1/p^{*}}\), we have \[\frac{1}{N}\sum_{i=1}^{N}\delta\Big(\mathbb{E}[\Psi_{i}(\mathbf{x}_{k}^{\delta})]\Big)^{1/p^{*}}\leq\delta^{p}\frac{\omega^{-p}}{p}+\mathbb{E}\Big[\frac{1}{N}\sum_{i=1}^{N}\Psi_{i}(\mathbf{x}_{k}^{\delta})\Big]\frac{\omega^{p^{*}}}{p^{*}}\leq\delta^{p}\frac{\omega^{-p}}{p}+\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]\frac{\omega^{p^{*}}}{p^{*}}.\] Plugging this back in (27) gives \[\mathbb{E}[\Delta_{k+1}^{\delta}]\leq\mathbb{E}[\Delta_{k}^{\delta}]-p\mu_{k+1}C_{k+1}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+p^{1/p^{*}}(p^{*})^{-1}\omega^{p^{*}}\mu_{k+1}\mathbb{E}[\Psi(\mathbf{x}_{k}^{\delta})]+p^{-1/p}\delta^{p}\omega^{-p}\mu_{k+1}.\] Taking \(\omega>0\) small enough so that \(\omega^{p^{*}}\leq p^{*}p^{1/p}C_{k}\) (which can be done uniformly in \(k\), thanks to the positive lower bound on \(C_{k}\)), replacing \(k+1\) with \(k(\delta)\) and arguing inductively, we have \[\mathbb{E}[\Delta_{k(\delta)}^{\delta}]\leq\mathbb{E}[\Delta_{k(\delta)-1}^{\delta}]+p^{-1/p}\omega^{-p}\delta^{p}\mu_{k(\delta)}\leq\mathbb{E}[\Delta_{k_{\epsilon}}^{\delta}]+p^{-1/p}\omega^{-p}\delta^{p}\sum_{\ell=1}^{k(\delta)}\mu_{\ell}.\] Since \(\lim_{\delta\searrow 0}\delta^{p}\sum_{\ell=1}^{k(\delta)}\mu_{\ell}=0\) and \(\lim_{\delta\searrow 0}k(\delta)=\infty\), there exists \(j_{2}\in\mathbb{N}\) such that for all \(j\geq j_{2}\) we have \(k(\delta_{j})\geq k_{\epsilon}\) and \(p^{-1/p}\omega^{-p}\delta_{j}^{p}\sum_{\ell=1}^{k(\delta_{j})}\mu_{\ell}<\epsilon\). Taking \(j_{\epsilon}=j_{1}\lor j_{2}\) shows \(\mathbb{E}[\Delta_{k(\delta_{j})}^{\delta_{j}}]<4\epsilon\) for all \(j\geq j_{\epsilon}\), and hence the desired claim follows. **Remark 4.4**.: _In the constant step-size regime, such as in the case of conditionally stable operators, the correspondence between the noise level and the step-size regime takes a more standard form. Namely, the condition in Theorem 4.3 reduces to \(\lim_{\delta\searrow 0}\delta^{p}k(\delta)=0\). In other words, we have \(k(\delta)=\mathcal{O}(\delta^{-p})\), mirroring the traditional conditions in Euclidean spaces. Note that the condition on \(k(\delta)\) is fairly broad, and does not give useful concrete stopping rules directly. Generally, the issue of a posteriori stopping rules for stochastic iterative methods is completely open, even in the Hilbert setting [22]. For polynomially decaying step-sizes \(\mu_{k}=c_{0}k^{-\beta}\), the conditions \(\frac{1}{p^{*}}<\beta\leq 1\) and \(c_{0}<(\frac{p^{*}}{L_{\max}^{p^{*}}G_{p^{*}}})^{\frac{1}{p^{*}-1}}\) give a valid step-size choice, and the stopping index \(k(\delta)\) should satisfy \(\lim_{\delta\searrow 0}k(\delta)=\infty\) and \(\lim_{\delta\searrow 0}k(\delta)\delta^{\frac{p}{1-\beta}}=0\)._
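For the polynomially decaying schedule of Remark 4.4, an admissible a priori stopping index can be chosen as follows (the damping factor \(\theta\) is our choice; any \(0<\theta<1\) makes \(k(\delta)\to\infty\) and \(k(\delta)\delta^{p/(1-\beta)}\to 0\)):

```python
import math

def stopping_index(delta, p, beta, theta=0.5):
    """A priori stopping rule k(delta) = ceil(delta^(-theta * p / (1 - beta)))."""
    return max(1, math.ceil(delta ** (-theta * p / (1.0 - beta))))

print(stopping_index(1e-2, p=2.0, beta=0.5))  # grows (here to 10^4) as delta shrinks
```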
_Generally, the convergence rate analysis for iterative regularisation methods in Banach spaces remains a very challenging task, and much more work is still needed._

## 5 Numerical experiments

We present numerical results on two sets of experiments to illustrate distinct features of the SGD (10). The first set of experiments deals with an integral operator and the reconstruction of a sparse signal in the presence of either Gaussian or impulse noise. On this model example, we investigate the impact of the number of batches and the choice of the spaces \(\mathcal{X}\) and \(\mathcal{Y}\) on the performance of the algorithm. To simplify the study we investigate spaces \(\mathcal{X}\) and \(\mathcal{Y}\) that are smooth and convex of power type, so that the corresponding duality maps are singletons. To facilitate a direct comparison of SGD with the Landweber method, we count the computational complexity in epochs, where one epoch consists of \(N_{b}\) SGD iterations, with \(N_{b}\) the size of the partition defined below. Note moreover that our implementation of the Landweber method does not use the step-sizes described in [44, Method 3.1], since the latter requires knowledge of quantities that are inconvenient to compute in practice. The second set of experiments is about tomographic reconstruction, with respect to different types of noise. All the shown reconstructions are obtained with a single stochastic run, as is often done in practice, and the stopping index is determined in a trial and error manner so that the corresponding reconstruction yields small errors.

### Model linear inverse problem

First we consider the following model inverse problem studied in [28]. Let \(\kappa:\overline{\Omega}\times\overline{\Omega}\to\mathbb{R}^{+}\), with \(\Omega=(0,1)\), be a continuous function, and define an integral operator \(\mathcal{T}_{\kappa}:\mathcal{L}^{r_{\mathcal{X}}}(\Omega)\to\mathcal{L}^{r_{\mathcal{Y}}}(\Omega)\), for \(1<r_{\mathcal{X}},r_{\mathcal{Y}}<\infty\), by \[(\mathcal{T}_{\kappa}x)(t)=\int_{\Omega}\kappa(t,s)x(s)ds. \tag{28}\] This is a compact linear operator between \(\mathcal{L}^{r_{\mathcal{X}}}(\Omega)\) and \(\mathcal{L}^{r_{\mathcal{Y}}}(\Omega)\), with the adjoint \(\mathcal{T}_{\kappa}^{*}:\mathcal{L}^{r_{\mathcal{Y}}^{*}}(\Omega)\to\mathcal{L}^{r_{\mathcal{X}}^{*}}(\Omega)\) given by \((\mathcal{T}_{\kappa}^{*}y)(s)=\int_{\Omega}\kappa(t,s)y(t)dt\). To approximate the integrals, we subdivide the interval \(\overline{\Omega}\) into \(N=1000\) subintervals \([\frac{k}{N},\frac{k+1}{N}]\), for \(k=0,\ldots,N-1\), and then use quadrature, giving a finite-dimensional model \(\mathbf{A}\mathbf{x}=\mathbf{y}\), with \(\mathbf{A}=\frac{1}{N}\left(\kappa\big(\frac{j-1}{N},\frac{2k-1}{2N}\big)\right)_{j,k=1}^{N}\) and \(\mathbf{x}=\left(x\left(\frac{2j-1}{2N}\right)\right)_{j=1}^{N}\). For SGD we use \(N_{b}\in[N]\) mini-batches. To obtain equisized batches, we assume that \(N_{b}\) divides \(N\). The mini-batch matrices \(\mathbf{A}_{j}\) are then constructed by taking every \(N_{b}\)-th row of \(\mathbf{A}\), shifted by \(j\), resulting in well-balanced mini-batches, in the sense that the norm \(\|\mathbf{A}_{j}\|\) is (nearly) independent of \(j\).
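The discretisation just described is straightforward to reproduce; a minimal sketch (the kernel \(\kappa\) and the signal \(x^{\dagger}\) are the ones specified in the next paragraph; function names are ours):

```python
import numpy as np

def kappa(t, s):
    """Kernel of Section 5.1: 40 t (1 - s) if t <= s, else 40 s (1 - t)."""
    return np.where(t <= s, 40.0 * t * (1.0 - s), 40.0 * s * (1.0 - t))

def x_true(s):
    """Piecewise-constant sparse test signal of Section 5.1."""
    x = np.zeros_like(s)
    x[((s >= 9/40) & (s <= 11/40)) | ((s >= 29/40) & (s <= 31/40))] = 1.0
    x[(s >= 19/40) & (s <= 21/40)] = 2.0
    return x

def build_model(N=1000):
    """Quadrature model: A = (1/N) kappa((j-1)/N, (2k-1)/(2N)), x = x((2j-1)/(2N))."""
    t = np.arange(N) / N
    s = (2.0 * np.arange(1, N + 1) - 1.0) / (2.0 * N)
    A = kappa(t[:, None], s[None, :]) / N
    return A, x_true(s)

def split_batches(A, y, N_b):
    """Interleaved mini-batches: every N_b-th row of A, shifted by j."""
    return [A[j::N_b] for j in range(N_b)], [y[j::N_b] for j in range(N_b)]
```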
The kernel function \(\kappa(t,s)\) and the exact signal \(x^{\dagger}\) are defined respectively by \[\kappa(t,s)=\begin{cases}40t(1-s),&\text{if }t\leq s,\\ 40s(1-t),&\text{otherwise},\end{cases}\quad\text{and}\quad x^{\dagger}(s)=\begin{cases}1,&\text{if }s\in[\frac{9}{40},\frac{11}{40}]\cup[\frac{29}{40},\frac{31}{40}],\\ 2,&\text{if }s\in[\frac{19}{40},\frac{21}{40}],\\ 0,&\text{otherwise}.\end{cases}\] This is a sparse signal and we expect sparsity promoting norms to perform well. To illustrate this, we compare the following four settings: (a) \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{2}(\Omega)\); (b) \(\mathcal{X}=\mathcal{L}^{2}(\Omega)\) and \(\mathcal{Y}=\mathcal{L}^{1.1}(\Omega)\); (c) \(\mathcal{X}=\mathcal{L}^{1.5}(\Omega)\) and \(\mathcal{Y}=\mathcal{L}^{2}(\Omega)\); (d) \(\mathcal{X}=\mathcal{L}^{1.1}(\Omega)\) and \(\mathcal{Y}=\mathcal{L}^{2}(\Omega)\). Setting (a) is the standard Hilbert space setting, suitable for recovering smooth solutions from measurement data with i.i.d. Gaussian noise, whereas settings (b)-(d) use Banach spaces. Settings (c) and (d) both aim at sparse solutions, and we expect the latter to yield sparser solutions, since the spaces \(\mathcal{L}^{r}(\Omega)\) progressively enforce sparser solutions as the exponent \(r\) gets closer to \(1\). In the experiments, we employ the step-size schedule \(\mu_{k}=\frac{L_{\max}}{1+0.05(k/N_{b})^{1/p^{*}+0.01}}\), with \(L_{\max}=\max_{j\in[N_{b}]}\|\mathbf{A}_{j}\|\). This satisfies the summability conditions \(\sum_{k=1}^{\infty}\mu_{k}=\infty\) and \(\sum_{k=1}^{\infty}\mu_{k}^{p^{*}}<\infty\) required by Theorem 3.8. The operator norm \(\|\mathbf{A}_{j}\|=\|\mathbf{A}_{j}\|_{\mathcal{L}^{r_{\mathcal{X}}}\to\mathcal{L}^{r_{\mathcal{Y}}}}=\max_{\mathbf{x}\neq 0}\frac{\|\mathbf{A}_{j}\mathbf{x}\|_{\mathcal{L}^{r_{\mathcal{Y}}}}}{\|\mathbf{x}\|_{\mathcal{L}^{r_{\mathcal{X}}}}}\) is estimated using Boyd's power method [3]. All the reconstruction algorithms are initialised with a zero vector. In Fig. 1, we compare the reconstructions with settings (a)-(d) for exact data. We observe from Fig. 1(a) that settings (a) and (b), with \(\mathcal{X}=\mathcal{L}^{2}(\Omega)\), result in smooth solutions that fail to capture the sparsity structure of the true signal \(\mathbf{x}^{\dagger}\). In contrast, the choice \(\mathcal{X}=\mathcal{L}^{1.5}(\Omega)\) recovers a sparser solution, and the choice \(\mathcal{X}=\mathcal{L}^{1.1}(\Omega)\) gives a truly sparse reconstruction, but with peaks that overshoot the magnitude of \(\mathbf{x}^{\dagger}\). This might be related to the fact that \(\mathbf{x}^{\dagger}\) exhibits a cluster structure in addition to sparsity, which is not accounted for in the choice of the space \(\mathcal{X}=\mathcal{L}^{1.1}(\Omega)\) [23, 52]. Fig. 1(b) indicates that early stopping would result in lower peaks and significantly reduce the overshooting, but a more explicit form of regularisation [52, 23] might allow faster convergence. In Fig. 2, we investigate the convergence of the objective value with respect to the number of batches \(N_{b}\) and the choice of the solution space \(\mathcal{X}\). As expected, having a larger number of batches results in a faster initial convergence, but also in increased variance, as shown by the oscillations. Moreover, the variance is lower in the case of a smoother space \(\mathcal{X}\) (promoting smoother solutions), where the variance existing in early epochs is dramatically reduced later on.
This observation can be explained by the gradient expression \(g(\mathbf{x},\mathbf{y},i)=\mathbf{A}_{i}^{*}\mathcal{Y}_{p}^{\flat}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})\), which tends to zero, together with its variance, as the SGD iterates converge to the true solution \(\mathbf{x}^{\dagger}\); the larger the exponent \(p\), the faster the convergence. Next we examine the performance of the algorithm when the observational data \(\mathbf{y}^{\delta}\) contains (randomized) impulse noise, cf. Fig. 3, which is generated by \[y_{i}^{\delta}=\left\{\begin{aligned} y_{i}^{\dagger},&\text{with probability }1-p,\\ (1-\xi)y_{i}^{\dagger},&\text{with probability }p/2,\\ 1.4\xi+(1-\xi)y_{i}^{\dagger},&\text{with probability }p/2,\end{aligned}\right.\] where \(p\in(0,1)\) denotes the percentage of corruption (which is set to \(0.05\) in the experiment) and \(\xi\sim\text{Uni}(0.1,0.4)\) follows a uniform distribution over the interval \((0.1,0.4)\). It is known that an \(\mathcal{L}^{r}(\Omega)\) data fit with \(r\) close to \(1\) is suitable for impulsive noise. This allows investigating the role of not only the space \(\mathcal{X}\) but also \(\mathcal{Y}\). The results in Fig. 3(b) show that the choice \(\mathcal{Y}=\mathcal{L}^{r_{\mathcal{Y}}}(\Omega)\), with \(r_{\mathcal{Y}}\) close to \(1\), performs significantly better. Indeed, the Hilbert setting \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{2}(\Omega)\) produces overly smooth, non-sparse solutions with pronounced artefacts. In sharp contrast, setting \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}(\Omega)\) yields solutions that can correctly identify the sparsity structure of the true solution, and have no artefacts. As before, the reconstruction in this setting overestimates the signal magnitude on its support, which is exacerbated as the exponent \(r_{\mathcal{Y}}\) gets closer to \(1\). Lastly, we investigate the convergence behaviour of the method for the generalised model (15) in Section 3.2, where stochastic directions \(g(\mathbf{x},\mathbf{y},i)\) are defined as \(g(\mathbf{x},\mathbf{y},i)=\mathbf{A}_{i}^{*}\mathcal{Y}_{q}^{\flat}(\mathbf{A}_{i}\mathbf{x}-\mathbf{y}_{i})\), with \(q=r_{\mathcal{Y}}\) different from the convexity parameter \(p\) of the space \(\mathcal{X}\). The results in Fig. 4 show that this can indeed be beneficial for the performance of the method: the reconstructions are more accurate not only in terms of the solution support, but also in terms of the magnitudes of the non-zero entries. However, the precise mechanism of the excellent performance remains largely elusive. Figure 1: Comparison of reconstructed solutions after \(500\) epochs. ### Computed Tomography Now we numerically investigate the behaviour of SGD on computed tomography (CT), with respect to the model spaces \(\mathcal{X}\) and \(\mathcal{Y}\) and data noise. In CT reconstruction, we aim at determining the density of cross sections of an object by measuring the attenuation of X-rays as they propagate through the object [36]. Mathematically, the forward map is given by the Radon transform. In the experiments, the discrete forward operator \(\mathbf{A}\) is defined by a 2D parallel beam geometry, with 180 projection angles at \(1^{\circ}\) angular separation, 256 detector elements, and a pixel size of 0.1. The sought-for signal \(\mathbf{x}^{\dagger}\) is a (sparse) phantom, cf. Fig. 5(a). After applying the forward operator \(\mathbf{A}\), either Gaussian (with mean zero and variance 0.01) or salt-and-pepper noise is added.
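For reproducibility, here is a minimal sketch of the two non-Gaussian corruption models used in the experiments (the randomized impulse model displayed above and salt-and-pepper noise); the helper names are ours, and the salt/pepper values are an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def impulse_noise(y, p=0.05):
    """Randomized impulse noise, exactly as in the displayed model."""
    xi = rng.uniform(0.1, 0.4, size=y.shape)
    u = rng.uniform(size=y.shape)
    y_noisy = y.copy()
    low = u < p / 2                       # downward impulses, probability p/2
    high = (u >= p / 2) & (u < p)         # upward impulses, probability p/2
    y_noisy[low] = (1 - xi[low]) * y[low]
    y_noisy[high] = 1.4 * xi[high] + (1 - xi[high]) * y[high]
    return y_noisy

def salt_and_pepper(y, frac=0.05, salt=None, pepper=0.0):
    """Set a fraction of entries to salt/pepper values (assumed: max(y) and 0)."""
    salt = np.max(y) if salt is None else salt
    y_noisy = y.copy()
    idx = rng.random(y.shape)
    y_noisy[idx < frac / 2] = pepper
    y_noisy[(idx >= frac / 2) & (idx < frac)] = salt
    return y_noisy
```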
In the latter setting we consider low (with 5% of values changed to either salt or pepper values) and high (10% of values changed) noise regimes. The resulting sinograms (i.e. measurement data) are shown in Fig. 5(b)-(d). Note that standard quality metrics in image assessment, such as peak signal to noise ratio or mean squared error, are computed using the \(\ell^{2}\)-distance between images, and thus have an implicit bias towards Hilbert spaces and smooth signals, whereas a metric that emphasises sparsity is more pertinent to sparsity promoting spaces. To provide a balanced comparison, we report the following two metrics based on normalised \(\ell^{1}\)- and \(\ell^{2}\)-norms: \(\delta_{1}(\mathbf{x})=\|\mathbf{x}^{\dagger}-\mathbf{x}\|_{\ell^{1}}/\|\mathbf{x}^{\dagger}\|_{\ell^{1}}\) and \(\delta_{2}(\mathbf{x})=\|\mathbf{x}^{\dagger}-\mathbf{x}\|_{\ell^{2}}/\|\mathbf{x}^{\dagger}\|_{\ell^{2}}\). First, we show the performance on Gaussian noise, where we compare the Hilbert setting (\(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{2}\)) with two Banach settings (\(\mathcal{X}=\mathcal{L}^{1.1}\), \(\mathcal{Y}=\mathcal{L}^{2}\), and \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\)). In the reconstruction, we employ step-sizes \(\mu_{k}=\frac{L_{\max}/2}{1+0.05(k/N_{b})^{1/p^{*}+0.07}}\), with \(L_{\max}=\max_{j\in[N_{b}]}\|\mathbf{A}_{j}\|\). Fig. 6 shows exemplary reconstructions. In all three settings much of the noise is retained in the reconstruction, and whereas the Hilbert setting is better at recovering the magnitude of non-zero entries, the Banach settings are better at recovering the support. Figure 4: The dependence of the reconstructions in the case of impulse noise on the choice of the \(q\) parameter in the generalised model (15). The results are obtained using \(N_{b}=100\) batches, after 250 epochs. Figure 3: The reconstruction performance in the case of impulse noise. The algorithms utilised \(N_{b}=100\) batches and were run for 250 epochs. Moreover, we observe that the Banach setting with a sparse signal space \(\mathcal{X}=\mathcal{L}^{1.1}\) and a smooth observation space \(\mathcal{Y}=\mathcal{L}^{2}\) has the best performance in terms of the \(\delta_{1}\) and \(\delta_{2}\) metrics. The Hilbert model performs better than the fully sparse model \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\) in terms of the smooth metric \(\delta_{2}\), but worse in the sparsity promoting metric (\(\delta_{1}\)). We also consider the Banach setting for the generalised model (15), with \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\) and \(p_{\mathcal{Y}}=1.1\), where we study the effects of early stopping. Fig. 7 shows that this setting recovers the support more accurately (and actually does so very early on) and recovers the magnitudes better, but that a form of regularisation (through e.g. early stopping) can be beneficial, since in the later epochs SGD iterates again tend to overshoot on the support. A similar behaviour can be observed for the other studied Banach space settings, but not for the Hilbert space setting, which does not recover the support. We next investigate the performance for low and high salt-and-pepper noise. We compare the Hilbert setting with two Banach settings: the standard SGD with \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\) and the generalised model (15) with \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\) and \(p_{\mathcal{Y}}=1.1\). For the reconstruction, we employ step-sizes \(\mu_{k}=\frac{0.5}{1+0.05(k/N_{b})^{1/p^{*}+0.01}}\). The results in Fig.
8 show the reconstructions after 200 epochs with \(N_{b}=60\) batches. In the low noise regime, the Hilbert setting can reconstruct the general shape of the phantom, but retains a lot of the noise and exhibits streaking artefacts in the background. The reconstruction in the high noise regime is of much poorer quality. The standard Banach SGD shows good behaviour in the low-noise setting, reconstructing well both the sparsity structure and the magnitudes, but its performance degrades in the high noise setting. In sharp contrast, the model (15) shows a nearly perfect reconstruction performance: the phantom is well recovered, with intensities on the correct scale, for both low and high noise regimes. As before, we observe that Banach methods tend to slightly overestimate the overall intensities, though the recovered values are comparable to the true solution. Overall, the Hilbert setting shows qualitatively the worst performance, in both the \(\ell^{1}\)- and \(\ell^{2}\)-norm sense, and the model (15) shows the best performance. Figure 5: The plot in (a) shows the phantom to be recovered and (b)-(d) show noisy measurements used in the recovery: in (b), random Gaussian noise was added, and (c)-(d) are sinogram data degraded by salt-and-pepper noise in the low (5%) and high (10%) noise regimes. Lastly, we investigate a more challenging setting with noise affecting not only the sinograms, but also the original phantoms. Then the ground-truth image is only approximately sparse. The phantom is degraded with Gaussian noise (zero mean and variance 0.01), after which we apply the forward operator to the resulting noisy phantom. We then add either Gaussian (zero mean and variance 0.01) or salt-and-pepper noise (affecting 3% of measurements); see Fig. 9 for representative images. The reconstruction algorithms use SGD with a decaying step-size schedule, \(\mu_{k}=\frac{0.2}{1+0.05(k/N_{b})^{1/p^{*}+0.01}}\). The reconstructions for data with Gaussian noise in both phantom and sinogram are shown in Fig. 10. As before, reconstructions in the Hilbert setting are comparable, but slightly worse than those with the Banach ones. Banach methods are better at recovering the sparsity structure of the solution, and have better reconstruction quality metrics, though they do not completely remove the noise. In the second setting, with the Gaussian noise affecting the phantom and salt-and-pepper noise affecting the sinogram, the difference in reconstruction quality between the Hilbert space and Banach space settings is significantly more pronounced, cf. Fig. 11. In both settings, the choice of spaces \(\mathcal{X}\) and \(\mathcal{Y}\) can have a big impact on the reconstruction quality, especially on the amount of noise retained in the background. Moreover, further improvements can be achieved by explicitly penalising the objective function. Figure 6: The reconstruction of the phantom from the observed sinograms degraded by Gaussian noise, cf. Fig. 5(b). The algorithms use \(N_{b}=60\) batches and were run for 200 epochs. Figure 7: The evolution of the quality of reconstruction from sinograms degraded by Gaussian noise with respect to the number of epochs. The algorithm uses \(\mathcal{X}=\mathcal{Y}=\mathcal{L}^{1.1}\) and \(p_{\mathcal{Y}}=1.1\), with \(N_{b}=60\) batches. ## Acknowledgements We are very grateful to three anonymous referees for their constructive comments which have led to a significant improvement of the quality of the paper.
Figure 8: The reconstruction of the phantom from the observed sinograms, degraded with low (top) and high (bottom) salt-and-pepper noise, respectively, obtained using the Hilbert space model (left), the Banach model (middle), and the Banach model with the generalised Kaczmarz scheme (15) (right). The algorithms use \(N_{b}=60\) batches and were run for 200 epochs. Figure 9: The phantoms and sinograms for the forward model with both pre- and post-measurement noise. The phantom on the left is degraded by Gaussian noise. After applying the forward operator, either Gaussian (middle) or salt-and-pepper noise (right) is added to the sinogram. ## Appendix A Technical results and proofs **Lemma A.1** ([38, Lemma 6]).: _Let \((\delta_{n})_{n}\) be a sequence of non-negative scalars, \((\mu_{n})_{n}\) a sequence of positive scalars, and \(\alpha>0\). If_ \[\delta_{n+1}\leq\delta_{n}-\mu_{n+1}\delta_{n}^{1+\alpha},\text{ for all }n=0,\ldots,N,\] _then_ \[\delta_{N}\leq\delta_{0}\Big(1+\alpha\delta_{0}^{\alpha}\sum_{n=1}^{N}\mu_{n}\Big)^{-1/\alpha}.\] ### Two elementary estimates In this section, we present two elementary estimates on the SGD iterates for exact data that are useful in establishing the regularising property. **Lemma A.2**.: _Let the sequence \((\mathbf{x}_{k})_{k\in\mathbb{N}}\) be generated by iterations (10), let the step-sizes \((\mu_{k})_{k\in\mathbb{N}}\) satisfy \(\mu_{k}^{p^{*}-1}\leq\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) for all \(k\in\mathbb{N}\), and let the stochastic update directions \(\mathbf{g}_{k}\) be of the form (11). Then for any \(\mathbf{\widehat{x}}\in\mathcal{X}_{\min}\), the sequence \((\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{\widehat{x}}))_{k\in\mathbb{N}}\) is non-increasing. In particular, if \(\mathbf{B}_{p}(\mathbf{x}_{0},\mathbf{\widehat{x}})\leq\rho\), then \(\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{\widehat{x}})\leq\rho\) for all \(k\)._ Figure 11: The reconstructed phantom from the sinograms with Gaussian pre-measurement and salt-and-pepper post-measurement noise. The algorithms use \(N_{b}=60\) batches and were run for \(400\) epochs. Figure 10: The reconstruction of the phantom from the observed sinograms with pre- and post-measurement Gaussian noise. The algorithms use \(N_{b}=60\) batches and were run for \(200\) epochs. Proof.: Let \(\Delta_{k}=\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{\widehat{x}})\).
By Lemma 3.6, we have \[\Delta_{k+1}\leq\Delta_{k}-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\mathbf{\widehat{x}}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}.\] By the definition of the duality map, the choice of the update directions \(\mathbf{g}_{k}\), and the fact that \(\mathbf{A}_{i_{k+1}}\mathbf{\widehat{x}}=\mathbf{y}_{i_{k+1}}\) for \(\mathbf{\widehat{x}}\in\mathcal{X}_{\min}\), we have \[\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\mathbf{\widehat{x}}\right\rangle=\left\langle\mathcal{Y}_{p}^{\flat}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}),\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\right\rangle=\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{p},\] \[\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}=\|\mathbf{A}_{i_{k+1}}^{*}\mathcal{Y}_{p}^{\flat}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\|_{\mathcal{X}^{*}}^{p^{*}}\leq\|\mathbf{A}_{i_{k+1}}^{*}\|^{p^{*}}\|\mathcal{Y}_{p}^{\flat}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\|_{\mathcal{Y}^{*}}^{p^{*}}\leq L_{\max}^{p^{*}}\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{(p-1)p^{*}}=L_{\max}^{p^{*}}\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{p}.\] Consequently, \[\Delta_{k+1}\leq\Delta_{k}-\mu_{k+1}\left\langle\mathbf{g}_{k+1},\mathbf{x}_{k}-\mathbf{\widehat{x}}\right\rangle+\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}}\|\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}^{p^{*}}\leq\Delta_{k}-\Big(1-L_{\max}^{p^{*}}\frac{G_{p^{*}}}{p^{*}}\mu_{k+1}^{p^{*}-1}\Big)\mu_{k+1}\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{p}.\] Since \(\mu_{k}^{p^{*}-1}\leq\frac{p^{*}}{G_{p^{*}}L_{\max}^{p^{*}}}\) by assumption, \(\Delta_{k+1}\leq\Delta_{k}\leq\Delta_{0}\), completing the proof. **Lemma A.3** (Coercivity of the Bregman distance).: _If \(\Delta_{k}=\mathbf{B}_{p}(\mathbf{x}_{k},\mathbf{x}^{\dagger})\leq C<\infty\) for all \(k\), then \(\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}\leq(2p^{*})^{p}(\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}\lor C)\), for all \(k\in\mathbb{N}\)._ Proof.: By the definition of \(\Delta_{k}\) and the Cauchy-Schwarz inequality, we have \[\Delta_{k}\geq\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}+\frac{1}{p}\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}-\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}.\] Then we have \(\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}(\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}-\|\mathbf{x}^{\dagger}\|_{\mathcal{X}})\leq\Delta_{k}.\) If now \(\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}-\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}\leq\frac{1}{2p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}\), it follows that \(\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}\leq(2p^{*})^{p}\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}\). Otherwise, if \(\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}-\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}\geq\frac{1}{2p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}\), we have \[\frac{1}{2p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}\leq\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}\left(\frac{1}{p^{*}}\|\mathbf{x}_{k}\|_{\mathcal{X}}-\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}\right)\leq\Delta_{k}.\] Combining these two bounds gives \(\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p}\leq(2p^{*})^{p}\left(\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p}\lor\Delta_{k}\right)\). ### Proof of Lemma 4.2 To prove Lemma 4.2, we need the following simple fact. **Lemma A.4**.: _For any fixed \(k\in\mathbb{N}\), the clean iterates \(\mathbf{x}_{k}\) generated by (22) are uniformly bounded, i.e.
there exists \(C_{k}>0\) such that \(\sup_{\omega\in\mathcal{F}_{k}}\|\mathbf{x}_{k}\|_{\mathcal{X}}\leq C_{k}<\infty\)._ Proof.: If the stepsizes \(\mu_{k}\) satisfy the conditions of Lemma A.2, the statement follows directly from Lemma A.3, and moreover, \(C_{k}\) can be chosen to be independent of \(k\). Otherwise we proceed by induction. The induction basis is trivial. For the induction step, by the triangle inequality and the definition of duality maps, we have \[\|\mathbf{x}_{k+1}\|_{\mathcal{X}}^{p-1}=\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}\leq\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}+L_{\max}\mu_{k+1}\|\mathcal{Y}_{p}^{\flat}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\|_{\mathcal{Y}^{*}}\leq\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}+L_{\max}\mu_{k+1}\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}^{p-1}\leq\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}+L_{\max}^{p}\mu_{k+1}\|\mathbf{x}_{k}-\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p-1}.\] Now under the inductive hypothesis \(\sup_{\omega\in\mathcal{F}_{k}}\|\mathbf{x}_{k}\|_{\mathcal{X}}\leq C_{k}<\infty\), we have \[\|\mathbf{x}_{k+1}\|_{\mathcal{X}}^{p-1}\leq\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}+L_{\max}^{p}\mu_{k+1}(\|\mathbf{x}_{k}\|_{\mathcal{X}}^{p-1}+\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p-1})\leq C_{k}^{p-1}(1+L_{\max}^{p}\mu_{k+1})+L_{\max}^{p}\mu_{k+1}\|\mathbf{x}^{\dagger}\|_{\mathcal{X}}^{p-1}.\] This directly proves the statement of the lemma. Now we can present the proof of Lemma 4.2. Proof of Lemma 4.2.: For any sequence \((\delta_{j})_{j\in\mathbb{N}}\), with \(\lim_{j\to\infty}\delta_{j}=0\), we consider a sequence of random vectors \((\mathbf{x}_{k}^{\delta_{j}},\mathbf{x}_{k})_{j\in\mathbb{N}}\). We will show by induction that (for any fixed \(k\in\mathbb{N}\)) the sequence \((\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta_{j}},\mathbf{x}_{k}))_{j}\) is uniformly bounded, i.e. \(\sup_{\omega\in\mathcal{F}_{k}}\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta_{j}},\mathbf{x}_{k})<\infty\), converges to \(0\) point-wise, and that \(\mathbf{x}_{k}^{\delta_{j}}\) is uniformly bounded. The remaining two claims regarding the convergence of \(\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}\) and \(\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}\) then follow directly. For notational brevity, we also suppress the sequence notation \(\delta_{j}\), and only use \(\delta\). For the induction base, by Theorem 2.6(i) and (iv), we have (recalling that the exact and noisy iterations start from the same \(\mathbf{x}_{0}\)) \[\mathbf{B}_{p}(\mathbf{x}_{1}^{\delta},\mathbf{x}_{1})=\mathbf{B}_{p^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{1}),\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{1}^{\delta})\right)\leq\frac{G_{p^{*}}}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{0})-\mu_{1}(\mathbf{g}_{1}^{\delta}-\mathbf{g}_{1})\|_{\mathcal{X}^{*}}^{p^{*}}=\frac{G_{p^{*}}}{p^{*}}\mu_{1}^{p^{*}}\|\mathbf{g}_{1}^{\delta}-\mathbf{g}_{1}\|_{\mathcal{X}^{*}}^{p^{*}},\] where \(\mathbf{g}_{1}^{\delta}=g(\mathbf{x}_{0},\mathbf{y}^{\delta},i_{1})\) and \(\mathbf{g}_{1}=g(\mathbf{x}_{0},\mathbf{y},i_{1})\).
Specifically, in the case (11), we have \[\|\mathbf{g}_{1}^{\delta}-\mathbf{g}_{1}\|_{\mathcal{X}^{*}}^{p^{*}}=\|\mathbf{A}_{i_{1}}^{*}\left(\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}})\right)\|_{\mathcal{X}^{*}}^{p^{*}}\leq L_{\max}^{p^{*}}\|\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}})\|_{\mathcal{Y}^{*}}^{p^{*}}.\] Since \(\mathcal{Y}\) is by assumption uniformly smooth, by Theorem 2.3(iv), we have \[\|\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}})\|_{\mathcal{Y}^{*}}\leq C\max\{1,\|\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}}^{\delta}\|_{\mathcal{Y}},\|\mathbf{A}_{i_{1}}\mathbf{x}_{0}-\mathbf{y}_{i_{1}}\|_{\mathcal{Y}}\}^{p-1}\bar{\rho}_{\mathcal{Y}}(\|\mathbf{y}_{i_{1}}-\mathbf{y}_{i_{1}}^{\delta}\|_{\mathcal{Y}}).\] Upon maximising over \(\mathcal{F}_{1}\), the term in the maximum is uniformly bounded. Since \(\bar{\rho}_{\mathcal{Y}}(\tau):=\rho_{\mathcal{Y}}(\tau)/\tau\leq 1\), \(\mathbf{B}_{p}(\mathbf{x}_{1}^{\delta},\mathbf{x}_{1})\) is uniformly bounded. Since \(\lim_{\tau\to 0}\bar{\rho}_{\mathcal{Y}}(\tau)=0\), it follows that \(\lim_{\delta\searrow 0}\mathbf{B}_{p}(\mathbf{x}_{1}^{\delta},\mathbf{x}_{1})=0\), point-wise. By the \(p\)-convexity of \(\mathcal{X}\), we have \[0\leq\frac{C_{p}}{p}\|\mathbf{x}_{1}^{\delta}-\mathbf{x}_{1}\|_{\mathcal{X}}^{p}\leq\mathbf{B}_{p}(\mathbf{x}_{1}^{\delta},\mathbf{x}_{1}).\] Thus, \(\|\mathbf{x}_{1}^{\delta}-\mathbf{x}_{1}\|_{\mathcal{X}}\) is uniformly bounded and \(\lim_{\delta\searrow 0}\|\mathbf{x}_{1}^{\delta}-\mathbf{x}_{1}\|_{\mathcal{X}}=0\), point-wise. By the uniform boundedness of \(\|\mathbf{x}_{1}^{\delta}-\mathbf{x}_{1}\|_{\mathcal{X}}\) and Lemma A.4, the sequence \(\mathbf{x}_{1}^{\delta}\) is also uniformly bounded: \[\|\mathbf{x}_{1}^{\delta}\|_{\mathcal{X}}\leq\|\mathbf{x}_{1}^{\delta}-\mathbf{x}_{1}\|_{\mathcal{X}}+\|\mathbf{x}_{1}\|_{\mathcal{X}}. \tag{29}\] For some \(k>0\), assume that \(\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}_{k})\) is uniformly bounded and converges to \(0\) point-wise as \(\delta\to 0^{+}\). Using the \(p\)-convexity of \(\mathcal{X}\), it follows that \(\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}\) is uniformly bounded and converges to \(0\) point-wise, and using again Lemma A.4, it follows that \(\mathbf{x}_{k}^{\delta}\) is also uniformly bounded. Then by Theorem 2.6(i) and (iv), we have \[\mathbf{B}_{p}(\mathbf{x}_{k+1}^{\delta},\mathbf{x}_{k+1})=\mathbf{B}_{p^{*}}\left(\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k+1}),\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k+1}^{\delta})\right)\leq\frac{G_{p^{*}}}{p^{*}}\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})-\mu_{k+1}(\mathbf{g}_{k+1}^{\delta}-\mathbf{g}_{k+1})\|_{\mathcal{X}^{*}}^{p^{*}}\leq\frac{G_{p^{*}}}{p^{*}}\left(\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}+\mu_{k+1}\|\mathbf{g}_{k+1}^{\delta}-\mathbf{g}_{k+1}\|_{\mathcal{X}^{*}}\right)^{p^{*}}.\] Now we separately analyse the two terms in the parenthesis.
First, using the uniform smoothness of \(\mathcal{X}\) (and Theorem 2.3(iv) with \(\bar{\rho}_{\mathcal{X}^{*}}(\tau)<C\tau^{p^{*}-1}\), cf. Definition 2.2), we have \[\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}\leq C\max\{1,\|\mathbf{x}_{k}^{\delta}\|_{\mathcal{X}},\|\mathbf{x}_{k}\|_{\mathcal{X}}\}^{p-1}\bar{\rho}_{\mathcal{X}}(\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}). \tag{30}\] Since the right hand side is uniformly bounded and converges to \(0\) point-wise by the induction hypothesis, the same holds for the left hand side. Next we decompose the second term into a sum of two perturbation terms \[\|g(\mathbf{x}_{k}^{\delta},\mathbf{y}^{\delta},i_{k+1})-g(\mathbf{x}_{k},\mathbf{y},i_{k+1})\|_{\mathcal{X}^{*}}\leq\|g(\mathbf{x}_{k},\mathbf{y}^{\delta},i_{k+1})-g(\mathbf{x}_{k},\mathbf{y},i_{k+1})\|_{\mathcal{X}^{*}}+\|g(\mathbf{x}_{k}^{\delta},\mathbf{y}^{\delta},i_{k+1})-g(\mathbf{x}_{k},\mathbf{y}^{\delta},i_{k+1})\|_{\mathcal{X}^{*}}=:\mathrm{I}+\mathrm{II}.\] First, by the assumption that \(\mathcal{Y}\) is uniformly smooth and Theorem 2.3(iv), we have \[\mathrm{I}=\|\mathbf{A}_{i_{k+1}}^{*}\big(\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\big)\|_{\mathcal{X}^{*}}\leq L_{\max}\|\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}})\|_{\mathcal{Y}^{*}}\leq CL_{\max}\max\{1,\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta}\|_{\mathcal{Y}},\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}\|_{\mathcal{Y}}\}^{p-1}\bar{\rho}_{\mathcal{Y}}(\|\mathbf{y}_{i_{k+1}}-\mathbf{y}_{i_{k+1}}^{\delta}\|_{\mathcal{Y}}).\] By the induction hypothesis and repeating the arguments from the base of induction, the right hand side is uniformly bounded and converges to \(0\) point-wise. Second, similarly, we have \[\mathrm{II}=\|\mathbf{A}_{i_{k+1}}^{*}\big(\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i_{k+1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta})\big)\|_{\mathcal{X}^{*}}\leq L_{\max}\|\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i_{k+1}}^{\delta})-\mathcal{J}_{p}^{\mathcal{Y}}(\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta})\|_{\mathcal{Y}^{*}}\leq CL_{\max}\max\{1,\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}^{\delta}-\mathbf{y}_{i_{k+1}}^{\delta}\|_{\mathcal{Y}},\|\mathbf{A}_{i_{k+1}}\mathbf{x}_{k}-\mathbf{y}_{i_{k+1}}^{\delta}\|_{\mathcal{Y}}\}^{p-1}\bar{\rho}_{\mathcal{Y}}(\|\mathbf{A}_{i_{k+1}}(\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k})\|_{\mathcal{Y}}).\] By the same arguments, the right hand side is uniformly bounded. Moreover, \(\|\mathbf{A}_{i_{k+1}}(\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k})\|_{\mathcal{Y}}\leq L_{\max}\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}\), which by the induction hypothesis converges point-wise to \(0\). Putting all these bounds together yields that \(\mathbf{B}_{p}(\mathbf{x}_{k+1}^{\delta},\mathbf{x}_{k+1})\) is uniformly bounded and converges point-wise to \(0\). Using Vitali's theorem, the desired statement follows directly.
Since \(\mathbf{B}_{p}(\mathbf{x}_{k}^{\delta},\mathbf{x}_{k})\) is uniformly bounded and converges point-wise to \(0\) for any \(k\), so does \(\|\mathbf{x}_{k}^{\delta}-\mathbf{x}_{k}\|_{\mathcal{X}}\), and consequently, by the inequality (30) (and (29)), so does \(\|\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k}^{\delta})-\mathcal{J}_{p}^{\mathcal{X}}(\mathbf{x}_{k})\|_{\mathcal{X}^{*}}\). The second part of the claim thus follows. This completes the proof of the induction step, and hence also the lemma.
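As a quick numerical sanity check of the recursion bound in Lemma A.1 (a sketch of our own, not part of the proofs), one can simulate the extremal case of the recursion and compare it with the closed-form bound:

```python
import numpy as np

alpha, delta0, N = 1.0, 1.0, 200
mu = 0.1 * np.ones(N)            # positive step-sizes small enough that delta_n stays >= 0

delta = delta0
for n in range(N):
    delta = delta - mu[n] * delta ** (1 + alpha)   # extremal case of the recursion

# Lemma A.1: delta_N <= delta_0 * (1 + alpha * delta_0^alpha * sum(mu))^(-1/alpha)
bound = delta0 * (1 + alpha * delta0 ** alpha * mu.sum()) ** (-1 / alpha)
assert delta <= bound + 1e-12
```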
2306.17064
A Note on $L^1-$contractive property of the solutions of the scalar conservation laws through the method by Lax-Oleĭnik
In this note, we study the $L^1-$contractive property of the solutions of the scalar conservation laws, obtained by the method of Lax-Oleĭnik. First, it is proved when $f$ is merely convex and the initial data is in $L^{\infty}(\mathbb{R})$. Then, it is shown for the case when the initial data is in $L^1(\mathbb{R})$ with the convex flux having super-linear growth. Finally, the $L^1-$contractive property is shown for the scalar conservation laws with the initial data in $L^1(\mathbb{R})$ and the flux is "semi-super-linear". This entire note does not assume any results mentioned through the approach by Kruzkov.
Abhishek Adimurthi
2023-06-29T16:10:34Z
http://arxiv.org/abs/2306.17064v1
A note on \(L^{1}-\)contractive property of the solutions of the scalar conservation laws through the method by Lax-Oleinik. ###### Abstract. In this note, we study the \(L^{1}-\)contractive property of the solutions of the scalar conservation laws, obtained by the method of Lax-Oleinik. First, it is proved when \(f\) is merely convex and the initial data is in \(L^{\infty}(\mathbb{R})\). Then, it is shown for the case when the initial data is in \(L^{1}(\mathbb{R})\) with the convex flux having super-linear growth. Finally, the \(L^{1}-\)contractive property is shown for the scalar conservation laws with the initial data in \(L^{1}(\mathbb{R})\) and the flux is "semi-super-linear". This entire note does not assume any results mentioned through the approach by Kruzkov. ## Introduction. Let \(f:\mathbb{R}\mapsto\mathbb{R}\) be a real-valued function, \(u_{0}\in L^{\infty}(\mathbb{R})\), for which let \(u\) in \(L^{\infty}(\mathbb{R}\times(0,\infty))\) be a weak solution of the scalar conservation law, \[\begin{cases}\frac{\partial}{\partial t}u+\frac{\partial}{\partial x}\big[f(u)\big]=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)=u_{0}(x);\quad x\in\mathbb{R},\end{cases} \tag{1}\] i.e., in the weak sense, one can write (1) as the following integral identity: \[\int_{0}^{\infty}\int_{-\infty}^{+\infty}\left(u\varphi_{t}+f(u)\varphi_{x}\right)dxdt+\int_{-\infty}^{\infty}u_{0}(x)\varphi(x,0)\ dx=0, \tag{2}\] for all test functions \(\varphi\in C_{c}^{\infty}(\mathbb{R}\times[0,\infty))\), with \(\varphi_{t}=\frac{\partial}{\partial t}\varphi\) and \(\varphi_{x}=\frac{\partial}{\partial x}\varphi\). In general, (2) can admit many solutions. A question of interest to ask here is: for what set of functions does (2) admit a unique solution? It is mentioned in [1] that if the function \(f\) is taken to be uniformly convex, i.e. \(f\in C^{2}(\mathbb{R})\) and there exists \(C>0\) such that \(f^{\prime\prime}(x)\geq C\), for all \(x\in\mathbb{R}\), then by [1] and [2], an explicit solution is obtained by looking at a corresponding Hamilton-Jacobi equation. The explicit formula then gives the Oleinik one-sided inequality: \[u(x+z,t)-u(x,t)\leq C(1+t^{-1})z, \tag{3}\] for some \(C\geq 0\) and for a.e \(x\in\mathbb{R}\), \(t>0\), \(z>0\). It is shown in [1] that such a solution of (2) is unique, i.e. if \(u_{1}\) and \(u_{2}\) satisfy the Oleinik one-sided inequality and have the same initial condition, then \(u_{1}=u_{2}\) a.e. On the other hand, using the vanishing viscosity method, it is proved in [14] that the PDE (1) admits a weak solution satisfying certain integral inequalities, as mentioned in Definition 1. **Definition 1**.: _Fix \(T>0\), for which define \(\pi_{T}:=\mathbb{R}\times(0,T)\).
A bounded measurable function \(u:\pi_{T}\mapsto\mathbb{R}\) is called a generalised entropy solution (in the sense of Kruzkov) of the PDE (1) if the following holds,_ * _For any constant_ \(K\in\mathbb{R}\) _and for any non-negative test function_ \(\varphi\in C_{c}^{\infty}(\pi_{T})\)_, there holds the inequality_ \[\int_{\pi_{T}}\Big[|u-K|\varphi_{t}+\text{sign}(u-K)\left[f(u)-f(K)\right]\varphi_{x}\Big]dxdt\geq 0.\] (4) * _The function_ \(u(t,.)\) _converges to_ \(u_{0}\) _as_ \(t\to 0^{+}\) _in the topology of_ \(L^{1}_{loc}(\mathbb{R})\)_, i.e,_ \[\forall[a,b]\subset\mathbb{R},\quad\lim_{t\to 0^{+}}\int_{a}^{b}|u(x,t)-u_{0}(x)|dx=0.\] (5) It is shown in [14] that if \(u_{1}\) and \(u_{2}\) are two generalised entropy solutions (in the sense of Kruzkov), with initial data \(u_{10}\) and \(u_{20}\), then for a.e \(t>0\), there holds that \[\forall a<b,\quad\int_{a}^{b}|u_{1}(x,t)-u_{2}(x,t)|dx\leq\int_{a-Lt}^{b+Lt}|u_{10}(x)-u_{20}(x)|dx, \tag{6}\] where \[L:=\lim_{\epsilon\to 0}\operatorname{ess\,sup}\Big\{|f^{\prime}(p)|;p\in I_{\epsilon}\Big\},\] with \[I_{\epsilon}:=[-\max(\|u_{10}\|_{\infty},\|u_{20}\|_{\infty})-\epsilon,\max(\|u_{10}\|_{\infty},\|u_{20}\|_{\infty})+\epsilon].\] **Definition 2**.: _The property mentioned in Eq. (6) above is referred to as the \(L^{1}\) contractive property of solutions._ The approach in [14] has advantages over the one by Lax-Oleinik: * In [14], \(f\) is assumed to be just a locally Lipschitz function. * The method mentioned in [14] works in any dimension. In this note, we will always assume that \(f\) is convex. It is shown in [10] (see also [11], [12], [23]) that if the function \(f\) is uniformly convex and \(C^{4}\), then there exists \(C>0\) such that for a.e \(x\in\mathbb{R},z>0,t>0\), there holds, \[u(x+z,t)-u(x,t)\leq\frac{Cz}{t},\] where \(u\) is a solution obtained by [13]. By the uniqueness result of Oleinik, the solutions obtained by the method of [13] and by the method of [12] are the same. Furthermore, using Oleinik's idea, [10] proves that if \(f\) is \(C^{1}\) and strictly convex, then for a.e \(x\in\mathbb{R},z>0,t>0\), there holds \[f^{\prime}\left(u(x+z,t)\right)-f^{\prime}\left(u(x,t)\right)\leq\frac{z}{t}. \tag{7}\] Moreover, [10] shows that if \(u\) and \(w\) are two weak solutions to the scalar conservation laws with the same initial data and satisfy Eq. (7), with \(f\) being \(C^{1}\) and strictly convex, then \(u=w\) for a.e \((x,t)\in\mathbb{R}\times(0,\infty)\). However, in this note, we look into answering the following questions with the assumption that the flux function \(f\) is just convex: 1. Assume that the initial data \(u_{0}\) is in \(L^{\infty}(\mathbb{R})\). Suppose the regularity condition on the function \(f\) is relaxed, i.e., there is no assumption made on the differentiability of the function \(f\); does the method of Lax-Oleinik through the Hamilton-Jacobi system provide a weak solution to the PDE (1)? 2. Suppose \(u_{10},u_{20}\) are in \(L^{\infty}(\mathbb{R})\). Consider the scalar conservation law with two different initial conditions, \[\begin{cases}\frac{\partial}{\partial t}u+\frac{\partial}{\partial x}\big[f(u)\big]=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)=u_{10}(x);\quad x\in\mathbb{R},\end{cases}\] (8) \[\begin{cases}\frac{\partial}{\partial t}u+\frac{\partial}{\partial x}\big[f(u)\big]=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)=u_{20}(x);\quad x\in\mathbb{R}.\end{cases}\] (9) Let the weak solutions (satisfying Eq. (2)) to the scalar conservation laws Eq. (8) and Eq.
(9) be denoted by \(u_{1}\) and \(u_{2}\) respectively. Can we obtain the \(L^{1}\)-contractive property for the solutions of these two scalar conservation laws, obtained through the method of Lax-Oleinik? That is, for a.e \(t>0\), is Eq. (6) true, i.e. \[\int_{a}^{b}|u_{1}(x,t)-u_{2}(x,t)|dx\leq\int_{a-Lt}^{b+Lt}|u_{10}(x)-u_{20}(x)|dx?\] 3. Now, suppose that \(u_{10},u_{20}\) are in \(L^{1}(\mathbb{R})\). Also, assume that the flux \(f\) is super-linear. Consider the scalar conservation law with two different initial conditions, \[\begin{cases}\dfrac{\partial}{\partial t}u+\dfrac{\partial}{\partial x}\big[f(u)\big]=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)=u_{10}(x);\quad x\in\mathbb{R},\end{cases}\] (10) \[\begin{cases}\dfrac{\partial}{\partial t}u+\dfrac{\partial}{\partial x}\big[f(u)\big]=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)=u_{20}(x);\quad x\in\mathbb{R}.\end{cases}\] (11) Let the weak solutions (satisfying Eq. (2)) to the scalar conservation laws Eq. (10) and Eq. (11) be denoted by \(u_{1}\) and \(u_{2}\) respectively. Can we obtain the \(L^{1}\)-contractive property for the solutions of these two scalar conservation laws, obtained through the method of Lax-Oleinik? Furthermore, for a.e \(t>0\), do the weak solutions \(u_{1}\) and \(u_{2}\) satisfy \[\int_{\mathbb{R}}|u_{1}(x,t)-u_{2}(x,t)|dx\leq\int_{\mathbb{R}}|u_{10}(x)-u_{20}(x)|dx?\] 4. Is (Q.3) true when the flux is convex, but a relaxation is made on the super-linearity of the flux function \(f\), and the function \(f\) is assumed to be "semi-super-linear", i.e., \[\lim_{p\to\pm\infty}\dfrac{f(p)}{p}=\mu_{\pm}\in[-\infty,+\infty],\text{ with }\mu_{-}\leq 0\leq\mu_{+}?\] In this note, we show that there are (weak) solutions that answer questions (Q.1), (Q.2), (Q.3) and (Q.4) affirmatively (see Theorem 1, Theorem 2 and Theorem 3). For the scalar conservation law with the initial data taken to be in \(L^{\infty}(\mathbb{R})\), Remark 1 below shows that if the function \(f\) is just taken to be convex, we can as well assume that \(f\) is convex and super-linear. There are three main theorems in this note. Theorem 1 establishes the \(L^{1}\)-contractivity for a scalar conservation law with the flux being just convex and super-linear and the initial data in \(L^{\infty}(\mathbb{R})\). Therefore, by Remark 1, the \(L^{1}\)-contractive property holds when the flux is assumed to be just convex and the initial data is in \(L^{\infty}(\mathbb{R})\). Then, Theorem 2 shows that similar results can be established for initial conditions in the space \(L^{1}(\mathbb{R})\), but with the flux taken to be super-linear and convex. Finally, a similar set of results is proved in Theorem 3 for the case when the flux is convex, but a relaxation is made on the super-linearity of the flux function \(f\), i.e. \[\lim_{p\to\pm\infty}\frac{f(p)}{p}=\mu_{\pm}\in[-\infty,+\infty],\text{ with }\mu_{-}\leq 0\leq\mu_{+}.\] The approach in this note does not use any results from [10]. Moreover, the \(L^{1}\)-contraction property for the solutions is shown neither in [11] nor in [12]. So, independently of [10], we plan on proving the \(L^{1}\)-contraction property for the scalar conservation laws with the flux function \(f\) convex and with no conditions on its regularity, thereby establishing uniqueness.
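Before turning to the precise statements, a standard example (included here purely for illustration) shows what the Lax-Oleinik construction produces. For the Burgers flux \(f(p)=\frac{p^{2}}{2}\) one has \(f^{*}(q)=\frac{q^{2}}{2}\), and the value function defined in (15) below reads \[V(x,t)=\inf_{y\in\mathbb{R}}\Big\{v_{0}(y)+\frac{(x-y)^{2}}{2t}\Big\},\qquad u(x,t)=\frac{\partial V}{\partial x}(x,t)=\frac{x-y(x,t)}{t},\] where \(y(x,t)\) denotes a minimizer. For the Riemann data \(u_{0}=\mathbb{1}_{(0,\infty)}\), this gives the rarefaction wave \[u(x,t)=\begin{cases}0,&x\leq 0,\\ x/t,&0<x<t,\\ 1,&x\geq t,\end{cases}\] which indeed satisfies the one-sided bound \(u(x+z,t)-u(x,t)\leq z/t\) for \(z>0\). In order to state the main results, we mention some notations.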
**Notations.** For \(u_{0}\in L^{\infty}(\mathbb{R})\), we define \(v_{0}\) to be the primitive of \(u_{0}\) as \[v_{0}(x):=\int_{0}^{x}u_{0}(t)\ dt. \tag{12}\] Clearly, the function \(v_{0}\) satisfies \[|v_{0}(x)-v_{0}(y)|\leq\|u_{0}\|_{\infty}|x-y|, \tag{13}\] and hence, \(v_{0}\) is a Lipschitz function with Lipschitz constant \(lip(v_{0})\leq\|u_{0}\|_{\infty}\). Let \(f^{*}\) denote the Fenchel dual of \(f\), which is given by \[f^{*}(q):=\sup\{pq-f(p);p\in\mathbb{R}\}. \tag{14}\] Let \(\eta_{\epsilon}\) be the mollifying sequence and define \[f_{\epsilon}(x):=f*\eta_{\epsilon}(x)=\int_{\mathbb{R}}f(y)\eta_{\epsilon}(x-y)dy.\] Let \(0\leq s<t\), for which we define some functions as follows: \[V(x,t):=\inf_{y\in\mathbb{R}}\left\{v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right)\right\}, \tag{15}\] \[V(x,s,t):=\inf_{y\in\mathbb{R}}\left\{V(y,s)+(t-s)f^{*}\left(\frac{x-y}{t-s}\right)\right\},\] \[Ch(x,s,t):=\left\{\text{ all minimizers in the definition of }V(x,s,t)\ \right\},\] \[Ch(x,t):=Ch(x,0,t).\] The function \(V\) is called the value function for the flux function \(f\) and the set \(Ch\) is called the characteristic set. For each \(t>0\), \(x\in\mathbb{R}\), define the functions \(y_{+}\) and \(y_{-}\) as \[\begin{split} y_{+}(x,t)&:=\sup\{y;y\in Ch(x,t)\},\\ y_{-}(x,t)&:=\inf\{y;y\in Ch(x,t)\}.\end{split} \tag{16}\] **The Main Theorems.** **Remark 1**.: _Owing to Theorem 1 below, if the function \(f\) is convex and super-linear, we see that the solution is given by \(\frac{\partial V(x,t)}{\partial x}\), which is bounded by \(Lip(v_{0})\) due to Rademacher's theorem. Therefore, noting that if \(u_{0}\in L^{\infty}(\mathbb{R})\) is fixed, \(u\) is a weak solution as in (2) and \(g:\mathbb{R}\mapsto\mathbb{R}\) is any continuous function such that \(f(p)=g(p)\), for all_ \[|p|\leq\|u\|_{\infty}\leq Lip(v_{0})\leq\|u_{0}\|_{\infty},\] _then \(u\) is also a weak solution of_ \[\left\{\begin{aligned} \frac{\partial}{\partial t}u+\frac{\partial}{\partial x}\big[g(u)\big]&=0;\quad x\in\mathbb{R},t>0,\\ u(x,0)&=u_{0}(x);\quad x\in\mathbb{R}.\end{aligned}\right. \tag{17}\] _Hence, we can change the function \(f\), which is just assumed to be convex, such that outside the interval,_ \[[-\|u_{0}\|_{\infty}-1,\|u_{0}\|_{\infty}+1],\] _the ratio \(f(p)/|p|\) blows up [see the (1\({}^{st}\)) part of the Appendix]. Hence, we can assume that \(f\) is convex and has super-linear growth, i.e,_ \[\lim_{|p|\to\infty}\frac{f(p)}{|p|}=\infty. \tag{18}\] _The property of super-linearity of the function \(f\) ensures that the Fenchel dual of \(f\) is finite._ **Theorem 1**.: _Let \(u_{0}\) be a function in \(L^{\infty}(\mathbb{R})\). Define the primitive of \(u_{0}\) as in Eq. (12). Let \(f:\mathbb{R}\mapsto\mathbb{R}\) be a convex function. By Remark 1, we can assume that \(f\) is convex and super-linear. Furthermore, define the Fenchel dual of \(f\) as in Eq. (14). Also, define the value functions and the characteristic sets as in Eq. (15). Then, there holds the following:_ 1. _The function_ \(V\) _is a Lipschitz function with the property:_ \[\forall x,y\in\mathbb{R},t>0\text{, we have}\] \[|V(x,t)-V(y,t)|\leq Lip(v_{0})|x-y|.\] (19) 2. _The function_ \(V\) _satisfies the dynamic programming principle (dpp), i.e._ \[V(x,s,t)=V(x,t).\] (20) 3. _The function_ \(V\) _is a viscosity solution of the Hamilton-Jacobi system_ \[\left\{\begin{aligned} V_{t}+f(V_{x})&=0,\quad x\in\mathbb{R},t>0\\ V(x,0)&=v_{0}(x),\quad x\in\mathbb{R}.\end{aligned}\right.\] (21) _ 4.
_The function_ \(u(x,t):=\frac{\partial}{\partial x}V(x,t)\) _is a weak solution to the PDE (_1_) with_ \(\|u\|_{\infty}\leq\|u_{0}\|_{\infty}\)_._ 5. \(L^{1}\)_-Contractivity: Let_ \(u_{10},u_{20}\in L^{\infty}(\mathbb{R})\) _and set_ \(L\) _as in (_6_). Let_ \(u_{1}\) _and_ \(u_{2}\) _be two weak solutions to the PDE (_1_) with initial data_ \(u_{10}\) _and_ \(u_{20}\)_; then for a.e_ \(t>0\) _and for_ \(a<b\)_, we have_ \[\int_{a}^{b}|u_{1}(x,t)-u_{2}(x,t)|dx\leq\int_{a-Lt}^{b+Lt}|u_{10}(x)-u_{20}(x)|dx.\] (22) **Remark 2**.: _This Theorem 1 is different from the one mentioned in [1]. In [1], the function \(f\) is assumed to be uniformly convex and smooth. However, in the above Theorem 1, we just assume that \(f\) is convex and has super-linear growth._ Note that if \(u_{0}\) is in \(L^{\infty}(\mathbb{R})\), then the functions \(u\) and \(f(u)\) are in \(L^{1}_{loc}(\mathbb{R})\). However, in general, if \(u_{0}\in L^{1}(\mathbb{R})\), a priori a weak solution \(u\) of the PDE (1) need not be well defined, as \(u\) and \(f(u)\) need not be in \(L^{1}_{loc}(\mathbb{R})\). Also, the associated value function \(V\) as defined in (15) need not be Lipschitz. So, we have the following definition: **Definition 3**.: _The measurable function \(u\) is said to be a weak solution to the PDE (1) with its initial value \(u_{0}\) in \(L^{1}(\mathbb{R})\) if the following holds:_ * _The functions_ \(u\) _and_ \(f(u)\) _are in the space_ \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\)_,_ * _The function_ \(u\) _satisfies Eq. (_2_) in_ \(\mathbb{R}\times(0,\infty)\)_, i.e., for all_ \(\varphi\in C^{\infty}_{c}(\mathbb{R}\times(0,\infty)),\) _there holds_ \[\int_{0}^{\infty}\int_{-\infty}^{+\infty}\left(u\varphi_{t}+f(u)\varphi_{x}\right)dxdt=0\] * _The function_ \(u\) _satisfies the equation,_ \[\forall[a,b]\subset\mathbb{R},\quad\lim_{t\to 0^{+}}\int_{a}^{b}u(x,t)dx=\int_{a}^{b}u_{0}(x)dx.\] **Theorem 2**.: _Assume now that the function \(f\) is convex and satisfies_ \[\lim_{|p|\to\infty}\frac{f(p)}{|p|}=\infty.\] _Let \(u_{0}\in L^{\infty}(\mathbb{R})\), \(V\) be its corresponding value function as defined in (15) and set \(u:=\frac{\partial V}{\partial x}\). Then, from Theorem 1, the function \(u\) is a weak solution obtained from the Hamilton-Jacobi method. There holds the following:_ 1. _(Comparison Principle.) For_ \(u_{10},u_{20}\in L^{\infty}(\mathbb{R})\)_, let_ \(u_{1}\)_,_ \(u_{2}\) _be the respective solutions obtained from the Hamilton-Jacobi method._ _Then, for a.e_ \(x\in\mathbb{R}\)_, a.e_ \(t\in(0,\infty)\)_, there holds the implication,_ \[u_{10}(x)\leq u_{20}(x)\implies u_{1}(x,t)\leq u_{2}(x,t)\] (23) 2. _For_ \(u_{10},u_{20}\in L^{1}(\mathbb{R})\)_, let_ \(u_{10n}\)_,_ \(u_{20n}\) _be the respective sequences of functions in_ \(L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R})\) _such that_ \(u_{i0n}(x)\) _converges to_ \(u_{i0}(x)\) _in_ \(L^{1}(\mathbb{R})\)_, for_ \(i=1,2\)_. Let_ \(u_{1n}\)_,_ \(u_{2n}\) _be the corresponding solutions obtained from the Hamilton-Jacobi method for the initial data_ \(u_{10n}\)_,_ \(u_{20n}\) _respectively. Then, for_ \(i=1,2\)_, for_ \(T>0\)_, we have_ * \(\{u_{1n}\},\{u_{2n}\}\) _are Cauchy sequences in_ \(L^{1}(\mathbb{R}\times(0,T))\)_._ * _For_ \(u_{i}\) _to be the limit of_ \(\{u_{in}\}\) _in_ \(L^{1}(\mathbb{R}\times(0,T))\) _and for_ \(0<\tau<T\)_, we have,_ \[\begin{split}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{1}(x,t)&-u_{2}(x,t)|dxdt\\ &\leq(T-\tau)\int_{-\infty}^{\infty}|u_{10}(x)-u_{20}(x)|\,dx.
\end{split}\] (24) * _If for_ \(i\in\{1,2\}\)_, the functions_ \(\{v_{i0n}\}_{n\in\mathbb{N}}\) _converge to_ \(u_{i0}\) _in_ \(L^{1}(\mathbb{R})\) _with_ \(v_{i0n}\in L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R})\)_, then for a.e_ \((x,t)\) _in_ \(\mathbb{R}\times(0,\infty)\)_, we see that_ \[u_{i}(x,t)=v_{i}(x,t),\] (25) _where,_ \(v_{i}:=\lim_{n\to\infty}v_{in}\) _in_ \(L^{1}(\mathbb{R}\times(0,T))\)_,_ \(\forall T>0\)_._ 3. _Now, let_ \(u_{0}\in L^{1}(\mathbb{R})\)_. As in the previous point (_2_), upon approximating_ \(u_{0}\) _by functions_ \(\{u_{0n}\in L^{\infty}\}\)_, one has the existence of solutions_ \(\{u_{n}\}\) _for the scalar conservation law with the initial condition taken as_ \(u_{0n}\)_. Take_ \(u\) _to be the limit of_ \(\{u_{n}\}\) _in_ \(L^{1}(\mathbb{R}\times(0,T))\)_,_ \(\forall T>0\)_, as in the previous point (_2_). Then, for any compact set_ \(K\subset\mathbb{R}\times(0,\infty)\)_, we have_ * _The function_ \(u\) _is in_ \(L^{\infty}(K)\)_._ * _The function_ \(u\) _satisfies_ \[\int_{0}^{\infty}\int_{-\infty}^{\infty}\Big[u\varphi_{t}+f(u)\varphi_{x}\Big](x,t)dxdt=0,\quad\forall\varphi\in C_{c}^{\infty}(\mathbb{R}\times(0,\infty)).\] * _Furthermore, the function_ \(u\) _also satisfies,_ \[\lim_{t\to 0+}\int_{a}^{b}u(x,t)dx=\int_{a}^{b}u_{0}(x)dx,\quad\forall[a,b]\subset\mathbb{R}.\] **Theorem 3**.: _Let \(f:\mathbb{R}\mapsto\mathbb{R}\) be a convex function such that_ \[\lim_{p\to\pm\infty}\frac{f(p)}{p}=\mu_{\pm},\] _with \(\mu_{-}\leq 0\leq\mu_{+}\), and \(u_{0}\in L^{1}(\mathbb{R})\). Then, there exists a weak solution \(u\in L^{1}_{loc}(\mathbb{R}\times(0,\infty))\) to the PDE (1) as mentioned in Definition 3 with the initial data \(u_{0}\). Moreover, suppose that \(u_{10}\) and \(u_{20}\) are in \(L^{1}(\mathbb{R})\); if the corresponding weak solutions are \(u_{1}\) and \(u_{2}\) in the space \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\), then for a.e \(t>0\), they satisfy_ \[\int_{-\infty}^{\infty}|u_{1}(x,t)-u_{2}(x,t)|dx\leq\int_{-\infty}^{\infty}|u_{10}(x)-u_{20}(x)|dx. \tag{26}\] _Furthermore, for any compact set \(K\subset\mathbb{R}\times(0,\infty)\), we have_ * _The function_ \(u\) _is in_ \(L^{\infty}(K)\)_._ * _The function_ \(u\) _satisfies_ \[\int_{0}^{\infty}\int_{-\infty}^{\infty}\Big[u\varphi_{t}+f(u)\varphi_{x}\Big](x,t)dxdt=0,\quad\forall\varphi\in C^{\infty}_{c}(\mathbb{R}\times(0,\infty)).\] * _Finally, the function_ \(u\) _also satisfies,_ \[\lim_{t\to 0+}\int_{a}^{b}u(x,t)dx=\int_{a}^{b}u_{0}(x)dx,\quad\forall[a,b]\subset\mathbb{R}.\] **Prerequisites for proving the Main Theorems.** The idea of the proof(s) relies on the approach by [10] and a stability result, along with some related lemmas which are stated below. We start proving the main theorems by first assuming the following: 1. \(f:\mathbb{R}\mapsto\mathbb{R}\) is convex. 2. \(f\) has super-linear growth. 3. For \(\epsilon>0\), let \(\eta_{\epsilon}\) be the mollifying sequence and define \[f_{\epsilon}(x):=f*\eta_{\epsilon}(x)=\int_{\mathbb{R}}f(y)\eta_{\epsilon}(x-y)dy.\] (27) Then, we see the following: * For every \(\epsilon>0\), the functions \(\{f_{\epsilon}\}\) are in \(C^{\infty}(\mathbb{R})\). * For \(\epsilon>0\), \(f_{\epsilon}\) is convex. * The functions \(\{f_{\epsilon}\}\) converge uniformly to \(f\) on compact subsets of \(\mathbb{R}\) as \(\epsilon\to 0\). * For every \(0<\epsilon<1\), the functions \(f_{\epsilon}\) have super-linear growth, and the super-linear growth is uniform.
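Before stating the lemmas, the following standard examples (ours, purely for illustration) may help fix ideas about the Fenchel dual and the role of super-linearity: \[f(p)=\tfrac{p^{2}}{2}\ \Rightarrow\ f^{*}(q)=\tfrac{q^{2}}{2};\qquad f(p)=\tfrac{|p|^{r}}{r}\ (r>1)\ \Rightarrow\ f^{*}(q)=\tfrac{|q|^{r^{\prime}}}{r^{\prime}},\ \ \tfrac{1}{r}+\tfrac{1}{r^{\prime}}=1;\] whereas dropping super-linearity can make the dual infinite, e.g. \[f(p)=|p|\ \Rightarrow\ f^{*}(q)=\begin{cases}0,&|q|\leq 1,\\ +\infty,&|q|>1.\end{cases}\] The super-linearity assumption (18) is exactly what rules out this last situation and keeps \(f^{*}\) finite on all of \(\mathbb{R}\).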
**Lemma 1**.: _Let \(f\) be a real-valued function which is convex and has super-linear growth. Also, define the mollified function \(f_{\epsilon}\) as in Eq. (27). Then, there holds the following,_ 1. \(f^{*}:\mathbb{R}\mapsto\mathbb{R}\) _is convex and has super-linear growth._ 2. _As_ \(\epsilon\to 0\)_, we see that the Fenchel dual of the mollified function,_ \(f^{*}_{\epsilon}\)_, goes to_ \(f^{*}\) _uniformly on compact sets._ 3. _As_ \(\epsilon\to 0\)_, we see that_ \[\lim_{|q|\to\infty}\frac{f^{*}(q)}{|q|}=\infty,\quad\lim_{|q|\to\infty}\inf_{0\leq\epsilon\leq 1}\frac{f^{*}_{\epsilon}(q)}{|q|}=\infty.\] 4. _Let_ \(\alpha\in C_{c}^{\infty}(B(0,1))\) _be a non-negative function such that_ \[\int_{\mathbb{R}}\alpha(s)\ ds=1.\] _For_ \(\epsilon>0\)_, let_ \(\left\{\alpha_{\epsilon}(x):=\frac{1}{\epsilon}\alpha\left(\frac{x}{\epsilon}\right)\right\}\) _be the mollifying sequence of the function_ \(\alpha\)_. For every_ \(F:\mathbb{R}\mapsto\mathbb{R}\)_, a convex function with super-linear growth, the function_ \[F_{\epsilon}(x):=\left(\alpha_{\epsilon}*F\right)(x)+\epsilon x^{2},\] (28) _satisfies the properties (_1_)-(_3_) mentioned in the assumptions. Moreover, the functions_ \(F_{\epsilon}\) _are uniformly convex with respect to_ \(\epsilon\)_._ For the proof of Lemma 1, refer to the (\(2^{\rm nd}\)) part of the Appendix mentioned at the end of this note. **Lemma 2**.: _Assume the hypothesis of Theorem 1. Then, there holds the following (see [10], [12])._ 1. _The function_ \(x\mapsto V(x,t)\) _is a Lipschitz function for all_ \(t>0\) _and we have_ \[|V(x,t)-V(y,t)|\leq lip(v_{0})|x-y|.\] 2. _There exists_ \(M>0\) _such that_ \[V(x,t)=\inf\left\{v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right);\left|\frac{x-y}{t}\right|\leq M\right\}.\] 3. _The set_ \(Ch(x,s,t)\) _is bounded and non-empty._ 4. _We have the equality_ \(V(x,s,t)=V(x,t)\)_, for all_ \(0\leq s<t\)_. We also see that the function_ \((x,t)\mapsto V(x,t)\) _is a Lipschitz function with_ \(V(x,0)=v_{0}(x)\)_._ 5. _Set_ \(u:=\frac{\partial}{\partial x}V\)_. Then,_ \(u\) _is a weak solution of the PDE (_1_) with_ \(\|u\|_{\infty}\leq\|u_{0}\|_{\infty}\)_._ Proof.: Plug \(y=x\) in (15) to get \[V(x,t)\leq v_{0}(x)+tf^{*}(0). \tag{29}\] From the property of super-linear growth of \(f^{*}\), we can choose \(q_{0}>0\) such that for all \(q\geq q_{0}\geq 1\), we have \(f^{*}(q)\geq\big[lip(v_{0})+2|f^{*}(0)|\big]|q|\) and so, for all \(\big|\frac{x-y}{t}\big|\geq q_{0}\), we see that \[v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right)\geq v_{0}(x)+(v_{0}(y)-v_{0}(x))+\big[lip(v_{0})+2|f^{*}(0)|\big]|x-y|\geq v_{0}(x)-lip(v_{0})|x-y|+\big[lip(v_{0})+2|f^{*}(0)|\big]|x-y|=v_{0}(x)+2t|f^{*}(0)|\left|\frac{x-y}{t}\right|\geq v_{0}(x)+2t|f^{*}(0)|. \tag{30}\] The inequalities (29), (30), along with \(\big|\frac{x-y}{t}\big|\geq q_{0}\geq 1\), show that for \(M=q_{0}\), we have \[V(x,t)=\inf\left\{v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right);\left|\frac{x-y}{t}\right|\leq M\right\}. \tag{31}\] The function \(f^{*}\) is convex and so is continuous, which gives \(Ch(x,t)\) to be nonempty and that the infimum becomes a minimum in (31), i.e. \[V(x,t)=\min\left\{v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right);\left|\frac{x-y}{t}\right|\leq M\right\}. \tag{32}\] So, for \(x,z\in\mathbb{R}\), \(y\in Ch(z,t)\), and for all \(\eta\in\mathbb{R}\), we have \[V(x,t)-V(z,t)\leq v_{0}(\eta)+tf^{*}\left(\frac{x-\eta}{t}\right)-v_{0}(y)-tf^{*}\left(\frac{z-y}{t}\right). \tag{33}\] Set \(\eta=x-z+y\) to get \[V(x,t)-V(z,t)\leq lip(v_{0})|x-z|.
\tag{34}\] Interchange \(x\leftrightarrow z\) to obtain \[|V(x,t)-V(z,t)|\leq lip(v_{0})|x-z|, \tag{35}\] which proves the first two parts of the lemma, and thus we have the function \(V(x,s,t)\) to be Lipschitz in the \(x\)-variable with the estimates, \[|V(x,s,t)-V(z,s,t)|\leq lip\big(V(.,s)\big)|x-z|\leq lip(v_{0})|x-z|, \tag{36}\] \[V(x,s,t)=\inf\left\{V(y,s)+(t-s)f^{*}\left(\frac{x-y}{t-s}\right);\left|\frac{x-y}{t-s}\right|\leq M\right\}. \tag{37}\] The functions \(f^{*}\) and \(y\mapsto V(y,s)\) are continuous, which implies that the set \(Ch(x,s,t)\) is non-empty and bounded; this concludes the third point of the lemma. Define a new function \(\gamma(\theta):=x+\left(\frac{x-y}{t-s}\right)(\theta-t)\) and set \(\eta^{\prime}:=\gamma(0)\). Thus, \(\eta^{\prime}\) satisfies \[\frac{x-\eta^{\prime}}{t}=\frac{x-y}{t-s}=\frac{y-\eta^{\prime}}{s}, \tag{38}\] and so, we have \[\begin{split} V(x,s,t)&\leq V(y,s)+(t-s)f^{*}\left(\frac{x-y}{t-s}\right)\\ &\leq v_{0}(\eta^{\prime})+sf^{*}\left(\frac{y-\eta^{\prime}}{s}\right)+(t-s)f^{*}\left(\frac{x-y}{t-s}\right)\\ &=v_{0}(\eta^{\prime})+sf^{*}\left(\frac{x-\eta^{\prime}}{t}\right)+(t-s)f^{*}\left(\frac{x-\eta^{\prime}}{t}\right)\\ &=v_{0}(\eta^{\prime})+tf^{*}\left(\frac{x-\eta^{\prime}}{t}\right).\end{split} \tag{39}\] Taking the infimum over \(\eta^{\prime}\) gives \[V(x,s,t)\leq V(x,t). \tag{40}\] To prove the other side of the inequality, as the sets \(Ch(x,s,t)\) and \(Ch(x,t)\) are non-empty, let \(\alpha\in Ch(x,s,t)\) and \(\beta\in Ch(\alpha,s)\). The convexity of \(f^{*}\), along with the equality \[\frac{x-\beta}{t}=\frac{x-\alpha}{t-s}\left(1-\frac{s}{t}\right)+\frac{\alpha-\beta}{s}\left(\frac{s}{t}\right), \tag{41}\] gives \[tf^{*}\left(\frac{x-\beta}{t}\right)\leq(t-s)f^{*}\left(\frac{x-\alpha}{t-s}\right)+sf^{*}\left(\frac{\alpha-\beta}{s}\right). \tag{42}\] Hence, we have \[\begin{split} V(x,s,t)&=V(\alpha,s)+(t-s)f^{*}\left(\frac{x-\alpha}{t-s}\right)\\ &=v_{0}(\beta)+sf^{*}\left(\frac{\alpha-\beta}{s}\right)+(t-s)f^{*}\left(\frac{x-\alpha}{t-s}\right)\\ &\geq v_{0}(\beta)+tf^{*}\left(\frac{x-\beta}{t}\right)\\ &\geq V(x,t),\end{split} \tag{43}\] which concludes the fourth point of the lemma. To prove the latter part of the fourth point of the lemma, first observe that for \(0\leq t_{1}<t_{2}\), for all \(y\in\mathbb{R}\), there holds \[V(x,t_{2})\leq V(x,t_{1},t_{2})\leq V(y,t_{1})+(t_{2}-t_{1})f^{*}\left(\frac{x-y}{t_{2}-t_{1}}\right). \tag{44}\] Setting \(y=x\), we get \[V(x,t_{2})-V(x,t_{1})\leq f^{*}(0)(t_{2}-t_{1}). \tag{45}\] For \(\widetilde{\eta}\in Ch(x,t_{2})\), we see that \[\left|\frac{x-\widetilde{\eta}}{t_{2}}\right|\leq M \tag{46}\] and so, for all \(y\in\mathbb{R}\), we get \[\begin{split} V(x,t_{2})-V(x,t_{1})\geq v_{0}(\widetilde{\eta})&+t_{2}f^{*}\left(\frac{x-\widetilde{\eta}}{t_{2}}\right)\\ &-v_{0}(y)-t_{1}f^{*}\left(\frac{x-y}{t_{1}}\right).\end{split} \tag{47}\] Choose \(y\) such that \[\frac{x-\widetilde{\eta}}{t_{2}}=\frac{x-y}{t_{1}}\iff y-\widetilde{\eta}=\frac{t_{2}-t_{1}}{t_{2}}\left(x-\widetilde{\eta}\right), \tag{48}\] so that, along with Eq. (46), we get \[|y-\widetilde{\eta}|\leq M|t_{2}-t_{1}|. \tag{49}\] Hence, there holds \[\begin{split} V(x,t_{2})-V(x,t_{1})&\geq v_{0}(\widetilde{\eta})-v_{0}(y)+(t_{2}-t_{1})f^{*}\left(\frac{x-\widetilde{\eta}}{t_{2}}\right)\\ &\geq-lip(v_{0})|\widetilde{\eta}-y|-\lambda(t_{2}-t_{1}),\end{split} \tag{50}\] where, \(\lambda:=\sup\{|f^{*}(z)|;|z|\leq M\}\). Setting \[C_{1}:=|f^{*}(0)|+lip(v_{0})+\lambda, \tag{51}\] and along with Eq.
(45) and Eq. (50), we see that \[|V(x,t_{2})-V(x,t_{1})|\leq C_{1}|t_{2}-t_{1}|. \tag{52}\] As a consequence, we get \[\begin{split}|V(x_{1},t_{1})-V(x_{2},t_{2})|&\leq|V(x_{1},t_{2})-V(x_{1},t_{1})|+|V(x_{1},t_{2})-V(x_{2},t_{2})|\\ &\leq lip(v_{0})|x_{1}-x_{2}|+C_{1}|t_{2}-t_{1}|,\end{split} \tag{53}\] which concludes that the function \(V\) is Lipschitz continuous. To prove the last point of the lemma, first observe that \(V\) is a viscosity solution to the Hamilton-Jacobi equation, \[\left\{\begin{aligned} V_{t}+f(V_{x})&=0,\quad x\in\mathbb{R},t>0\\ V(x,0)&=v_{0}(x),\quad x\in\mathbb{R}.\end{aligned}\right. \tag{54}\] The function \(V\) is differentiable a.e. and, from the "Touching by a \(C^{1}\) function" lemma in [10, Chapter 10], for a.e \((x,t)\in\mathbb{R}\times(0,\infty)\), the function \(V\) satisfies the PDE (54) point-wise. Now, choose \(\varphi\in C_{c}^{\infty}(\mathbb{R}\times[0,\infty))\) and multiply (54) by \(\varphi_{x}\) to get \[\int_{0}^{\infty}\int_{-\infty}^{\infty}\big[V_{t}\varphi_{x}+f(V_{x})\varphi_{x}\big]dxdt=0. \tag{55}\] As the function \(V\) is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem, and so we see that \[\int_{0}^{\infty}\int_{-\infty}^{\infty}V_{t}\varphi_{x}dxdt=-\int_{-\infty}^{\infty}V(x,0)\varphi_{x}(x,0)dx-\int_{0}^{\infty}\int_{-\infty}^{\infty}V(x,t)\varphi_{xt}dxdt=\int_{-\infty}^{\infty}(v_{0})_{x}\varphi(x,0)dx+\int_{0}^{\infty}\int_{-\infty}^{\infty}V_{x}(x,t)\varphi_{t}dxdt \tag{56}\] Finally, Eq. (55), \(u=\frac{\partial}{\partial x}V\) and \(u_{0}(x)=\frac{\partial}{\partial x}v_{0}\) give \[\int_{0}^{\infty}\int_{-\infty}^{\infty}\big[u\varphi_{t}+f(u)\varphi_{x}\big]dxdt+\int_{-\infty}^{\infty}u_{0}(x)\varphi(x,0)dx=0, \tag{57}\] and that \[\|u\|_{\infty}=\left\|\frac{\partial V}{\partial x}\right\|_{\infty}\leq lip(v_{0})\leq\|u_{0}\|_{\infty}. \tag{58}\] Next, we state a lemma based on [1]. **Lemma 3** (Stability Result).: _Let \(\{\epsilon_{n}\}_{n\in\mathbb{N}}\) be a sequence going to \(0\) and let the functions \(f\) and \(f_{n}:=f_{\epsilon_{n}}\) satisfy the prerequisites (1) - (3). Furthermore, for \(u_{0}\in L^{\infty}(\mathbb{R})\), set \(v_{0}\) to be the primitive (Lipschitz) function, i.e.,_ \[v_{0}(x)=\int_{0}^{x}u_{0}(t)dt.\] _Also, let \(V\) and \(V_{n}\) be the corresponding value functions defined in (15) for the fluxes \(f\) and \(f_{n}\) respectively. Then, we have the following results:_ 1. _We have that_ \(V_{n}\) _converges to_ \(V\) _uniformly on compact subsets of_ \(\mathbb{R}\times[0,\infty)\) _as_ \(n\to\infty\)_._ 2. _Let_ \(0\leq s<t\)_,_ \(x\in\mathbb{R}\) _and set_ \(Ch_{n}(x,s,t)\) _to be the characteristic set related to_ \(V_{n}\)_, and_ \(Ch(x,s,t)\) _the characteristic set related to_ \(V\)_, as defined in (_15_). Set_ \[\lim_{n\to\infty}x_{n}=x,\quad\lim_{n\to\infty}y_{n}=y,\quad\lim_{n\to\infty}t_{n}=t,\quad\lim_{n\to\infty}s_{n}=s.\] _Then, for_ \(y_{n}\in Ch_{n}(x_{n},s_{n},t_{n})\)_, we see that the point_ \(y\) _is in_ \(Ch(x,s,t)\)_._ 3. _Let_ \(u:=\frac{\partial}{\partial x}V\) _and set_ \(u_{n}:=\frac{\partial}{\partial x}V_{n}\)_. Then, for any_ \(\varphi\in C_{c}^{\infty}(\mathbb{R}\times(0,\infty))\)_, we see that_ \(u\) _satisfies Eq.
(_2_), i.e._ \[\int_{0}^{\infty}\int_{-\infty}^{+\infty}\left(u\varphi_{t}+f(u)\varphi_{x}\right)dxdt+\int_{-\infty}^{\infty}u_{0}(x)\varphi(x,0)\ dx=0,\] _and_ \[\lim_{n\to\infty}\int_{0}^{\infty}\int_{-\infty}^{\infty}u_{n}\varphi\ dxdt=\int_{0}^{\infty}\int_{-\infty}^{\infty}u\varphi\ dxdt.\] Proof.: From the assumptions (1) - (3), for all \(n\in\mathbb{N}\), we see that \[\lim_{|q|\to\infty}\frac{f_{n}(q)}{|q|}\geq\lim_{|q|\to\infty}\inf_{j}\frac{f_{j}(q)}{|q|}=\infty. \tag{59}\] The proof of Lemma 2 shows that, for the constant \[M:=lip(v_{0})+2\sup_{n}|f_{n}^{*}(0)|,\] we have \[V_{n}(x,t)=\inf\left\{v_{0}(y)+tf_{n}^{*}\left(\frac{x-y}{t}\right);\left|\frac{x-y}{t}\right|\leq M\right\}. \tag{60}\] Lemma 1 shows that the sequence \(\{f_{n}^{*}\}\) converges to \(f^{*}\) uniformly on compact subsets of \(\mathbb{R}\), and thus \[\lambda:=\sup_{n\in\mathbb{N}}\sup\{|f_{n}^{*}(z)|;|z|\leq M\}\text{ is finite}. \tag{61}\] From Eq. (52), for \(C_{1}:=\sup_{n}|f_{n}^{*}(0)|+lip(v_{0})+\lambda\), we get \[\begin{split}|V_{n}(x,t_{1})-V_{n}(x,t_{2})|&\leq C_{1}|t_{1}-t_{2}|,\\ |V_{n}(x_{1},t)-V_{n}(x_{2},t)|&\leq lip(v_{0})|x_{1}-x_{2}|.\end{split} \tag{62}\] The Arzelà-Ascoli theorem gives the existence of a subsequence \(\{V_{n_{k}}\}\) and a continuous function \(V\) such that \(V_{n_{k}}\) converges to \(V\) uniformly on compact subsets. Now, it suffices to show that the function \(V\) is in fact the value function for the flux \(f\). For \((x,t)\in\mathbb{R}\times(0,\infty)\) and \(y_{n}\in Ch_{n}(x,t)\), we have \[\left|\frac{x-y_{n}}{t}\right|\leq M,\] and so there is a subsequence \(\{y_{n_{k}}\}\) converging to some \(y\in\mathbb{R}\). Thus, for \((z,t)\in\mathbb{R}\times(0,\infty)\), there holds \[\begin{split} V(x,t)=\lim_{n_{k}\to\infty}V_{n_{k}}(x,t)&=\lim_{n_{k}\to\infty}\left[v_{0}(y_{n_{k}})+tf_{n_{k}}^{*}\left(\frac{x-y_{n_{k}}}{t}\right)\right]\\ &\leq\lim_{n_{k}\to\infty}\left[v_{0}(z)+tf_{n_{k}}^{*}\left(\frac{x-z}{t}\right)\right],\end{split} \tag{63}\] which, since \(v_{0}\) is Lipschitz continuous and the functions \(f_{n}^{*}\) converge uniformly on compact sets, implies \[V(x,t)=v_{0}(y)+tf^{*}\left(\frac{x-y}{t}\right)\leq v_{0}(z)+tf^{*}\left(\frac{x-z}{t}\right). \tag{64}\] So, we have \[V(x,t)=\inf\left\{v_{0}(z)+tf^{*}\left(\frac{x-z}{t}\right);\left|\frac{x-z}{t}\right|\leq M\right\}, \tag{65}\] which is precisely the value function corresponding to \(v_{0}\) and \(f^{*}\); this concludes the first part of the lemma. For the second part of the lemma, for \(z\in\mathbb{R}\) and \(0\leq s<t\), there holds \[\begin{split} V_{n}(x_{n},s_{n},t_{n})&=V_{n}(y_{n},s_{n})+(t_{n}-s_{n})f_{n}^{*}\left(\frac{x_{n}-y_{n}}{t_{n}-s_{n}}\right)\\ &\leq V_{n}(z,s_{n})+(t_{n}-s_{n})f_{n}^{*}\left(\frac{x_{n}-z}{t_{n}-s_{n}}\right).\end{split} \tag{66}\] The sequence \(\{y_{n}\}\) is bounded, as \(\left|\frac{x_{n}-y_{n}}{t_{n}-s_{n}}\right|\leq M\), and therefore, for \(y\) a limit point, there is a subsequence \(\{y_{n_{k}}\}\) converging to \(y\). The first part of this lemma, together with Lemma 2, gives \[\begin{split} V(x,s,t)&=\lim_{n\to\infty}V_{n_{k}}(x_{n_{k}},s_{n_{k}},t_{n_{k}})\\ &=V(y,s)+(t-s)f^{*}\left(\frac{x-y}{t-s}\right)\\ &\leq V(z,s)+(t-s)f^{*}\left(\frac{x-z}{t-s}\right),\end{split} \tag{67}\] which shows that \(y\in Ch(x,s,t)\) and proves the second part of the lemma. For the last part of the lemma, observe that \(u=\frac{\partial V}{\partial x}\) satisfies Eq.
(2), by Item 5 of Lemma 2. Now, fix a function \(\varphi\in C_{c}^{\infty}(\mathbb{R}\times(0,\infty))\). Since \(V\) and the \(V_{n}\) are Lipschitz continuous functions, the first part of this Lemma 3, along with integration by parts, gives the following integral equalities: \[\begin{split}\int_{0}^{\infty}\int_{-\infty}^{\infty}u(x,t)\varphi(x,t)dxdt&=\int_{0}^{\infty}\int_{-\infty}^{\infty}\left(\frac{\partial}{\partial x}V(x,t)\right)\varphi(x,t)dxdt\\ &=-\int_{0}^{\infty}\int_{-\infty}^{\infty}V(x,t)\left(\frac{\partial}{\partial x}\varphi(x,t)\right)dxdt\\ &=-\lim_{n\to\infty}\int_{0}^{\infty}\int_{-\infty}^{\infty}V_{n}(x,t)\left(\frac{\partial}{\partial x}\varphi(x,t)\right)dxdt\\ &=\lim_{n\to\infty}\int_{0}^{\infty}\int_{-\infty}^{\infty}\left(\frac{\partial}{\partial x}V_{n}(x,t)\right)\varphi(x,t)dxdt\\ &=\lim_{n\to\infty}\int_{0}^{\infty}\int_{-\infty}^{\infty}u_{n}(x,t)\varphi(x,t)dxdt,\end{split} \tag{68}\] which concludes the third point of the lemma. Next, we state the Lax-Oleinik explicit formula and the one-sided inequality. The proof can be found in [10]. **Lemma 4**.: _Assume that the function \(f:\mathbb{R}\mapsto\mathbb{R}\) is uniformly convex, with \(f^{\prime\prime}(\theta)\geq C>0\) for all \(\theta\) in \(\mathbb{R}\). For \(u_{0}\in L^{\infty}(\mathbb{R})\), let \(v_{0}\) be the primitive of \(u_{0}\) and \(V\) be the associated value function as in (12) and (15). The function \(u:=\frac{\partial V}{\partial x}\) is a weak solution to the PDE (1) and, for \(t>0\), the function \(y(x,t)=y_{+}(x,t)\) defined in (16) satisfies:_ * _The mapping_ \(x\mapsto y(x,t)\) _is a non-decreasing function._ * _For a.e._ \(x\in\mathbb{R}\)_, there holds the equality_ \[u(x,t)=(f^{*})^{\prime}\left(\frac{x-y(x,t)}{t}\right)\] (69) _Furthermore, the function \(u\) satisfies the Oleinik one-sided inequality mentioned in (3), i.e.,_ \[u(x+z,t)-u(x,t)\leq C(1+t^{-1})z\] **Remark 3**.: _Here, since \(f\) is uniformly convex, we have \((f^{*})^{\prime}=(f^{\prime})^{-1}\)._ **Proof of the Main Theorems.** First, let us recall some known results whose proofs can be found in [10]. Assume that the function \(f:\mathbb{R}\mapsto\mathbb{R}\) is uniformly convex with \(f^{\prime\prime}(\theta)\geq C>0\), for all \(\theta\in\mathbb{R}\). Choose a non-negative function \(\rho\in C_{c}^{\infty}(\mathbb{R}^{2})\) such that * The support of the function \(\rho\) satisfies \[supp(\rho)\subset\{(x,t)\in\mathbb{R}^{2};t\leq 0\}.\] * The integral of \(\rho\) is \(1\), i.e., \[\int_{\mathbb{R}^{2}}\rho(x,t)dxdt=1.\] (70) For \(\epsilon>0\), let \[\left\{\rho_{\epsilon}(x,t):=\frac{1}{\epsilon^{2}}\rho\left(\frac{x}{\epsilon},\frac{t}{\epsilon}\right)\right\}\] be the mollifying sequence for the function \(\rho\), and for \(h\in L^{\infty}(\mathbb{R}^{2})\), set \[h_{\epsilon}(x,t):=\left(\rho_{\epsilon}*h\right)(x,t). \tag{71}\] Then, the function \(h_{\epsilon}\in C^{\infty}(\mathbb{R}^{2})\) and there holds the inequality \[\|h_{\epsilon}\|_{\infty}\leq\|h\|_{\infty}.
\tag{72}\] Suppose there exists \(C_{1}>0\) such that, for all \(t>0\), a.e. \(x\in\mathbb{R}\) and \(z>0\), the function \(h\) satisfies \[\frac{h(x+z,t)-h(x,t)}{z}\leq\frac{C_{1}}{t}. \tag{73}\] Then, we have \[\begin{split}\frac{\partial}{\partial x}h_{\epsilon}(x,t)&=\lim_{z\to 0^{+}}\frac{h_{\epsilon}(x+z,t)-h_{\epsilon}(x,t)}{z}\\ &=\lim_{z\to 0^{+}}\int_{\tau=-\infty}^{0}\int_{y=-\infty}^{\infty}\left(\frac{h(x-y+z,t-\tau)-h(x-y,t-\tau)}{z}\right)\rho_{\epsilon}(y,\tau)dyd\tau\\ &\leq\int_{\tau=-\infty}^{0}\int_{y=-\infty}^{\infty}\left(\frac{C_{1}}{t-\tau}\right)\rho_{\epsilon}(y,\tau)dyd\tau\\ &\leq\frac{C_{1}}{t}.\end{split} \tag{74}\] For \(h_{1},h_{2}\in L^{\infty}(\mathbb{R}^{2})\), define the quantities \[\begin{split} H(x,t)&:=\frac{f(h_{1}(x,t))-f(h_{2}(x,t))}{h_{1}(x,t)-h_{2}(x,t)}\\ &=\int_{0}^{1}f^{\prime}\Big{[}\lambda h_{1}(x,t)+(1-\lambda)h_{2}(x,t)\Big{]}d\lambda,\end{split} \tag{75}\] \[H_{\epsilon}(x,t):=\int_{0}^{1}f^{\prime}\Big{[}\lambda h_{1\epsilon}(x,t)+(1-\lambda)h_{2\epsilon}(x,t)\Big{]}d\lambda, \tag{76}\] \[\begin{split} M&:=\max_{\lambda\in[0,1]}\|\lambda h_{1}+(1-\lambda)h_{2}\|_{\infty},\\ L&:=\max_{\theta\in[-M,M]}|f^{\prime}(\theta)|,\\ L_{1}&:=\max_{\theta\in[-M,M]}|f^{\prime\prime}(\theta)|.\end{split} \tag{77}\] The notations (75) - (77) yield the following conclusions: 1. The value \(\max\left(\|H\|_{\infty},\|H_{\epsilon}\|_{\infty}\right)\) is less than or equal to \(L\). 2. The function \(H_{\epsilon}\) is in the space \(C^{1}(\mathbb{R}^{2})\). 3. The functions \(H_{\epsilon}\) converge to \(H\) in \(L^{1}_{loc}(\mathbb{R}^{2})\) as \(\epsilon\) goes to \(0\). 4. Suppose that \(h_{1},h_{2}\) satisfy (73); then, from (74), for \(t>0\), we have \[\frac{\partial}{\partial x}\Big{[}\lambda h_{1\epsilon}(x,t)+(1-\lambda)h_{2\epsilon}(x,t)\Big{]}\leq\frac{C_{1}}{t}.\] (78) 5. As \(f^{\prime\prime}\) is assumed to be positive, the relation (76) shows that, for \(t>0\), we have \[\begin{split}\frac{\partial H_{\epsilon}}{\partial x}(x,t)&=\int_{0}^{1}f^{\prime\prime}(\lambda h_{1,\epsilon}+(1-\lambda)h_{2,\epsilon})\frac{\partial}{\partial x}\left[\lambda h_{1\epsilon}+(1-\lambda)h_{2,\epsilon}\right]d\lambda\\ &\leq\frac{C_{1}}{t}\int_{0}^{1}f^{\prime\prime}(\lambda h_{1,\epsilon}+(1-\lambda)h_{2,\epsilon})d\lambda\\ &\leq\frac{C_{1}L_{1}}{t}.\end{split}\] (79) Assuming the properties mentioned in (70) - (79), we have the following lemma. **Lemma 5**.: _Let \(f\) be a uniformly convex function, let \(u_{10},u_{20}\in L^{\infty}(\mathbb{R})\), and let \(u_{1}\) and \(u_{2}\) be two weak solutions to the PDE (1) satisfying the Oleinik one-sided inequality (3). Then, for \(a<b\), \(0<\tau<T\) and \(\psi\in C_{c}^{\infty}((a,b)\times(\tau,T))\), there holds_ \[\left|\int_{0}^{\infty}\int_{-\infty}^{\infty}(u_{1}-u_{2})\psi\ dxdt\right|\leq\|\psi\|_{\infty}(T-\tau)\int_{a-LT}^{b+LT}|u_{10}(x)-u_{20}(x)|dx. \tag{80}\] Proof.: Setting \(u_{i}(x,t)\equiv 0\) for \(t<0\), we can assume that the functions \(u_{i}\in L^{\infty}(\mathbb{R}^{2})\), for \(i=1,2\). For \(i=1,2\), define \(h_{i}\) to be \(u_{i}\) and \(h_{i\epsilon}\) to be \(u_{i\epsilon}\). Furthermore, set \(H\), \(H_{\epsilon}\) to be the functions as in (75) and (76).
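Before proceeding, it may help to record a worked instance of (75)-(77) for a concrete flux; this illustration is ours and is not needed for the proof. For the Burgers flux \(f(p)=p^{2}/2\), which is uniformly convex with \(C=1\), both expressions in (75) reduce to the arithmetic mean,
\[
H(x,t)=\frac{\tfrac{1}{2}h_{1}^{2}-\tfrac{1}{2}h_{2}^{2}}{h_{1}-h_{2}}=\frac{h_{1}+h_{2}}{2}=\int_{0}^{1}\big{(}\lambda h_{1}+(1-\lambda)h_{2}\big{)}d\lambda,
\]
and the constants in (77) become \(L=M\) and \(L_{1}=1\), so that in this case (79) reads \(\partial_{x}H_{\epsilon}\leq C_{1}/t\).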
Now, for \((x,t)\in\mathbb{R}^{2}\), define the following: * \(w_{0}(x):=u_{10}(x)-u_{20}(x)\), * \(w(x,t):=u_{1}(x,t)-u_{2}(x,t)\), * A function \(\chi(\theta)\equiv\chi(\theta,x,t)\) which solves the ODE: \[\left\{\begin{aligned} \frac{d\chi}{d\theta}(\theta)&=H_{\epsilon}(\chi(\theta,x,t),\theta)\\ \chi(t,x,t)&=x\end{aligned}\right. \tag{81}\] * The function \(\varphi\) which is a solution to \[\left\{\begin{aligned} \left(\frac{\partial\varphi}{\partial t}+H_{\epsilon}\frac{\partial\varphi}{\partial x}\right)(x,t)&=\psi(x,t),\quad t<T,\ x\in\mathbb{R},\\ \varphi(x,T)&=0,\quad x\in\mathbb{R}.\end{aligned}\right. \tag{82}\] Integrating (82) along the characteristics (81) and using \(\varphi(\cdot,T)=0\) gives the representation \[\varphi(x,t)=-\int_{t}^{T}\psi(\chi(\theta,x,t),\theta)d\theta. \tag{83}\] Differentiating (83) with respect to \(x\), and using the Gronwall bound on \(\partial\chi/\partial x\) obtained from (81) and (79) (with \(C_{2}\) the exponent furnished by that estimate), we see that, for \(0<t<T\) and \(C_{3}:=\frac{T^{C_{2}+2}}{C_{2}(C_{2}+1)}\), \[\begin{split}\left|\frac{\partial\varphi}{\partial x}(x,t)\right|&=\left|\int_{t}^{T}\frac{\partial\psi}{\partial\xi}(\chi(\theta,x,t),\theta)\frac{\partial\chi}{\partial x}(\theta,x,t)d\theta\right|\\ &\leq\frac{C_{3}}{t^{C_{2}}}\left|\left|\frac{\partial\psi}{\partial\xi}\right|\right|_{\infty}.\end{split} \tag{87}\] As \(\operatorname{supp}\psi\) is contained in \(\{(x,t);t>\tau\}\), for \(0<t<\tau\), \(x\in\mathbb{R}\), \(t<\theta<\tau\), we have \[\begin{split}\frac{d}{d\theta}\varphi(\chi(\theta,x,t),\theta)&=\left(\frac{\partial\varphi}{\partial t}+H_{\epsilon}\frac{\partial\varphi}{\partial x}\right)(\chi(\theta,x,t),\theta)\\ &=\psi(\chi(\theta,x,t),\theta)\\ &=0.\end{split} \tag{88}\] This shows that \(\varphi(x,t)=\varphi(\chi(\tau,x,t),\tau)\). Thus, the mean value theorem implies, for \(0<t<\tau\), \[\int_{-\infty}^{\infty}\left|\frac{\partial\varphi}{\partial x}(x,t)\right|dx\leq\int_{-\infty}^{\infty}\left|\frac{\partial\varphi}{\partial x}(x,\tau)\right|dx. \tag{89}\] Now, since \(u_{1}\) and \(u_{2}\) are weak solutions, for \(0<\tau_{1}<\tau<T\), there holds \[\begin{split}\int_{-\infty}^{\infty}\int_{0}^{\infty}w\psi dxdt&=\int_{-\infty}^{\infty}\int_{0}^{\infty}w\left(\frac{\partial\varphi}{\partial t}+H_{\epsilon}\frac{\partial\varphi}{\partial x}\right)dxdt\\ &:=-I_{1}+I_{2}+I_{3},\end{split} \tag{90}\] where the terms \(I_{j}\) are given by \[\begin{split} I_{1}&:=\int_{-\infty}^{\infty}w_{0}(x)\varphi(x,0)dx,\\ I_{2}&:=\int_{0}^{\tau_{1}}\int_{-\infty}^{\infty}\left(H_{\epsilon}-H\right)\frac{\partial\varphi}{\partial x}wdxdt,\\ I_{3}&:=\int_{\tau_{1}}^{T}\int_{-\infty}^{\infty}\left(H_{\epsilon}-H\right)\frac{\partial\varphi}{\partial x}wdxdt.\end{split} \tag{91}\] Estimation of \(I_{1},I_{2},I_{3}\): From Eq. (81), for \(x\in\mathbb{R}\), \(0\leq t<\theta<T\), we have \[\chi(\theta,x,t)=\chi(t,x,t)+\int_{t}^{\theta}H_{\epsilon}(\chi(s,x,t),s)ds. \tag{92}\] For \(0<\theta<T\), the conclusion (1) and Eq. (92) give \[|\chi(\theta,x,t)-x|\leq\|H_{\epsilon}\|_{\infty}T\leq LT. \tag{93}\] Thus, there holds \[x-LT\leq\chi(\theta,x,t)\leq x+LT. \tag{94}\] Therefore, if \(x+LT\leq a\iff x\leq a-LT\), then \(\chi(\theta,x,t)\leq a\); if \(x-LT\geq b\iff x\geq b+LT\), then \(\chi(\theta,x,t)\geq b\). Thus, Eq. (83) shows that \(\varphi(x,t)=0\) for \(x\notin[a-LT,b+LT]\), and \[|\varphi(x,t)|\leq\|\psi\|_{\infty}(T-\tau). \tag{95}\] Therefore, \(I_{1}\) can be estimated as \[\begin{split}|I_{1}|&=\left|\int_{-\infty}^{\infty}w_{0}(x)\varphi(x,0)dx\right|\\ &\leq\|\psi\|_{\infty}(T-\tau)\int_{a-LT}^{b+LT}|w_{0}(x)|dx.
\end{split} \tag{96}\] For \(I_{2}\), for \(0<\tau_{1}<\tau<T\), the conclusion (1), together with (89) and (94), yields \[\begin{split}|I_{2}|&=\left|\int_{0}^{\tau_{1}}\int_{-\infty}^{\infty}(H_{\epsilon}-H)\frac{\partial\varphi}{\partial x}wdxdt\right|\\ &\leq\underbrace{2\|w\|_{\infty}L\tau_{1}\int_{-\infty}^{\infty}\left|\frac{\partial\varphi}{\partial x}(x,\tau)\right|dx}_{\text{goes to 0 as $\tau_{1}$ goes to 0.}}\end{split} \tag{97}\] Lastly, the estimation of \(I_{3}\) can be done in the following way. The conclusion (3) gives the convergence of \(H_{\epsilon}\) to \(H\) in \(L^{1}_{loc}\). Therefore, (87) and (95) give \[\begin{split}|I_{3}|&=\left|\int_{\tau_{1}}^{T}\int_{-\infty}^{\infty}(H_{\epsilon}-H)\frac{\partial\varphi}{\partial x}wdxdt\right|\\ &\leq\int_{\tau_{1}}^{T}\int_{a-LT}^{b+LT}|H_{\epsilon}-H|\left|\frac{\partial\varphi}{\partial x}\right||w|dxdt\\ &\leq\underbrace{\frac{C_{3}\|\frac{\partial\psi}{\partial\xi}\|_{\infty}}{\tau_{1}^{C_{2}}}\|w\|_{\infty}\int_{\tau_{1}}^{T}\int_{a-LT}^{b+LT}|H_{\epsilon}-H|dxdt.}_{\text{goes to 0 as $\epsilon$ goes to 0.}}\end{split} \tag{98}\] Sending \(\epsilon\) to \(0\) and then \(\tau_{1}\) to \(0\) shows \(|I_{2}|+|I_{3}|\to 0\). Thus, by (90) and (96), we see that \[\left|\int_{-\infty}^{\infty}\int_{0}^{T}w\psi dxdt\right|\leq\|\psi\|_{\infty}(T-\tau)\int_{a-LT}^{b+LT}|w_{0}(x)|dx, \tag{99}\] which proves the lemma. Using the above results, we now prove Theorems (1), (2) and (3). **Proof of the Theorem (1).** The first four parts of the Theorem (1) follow from the Lemma (2). To conclude the theorem, it remains to prove its last part. Define \(f_{\eta}\) as in (28) (with \(\epsilon\) renamed \(\eta\)): \[f_{\eta}(p):=(f*\alpha_{\eta})(p)+\eta p^{2}.\] Then \(\{f_{\eta}\}\) is uniformly convex, smooth, and converges to \(f\) on compact subsets by the Lemma 1. Since \(f\) is convex, \(f^{\prime}\) exists almost everywhere and the Dominated Convergence Theorem gives \[f^{\prime}_{\eta}(p)=(f^{\prime}*\alpha_{\eta})(p)+2\eta p. \tag{100}\] Set \[M:=\max\{\|u_{10}\|_{\infty},\|u_{20}\|_{\infty}\}\] \[I_{\eta}:=[-M-\eta,M+\eta]\] \[L:=\limsup_{\eta\to 0}\sup\{|f^{\prime}(q)|;q\in I_{\eta}\}.\] Then, for \(|p|\leq M\), we have \[\begin{split}|f^{\prime}_{\eta}(p)|&\leq|(f^{\prime}*\alpha_{\eta})(p)|+2\eta M\\ &\leq\sup\{|f^{\prime}(q)|;q\in I_{\eta}\}+2\eta M,\end{split} \tag{101}\] and hence we see that \[\lim_{\eta\to 0}|f^{\prime}_{\eta}(p)|\leq L. \tag{102}\] For \(i=1,2\), define \(u_{i\eta}\) to be the weak solution to the PDE \[\begin{cases}\frac{\partial}{\partial t}u_{i\eta}+\frac{\partial}{\partial x}\big{[}f_{\eta}(u_{i\eta})\big{]}&=0;\quad x\in\mathbb{R},t>0,\\ u_{i\eta}(x,0)&=u_{i0}(x);\quad x\in\mathbb{R},\end{cases} \tag{103}\] and set \(\omega_{\eta}:=u_{1\eta}-u_{2\eta}\) and \(\omega_{0}:=u_{10}-u_{20}\). For \(0<\tau<T\), \(a<b\), \(\psi\in C_{c}^{\infty}((a,b)\times(\tau,T))\), the Lemma (5) gives \[\left|\int_{-\infty}^{\infty}\int_{0}^{\infty}\omega_{\eta}\psi dxdt\right|\leq\|\psi\|_{\infty}(T-\tau)\int_{a-LT}^{b+LT}|\omega_{0}(x)|dx. \tag{104}\] By Lemma 1, \(f_{\eta}\) converges to \(f\) uniformly on compact sets, so, from the Stability Lemma (3), \(\omega_{\eta}\) converges to \(\omega\) in \(\mathscr{D}^{\prime}(\mathbb{R}\times(0,\infty))\) as \(\eta\) goes to \(0\). Thus, sending \(\eta\) to \(0\), we see that \[\left|\int_{a}^{b}\int_{\tau}^{T}\omega\psi dxdt\right|\leq\|\psi\|_{\infty}(T-\tau)\int_{a-LT}^{b+LT}|\omega_{0}(x)|dx.
\tag{105}\] Now, letting \(\psi\) approximate \(\frac{\omega}{|\omega|}\) gives \[\int_{a}^{b}\int_{\tau}^{T}|\omega|dxdt\leq(T-\tau)\int_{a-LT}^{b+LT}|\omega_{0}(x)|dx. \tag{106}\] Thus, for a.e. \(T>0\), by the Lebesgue differentiation theorem (refer to [10]), we have \[\lim_{\tau\to T}\frac{1}{T-\tau}\int_{\tau}^{T}\left(\int_{a}^{b}|\omega(x,\theta)|dx\right)d\theta=\int_{a}^{b}|\omega(x,T)|dx, \tag{107}\] which proves the fifth point of the first theorem, i.e., \[\int_{a}^{b}|\omega(x,T)|dx\leq\int_{a-LT}^{b+LT}|\omega_{0}(x)|dx. \tag{108}\] **Proof of the Theorem (2).** Set \(\omega_{0}:=u_{10}-u_{20}\), which is a non-positive function, and let \(0<\tau<T\) and \(\psi\in C_{0}^{1}(\mathbb{R}\times(0,T))\) be a function such that \[\psi(x,t)\geq 0,\quad\forall(x,t)\in\mathbb{R}\times(0,\infty). \tag{109}\] Let \(\varphi\) be as in (82); by (83) and the positivity of \(\psi\), we have \(\varphi(x,0)\leq 0\) for all \(x\in\mathbb{R}\), while \(\omega_{0}\leq 0\) by assumption. Now, from (83) and (90), we have \[\begin{split}&\int_{-\infty}^{\infty}\int_{0}^{\infty}(u_{1}(x,t)-u_{2}(x,t))\psi(x,t)dxdt\\ &=-\int_{-\infty}^{\infty}\omega_{0}(x)\varphi(x,0)dx+\int_{0}^{T}\int_{-\infty}^{\infty}(H_{\epsilon}-H)\frac{\partial\varphi}{\partial x}\omega dxdt\\ &\leq\int_{0}^{T}\int_{-\infty}^{\infty}(H_{\epsilon}-H)\frac{\partial\varphi}{\partial x}\omega dxdt\quad\underset{\epsilon\to 0}{\longrightarrow}0.\end{split} \tag{110}\] Thus, for all non-negative \(\psi\in C_{0}^{1}((a,b)\times(\tau,T))\), we see that \[\int_{a}^{b}\int_{\tau}^{T}\big{[}u_{1}(x,t)-u_{2}(x,t)\big{]}\psi(x,t)dxdt\leq 0, \tag{111}\] which shows that, for a.e. \((x,t)\in\mathbb{R}\times(0,\infty)\), \[u_{1}(x,t)\leq u_{2}(x,t), \tag{112}\] which proves the first part of the theorem. For \(a=-\infty\), \(b=+\infty\) and \(i\in\{1,2\}\), we have \[\begin{split}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{in}(x,t)&-u_{im}(x,t)|dxdt\\ &\leq(T-\tau)\underbrace{\int_{-\infty}^{\infty}|u_{0in}(x)-u_{0im}(x)|dx}_{\text{goes to 0 as $n,m\to\infty$, by hypothesis.}}\end{split} \tag{113}\] which shows that \(\{u_{in}\}\) is a Cauchy sequence in \(L^{1}(\mathbb{R}\times(\tau,T))\). Thus, there exists \(u_{i}\in L^{1}(\mathbb{R}\times(0,T))\) such that \(\lim_{n\to\infty}u_{in}(x,t)=u_{i}(x,t)\). The \(L^{1}\) contraction property then gives \[\begin{split}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{1}(x,t)&-u_{2}(x,t)|dxdt\\ &=\lim_{n\to\infty}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{1n}(x,t)-u_{2n}(x,t)|dxdt\\ &\leq(T-\tau)\lim_{n\to\infty}\int_{-\infty}^{\infty}|u_{10n}(x)-u_{20n}(x)|dx\\ &=(T-\tau)\int_{-\infty}^{\infty}|u_{10}(x)-u_{20}(x)|dx,\end{split} \tag{114}\] which proves Eq. (24); taking \(u_{10}=u_{20}\) in Eq. (114) gives Eq. (25). Finally, we have \[\begin{split}\lim_{\tau\to T}\frac{1}{T-\tau}\int_{\tau}^{T}\Big{(}\int_{\mathbb{R}}|u_{1}(x,t)-u_{2}(x,t)|dx\Big{)}dt=\\ \int_{\mathbb{R}}|u_{1}(x,T)-u_{2}(x,T)|dx.\end{split}\] So, for a.e. \(T>0\), we have \[\int_{\mathbb{R}}|u_{1}(x,T)-u_{2}(x,T)|dx\leq\int_{-\infty}^{\infty}|u_{10}(x)-u_{20}(x)|dx.\] From the last part of this theorem, for \(u_{0}\in L^{1}(\mathbb{R})\), the function \(u\) constructed above is in \(L^{1}(\mathbb{R}\times(0,T))\). But it is not yet clear that \(f(u(x,t))\) is well defined and satisfies the equation (1). We shall prove this in several steps. Let \(u_{0}\in L^{1}(\mathbb{R})\) and let \(u_{0n}\in L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R})\) be such that \(u_{0n}\) converges to \(u_{0}\) in \(L^{1}(\mathbb{R})\) as \(n\) goes to infinity.
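Such an approximating sequence always exists: one may take, for instance, the truncations \(u_{0n}:=\max(\min(u_{0},n),-n)\). As a worked one-sided instance (the choice of \(u_{0}\) here is ours, purely for illustration), take \(u_{0}(x)=x^{-1/2}\mathbf{1}_{(0,1)}(x)\in L^{1}(\mathbb{R})\setminus L^{\infty}(\mathbb{R})\) and \(u_{0n}:=\min(u_{0},n)\); then \(u_{0}>n\) exactly on \((0,n^{-2})\), so that
\[
\|u_{0}-u_{0n}\|_{L^{1}}=\int_{0}^{1/n^{2}}\big{(}x^{-1/2}-n\big{)}dx=\frac{2}{n}-\frac{1}{n}=\frac{1}{n}\longrightarrow 0.
\]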
Let \(\epsilon>0\) and let \(f_{\epsilon}\) be as in (28). Let \(V_{\epsilon,n}\) be the value function as in (15) with the flux \(f_{\epsilon}\) and the initial data \(u_{0n}\). Furthermore, let the corresponding characteristic set be \(Ch_{\epsilon,n}(x,t)\), with \(y_{\pm,\epsilon,n}(x,t)\) as defined in (15), (16). Let \(K\) be a compact subset of \(\mathbb{R}\times(0,\infty)\). Then, the following holds. Step 1: There exists \(C\equiv C(K)>0\), independent of \(\epsilon\) and \(n\), such that for any \((x,t)\in K\), \(n>0\), \(y\in Ch_{\epsilon,n}(x,t)\), there holds \[|y|\leq C(K). \tag{115}\] Proof.: Suppose not; then there are sequences \(\epsilon_{k}\to 0\), \((x_{k},t_{k})\in K\), \(n_{k}\to\infty\), \(y_{k}\in Ch_{\epsilon_{k},n_{k}}(x_{k},t_{k})\) such that * \(\lim_{k\to\infty}(x_{k},t_{k})=(x_{0},t_{0})\in K\). * \(\lim_{k\to\infty}|y_{k}|=\infty\). Since \(y_{k}\in Ch_{\epsilon_{k},n_{k}}(x_{k},t_{k})\), we see that for all \(z\in\mathbb{R}\) there holds \[V_{\epsilon_{k},n_{k}}(x_{k},t_{k})=v_{0n_{k}}(y_{k})+t_{k}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}-y_{k}}{t_{k}}\right)\leq v_{0n_{k}}(z)+t_{k}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}-z}{t_{k}}\right),\] where \[v_{0,n_{k}}(z):=\int_{0}^{z}u_{0,n_{k}}(\theta)d\theta.\] From the convergence of \(u_{0,n}\) to \(u_{0}\) in \(L^{1}\), there exists \(k_{0}\geq 1\) such that, for all \(z\) and all \(k\geq k_{0}\), we have \[|v_{0,n_{k}}(z)|\leq\int_{-\infty}^{\infty}|u_{0,n_{k}}(\theta)|d\theta\leq 2\int_{-\infty}^{\infty}|u_{0}(\theta)|d\theta.\] Also, note that the compact set \(K\) lies strictly in the upper half plane, so that the time variable \(t\) is bounded below by some positive number for all \((x,t)\in K\). Now, for \(k\geq k_{0}\), evaluating at \(z=0\), we have \[t_{k}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}-y_{k}}{t_{k}}\right)\leq 2\|u_{0}\|_{L^{1}}+t_{k}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}}{t_{k}}\right).\] Letting \(k\) go to infinity, we see that \[\lim_{k\to\infty}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}-y_{k}}{t_{k}}\right)\leq\frac{2\|u_{0}\|_{L^{1}}}{t_{0}}+f^{*}\left(\frac{x_{0}}{t_{0}}\right). \tag{116}\] Now, since \(f_{\epsilon}\) converges to \(f\) uniformly on compact sets, the family \(\{f_{\epsilon}\}\) is uniformly bounded on \([-1,1]\). So, by the definition of the Fenchel dual, there exists \(q_{0}\geq 1\) such that, for \(|q|\geq q_{0}\) and the particular choice \(p=\frac{q}{|q|}\), there holds \[\frac{f_{\epsilon}^{*}(q)}{|q|}\geq 1-\frac{f_{\epsilon}\left(\frac{q}{|q|}\right)}{|q|}\geq\frac{1}{2},\] or equivalently, \[f_{\epsilon}^{*}(q)\geq\frac{1}{2}|q|. \tag{117}\] Now, \(|y_{k}|\to\infty\) implies that, for \(k\) large, we have \(\left|\frac{x_{k}-y_{k}}{t_{k}}\right|\geq q_{0}\). Along with (116) and (117), we have \[\begin{split}\infty&=\frac{1}{2}\lim_{k\to\infty}\left|\frac{x_{k}-y_{k}}{t_{k}}\right|\\ &\leq\lim_{k\to\infty}f_{\epsilon_{k}}^{*}\left(\frac{x_{k}-y_{k}}{t_{k}}\right)\\ &\leq\frac{2\|u_{0}\|_{L^{1}}}{t_{0}}+f^{*}\left(\frac{x_{0}}{t_{0}}\right),\end{split} \tag{118}\] which is a contradiction. Step 2: We have the limit \[\lim_{|q|\to\infty}\inf_{0<\epsilon<1}|f_{\epsilon}^{\prime}(q)|=\infty. \tag{119}\] Proof.: Since \(f\) is convex with super-linear growth, we see that \[\lim_{|q|\to\infty}|f^{\prime}(q)|=\infty.
\tag{120}\] By the Dominated Convergence Theorem, we have \[f_{\epsilon}^{\prime}(q)=\int_{|y|\leq 1}f^{\prime}(q-\epsilon y)\alpha(y)dy+2\epsilon q.\] If \(q\to\infty\), by (120), we see that \[\lim_{q\to\infty}\inf_{0<\epsilon<1,|y|\leq 1}f^{\prime}(q-\epsilon y)=\infty.\] Thus, by Fatou's lemma, there holds \[\lim_{q\to\infty}\inf_{0<\epsilon<1}f_{\epsilon}^{\prime}(q)\geq\int_{|y|\leq 1}\left(\liminf_{q\to\infty}f^{\prime}(q-\epsilon y)\right)\alpha(y)dy=\infty.\] Similarly, if \(q\to-\infty\), then from (120) we have \[\lim_{q\to-\infty}\inf_{0<\epsilon<1,|y|\leq 1}\left(-f^{\prime}(q-\epsilon y)\right)=\infty,\] and, since \(f\) is convex, \(f^{\prime}\) is non-decreasing, so that \(-f^{\prime}(q-\epsilon y)\geq-f^{\prime}(q+1)\) for \(|y|\leq 1\) and \(0<\epsilon\leq 1\); hence \[\lim_{q\to-\infty}\inf_{0<\epsilon\leq 1}\left(-f_{\epsilon}^{\prime}(q)\right)\geq\int_{|y|\leq 1}\left(\liminf_{q\to-\infty}-f^{\prime}(q+1)\right)\alpha(y)dy=\infty. \tag{121}\] * Let \(u_{\epsilon,n}\) be the solution of the PDE (1) with the flux \(f_{\epsilon}\) and the initial data \(u_{0n}\). Then, by the Lax-Oleinik explicit formula, for \(t>0\) and a.e. \(x\in\mathbb{R}\), there exists \(y_{+,\epsilon,n}\) (as defined in Eq. (16)) such that \[f_{\epsilon}^{\prime}(u_{\epsilon,n}(x,t))=\frac{x-y_{+,\epsilon,n}(x,t)}{t}\] (122) Let \(K\subset\mathbb{R}\times(0,\infty)\) be a compact set. Then, from Step 1, there exists \(C(K)>0\) such that, for all \(0<\epsilon<1\), \((x,t)\in K\) and all \(n\), we have \[|f_{\epsilon}^{\prime}(u_{\epsilon,n}(x,t))|=\left|\frac{x-y_{+,\epsilon,n}(x,t)}{t}\right|\leq C(K).\] (123) From Item 3 of the Lemma 3, letting \(\epsilon\to 0\), we obtain the limit \(u_{\epsilon,n}(x,t)\to u_{n}(x,t)\) in \(\mathcal{D}^{\prime}(\mathbb{R}\times(0,\infty))\), and, from Eq. (99), \(u_{n}(x,t)\) is in \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\). From Step 2, the set \(\{u_{\epsilon,n}(x,t)\}\) is uniformly bounded, for all \((x,t)\in K\), for a fixed \(n\in\mathbb{N}\) and for all \(\epsilon\) near zero. Now, we show that the uniform bound can be taken to be independent of \(n\) as well. Let \(K\subset\mathbb{R}\times(0,\infty)\) be a rectangle and \(\Omega:=int(K)\), the interior of the set \(K\). Set the terms in the Proposition 1 mentioned in the Appendix as * \(w_{k}(x,t)\equiv u_{\epsilon_{k},n}(x,t)\), for a sequence \(\epsilon_{k}\to 0\), * \(w(x,t)\equiv u_{n}(x,t)\). Since the function \(u_{n}(x,t)\) is defined as \(\frac{\partial V_{n}}{\partial x}(x,t)\), we have \[\|u_{n}\|_{L^{\infty}(\mathbb{R}\times(0,\infty))}\leq Lip(V_{n})\leq\|u_{0,n}\|_{L^{\infty}(\mathbb{R})}.\] So, from the Proposition 1, for all \(n\in\mathbb{N}\), we see that \[\|u_{n}\|_{L^{\infty}(K)}\leq\sup_{k}\|u_{\epsilon_{k},n}\|_{L^{\infty}(K)}.\] (124) Now, the \(L^{1}\)-contractivity shows that the sequence \(\{u_{n}\}\) is Cauchy in \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\) and hence converges to some function \(u(x,t)\) in \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\). Eq. (124) shows that the solution \(u:=\lim u_{n}\) is in \(L^{\infty}(K)\). The function \(f\) is convex and therefore continuous. Since \(L^{1}\) convergence implies the existence of a subsequence converging pointwise almost everywhere, there is some subsequence such that \(f(u_{n_{k}}(x,t))\) converges to \(f(u(x,t))\), for a.e. \((x,t)\in\mathbb{R}\times\mathbb{R}^{+}\).
Since \(\{u_{n}(x,t)\}\) is bounded on \(K\), the dominated convergence theorem shows that, for all \(\varphi\in C_{c}^{\infty}(K)\), we have \[\begin{split}\int_{\mathbb{R}}\int_{0}^{\infty}\Big{[}u\varphi_{t}+f(u)\varphi_{x}\Big{]}dxdt&=\lim_{k\to\infty}\int_{\mathbb{R}}\int_{0}^{\infty}\Big{[}u_{n_{k}}\varphi_{t}+f(u_{n_{k}})\varphi_{x}\Big{]}dxdt\\ &=0.\end{split} \tag{125}\] For the last part of the theorem, fix \(n\in\mathbb{N}\) and, for \(\eta>0\), \(\epsilon>0\) and \(T>0\), define the function \(\varphi(x,t):=A_{\epsilon}(x)B_{\eta}(t)\) by \[A_{\epsilon}(x):=\left\{\begin{aligned} & 1&\text{ if }x\in[a,b],\\ & 0&\text{ if }x\notin[a-\epsilon,b+\epsilon],\\ &\frac{x-a+\epsilon}{\epsilon}&\text{ if }x\in[a-\epsilon,a],\\ &\frac{b+\epsilon-x}{\epsilon}&\text{ if }x\in[b,b+\epsilon].\end{aligned}\right. \tag{126}\] \[B_{\eta}(t):=\left\{\begin{aligned} & 1&\text{ if }t\in[0,T],\\ &\frac{T+\eta-t}{\eta}&\text{ if }t\in[T,T+\eta],\\ & 0&\text{ if }t\geq T+\eta.\end{aligned}\right. \tag{127}\] The function \(\varphi\) so defined is Lipschitz and has compact support. Now, from the weak formulation Eq. (2), for the solution \(u_{n}\) of the conservation law with the initial data \(u_{0n}\in L^{\infty}(\mathbb{R})\), we have \[\begin{split}\frac{-1}{\eta}\int_{T}^{T+\eta}&\int_{-\infty}^{\infty}u_{n}(x,t)A_{\epsilon}(x)dxdt\\ &+\int_{0}^{T+\eta}\int_{-\infty}^{\infty}f(u_{n}(x,t))\left(A_{\epsilon}(x)\right)_{x}B_{\eta}(t)dxdt\\ &+\int_{-\infty}^{\infty}u_{0n}(x)A_{\epsilon}(x)dx=0.\end{split}\] As \(u_{0n}\) is in \(L^{\infty}(\mathbb{R})\), the function \(u_{n}(x,t)\) is in \(L^{\infty}(\mathbb{R}\times(0,\infty))\). So, by the Lebesgue differentiation theorem and the dominated convergence theorem, for a.e. \(t>0\) (depending on \(A_{\epsilon}\)), sending \(\eta\to 0\), we have \[\int_{-\infty}^{\infty}u_{n}(x,t)A_{\epsilon}(x)dx=\int_{0}^{t}\int_{-\infty}^{\infty}f(u_{n}(x,\tau))\left(A_{\epsilon}(x)\right)_{x}dxd\tau+\int_{-\infty}^{\infty}u_{0n}(x)A_{\epsilon}(x)dx.\] Now, let \(t\to 0\) (the flux term is \(O(t)\), since \(u_{n}\) is bounded) to get \[\lim_{t\to 0}\int_{-\infty}^{\infty}u_{n}(x,t)A_{\epsilon}(x)dx=\int_{-\infty}^{\infty}u_{0n}(x)A_{\epsilon}(x)dx\] Equivalently, there holds \[\lim_{t\to 0}\left[\int_{a}^{b}u_{n}(x,t)dx+\int_{a-\epsilon}^{a}u_{n}(x,t)A_{\epsilon}(x)dx+\int_{b}^{b+\epsilon}u_{n}(x,t)A_{\epsilon}(x)dx\right]\] \[=\int_{a}^{b}u_{0n}(x)dx+\int_{a-\epsilon}^{a}u_{0n}(x)A_{\epsilon}(x)dx+\int_{b}^{b+\epsilon}u_{0n}(x)A_{\epsilon}(x)dx\] Observe that \(A_{\epsilon}\) takes values in \([0,1]\), so the boundary terms are bounded by \(\epsilon\) times the relevant \(L^{\infty}\) norms. So, let \(\epsilon\to 0\) to obtain \[\lim_{t\to 0}\int_{a}^{b}u_{n}(x,t)dx=\int_{a}^{b}u_{0n}(x)dx.\] (128) Finally, as \(u_{n}\to u\) in \(L^{1}(\mathbb{R}\times(0,T))\) for all \(T>0\), and by the \(L^{1}\)-contractive property, for a.e. \(t>0\) we see that \[\begin{split}\left|\int_{a}^{b}u(x,t)dx-\int_{a}^{b}u_{0}(x)dx\right|&\leq\left|\int_{a}^{b}(u(x,t)-u_{n}(x,t))dx\right|\\ &+\left|\int_{a}^{b}(u_{n}(x,t)-u_{0n}(x))dx\right|\\ &+\left|\int_{a}^{b}(u_{0}(x)-u_{0n}(x))dx\right|\\ &\leq 2\int_{-\infty}^{\infty}|u_{0}(x)-u_{0n}(x)|dx\\ &+\left|\int_{a}^{b}(u_{n}(x,t)-u_{0n}(x))dx\right|.\end{split}\] (129) From Eq. (128) and the fact that \(u_{0n}\) converges to \(u_{0}\) in the \(L^{1}\) norm, letting \(t\to 0\) and \(n\to\infty\), we have \[\lim_{t\to 0}\int_{a}^{b}u(x,t)dx=\int_{a}^{b}u_{0}(x)dx.\] This, together with (125), gives that \(u\) is a "Kružkov" solution, and this concludes the proof of the second theorem.
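The construction just completed can also be probed numerically. The sketch below is ours (not part of the original argument): it evaluates the Lax-Oleinik formula of Lemma 4 for the Burgers flux \(f(p)=p^{2}/2\), for which \(f^{*}(q)=q^{2}/2\) and \((f^{*})^{\prime}(q)=q\), with Riemann initial data, and checks the Oleinik one-sided bound and the initial-trace property (128) on a grid.

```python
import numpy as np

# Minimal sketch of the Lax-Oleinik formula u(x,t) = (f*)'((x - y_+(x,t))/t)
# for the Burgers flux f(p) = p^2/2 (so f*(q) = q^2/2).  All names are ours.

def lax_oleinik(u0, x, t, y):
    # primitive v0(y) = int_0^y u0, computed by the trapezoidal rule on grid y
    v0 = np.concatenate(([0.0], np.cumsum(0.5 * (u0(y[1:]) + u0(y[:-1])) * np.diff(y))))
    v0 -= np.interp(0.0, y, v0)                      # normalize so that v0(0) = 0
    u = np.empty_like(x)
    for i, xi in enumerate(x):
        g = v0 + (xi - y) ** 2 / (2.0 * t)           # v0(y) + t f*((x - y)/t)
        j = len(g) - 1 - int(np.argmin(g[::-1]))     # right-most minimizer y_+(x,t)
        u[i] = (xi - y[j]) / t                       # u = (f*)'((x - y_+)/t)
    return u

y = np.linspace(-5.0, 5.0, 4001)
x = np.linspace(-2.0, 2.0, 801)
t = 1.0
u0 = lambda s: np.where(s < 0.0, 1.0, 0.0)           # Riemann data: shock of speed 1/2

u = lax_oleinik(u0, x, t, y)
dx = x[1] - x[0]
print("Oleinik bound u_x <= 1/t holds:", bool((np.diff(u) / dx <= 1.0 / t + 1e-8).all()))
print("shock located near x = t/2:", float(x[np.argmax(u < 0.5)]))
```

For this data the exact entropy solution is a shock along \(x=t/2\), which the computed profile reproduces; replacing the data by \(u_{0}=\mathbf{1}_{x>0}\) instead produces the rarefaction fan \(u=x/t\) on \(0<x<t\), for which the one-sided bound is saturated.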
**Proof of the Theorem (3).** Looking at the possibilities for \(\mu_{\pm}\), we have four cases: 1. \(\mu_{+}=\infty\) and \(\mu_{-}=-\infty\). 2. \(\mu_{+}=\infty\) and \(\mu_{-}>-\infty\). 3. \(\mu_{+}<\infty\) and \(\mu_{-}=-\infty\). 4. \(\mu_{+}<\infty\) and \(\mu_{-}>-\infty\). The Theorem 2 deals with the case 1, so it is enough to prove the case 2; a similar analysis handles the cases 3 and 4. So, assume that \(\mu_{+}=+\infty\) and \(\mu_{-}>-\infty\). Also, for \(n\geq 1\), let \(p_{n}\in(-n-1,-n)\) be such that the function \(f\) is differentiable at \(p_{n}\). Furthermore, let \(f_{n}\) be the modification of \(f\) with \(A=p_{n}\) and \(B=\infty\), as in Eq. (142) in the appendix. Also, let \(u_{0}\) be a function in \(L^{1}(\mathbb{R})\) and define \[u_{n0}(x):=\begin{cases}u_{0}(x)&\text{ if }u_{0}(x)\geq-n,\\ 0&\text{ otherwise.}\end{cases} \tag{130}\] Then, from the Theorem 1, there exists a solution \(u_{n}\) of (2) satisfying \(\|u_{n}\|_{\infty}\leq\|u_{n0}\|_{\infty}\). Now, from (Item 1) of the Theorem 2, we have \(u_{n}(x,t)\geq-n\) for a.e. \((x,t)\in\mathbb{R}\times(0,\infty)\). Hence, for \(m>n\), we have \[f_{n}\left(u_{n}\left(x,t\right)\right)=f_{m}\left(u_{n}\left(x,t\right)\right), \tag{131}\] which shows that the functions \(u_{n}\) and \(u_{m}\) are solutions for the same flux \(f_{m}\). Thus, by the \(L^{1}\)-contractivity, for \(0<\tau<T\), there holds \[\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{n}(x,t)-u_{m}(x,t)|dxdt\leq\underbrace{(T-\tau)\int_{-\infty}^{\infty}|u_{n0}(x)-u_{m0}(x)|dx.}_{\text{goes to 0 as }m,n\to\infty} \tag{132}\] Thus, the functions \(\{u_{n}\}\) are Cauchy in \(L^{1}_{loc}(\mathbb{R}\times(0,\infty))\). Now, as \(u_{n+1,0}(x)\leq u_{n0}(x)\) holds, from the part (2) of the Theorem 2 we see that \[u_{n+1}(x,t)\leq u_{n}(x,t). \tag{133}\] Define \[u(x,t):=\lim_{n\to\infty}u_{n}(x,t). \tag{134}\] Furthermore, let \(u_{0}\) and \(\widetilde{u_{0}}\) be functions in \(L^{1}(\mathbb{R})\) and set \(u(x,t)\) and \(\widetilde{u}(x,t)\) to be as in Eq. (134). Then, from Eq. (131), it follows that \[\begin{split}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u(x,t)-\widetilde{u}(x,t)|dxdt&\leq\lim_{n\to\infty}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u_{n}(x,t)-\widetilde{u_{n}}(x,t)|dxdt\\ &\leq(T-\tau)\lim_{n\to\infty}\int_{-\infty}^{\infty}|u_{n0}(x)-\widetilde{u_{n0}}(x)|dx\\ &\leq(T-\tau)\int_{-\infty}^{\infty}|u_{0}(x)-\widetilde{u_{0}}(x)|dx.\end{split} \tag{135}\] As in the earlier proof, we have \[\lim_{\tau\to T}\frac{1}{T-\tau}\int_{-\infty}^{\infty}\int_{\tau}^{T}|u(x,t)-\widetilde{u}(x,t)|dxdt=\int_{-\infty}^{\infty}|u(x,T)-\widetilde{u}(x,T)|dx.\] Along with Eq. (135), we see that Eq. (26) is established.
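The monotonicity used in (133) can be read off directly from (130); the following one-line verification is ours. If \(u_{0}(x)\in[-(n+1),-n)\), then
\[
u_{n0}(x)=0\geq u_{0}(x)=u_{n+1,0}(x),
\]
while \(u_{n0}(x)=u_{n+1,0}(x)\) at every other point, so \(u_{n+1,0}\leq u_{n0}\) everywhere; moreover, \(\|u_{n0}-u_{0}\|_{L^{1}}=\int_{\{u_{0}<-n\}}|u_{0}|dx\to 0\) by the dominated convergence theorem, which is the convergence used in (132).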
Now, as \(u_{n0}(x)\geq-n\), we have \(u_{n}(x,t)\geq-n\) for a.e. \((x,t)\in\mathbb{R}\times(0,\infty)\), and so there holds \[f\left(u_{n}(x,t)\right)=f_{n}\left(u_{n}(x,t)\right).\] Hence, for a compact set \(K\subset\mathbb{R}\times(0,\infty)\) and for any \(\varphi\in C_{c}^{\infty}(K)\), we have \[\int_{-\infty}^{\infty}\int_{0}^{\infty}\left(u_{n}\varphi_{t}+f(u_{n})\varphi_{x}\right)dxdt=\underbrace{\int_{-\infty}^{\infty}\int_{0}^{\infty}\left(u_{n}\varphi_{t}+f_{n}(u_{n})\varphi_{x}\right)dxdt.}_{\text{equals 0}} \tag{136}\] Since \(\mu_{-}\leq 0\leq\mu_{+}=\infty\) and \(|\mu_{-}|<\infty\), there exist \(\alpha>0\), \(\beta>0\) such that \[f(p)\leq\begin{cases}\alpha+\beta|p|,&\text{ if }p\leq 0,\\ \alpha+f(p),&\text{ if }p\geq 0.\end{cases} \tag{137}\] Set \[E_{-}:=\{(x,t)\in\mathbb{R}\times(0,\infty);u(x,t)\leq 0\}\] \[E_{+}:=\{(x,t)\in\mathbb{R}\times(0,\infty);u(x,t)\geq 0\}\] Let \(K\subset\mathbb{R}\times(0,\infty)\) be a compact set. As in the proof of Theorem 2, since \(\mu_{+}=\infty\), there exists a constant \(C(K)\geq 0\) such that \[|u_{1}(x,t)|\leq C(K),\quad\forall(x,t)\in E_{+}.\] Since \(u(x,t)\leq u_{1}(x,t)\), we have \[|u(x,t)|\leq C(K),\quad\text{ for }(x,t)\in E_{+}. \tag{138}\] Eq. (137) gives that, for a.e. \((x,t)\in\mathbb{R}\times(0,\infty)\), we have \[|f(u(x,t))|\leq\begin{cases}\alpha+\beta|u(x,t)|,&\text{ if }(x,t)\in E_{-},\\ \alpha+|f(u(x,t))|,&\text{ if }(x,t)\in E_{+}.\end{cases} \tag{139}\] Eq. (138) then shows that there exists \(\tau\geq\beta\) such that \[|f(u(x,t))|\leq\alpha+\tau|u(x,t)|. \tag{140}\] Since the function \(u\) is in \(L^{1}(\mathbb{R}\times(0,T))\) for all \(T>0\), the Dominated Convergence theorem shows that, for all \(\varphi\in C_{c}^{\infty}(K)\), there holds \[\begin{split}\int_{-\infty}^{\infty}\int_{0}^{\infty}\left[u\varphi_{t}+f(u)\varphi_{x}\right]dxdt&=\lim_{n\to\infty}\int_{-\infty}^{\infty}\int_{0}^{\infty}\left[u_{n}\varphi_{t}+f(u_{n})\varphi_{x}\right]dxdt\\ &=\lim_{n\to\infty}\int_{-\infty}^{\infty}\int_{0}^{\infty}\left[u_{n}\varphi_{t}+f_{n}(u_{n})\varphi_{x}\right]dxdt\\ &=0,\end{split} \tag{141}\] where we have used the fact that \(f(u_{n})=f_{n}(u_{n})\). Finally, as in the Theorem 2, for \(a<b\), we have \[\lim_{t\to 0}\int_{a}^{b}u(x,t)dx=\int_{a}^{b}u_{0}(x)dx,\] which concludes the proof of the third theorem. ## Appendix * **Mollification to Super-Linear Growth.** Let \(f:\mathbb{R}\mapsto\mathbb{R}\) be a convex function; then \(f\) is differentiable a.e. Let \(A<B\) be two points at which \(f\) is differentiable, let \(D>0\), and set \[g(p):=\begin{cases}f(p)&\text{ if }A\leq p\leq B,\\ f(A)+f^{\prime}(A)(p-A)+D(p-A)^{2}&\text{ if }p\leq A,\\ f(B)+f^{\prime}(B)(p-B)+D(p-B)^{2}&\text{ if }p\geq B.\end{cases} \tag{142}\] Then the function \(g\) has the following properties: * The function \(g\) has superlinear growth. * The function \(g\) is convex. * There holds the equality \(g(p)=f(p)\), for \(p\in[A,B]\). * **Proof of the Lemma (1).** As in the definition of \(f^{*}\), \[f^{*}(q):=\sup\{p.q-f(p);p\in\mathbb{R}\},\] we see that for any \(p,q\in\mathbb{R}\) there holds \[f^{*}(q)\geq p.q-f(p).\] Dividing by \(|q|\), we get \[\frac{f^{*}(q)}{|q|}\geq p.\frac{q}{|q|}-\frac{f(p)}{|q|},\] which, letting \(|q|\to\infty\) with \(\frac{q}{|q|}\to w\), shows that \[\lim_{|q|\to\infty}\frac{f^{*}(q)}{|q|}\geq p.w\quad\text{for all }p,\] where \(w\in\{-1,+1\}\).
Sending \(p.w\) to infinity gives the superlinearity of \(f^{*}\): \[\lim_{|q|\to\infty}\frac{f^{*}(q)}{|q|}=\infty.\] Since the function \(f\) is superlinear, we have \[p.q-f(p)=|p|\left(\frac{p}{|p|}q-\frac{f(p)}{|p|}\right)\leq\qquad\underbrace{|p|\left(|q|-\frac{f(p)}{|p|}\right)}_{\text{goes to $-\infty$ as $|p|$ goes to infinity}.}\] which shows that there exists \(p_{0}\geq 0\) such that \[f^{*}(q)=\sup_{p\in\mathbb{R}}\{p.q-f(p)\}\leq\sup_{|p|\leq p_{0}}\{p.q-f(p)\},\] and so \(f^{*}(q)<\infty\), for all \(q\in\mathbb{R}\). Let \(f_{\epsilon}\) satisfy the assumptions (3). Let \(M>0\) and \(|q|\leq M\); then we have \[p.q-f_{\epsilon}(p)=|p|\left(\frac{p}{|p|}q-\frac{f_{\epsilon}(p)}{|p|}\right)\leq|p|\left(M-\frac{f_{\epsilon}(p)}{|p|}\right), \tag{143}\] and so we see that \[\lim_{|p|\to\infty}\sup_{0<\epsilon\leq 1}\{p.q-f_{\epsilon}(p)\}\leq\lim_{|p|\to\infty}|p|\left\{M-\inf_{0<\epsilon\leq 1}\frac{f_{\epsilon}(p)}{|p|}\right\}=-\infty.\] Thus, there exists \(p_{0}(M)\), independent of \(\epsilon\), such that for all \(\epsilon\in(0,1]\) and \(|q|\leq M\) there holds \[f_{\epsilon}^{*}(q)=\sup_{|p|\leq p_{0}}\left\{p.q-f_{\epsilon}(p)\right\}.\] Again, as \(f\) is superlinear, by a similar argument there exists \(p_{1}>0\) such that \[f^{*}(q)=\sup_{|p|\leq p_{1}}\left\{p.q-f(p)\right\}.\] Set \(p_{2}:=\max\{p_{0},p_{1}\}\). Then, for \(|q|\leq M\), we have \[f_{\epsilon}^{*}(q)=\sup_{|p|\leq p_{2}}\left\{p.q-f_{\epsilon}(p)\right\},\] \[f^{*}(q)=\sup_{|p|\leq p_{2}}\left\{p.q-f(p)\right\}.\] So, by continuity and compactness, for all \(|q|\leq M\) there exist \(q_{1}=q_{1}(q)\), \(q_{2}=q_{2}(q)\) such that \(|q_{1}|\leq p_{2}\), \(|q_{2}|\leq p_{2}\) and \[f_{\epsilon}^{*}(q)=q_{1}q-f_{\epsilon}(q_{1}),\] \[f^{*}(q)=q_{2}q-f(q_{2}).\] Hence, for all \(p\in\mathbb{R}\), we have \[f_{\epsilon}^{*}(q)-f^{*}(q)\leq q_{1}q-f_{\epsilon}(q_{1})-pq+f(p).\] Setting \(p=q_{1}\), we have \[f_{\epsilon}^{*}(q)-f^{*}(q)\leq f(q_{1})-f_{\epsilon}(q_{1})\leq\sup_{|p|\leq p_{2}}|f(p)-f_{\epsilon}(p)|.\] Interchanging \(f\leftrightarrow f_{\epsilon}\), for all \(|q|\leq M\) there holds \[|f_{\epsilon}^{*}(q)-f^{*}(q)|\leq\sup_{|p|\leq p_{2}}|f(p)-f_{\epsilon}(p)|.\] Hence, \(f_{\epsilon}^{*}\) converges to \(f^{*}\) uniformly on compact sets. Furthermore, for all \(p\in\mathbb{R}\), we have \[\frac{\inf_{0<\epsilon\leq 1}f_{\epsilon}^{*}(q)}{|q|}\geq\frac{pq}{|q|}-\sup_{0<\epsilon\leq 1}\frac{f_{\epsilon}(p)}{|q|}.\] So, for \(w\in\{-1,+1\}\), letting \(\frac{q}{|q|}\to w\) gives \[\lim_{|q|\to\infty}\inf_{0<\epsilon\leq 1}\frac{f_{\epsilon}^{*}(q)}{|q|}\geq pw.\] Letting \(pw\) go to infinity, we obtain \[\lim_{|q|\to\infty}\inf_{0<\epsilon\leq 1}\frac{f_{\epsilon}^{*}(q)}{|q|}=\infty.\] This proves the first three parts of the lemma. For the last part of the lemma, define a new function \[F_{\epsilon}(x):=\left(\alpha_{\epsilon}*F\right)(x)+\epsilon x^{2},\] where \(\alpha\in C_{c}^{\infty}(B(0,1))\) is the mollifier generating \(\alpha_{\epsilon}\). The function \(F*\alpha_{\epsilon}\) is smooth and convex, as \(F\) is convex, and hence \((F*\alpha_{\epsilon})^{\prime\prime}\geq 0\). Consequently, there holds \[F_{\epsilon}^{\prime\prime}(x)\geq 2\epsilon>0,\] i.e., \(F_{\epsilon}\) is uniformly convex. As \(F\) has superlinear growth and is convex, there exists \(q_{0}>0\) such that * \(F(p)>0\), for all \(|p|\geq q_{0}\). * The function \(F(p)\) is non-decreasing for \(p>q_{0}\).
* The function \(F(p)\) is non-increasing for \(p<-q_{0}\). Hence, for all \(x\geq q_{0}+1\), \(0<\epsilon\leq 1\), \(|y|\leq 1\), we have \[F(x-\epsilon y)\geq F(x-1),\] and for all \(x\leq-q_{0}-1\), \(0<\epsilon\leq 1\), \(|y|\leq 1\), we have \[F(x-\epsilon y)\geq F(x+1).\] So, there holds \[\lim_{x\to\infty}\inf_{0<\epsilon\leq 1}\left(\frac{F_{\epsilon}(x)}{x}\right)\geq\lim_{x\to\infty}\inf_{0<\epsilon\leq 1}\int_{|y|\leq 1}\frac{F(x-\epsilon y)}{x}\alpha(y)dy\geq\underbrace{\lim_{x\to\infty}\int_{|y|\leq 1}\frac{F(x-1)}{|x-1|}\frac{|x-1|}{|x|}\alpha(y)dy}_{=\infty}.\] Similarly, we get \[\lim_{x\to-\infty}\inf_{0<\epsilon\leq 1}\left(\frac{F_{\epsilon}(x)}{-x}\right)\geq\underbrace{\lim_{x\to-\infty}\int_{|y|\leq 1}\frac{F(x+1)}{|x+1|}\frac{|x+1|}{|x|}\alpha(y)dy}_{=\infty}.\] This concludes the proof of the lemma. 3. **Proposition 1**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded open set. Furthermore, let \(\{w_{k}\}\subset L^{\infty}(\Omega)\) and let \(w\in L^{\infty}(\Omega)\). Also, let \(M_{1}>0\) and \(M_{2}>0\) be two constants such that, for all \(k\in\mathbb{N}\), there holds_ * \(\|w_{k}\|_{L^{\infty}(\Omega)}\leq M_{1},\) * \(\|w\|_{L^{\infty}(\Omega)}\leq M_{2}.\) _Moreover, assume that for all \(\varphi\in C_{c}^{1}(\Omega)\) there holds_ \[\lim_{k\to\infty}\int_{\Omega}w_{k}(x)\varphi(x)dx=\int_{\Omega}w(x)\varphi(x)dx. \tag{144}\] _Then, we have_ \[\|w\|_{L^{\infty}(\Omega)}\leq M_{1}. \tag{145}\] _(Also, refer to [11].)_ Proof.: By the regularity of the Lebesgue measure, the space \(C^{1}_{c}(\Omega)\) is dense in the space \(L^{1}(\Omega)\). Hence, for all \(f\in L^{1}(\Omega)\) and all \(\eta>0\), there exists \(\varphi\in C^{1}_{c}(\Omega)\) such that \[\int_{\Omega}|f-\varphi|dx<\eta. \tag{146}\] Now, from the hypothesis of the proposition, we see that \[\begin{split}\left|\int_{\Omega}\left(w_{k}-w\right)fdx\right|&\leq\left|\int_{\Omega}\left(w_{k}-w\right)\varphi dx\right|+\left|\int_{\Omega}\left(w-w_{k}\right)\left(f-\varphi\right)dx\right|\\ &\leq\left|\int_{\Omega}\left(w_{k}-w\right)\varphi dx\right|+(M_{1}+M_{2})\|f-\varphi\|_{L^{1}(\Omega)}\\ &\leq\left|\int_{\Omega}w_{k}\varphi-\int_{\Omega}w\varphi\right|+\eta(M_{1}+M_{2}).\end{split} \tag{147}\] Now, sending \(k\to\infty\), from Eq. (144) we obtain \[\lim_{k\to\infty}\int_{\Omega}w_{k}f=\int_{\Omega}wfdx. \tag{148}\] Define new functionals \(l_{k}\) and \(l\) in the dual space \(L^{1}(\Omega)^{*}\) by \[l_{k}(f):=\int_{\Omega}w_{k}fdx\quad\text{and}\quad l(f):=\int_{\Omega}wfdx.\] Then, from Eq. (148) and the hypothesis, we have * \(|l_{k}(f)|\leq M_{1}\|f\|_{L^{1}(\Omega)}\); equivalently, the operator norm \(\|l_{k}\|\) is bounded by \(M_{1}\), * \(|l(f)|\leq M_{2}\|f\|_{L^{1}(\Omega)}\); equivalently, the operator norm \(\|l\|\) is bounded by \(M_{2}\), * \(l_{k}(f)\to l(f)\), for all \(f\) in \(L^{1}(\Omega)\), i.e., \(l_{k}\) converges to \(l\) in the weak-* topology of \(L^{1}(\Omega)^{*}\). Now, the Banach-Alaoglu theorem shows that the closed ball \(\overline{B(0,M_{1})}\) in \(L^{1}(\Omega)^{*}\) is weak-* compact, hence weak-* closed. As \(l_{k}\in\overline{B(0,M_{1})}\) for all \(k\), we have \(l\in\overline{B(0,M_{1})}\), which concludes that \[\|w\|_{L^{\infty}(\Omega)}\leq M_{1}.\]
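A minimal numerical sketch of the extension (142) may also be useful; the base flux \(f(p)=|p|\) and the parameters \(A\), \(B\), \(D\) below are illustrative choices of ours, not taken from the text.

```python
import numpy as np

# Sketch of the superlinear quadratic extension g of a convex flux f, Eq. (142).

def extend(f, df, A, B, D):
    """Return g: g = f on [A, B], quadratic pieces of curvature D outside."""
    def g(p):
        p = np.asarray(p, dtype=float)
        left = f(A) + df(A) * (p - A) + D * (p - A) ** 2
        right = f(B) + df(B) * (p - B) + D * (p - B) ** 2
        return np.where(p < A, left, np.where(p > B, right, f(p)))
    return g

f = lambda p: np.abs(p)        # convex, but only of linear growth
df = lambda p: np.sign(p)      # f'(A), f'(B) at the chosen differentiability points
g = extend(f, df, A=-2.0, B=3.0, D=1.0)

p = np.linspace(-50.0, 50.0, 10001)
gp = g(p)
inside = (p >= -2.0) & (p <= 3.0)
print("g = f on [A, B]:", bool(np.allclose(gp[inside], np.abs(p[inside]))))
print("convex (2nd differences >= 0):", bool((np.diff(gp, 2) >= -1e-9).all()))
print("superlinear growth, g(p)/|p| at p = -50, 50:", gp[0] / 50.0, gp[-1] / 50.0)
```

This is the modification used in the proof of Theorem (3) above, there with \(A=p_{n}\) and, formally, \(B=\infty\) (i.e., no change on the right).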
# Evidence of a Quasi-periodic Global-scale Oscillation in the Near-Surface Shear Layer of the Sun

Richard S. Bogart, Charles S. Baldner, Sarbani Basu, Rachel Howe, Maria Cristina Rabello Soares

###### Abstract

We present evidence of hitherto undiscovered global-scale oscillations in the near-surface shear layer of the Sun. These oscillations are seen as large scale variations of radial shear in both the zonal and meridional flows relative to their mean values. The variations cover all or most of a visible hemisphere, and reverse with a timescale on the order of a solar rotation. A large annual variation in the meridional shear anomaly is understandable in terms of the tilt of the rotation axis, but the rapid oscillations of the shear anomalies in both the zonal and the meridional directions appear to be modulated in a more complex, not-quite annual way, although the latter are also strongly modulated by the projected rotational axis angle. Small-scale anomalies in the neighborhood of active regions lend support to their solar origin and physical interpretation. These results were obtained by analyzing ring-diagram fits of low-order modes in high-resolution Doppler data from the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory.

## 1 Introduction

The near-surface shear layer of the Sun is a zone extending for about 35 Mm below the photosphere, or about 5% of the radius, 0.05 \(R_{0}\), depending somewhat on latitude, in which the local mean differential rotation rate rapidly increases with depth (Foukal, 1972; Rhodes et al., 1990; Thompson et al., 1996; Schou et al., 1998, etc.). In a uniformly rotating coordinate frame, the variation across the layer can be thought of as a vertical (radial) shear in a mean azimuthally-symmetric zonal flow characteristic of the latitude (and epoch). The depth of the layer and the amplitude of the shear are thought to play a significant role in the evolution of global magnetic activity and the life of magnetic active regions (Brandenburg, 2005; Pipin & Kosovichev, 2011; Karak & Cameron, 2016; Jha & Choudhuri, 2021). There is also a mean meridional circulation in each hemisphere at the surface, whose associated flows extend downward through and probably beyond the near-surface shear layer (Gonzalez Hernandez et al., 1999; Giles et al., 1997, etc.). Relative to these mean or slowly varying structures, however, there also appear to be anomalous localized flows persisting for comparatively short times. Evidence for slight prograde and retrograde anomalies in the zonal flow patterns in the near-surface regions around certain longitudes at high latitudes, persisting for a few rotations, has been previously presented (Hathaway et al., 2013; Bogart et al., 2015). In further exploring these localized anomalous flows, we have discovered what we believe to be a hitherto unsuspected phenomenon: occasional slight enhancements or diminutions of both the mean zonal shear, and of any mean meridional shear characteristic of the latitude as well. These anomalies extend over all or most latitudes in large longitudinal sectors, up to the width of a hemisphere or more, persisting for at least several days and for up to several months. Furthermore, the boundaries between these sectors of positive and negative anomalous shear are typically very sharp, spanning no more than about 20 degrees in longitude.

## 2 Data Analysis

The anomalous flow determinations are based on ring-diagram analysis (Hill, 1988) of Dopplergrams from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) on the Solar Dynamics Observatory (SDO).
The data are analyzed in regions of diameter five heliographic degrees tiling almost the entire disc, with the tiles spaced every 2.5 degrees in latitude and every 2.5 degrees in longitude as well, at least up to latitude \(\pm 40^{\circ}\), with wider spacing at higher latitudes to approximately preserve physical spacing. They are tracked for \(9^{h}36^{m}\), a little more than their average displacement by rotation of \(5^{\circ}\) (Bogart et al., 2011). The analysis is repeated up to 72 times per Carrington rotation, being skipped only when more than 30% of the potentially available Dopplergrams in the interval are either missing or of unacceptable quality, which seldom occurs more than once per rotation.

Although the aim of ring-diagram analysis is to use information from trapped acoustic waves of different radial orders to establish the depth profile of the transverse fluid motions by inverting the data against a profile of the sound speed, the comparatively small extent of the \(5^{\circ}\) tiles limits the detection to very short wavelengths; consequently only the very low orders of acoustic modes, up to radial order 3, are regularly fit in the power spectra, along with the surface waves, the \(f\)-mode. Such small data sets are not amenable to traditional inversion techniques, so we instead simply analyze the data for the flow parameters \(U_{x}\) and \(U_{y}\) (representing the zonal and meridional components respectively of the transverse flow field causing a Doppler shift in the wave power) as functions of the equivalent lower turning point corresponding to the phase speed of the mode. For each tile at each time interval, ring-diagram fits produce, for each radial order, values of the local flow parameters \(U_{x}\) and \(U_{y}\) and the equivalent spherical harmonic degree (corresponding to wave-number) as functions of frequency (module **rdfit** of the HMI processing pipeline; Bogart et al., 2011). We determine linear fits of the flow parameters as functions of the classical turning point, the depth in the Sun at which the sound speed corresponds to the phase speed for the frequency and wave-number of the mode. The slope of the fit, \(dU_{i}/dR_{t}\), we take as a proxy for the physical radial shear of the flow in the corresponding direction, and label it as such, \(dU_{i}/dr\). The results presented here are based on fits of the \(f\)-mode, which show the clearest effects. Strictly speaking there is no turning point based on refraction for such waves. Their effective depth of penetration, however, follows the same trend in sound speed, and the results we have obtained for the low-order \(p\)-modes \(n=1\)-\(3\), covering a depth range of about 2-9 Mm, are in general agreement with the \(f\)-mode results.

There are large-scale variations over the field in both the \(U_{i}\) and \(dU_{i}/dr\) fits that depend on the image location of each tile. For the \(U_{x}\) and \(U_{y}\) parameters, these are clearly chiefly solar in origin, being due to the mean differential rotation and meridional circulation respectively. For the \(dU_{i}/dr\) parameters, these appear to be due primarily to observational and analysis effects, likely involving varying sensitivity of the Doppler signal during the tracking of the tiles. The mean values at each location over a full year are shown in the top panel of Fig. 1. The reference year chosen is Year IX of the SDO mission, Carrington Time (CT) 2216:060-2230:285 (May 1, 2019 - Apr 30, 2020), centered around the time of solar minimum.
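Schematically, the per-tile shear proxy is just the slope of a straight-line fit; the sketch below is ours, with invented numbers, and is not the pipeline code.

```python
import numpy as np

# For one 5-degree tile and one tracking interval: given per-mode flow values
# U (m/s) and equivalent lower turning points R_t (in units of the solar
# radius), fit U ~ a + b*R_t and take the slope as the shear proxy dU/dr.
# All numbers below are invented for illustration.

R_SUN_MM = 696.0  # solar radius in Mm, to express the slope in m/s/Mm

R_t = np.array([0.9952, 0.9961, 0.9974, 0.9983, 0.9991])  # turning radii
U_x = np.array([38.0, 35.5, 31.9, 29.2, 27.0])            # zonal flow (m/s)

slope, intercept = np.polyfit(R_t, U_x, 1)   # slope in m/s per solar radius
dUx_dr = slope / R_SUN_MM                    # proxy shear, m/s/Mm
print(f"dU_x/dr ~ {dUx_dr:+.2f} m/s/Mm")
```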
The predominantly east-west variation in the zonal shear measurement indicates that differential rotation is not a factor, even though the data have been uniformly tracked at the Carrington rate, while the predominantly north-south variation in the meridional shear measurement is suggestive of a center-to-limb effect, although neither is quite symmetric. It is important to note that although we are using the annual means at each observing location as our nominal means, there are significant variations over the course of a year in the \(dU_{y}/dr\) values, though not the \(dU_{x}/dr\) values, presumably because of the latitudinal dependence of the former. This is exhibited in the lower panel of Fig. 1. As we shall see, it has significant consequences for the detection of anomalies in the meridional shear.

In order to measure localized variations of the flow and shear as functions of time, the residual differences between the fit values for each tile and the corresponding average for its Stonyhurst location are recorded. For each "extended" Carrington longitude and latitude (including the associated Carrington rotation, and running backward or forward in time as appropriate), the residual values for all samples are averaged together, weighted by the cosine of the Stonyhurst longitude at the time of the sample. These represent all the useful observations at the particular Carrington location, of which there are typically \(\sim\)30 for latitudes up to \(\pm 40^{\circ}\) and at least 17 for latitudes up to \(\pm 65^{\circ}\).

## 3 Results

Plotting the residual values of the presumed radial shear of the flow components at each location and time as synoptic maps, two features can be noted (see Fig. 2). One is that in the vicinity of active regions, there is generally convergent shear in both the zonal and meridional components of the flow. This is consistent with both surface and helioseismic determinations that show inflows at the surface and outflows at depth around sunspots (Haber et al., 2003, 2004; Kosovichev & Duvall, 2006, etc.), and lends support to the identification of our measurements with a physical shear. A more striking and wholly unexpected feature manifests itself however on a larger spatial scale: there are periods, ranging from a few days to a few months, when the residual zonal shear at all or nearly all latitudes is either positive or negative, with rather sharp boundaries between them. The amplitude of this oscillatory behavior, which has persisted throughout the twelve years of the SDO mission to date, is of the order of about \(\pm 5\) m/s/Mm in the \(f\)-mode analysis. Similar analyses for the low-order acoustic modes \(p1\)-\(3\) show an identical pattern, although the amplitude for the \(p1\)-mode results is somewhat lower than for the others.

Figure 1: (Top) Annual mean values of the inferred "shear" in the zonal and meridional components of the flow, \(\langle dU_{x}/dr\rangle\) (left) and \(\langle dU_{y}/dr\rangle\) (right) respectively at each \(5^{\circ}\) analysis target location, as determined from fitting of the \(f\)-mode flow parameters between radial turning point values of 0.9950 and 0.9995. (Bottom) Differences in the mean values \(\langle dU_{x}/dr\rangle\) (left) and \(\langle dU_{y}/dr\rangle\) (right) during the 4 months when \(B_{0}>3^{\circ}.625\) (above) and when \(B_{0}<-3^{\circ}.625\) (below) compared with the annual means. Note the different color scale ranges, which are both in units of m/sec/Mm.
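The Carrington-grid averaging described above is a simple weighted mean; a schematic version (ours, with placeholder inputs) for one extended Carrington location:

```python
import numpy as np

# Combine all residual shear samples mapping to one (extended) Carrington
# location, weighting each sample by the cosine of its Stonyhurst longitude
# at the time it was observed.  The arrays below are placeholders.

residuals = np.array([1.2, -0.4, 0.9, 0.3])            # dU/dr residuals (m/s/Mm)
stony_lon_deg = np.array([-60.0, -20.0, 15.0, 55.0])   # Stonyhurst longitudes (deg)

w = np.cos(np.deg2rad(stony_lon_deg))
carrington_residual = np.sum(w * residuals) / np.sum(w)
print(f"weighted residual: {carrington_residual:+.2f} m/s/Mm")
```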
Similar patterns exhibit themselves in the meridional shear anomalies as well, although these are superimposed on a strong annual variation that is clearly associated with the projected tilt of the solar rotation axis, the \(B_{0}\) angle. That this should be so is not surprising, as resolution near the limb is very much poorer in the radial than the tangential direction due to foreshortening. At high latitudes, the meridional component is nearly radial, while the zonal component is roughly tangential. (The opposite obtains at low latitudes and extreme longitudes east and west, with the zonal flow and shear measurements being poorer; this is the reason for the weighting by the cosine of the contributing Stonyhurst longitude.) Because the variation of the mean \(\langle dU_{y}/dr\rangle\) over the field is predominantly north-south, it is much more likely to be affected by variation in the \(B_{0}\) angle with time of year than the \(\langle dU_{x}/dr\rangle\) variation, which is basically east-west. This is clearly illustrated in the lower panel of Fig. 1, in which the means over only the parts of a year when the \(B_{0}\) angle is large are compared with the mean for the whole year. At such times these differences contribute significantly to the local anomalies, which are of necessity measured with respect to the full-year means at each latitude. Despite this effect, the short-period oscillations continue to be clearly visible, as shown in the middle panel of Fig. 4. Furthermore, after removal of the annual variations, which are well fit by \(dU_{y}/dr=0.5B_{0}\), with \(dU_{y}/dr\) in m/s/Mm and \(B_{0}\) in deg, the residual amplitude of the oscillations in the meridional shear is approximately the same as that of the zonal shear, as shown in the bottom panel, although this amplitude has been more or less steadily increasing since the beginning of the mission, when the oscillations were scarcely distinguishable.

A puzzling feature of the observed oscillations is a nearly (but not quite) annual pattern in their recurrence. It is evident for the zonal shear anomalies, which are not significantly affected by variation of the \(B_{0}\) angle. This can be clearly seen in Fig. 3. There are typically sets of multiple peaks and troughs recurring on a time scale of about a solar rotation in spring and autumn (when \(|B_{0}|\) is large), while there are also extended periods around midwinter and midsummer (the times of perihelion and aphelion, when the mean observer radial velocity is at a minimum) when the anomalous shear remains of the same sign. (Note that the SDO mission began around May 1, 2010, so the first such period of suppressed oscillation was centered around CR 2104/2105. For reference, the Jan 1 times are marked in the lower panel.) Note however that the zonal shear anomalies during these extended times were of different signs: with one exception they were negative at the beginnings of years 2011-2018, and positive in 2013 and 2019 and thereafter. Comparable patterns can be seen in the meridional shear anomalies after removal of the annual \(B_{0}\) variation: there are periods of rapid oscillations around spring and autumn, and other times when these oscillations are suppressed. To some extent this effect is masked by yet another puzzling feature, a gradual but steady increase in the amplitudes of the meridional shear anomalies over the course of the mission, while those of the zonal shear anomalies remain constant.
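The \(B_{0}\) detrending quoted above amounts to a one-parameter least-squares fit; a schematic version of ours, on synthetic inputs:

```python
import numpy as np

# Remove the annual B0 signal from a zonally averaged meridional shear
# anomaly time series.  The text quotes the fit dUy/dr ~ 0.5 * B0; here the
# coefficient is re-estimated by least squares from synthetic data.

t = np.arange(0.0, 4.0, 5.0 / 365.25)            # time in years, ~5-day sampling
B0 = 7.25 * np.sin(2.0 * np.pi * (t - 0.43))     # annual B0 variation (deg), schematic
rng = np.random.default_rng(0)
dUy_dr = 0.5 * B0 + 0.3 * rng.standard_normal(t.size)   # synthetic anomaly (m/s/Mm)

c = np.sum(B0 * dUy_dr) / np.sum(B0 * B0)        # least-squares slope through origin
detrended = dUy_dr - c * B0
print(f"fitted coefficient: {c:.2f} m/s/Mm/deg") # recovers ~0.5
```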
That the recurrence pattern of the short-term shear oscillations is not quite annual is vividly shown in Fig. 5. From that plot it appears that the date of the spring reversals has been advancing, while that of the autumn ones has been retreating. The change in behavior of the periods when reversals were suppressed is also clear from that figure. Furthermore, the variation in time of the annual phasing of these oscillations is the same **for both the zonal and meridional components**, as shown in Fig. 5, although the discontinuity around the time of solar minimum is much less pronounced **for the meridional component**, if indeed it is present at all. Instead, there seems to have been a gradual increase in the amplitude of these oscillations over the whole observing interval.

Figure 2: Synoptic maps (in a plate carrée projection with Carrington longitude increasing from left to right and latitude from bottom to top) of the residual values of the zonal shear parameter \(dU_{x}/dr\) (above) and the meridional one \(dU_{y}/dr\) (below) for four recent Carrington rotations (2265–2268, Dec. 2022 – Mar. 2023, right to left in order to align Carrington longitudes, which increase from left to right within each map). The color scale saturation range for both maps is \(\pm 10\) m/sec/Mm, with positive (westward and northward) values red. A number of active region sites are conspicuous, including for example those of AR 13190 at 120-15 in CR 2266 and AR 13234 at 345+25 in CR 2268.

## 4 Discussion

Association of changes in the near-surface dynamics with the solar cycle, identified as the torsional oscillation, has been based on azimuthally-averaged measurements of the latitude dependence of the zonal velocity (Howard & Labonte, 1980; Kosovichev & Schou, 1997); likewise for the meridional circulation (Duvall, 1979; Labonte & Howard, 1982; Giles et al., 1997). The distribution of magnetic activity over the photosphere is, however, far from uniform, and it is tempting to associate the sectoral patterns observed here with the development of activity nests and active longitudes (Castenmiller et al., 1986; Balthasar & Schuessler, 1983).

Figure 3: Top: Synoptic maps of the zonal shear inferred from \(f\)-mode binning at each Carrington location for the first eleven years of the SDO mission. Because the Carrington longitudes decrease with time, the plotted rotations increase from right to left within each row, which corresponds approximately to a calendar year. The color scale is the same as for Fig. 2. Bottom: Time series of 20-sample smoothed averages of the inferred zonal shear at each Carrington longitude, averaged over all latitudes between \(\pm 40^{\circ}\). The dashed vertical green lines mark the times of the beginning of each calendar year, beginning with 2011.

Figure 4: Similar to Fig. 3, for the meridional shear. The two lower plots are of the time series of the zonally-averaged shear anomalies before (above) and after (below) detrending by a fit for linear dependence on the \(B_{0}\) angle. The dashed blue and red lines in the plot of raw values mark the times of maximum and minimum \(B_{0}\) angle respectively, while the dashed green lines in the plot of detrended values mark the beginnings of calendar years, as in Fig. 3.
It is noteworthy that the zonal shear anomalies, although often exhibiting a north-south hemisphere imbalance in amplitude (clearly associated with the observational effects of the annual \(B_{0}\) variation), nevertheless generally extend over all or nearly all latitudes, well beyond the sunspot zones, and that they persist throughout the solar cycle with no obvious changes in their amplitude or frequency of occurrence. It should also be remarked, however, that (a) there are long-term variations in the time of year at which rapid oscillations on a timescale of a rotation are seen; and (b) the sign of the zonal shear anomaly, at least during the periods when it persists for multiple rotations (particularly around the time of perihelion), lasts for several years. There are, however, changes in 2013 and again in 2018. This last of course roughly coincides with the boundary between Solar Cycles 24 and 25, as does the change in phase of the times of observed rapid oscillations. However, it also coincides with the one time during the mission when the focus was adjusted by altering the target temperature of the front window, while the former coincides with the one time when the thermal control program was altered. The remainder of the oscillatory pattern, particularly the sudden alterations of the sign of the anomalies within a solar rotation, cannot be explained by any known instrumental or orbital systematics, however; hence we believe that they are solar in nature.

Although the results presented here are based on the fitting of the \(f\)-mode parameters for transverse flow and wave-number as functions of frequency, the results when fitting the same parameters for the accessible acoustic modes \(p\)1–\(p\)3, as remarked above, are broadly similar in the locations, timings, and sign of the oscillations, though differing somewhat in magnitude. Likewise, although these results are based on analysis of 5\({}^{\circ}\) tiles, they are also supported by analysis of larger tiles: synoptic maps similar to those of Figs. 3 and 4 based on fitting of 15\({}^{\circ}\) tiles exhibit the same large-scale patterns of shifts between positive and negative anomalies, and at the same times, as can be seen in Fig. 6. In those maps the localized anomalous shears around active regions are also quite pronounced. The same is true for mode parameters resulting from the alternate fitting method of the 15\({}^{\circ}\) spectra in which they and the mode frequencies are fit as functions of wave-number (**rdfitf**; Bogart et al., 2011; Haber et al., 2000), although the results are distinctly noisier. This may be because, with the choices of region sizes and tracking times in the HMI pipeline and fitting at steps of \(\Delta k\) vs. \(\Delta\nu\), that method under-samples the power spectral data, whereas **rdfitc** over-samples the spectra, resulting in substantially more data to which to fit means and derivatives. It suggests the possibility of exploring fittings of the higher-order modes accessible in those cases. In this case, the results obtained from fitting of the variation of the flow parameters with turning point depth may be compared with results of actual inversions, although inversions for shear very near the surface are difficult because of the lack of a sufficient number of high-degree modes. We have not detected any large-scale or coherent shear anomalies in the analysis of inversions of the mode parameters resulting from either fitting method applied to 15\({}^{\circ}\) spectra.
Finally, it should be noted that although there is a clear and simple dependence of the averaged meridional shear anomalies on the annually varying \(B_{0}\) angle, the (not-quite annual) variation in the appearance (or detection) of averaged zonal shear anomalies is much harder to explain. It may lie in the variation of the large SDO velocities, which are known to be associated with substantial diurnal variations in the observed quantities (Couvidat et al., 2016), and which vary somewhat with precession of the orbital nodes. Likewise, the apparent steady increase in the amplitude of the swings in anomalous meridional shear is puzzling. Whether it is due to a secular trend in the sensitivity of our detection method, or is also associated with the solar cycle, it is too soon to say. The fact that at different times of year the intervals between oscillations either advance or retreat suggests that with the present data set it is unlikely that they can be associated with either a regular prograde or retrograde motion of an azimuthal order \(m=1\) disturbance, at least in the Carrington frame. For these reasons, it is highly desirable to confirm these observations using data from other sources.

Figure 5: "Stack plots" showing the mean shear anomalies at all latitudes between \(\pm 40^{\circ}\) in the zonal direction (left) and the meridional direction (after detrending by the annual \(B_{0}\) dependence, right), as functions of time on the horizontal axis and day of the year on the vertical axis.

This work uses data from the Helioseismic and Magnetic Imager. HMI data are courtesy of NASA/SDO and the HMI science team. The data used in this article are publicly available from the Joint Science Operations Center at jsoc.stanford.edu. The fitted parameters of the ring diagram fits used for the current study are available from the corresponding author on reasonable request. This research was supported in part by NASA Contract NAS5-02139 to Stanford University. RH acknowledges the support of the UK Science and Technology Facilities Council (STFC) through grant ST/V000500/1.

_Facility:_ HMI
2302.02834
Surrogate uncertainty estimation for your time series forecasting black-box: learn when to trust
Machine learning models play a vital role in time series forecasting. These models, however, often overlook an important element: point uncertainty estimates. Incorporating these estimates is crucial for effective risk management, informed model selection, and decision-making. To address this issue, our research introduces a method for uncertainty estimation. We employ a surrogate Gaussian process regression model. It enhances any base regression model with reasonable uncertainty estimates. This approach stands out for its computational efficiency. It only necessitates training one supplementary surrogate and avoids any data-specific assumptions. Furthermore, this method requires only the presence of the base model as a black box and its respective training data. The effectiveness of our approach is supported by experimental results. Using various time-series forecasting data, we found that our surrogate model-based technique delivers significantly more accurate confidence intervals. These techniques outperform both bootstrap-based and built-in methods in a medium-data regime. This superiority holds across a range of base model types, including linear regression, ARIMA, gradient boosting, and a neural network.
Leonid Erlygin, Vladimir Zholobov, Valeriia Baklanova, Evgeny Sokolovskiy, Alexey Zaytsev
2023-02-06T14:52:56Z
http://arxiv.org/abs/2302.02834v2
# Uncertainty estimation for time series forecasting via Gaussian process regression surrogates

###### Abstract.

Machine learning models are widely used to solve real-world problems in science and industry. To build robust models, we should quantify the uncertainty of the model's predictions on new data. This study proposes a new method for uncertainty estimation based on a surrogate Gaussian process model. Our method can equip any base model with an accurate uncertainty estimate produced by a separate surrogate. Compared to other approaches, the estimate remains computationally efficient, requiring the training of only one additional model, and doesn't rely on data-specific assumptions. The only requirement is the availability of the base model as a black box, which is typical. Experiments on challenging time-series forecasting data show that surrogate model-based methods provide more accurate confidence intervals than bootstrap-based methods in both medium- and small-data regimes and for different families of base models, including linear regression, ARIMA, and gradient boosting.

uncertainty estimation, surrogate models, time-series, Gaussian processes, bootstrap
We assume that the base model accurately predicts the target value, but lacks an uncertainty estimate for its predictions. We want to equip it with the ability to efficiently estimate point confidence intervals or uncertainty estimates for its predictions. While the most natural approach is a variant of the bootstrap, it faces two challenges that are hard to handle without looking into a particular problem: dependencies in sequential time series data, and the inefficiency of the bootstrap, which requires training multiple models and using all of them during inference. The former problem can be mitigated by taking the dependency structure into account, though one needs to make correct assumptions about that structure [23]. The latter problem can be mitigated with e.g. the MC dropout approach [24], but this approximation reduces the quality of uncertainty estimation [41].

The alternative proposed in this paper is to train a _surrogate_ model on the same input data used to train the base model. If the problem requires representation learning, we can train a surrogate model that takes embeddings as an input, or otherwise adopt deep kernel learning [51]. A similar approach was used to provide uncertainty estimates for image classification problems [27]: there, a Gaussian process classifier was trained on top of hidden image representations computed by a CNN. We also select Gaussian process regression as the functional class for the surrogate model, as it provides reliable uncertainty estimates [20] and is straightforward to train [50].

After a surrogate model is trained, we construct a combined model, which uses the base model to predict the target value and the surrogate model to estimate the uncertainty of that prediction. To make the uncertainty estimation reliable, we design a loss function that allows the surrogate model to imitate the base model predictions, while keeping the method computationally efficient and avoiding contamination of the training sample with points we are uncertain about.

We compare the proposed method for uncertainty estimation with bootstrap ensemble methods and other relevant methods [40]. Our evidence includes experiments with different base models: linear models, ARIMAs, and gradient boosting [7]. Most of these methods have their own intrinsic ways of estimating uncertainty, see e.g. [42] for ARIMA and [8, 30] for gradient boosting.
The experiments show that the uncertainty estimates produced with our surrogate approach are more accurate than predictions from even the built-in methods designed specifically for these models. In this paper, we focus on the time-series forecasting problem, as one of the most challenging problems, one that requires close attention to the dependence structure of the data while offering convenient tools for comparing performance over diverse regression problems. The example of our uncertainty estimation in Figure 1 demonstrates that the surrogate uncertainty estimates don't suffer from the overconfidence typical of other methods. Moreover, Table 1 shows that this evidence is not anecdotal: it holds for a wide range of datasets and types of base models. Our approach to the construction of a surrogate model outperforms the basic uncertainty estimates for the considered classes of models, the best bootstrap-based approach we found, and naive training of a surrogate uncertainty estimator.

Our claims are the following:

* The proposed surrogate uncertainty estimation for a black-box model is natural and easy to implement. The corresponding loss function encapsulates the primal requirements for uncertainty estimators, while keeping the pipeline simple.
* Additional computational costs during training and inference are small. We prove this for the used Gaussian process regression surrogate.
* For the time series forecasting problem, the quality of surrogate uncertainty estimates is better than the performance of model-specific approaches and of bootstrap approaches that take the structure of the data into account. This observation holds for different base black boxes that require uncertainty estimation.

Figure 1. Example of uncertainty estimation with our method and other methods: the left plot shows obtained predictions and corresponding uncertainty estimates, the right plot provides insight on the quality of the uncertainty estimation. The description of dataset A is available below.

We conduct experiments for the time series forecasting problem, while we don't use any specific properties of this problem. It is likely that similar results hold for a more general class of regression problems, though we don't provide evidence for this in this paper.

## 2. Related Work

_Uncertainty types._ The uncertainty of a value is understood as a characteristic that describes a certain allowable spread of its values. It arises due to the inaccuracy of measuring instruments, the inconsistency of the adopted restrictions with the real data and the processes behind them, as well as the approximations contained in the model itself. There are three types of uncertainty in the machine learning literature: aleatoric uncertainty, epistemic uncertainty, and a combined type (Steintein and Tschur, 2017).

Aleatoric uncertainty (Aleatoric, 1977) is related to the probabilistic nature of the data and the impossibility of overcoming it. The simplest example is an error in the data received by a measuring device with a given error. We may say that such a scatter occurs by chance and may not be eliminated. On the other hand, we may determine its characteristics using, for example, methods for building a model with inhomogeneous heteroscedastic noise.

Epistemic uncertainty (Aleatoric, 1977) is related to the limitations of the model used for forecasting. It arises due to the inaccuracy of the approximations embedded in the model, or as a result of applying the model to new data that differ from those used in its construction.
Such uncertainty may be reduced, for example, by improving the model or by using a more appropriate data set to train it.

_Ensemble approach for uncertainty quantification._ One approach to obtaining uncertainty estimates is the construction of an ensemble of similar models that differ in some nuances. The spread of the different models' predictions then provides the uncertainty estimate (Steintein and Tschur, 2017; Steintein and Tschur, 2017). For example, an estimate of the forecast variance at a point may be the empirical variance of the forecast vector produced by the ensemble of models. The most well-known approach to constructing an ensemble of models is the bootstrap (Stein and Tschur, 2017). During bootstrapping, objects for training are sampled from the training set with repetitions. The resulting sample is used to train the models of the ensemble. With a reasonable choice of the number of models in the ensemble, this method allows one to obtain fairly accurate statistical estimates of the data. However, such a procedure, in its basic form, treats the data as a set of independent objects, whereas here we consider sequential data with a temporal structure. There are sampling methods that extend the basic version of the bootstrap, designed specifically for working with time series and sequential data models. In particular, a block bootstrap (Stein and Tschur, 2017; Stein and Tschur, 2017) or auto-regressive data sampling (Stein and Tschur, 2017) is used.

Recent work considers obtaining uncertainty estimates using ensembles of deep models (Stein and Tschur, 2017). Due to the significant number of parameters in neural networks, the slightest changes in the initialization of the model before training lead to changes in the trained model. Moreover, estimates of the mean and scatter for each predicted point may be obtained. However, for classical machine learning models with a small number of parameters applied to a small amount of data, this approach is poorly applicable, since it is more difficult to achieve a variety of outputs.

_Gaussian process approach for uncertainty quantification._ Sometimes the test set has a different distribution than the train set. This problem is known as the out-of-distribution (OOD) problem. Therefore, a model is needed that can detect this and, in classification, output a uniform distribution over classes. The Gaussian Process Regression (GPR) (Stein and Tschur, 2017) model has this property. It works well even in the case of a misspecified model (Stein and Tschur, 2017). It is a well-known fact that the GPR model was introduced as a fully probabilistic substitute for the multilayer perceptron (MLP): as pointed out in (Stein and Tschur, 2017), a GPR is an MLP with infinitely many units in the hidden layer. Traditional GPR models have been extended to more expressive variants, for example, to Deep Gaussian Processes (Beng et al., 2017). There are quite a few studies of GPR properties, for instance, for the misspecified problem statement (Stein and Tschur, 2017). In (Stein and Tschur, 2017), the exact expression for the interpolation error in the misspecified case is obtained for a stationary Gaussian process, using an infinite-grid design of experiments; this setup is adequate, as it does not significantly affect the results (Stein and Tschur, 2017). In addition, there are some applied examples where GPR is considered.
For example, GPR has been applied to time-series forecasting (Stein and Tschur, 2017). Specifically, the article (Stein and Tschur, 2017) shows that GPR can be applied to time-series forecasting even in a large-data scenario; in that case, so-called sparse Gaussian process regression is adopted for e.g. multiple-step-ahead forecasting. However, there are some issues in high-dimensional problems: it is crucial to extract features or reduce the dimension. To address this, a simple solution was proposed (Stein and Tschur, 2017), applying spectral normalization to the weights in each layer (Stein and Tschur, 2017).

_Surrogate models._ The use of an ensemble of models is justified from a theoretical point of view, but there are limitations that prevent it from being a fully universal way of estimating uncertainty. One is the ambiguity in choosing a sampling procedure for constructing the ensemble. Moreover, this type of method is computationally expensive: it is necessary to build a large number of models, which is not always possible, both for classical machine learning models and for less computationally efficient deep learning models. Therefore, an alternative approach based on surrogate modelling is used. A surrogate model or meta-model \(\tilde{f}(x)\) is created for a model \(\hat{f}(x)\). Due to the procedure for constructing such a model, we assume \(\tilde{f}(x)\approx\hat{f}(x)\) (Stein and Tschur, 2017). Given that GPR offers both adequate predictive quality and uncertainty estimation, it seems natural to use regression based on Gaussian processes as a surrogate model for the original one. A similar approach was used to improve active learning (Stein and Tschur, 2017).

## 3. Methods

In this section, we pose a formal uncertainty estimation problem in Section 3.1 and introduce our methods in Section 3.2. For the sake of comparison, we provide the ideas behind bootstrap approaches in Section 3.5.

### Problem formulation

Let us have a training dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{x}_{i}\) is an input data sample from the domain \(\mathcal{X}\subset\mathbb{R}^{d}\), and \(y_{i}\) is a corresponding target from the domain \(\mathcal{Y}\subset\mathbb{R}\). We assume that the training dataset was sampled from a joint distribution of inputs and targets \(p(\mathbf{x},y)\). We also make standard Bayesian assumptions about the data generation process: first, parameters \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\) of some random function \(f_{\boldsymbol{\theta}}:\mathcal{X}\rightarrow\mathcal{Y}\) are sampled; then \(y\) is sampled from the conditional distribution \(p(y|\mathbf{x},\boldsymbol{\theta})\). For a regression problem, this conditional distribution is usually assumed to be Gaussian: \(p(y|\mathbf{x},\boldsymbol{\theta})=\mathcal{N}(y;f_{\boldsymbol{\theta}}(\mathbf{x}),\sigma_{\boldsymbol{\theta}}^{2}(\mathbf{x}))\). With these assumptions, the predictive distribution can be written as

\[p(y|\mathbf{x},\mathcal{D})=\int_{\boldsymbol{\theta}}p(y|\mathbf{x},\boldsymbol{\theta})p(\boldsymbol{\theta}|\mathcal{D})d\boldsymbol{\theta}, \tag{1}\]

where \(p(y|\mathbf{x},\boldsymbol{\theta})\) is the target likelihood given the parameters of the function \(f_{\boldsymbol{\theta}}\) and the input point \(\mathbf{x}\), and \(p(\boldsymbol{\theta}|\mathcal{D})\) is the posterior distribution of the parameters \(\boldsymbol{\theta}\).
In practice, the predictive distribution is rarely tractable, so many methods use point estimates: \(p(y|\mathbf{x},\mathcal{D})\approx p(y|\mathbf{x},\boldsymbol{\theta})\), where \(\boldsymbol{\theta}\) can be e.g. a maximum likelihood estimate, substituting \(p(\boldsymbol{\theta}|\mathcal{D})\) with a delta-function. In this case, we treat the mean value of this distribution \(\hat{f}(\mathbf{x})\) as the prediction of the model.

We can formulate our problem in two ways. The first goal is an accurate estimation of the variance \(\hat{\sigma}^{2}(\mathbf{x})\) of the distribution \(p(y|\mathbf{x},\mathcal{D})\) at a point \(\mathbf{x}\), quantifying the uncertainty about the model prediction. The second goal is the estimation of the shortest confidence interval of significance level \(\alpha\), such that the true value falls into this interval with probability \(1-\alpha\), and the interval is the shortest among all such intervals. If \(p(y|\mathbf{x},\boldsymbol{\theta})\) is Gaussian, the two formulations coincide, as the confidence interval for the prediction can be written as

\[CI_{\alpha}=[\hat{f}(\mathbf{x})-z_{\alpha/2}\hat{\sigma}(\mathbf{x}),\hat{f}(\mathbf{x})+z_{\alpha/2}\hat{\sigma}(\mathbf{x})], \tag{2}\]

where \(z_{\alpha/2}\) is the upper \(\alpha/2\) critical value of the standard normal distribution.

If we have a probabilistic model, these problems admit reasonable solutions. However, in many cases we have only a black-box deterministic _base_ model \(\hat{f}(\mathbf{x})\). So, our final goal is to equip a deterministic regression model with uncertainty estimation. In the following sections, we propose a surrogate modelling approach, which can be used to estimate the uncertainty of a deterministic base model. Later, we separately discuss an ensemble-based model.

### Surrogate based on Gaussian process regression

We present a universal and numerically efficient approach that requires no assumptions about the model and can equip any black-box model with an uncertainty estimate. Given a deterministic _base_ model \(\hat{f}(\mathbf{x})\), we introduce another, _surrogate_, model \(\tilde{f}(\mathbf{x})\). We select \(\tilde{f}(\mathbf{x})\) such that it directly models the target distribution \(p(y|\mathbf{x},\mathcal{D})\) while mimicking the black-box model predictions from \(\hat{f}(\mathbf{x})\). A natural choice for such a probabilistic model is Gaussian process regression (GPR) (Srivastava et al., 2015), as it produces a Gaussian distribution \(p(y|\mathbf{x},\boldsymbol{\theta})\). Intuitively, the surrogate model trained on the same dataset approximates the base model, \(\tilde{f}\approx\hat{f}\), which itself approximates the underlying distribution, \(\hat{f}(\mathbf{x})\approx\mathbb{E}_{p(y|\mathbf{x},\mathcal{D})}y\), and thus we expect it to have adequate uncertainty estimates. We note that if the input dimension is sufficiently low, we can train the GP on the initial input data. If the data modality requires representation learning, we use embeddings from the base model to train a Gaussian process with a deep representation-based kernel.

### Matching surrogate model training

Using the approach described above as-is to obtain a surrogate model is naive: one might expect that if the training sample is the same, the trained surrogate would match the base model. Numerous pieces of evidence suggest that this is not true, and we can get different models, even if we use the same dataset and the same class of models.
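To make the construction of Section 3.2 concrete before refining it, here is a minimal sketch of the combined model at inference time: a black-box base model supplies \(\hat{f}(\mathbf{x})\), a surrogate GPR supplies \(\hat{\sigma}(\mathbf{x})\), and formula (2) turns the pair into an interval. The scikit-learn models are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

base = LinearRegression().fit(X, y)                          # deterministic base model
surrogate = GaussianProcessRegressor(alpha=1e-2).fit(X, y)   # Surr I-style surrogate

def confidence_interval(x, alpha=0.05):
    """Interval of significance level alpha around the base prediction, eq. (2)."""
    f_hat = base.predict(x)                           # point forecast from the base
    _, sigma = surrogate.predict(x, return_std=True)  # sigma from the surrogate
    z = norm.ppf(1 - alpha / 2)
    return f_hat - z * sigma, f_hat + z * sigma

lo, hi = confidence_interval(np.array([[0.5], [5.0]]))
print(lo, hi)   # the out-of-range point x = 5 gets a much wider interval
```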
On the other hand, the base model is available as a black box, so one can query it at the points of interest, improving the surrogate model by showing it more relevant training data. For Gaussian process regression, we can use this information efficiently. Moreover, the uncertainty estimate will have a natural kernel-style behaviour: the uncertainty increases as we go away from the points of the initial sample used to train the base model.

We assume that the surrogate model \(\tilde{f}(\mathbf{x})\) is a realization of a Gaussian process. More precisely, \(\tilde{f}(\mathbf{x})\sim GP(0,k(\mathbf{x},\mathbf{x}^{\prime})|\mathcal{D})\) for a covariance function \(k(\mathbf{x},\mathbf{x}^{\prime})\) from a parametric family, with zero mean, conditioned on the available data \(\mathcal{D}\). The conditional mean \(m(\mathbf{x})\) and variance \(\sigma^{2}(\mathbf{x})\) given the sample of observations \(\mathcal{D}\) at a new point \(\mathbf{x}\) have the following form:

\[m(\mathbf{x})=\mathbf{k}^{T}K^{-1}\mathbf{y}=\mathbf{k}^{T}\boldsymbol{\alpha},\quad\boldsymbol{\alpha}=K^{-1}\mathbf{y},\]
\[\sigma^{2}(\mathbf{x})=k(\mathbf{x},\mathbf{x})-\mathbf{k}^{T}K^{-1}\mathbf{k},\]

where \(\mathbf{k}=\{k(\mathbf{x},\mathbf{x}_{i})\}_{i=1}^{N}\) and \(K=\{k(\mathbf{x}_{i},\mathbf{x}_{j})\}_{i,j=1}^{N}\). To find the parameters of the covariance function, one maximizes the likelihood of the data given the covariance function. In our case, we have an additional requirement: to match the base model \(\hat{f}(\mathbf{x})\). So, the loss function is the sum of two terms:

\[L(\tilde{f},\hat{f},\mathcal{D},\mathcal{D}_{\hat{f}}^{\prime})=-(1-C)\log p(\mathbf{y}|X,\tilde{f})+C\sum_{i=1}^{L}(\tilde{f}(\mathbf{x}_{i}^{\prime})-\hat{f}(\mathbf{x}_{i}^{\prime}))^{2}, \tag{3}\]

where \(\log p(\mathbf{y}|X,\tilde{f})\) is the data likelihood and \(C\in[0,1]\) is the weight coefficient of the second term. The second term is the sum of squared differences between the surrogate model prediction \(\tilde{f}(\mathbf{x}_{i}^{\prime})\) and the base model \(\hat{f}(\mathbf{x}_{i}^{\prime})\) over a sample \(\mathcal{D}_{\hat{f}}^{\prime}=\{(\mathbf{x}_{i}^{\prime},\hat{f}(\mathbf{x}_{i}^{\prime}))\}_{i=1}^{L}\). In our experiments, inputs in \(\mathcal{D}_{\hat{f}}^{\prime}\) are selected uniformly at random over the domain of interest. Pseudo-code in PyTorch style for the proposed method, which we name _Surr Ind_, is available in Appendix B.

Figure 2. Example of uncertainty estimation for a two-dimensional input via a matching surrogate model. The variance estimate corresponds to the fill color at each point. At points from the initial training sample \(X\) the uncertainty is almost zero, while for points from the additional sample \(X^{\prime}\) it takes reasonable values, reflecting our absence of knowledge about the true function values at these locations.

In the terminology of sparse Gaussian process regression, the points from \(\mathcal{D}\) are _inducing_ points, as we condition our distribution on them. We use the additional points from \(\mathcal{D}_{\hat{f}}^{\prime}\) only to adjust the model parameters. So, we (1) keep the computational complexity low and (2) keep the uncertainty estimate growing as we move away from the initial training sample.
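In code, evaluating the loss (3) for fixed kernel parameters looks as follows; this is a minimal numpy sketch with an illustrative RBF kernel and assumed length-scale, not the paper's Appendix B pseudo-code, and in practice the covariance parameters would be optimized against this quantity.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF covariance matrix between row-stacked inputs A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def matching_loss(X, y, X_prime, f_hat_prime, C=0.7, noise=1e-4, ls=1.0):
    """Loss (3): -(1 - C) log p(y | X) + C * sum_i (f_tilde(x'_i) - f_hat(x'_i))^2."""
    N = len(X)
    K = rbf(X, X, ls) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))         # K^{-1} y
    nll = 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * N * np.log(2 * np.pi)
    f_tilde_prime = rbf(X_prime, X, ls) @ alpha                 # GP posterior mean at X'
    return (1 - C) * nll + C * np.sum((f_tilde_prime - f_hat_prime) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (30, 1)); y = np.sin(X[:, 0])            # training data D
Xp = rng.uniform(-2, 2, (60, 1))                                # additional inputs X'
print(matching_loss(X, y, Xp, f_hat_prime=np.sin(Xp[:, 0])))    # base model queried at X'
```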
Let us put these two statements formally.

Lemma 3.1 ().: _The computational complexity of evaluating the loss function (3) equals \(O(N^{3})+O(LN)\)._

Proof.: Using the formula for the likelihood of Gaussian process regression, we get

\[L(\tilde{f},\hat{f},\mathcal{D},\mathcal{D}^{\prime})=\frac{(1-C)}{2}\left(N\log 2\pi+\log\det|K|+\mathbf{y}^{T}\boldsymbol{\alpha}\right)+C\left(K_{X^{\prime}X}\boldsymbol{\alpha}-\hat{f}(X^{\prime})\right)^{T}\left(K_{X^{\prime}X}\boldsymbol{\alpha}-\hat{f}(X^{\prime})\right),\]

where \(K_{X^{\prime}X}=\{k(\mathbf{x}_{i}^{\prime},\mathbf{x}_{j})\}_{i=1,\,j=1}^{L,\,N}\) and \(\hat{f}(X^{\prime})=\{\hat{f}(\mathbf{x}_{1}^{\prime}),\ldots,\hat{f}(\mathbf{x}_{L}^{\prime})\}\). So, we need to evaluate two terms: the likelihood and the squared loss. To calculate the likelihood we need \(O(N^{3})\) operations, as we need the inverse and the determinant of the covariance matrix of size \(N\times N\). Note that once we have the inverse, we calculate \(\boldsymbol{\alpha}=K^{-1}\mathbf{y}\) in \(O(N^{2})\). To get the predictions at the \(L\) additional points, we need \(O(LN)\) further operations. Summing both complexities, we obtain the desired \(O(N^{3})+O(LN)\).

So, as long as we keep \(L\) of an order of magnitude similar to \(N\), little additional computational power is required. Moreover, we can afford \(L\) to be of order \(N^{2}\), which is impossible with the naive baselines above. The second statement, about the behaviour of the uncertainty far from the data, is also natural: if we move \(\mathbf{x}\) to infinity, then \(\sigma^{2}(\mathbf{x})\) tends to \(k(\mathbf{x},\mathbf{x})\) for any reasonable covariance function, whatever \(\mathcal{D}_{\hat{f}}^{\prime}\) was used, as the components of the covariance vector \(k(\mathbf{x},\mathbf{x}^{\prime})\) go to zero. An example of the application of our approach is presented in Figure 2.

_Surrogate-model-aware inference._ After the surrogate model is trained, we use the following combined model: the point target value predictions come from the base model \(\hat{f}(\mathbf{x})\), and the variance \(\hat{\sigma}^{2}(\mathbf{x})\) from the surrogate model. We assume that the distribution of the output is Gaussian, and can use formula (2) to produce confidence intervals, if required. The added computational complexity of our approach during inference is the evaluation cost of the surrogate model variance.

### Baseline surrogate model training

We can also consider other designs of experiments for training the surrogate model \(\tilde{f}\), keeping the log-likelihood loss function but changing the data used for training. Let us consider four natural options (see the construction sketch below):

1. (Surr I) \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\)
2. (Surr II) \(\mathcal{D}_{\hat{f}}=\{(\mathbf{x}_{i},\hat{f}(\mathbf{x}_{i}))\}_{i=1}^{N}\)
3. (Surr III) \(\mathcal{D}_{\hat{f}}\cup\mathcal{D}_{\hat{f}}^{\prime}=\{(\mathbf{x}_{i},\hat{f}(\mathbf{x}_{i}))\}_{i=1}^{N}\cup\{(\mathbf{x}_{i}^{\prime},\hat{f}(\mathbf{x}_{i}^{\prime}))\}_{i=1}^{L}\)
4. (Surr IV) \(\mathcal{D}\cup\mathcal{D}_{\hat{f}}^{\prime}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\cup\{(\mathbf{x}_{i}^{\prime},\hat{f}(\mathbf{x}_{i}^{\prime}))\}_{i=1}^{L}\)

The first dataset is the original training dataset. In the second case, we approximate the base model directly, though we may lose the information about aleatoric uncertainty present in the initial dataset. In the third and fourth cases, we append additional points from the input domain \(\mathcal{X}\) to the training dataset, to better approximate the base model; in the fourth case, the initial targets of the original dataset are kept.
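A minimal sketch of assembling these four training sets by querying the black box follows; the helper and its names are illustrative, not the paper's code.

```python
import numpy as np

def baseline_datasets(X, y, base_predict, X_prime):
    """Return the Surr I-IV training sets as (inputs, targets) pairs."""
    f_X = base_predict(X)           # black box queried on the original inputs
    f_Xp = base_predict(X_prime)    # ... and on the L additional inputs
    return {
        "Surr I":   (X, y),
        "Surr II":  (X, f_X),
        "Surr III": (np.vstack([X, X_prime]), np.concatenate([f_X, f_Xp])),
        "Surr IV":  (np.vstack([X, X_prime]), np.concatenate([y, f_Xp])),
    }

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 2)); y = X.sum(1)
X_prime = rng.uniform(-1, 1, (20, 2))            # |D'_f| = 20, as in Section 4.5
data = baseline_datasets(X, y, lambda A: A.sum(1), X_prime)
print({name: inputs.shape for name, (inputs, _) in data.items()})
```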
These options are natural baselines with strong empirical evidence behind them. In experiments with dataset types (2), (3) and (4), we add noise variances at each point for each target \(\hat{f}(\mathbf{x}_{i})\), corresponding to the inaccuracy of the base model predictions. For points from \(\mathcal{D}\), we use small noise variance values. For points from the new datasets, we use the following single value:

\[\hat{\sigma}^{2}=\frac{1}{N}\sum_{i=1}^{N}\left(\left(\hat{f}(\mathbf{x}_{i})-y_{i}\right)-\mu_{\sigma}\right)^{2},\quad\mu_{\sigma}=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{f}(\mathbf{x}_{i})-y_{i}\right).\]

This variance is assigned to each point, as almost all popular implementations of GPR allow passing per-point noise variances as an input during training.

### Bootstrap-based ensemble methods

In this section, we briefly describe bootstrap, or Monte-Carlo, approaches to uncertainty estimation for time series. If we can sample from \(p(\boldsymbol{\theta}|\mathcal{D})\), then using samples \(\boldsymbol{\theta}_{i}\) from this distribution, the Monte-Carlo approximation holds:

\[p(y|\mathbf{x},\mathcal{D})\approx\frac{1}{k}\sum_{i=1}^{k}p(y|\mathbf{x},\boldsymbol{\theta}_{i}).\]

Following this direction, we get an estimate of the variance as

\[\hat{\sigma}^{2}(\mathbf{x})=\frac{1}{k-1}\sum_{i=1}^{k}(\hat{f}_{\boldsymbol{\theta}_{i}}(\mathbf{x})-\bar{f}(\mathbf{x}))^{2},\]

where \(\bar{f}(\mathbf{x})=\frac{1}{k}\sum_{i=1}^{k}\hat{f}_{\boldsymbol{\theta}_{i}}(\mathbf{x})\). The problem is that we can't sample from the distribution \(p(\boldsymbol{\theta}|\mathcal{D})\) to get an independent sample of \(\boldsymbol{\theta}_{i}\). The bootstrap algorithm proposes a variant of such sampling. The pairs \((\mathbf{x}_{i},y_{i})\) from the training sample \(\mathcal{D}\) come from an unknown distribution \(F\) whose parameters we want to estimate. From this sample, we form \(k\) subsamples \(\mathcal{D}_{i}\) of size \(N\) with replacement. The most general option involves the simultaneous sampling of pairs of feature vectors and targets \((\mathbf{x}_{i},y_{i})\); some observations \((\mathbf{x}_{i},y_{i})\) in the \(j\)-th subsample may be repeated or absent. An alternative is to sample the targets and feature vectors separately, for example, for the \(j\)-th subsample, shuffling the order of \(\mathbf{y}_{j}\) with a fixed matrix \(X\). In any case, using a resulting set \(\mathcal{D}_{i}\), we train a model \(\hat{f}_{i}(\mathbf{x})=\hat{f}_{\boldsymbol{\theta}_{i}}(\mathbf{x})\). Consolidating all the models, we obtain an ensemble of size \(k\).

The bootstrap procedure requires training \(k\) independent models, which is inefficient. Another challenge is the difference between the bootstrap distribution of parameters and the true posterior distribution \(p(\boldsymbol{\theta}|\mathcal{D})\). In practice, the problem is even more severe for time series forecasting. Several appropriate bootstrap procedures for time series try to take this issue into account.
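For reference, a minimal sketch of the naive variant just described: pairs are resampled i.i.d. with replacement, \(k\) models are trained, and the ensemble variance above is computed. A linear base model stands in for the black box; the time-series variants discussed next change only the resampling step.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def naive_bootstrap_variance(X, y, X_test, k=100, seed=0):
    """Ensemble variance of k models trained on i.i.d. resamples of (X, y)."""
    rng = np.random.default_rng(seed)
    preds = np.empty((k, len(X_test)))
    for i in range(k):
        idx = rng.integers(0, len(X), size=len(X))  # sample pairs with replacement
        model = LinearRegression().fit(X[idx], y[idx])
        preds[i] = model.predict(X_test)
    return preds.var(axis=0, ddof=1)                # 1/(k-1) * sum (f_i - f_bar)^2

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=80)
print(naive_bootstrap_variance(X, y, X[:5]))
```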
In this work, we use three additional types of time-series bootstrap to cover the most broadly used approaches: the Maximum Entropy-based bootstrap (MEB), the Stationary Block Bootstrap (SBB), and Bootstrapping Stationary Autoregressive Processes (BSAP). Let us briefly describe them and their assumptions.

SBB belongs to the family of block methods given in [21; 35; 36]. The main idea is to split the sequence into blocks and permute these blocks during each bootstrap iteration. If the time series is stationary, the procedure has provably good properties. By adding stochasticity to the splitting procedure, we ensure that SBB mimics the true posterior distribution.

BSAP originates from [23]. For this approach, the assumption must be fulfilled that the series is stationary and generated by an autoregressive (AR) model:

\[y_{i}=\beta_{1}y_{i-1}+\ldots+\beta_{p}y_{i-p}+\epsilon_{i},\quad i\in\mathbb{Z}.\]

The main idea is to fit an AR model and get its parameters \((\hat{\beta}_{1n},\ldots,\hat{\beta}_{pn})^{\mathsf{T}}\). After that, we obtain the residuals \(\tilde{\epsilon}_{i}\):

\[\tilde{\epsilon}_{i}=y_{i}-\hat{\beta}_{1n}y_{i-1}-\ldots-\hat{\beta}_{pn}y_{i-p}-\frac{1}{n-p}\sum_{k=p+1}^{n}\hat{\epsilon}_{k},\quad i=p+1,\ldots,n.\]

Afterwards, we sample \(n-p-1\) points uniformly, with repetitions, from the set \(\{\tilde{\epsilon}_{p+1},\ldots,\tilde{\epsilon}_{n}\}\). Then, using the sampled noises, we generate new points via the AR model:

\[y_{i}^{*}=\hat{\beta}_{1n}y_{i-1}^{*}+\ldots+\hat{\beta}_{pn}y_{i-p}^{*}+\epsilon_{i}^{*},\quad i\in\mathbb{Z}. \tag{4}\]

As most time series are non-stationary, we apply the Holt-Winters transformation [52] to obtain a stationary time series before the application of BSAP.

The Maximum Entropy-based bootstrap [47] follows a different idea. We sort all target values and remember the indexes of these targets in the original sample. Then, we sample values from this interval and round them, so that each sampled value corresponds to the closest one from the training sample; we then use the indexes from the sorting step to get a replacement for each value. For this approach, the ergodic theorem [46] is satisfied, which guarantees that the bootstrap values will be close to the values of the sample: points close to the points from the sample are more likely to appear. Because of this, the resulting subsamples will differ from the sample itself, but not by much. So we get a new set of samples and models for the ensemble, and will be able to estimate the uncertainty.
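A minimal numpy sketch of the BSAP resampling step described above: fit an AR(\(p\)) model by least squares, center the residuals, resample them with replacement, and regenerate a series via (4). This is a plain-numpy illustration under the stationarity assumption, not a full implementation with the Holt-Winters preprocessing.

```python
import numpy as np

def bsap_sample(y, p=2, seed=0):
    """One bootstrap replicate of a stationary AR(p)-like series y."""
    rng = np.random.default_rng(seed)
    n = len(y)
    # Least-squares AR(p) fit: y_i ~ beta_1 * y_{i-1} + ... + beta_p * y_{i-p}
    A = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
    beta = np.linalg.lstsq(A, y[p:], rcond=None)[0]
    eps = y[p:] - A @ beta
    eps -= eps.mean()                       # centered residuals eps_tilde
    y_star = list(y[:p])                    # keep the first p values as the seed
    for e in rng.choice(eps, size=n - p, replace=True):
        y_star.append(np.dot(beta, y_star[-1:-p - 1:-1]) + e)   # eq. (4)
    return np.asarray(y_star)

rng = np.random.default_rng(2)
y = np.zeros(300)
for i in range(2, 300):                     # simulate a stationary AR(2) series
    y[i] = 0.6 * y[i - 1] - 0.2 * y[i - 2] + rng.normal()
print(bsap_sample(y, p=2)[:10])
```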
## 4. Experiments

We structure the experiments section in the following way. Subsection 4.1 presents the time series forecasting benchmark used. Subsection 4.2 introduces the quality metrics used. Then we describe the main comparison of our approach with others for different types of base models. A subsection on the ablation study concludes this section. Not all experiments made it to the main text of the paper, so for more detailed experiments we refer an interested reader to Appendix C. In all tables below, best results are highlighted in **bold**; second best results are underlined.

### Datasets

_Time series forecasting data (FD) benchmark._ One of the largest benchmarks for predicting one-dimensional time series is the Monash time series forecasting archive (TSForecasting) [13]. It contains 26 publicly available time series datasets from different applied domains, with equal and variable lengths. The goal for each dataset is time series forecasting for a specific time horizon \(h\). The data cover nine diverse areas: tourism, banking, Internet, energy, transport, economy, sales, health, and nature. Some sets repeat in slightly different versions: the frequency of the time series considered in them changes (day, month, quarter or year), or missing values are included and excluded. Because of this, the total number of sample options reaches 50. We chose TSForecasting for the variety of the presented sequential data, which allows us to cover a large class of tasks.

In most cases, one sample contains a fairly large number of time series, 2600 on average. There are also six additional very long time series, two of which have a length of more than 7 million. The original paper [13] provides a more detailed description of each dataset. All the metadata for building the model (prediction horizon, context length, periodicity, etc.) follow the settings from this project. For some datasets in [13], there was no metadata; we excluded such datasets from consideration. Therefore, the number of datasets decreased to 19 datasets with one-dimensional data and one dataset with a single multi-dimensional time series. From each dataset with one-dimensional data, the first two time series were taken. For one-dimensional time series from TSForecasting with target values only, we use \(p\) lags as features: for the \(i\)-th point, its label \(y\) and features \(\mathbf{x}\) are

\[y=y_{i},\quad\mathbf{x}=(y_{i-1},\ldots,y_{i-p}).\]

For multi-dimensional time series, we use the provided features. We split time series into train and test data, using test data of size \(h+p\) for one-dimensional time series and \(h\) for multi-dimensional time series. Moreover, to speed up computations, only time series of length \(\max(2\cdot lag,200)\) are used.

_Financial data._ As concrete applied examples, we consider two datasets for time series prediction connected to the financial industry. We can't disclose the names of the targets, and name them Dataset A and Dataset B. The target values are modified to hide their true value and scale, while these changes don't affect the obtained metrics. Both datasets have clear out-of-distribution parts related to the changes caused by COVID-19 and other crises, so they can be used to evaluate uncertainty estimation for time series. Moreover, they pose another challenge related to the small number of points available for training.

### Evaluation metrics

To provide a multi-faceted evaluation, the results include values of various quality metrics for uncertainty quantification. We present values of the Root Mean Square Calibration Error (RMSCE) and the miscalibration area in the main text. Better calibrated models have smaller RMSCE and smaller miscalibration area. Additional metrics and a related discussion are presented in Appendix A.

We use critical difference (CD) diagrams to compare methods statistically. Following the recommendation in (Bordes and McAllester, 2017), we used the Friedman test (Friedman, 1992) to reject the null hypothesis. Afterwards, we consider the pairwise post-hoc analysis, where the average rank comparison is replaced by a Wilcoxon signed-rank test with Holm's alpha (5%) correction (Friedman, 1992). The results are visualized by a critical difference diagram, as proposed in (Bordes and McAllester, 2017). In the diagram, a thick horizontal line shows a group of models (a clique) that are not significantly different in terms of metric value. We use the framework from (Friedman, 1992) to build CD diagrams. There are CD diagrams for the regression metric (RMSE) and the calibration metric (Miscalibration Area).
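Both reported metrics compare the empirical coverage of the Gaussian predictive distribution against the expected coverage across quantile levels. The sketch below follows that standard calibration-curve construction; it is an assumption, since the exact definitions are deferred to Appendix A.

```python
import numpy as np
from scipy.stats import norm

def calibration_metrics(y_true, mu, sigma, n_levels=99):
    """Miscalibration area and RMSCE for Gaussian predictions N(mu, sigma^2)."""
    expected = np.linspace(0.01, 0.99, n_levels)
    pit = norm.cdf(y_true, loc=mu, scale=sigma)   # where y falls in its predictive CDF
    observed = np.array([(pit <= q).mean() for q in expected])
    gap = observed - expected
    miscal_area = np.abs(gap).mean()              # ~ area between curve and diagonal
    rmsce = np.sqrt((gap ** 2).mean())
    return miscal_area, rmsce

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)
print(calibration_metrics(y, np.zeros(1000), np.ones(1000)))        # well calibrated
print(calibration_metrics(y, np.zeros(1000), 0.5 * np.ones(1000)))  # overconfident
```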
### Considered uncertainty estimates

The complexity of the uncertainty estimation problem leads to diverse solutions specific to different approaches. Our question is whether we can outperform the intrinsic and model-agnostic methods for different classes of base models. We consider ordinary least squares (OLS), CatBoost (a realization of gradient boosting equipped with an uncertainty estimate), and ARIMA. For all methods, the hyperparameters are default.

For all base models, we consider different ways to construct uncertainty estimates. We start with built-in approaches for each base model. OLS and ARIMA models obtain uncertainty estimates based on a Bayesian assumption about the model parameters. The gradient boosting model uses auxiliary models that minimize the quantile loss. We compare the built-in approaches with alternatives that use the base model as a black box, or that can train it, in the case of bootstraps. Considered bootstraps include the naive bootstrap (Naive BS) and the advanced bootstrap types (MEB BS, SBB BS, BSAP BS) described in the relevant section above. In the ablation study, we also compare our approach _Surr Ind_ (surrogate model with initial training points as inducing points) with different types of surrogate models (Surr I, Surr II, Surr III and Surr IV). Additional details on the procedure used to construct the baseline surrogate models are presented in Appendix, Section C.4.

### Main results

Our goal here is to compare different methods, with a focus on the quality of uncertainty estimates. The main results for Forecasting data are in Table 2. Since the Forecasting data contain many time series, we count ranks and average them over all pairs of a base model and a corresponding uncertainty estimate. An even higher-level aggregation is provided in the teaser Table 1, with "Our surrogate" being shorthand for Surr Ind and "Best bootstrap" shorthand for BSAP BS. We note that in both the low-level and the rank-based comparison, in almost all cases our Surr Ind provides the best results. Surr I is also a strong baseline, which makes it a sound alternative in some cases. Further insights are provided by the critical difference diagrams (Bordes and McAllester, 2017) plotted in Figures 3 and 5. A more detailed comparison, with a discussion of the obtained critical difference diagrams, is given in Appendix, Section C.2.

We also provide metrics for the separate datasets A and B to show not only the ranks, but also the related differences in metrics. Tables 3 and 4 provide detailed uncertainty estimation quality metrics for Dataset A and Dataset B, respectively. Again, our Surr Ind provides superior uncertainty estimation metrics in many cases. So, the presented results show that Surr Ind and Surr I outperform other methods and are not significantly different from each other. In addition, some bootstrap methods are not significantly different from each other.

### Comparison of surrogate models

There are various strategies for training surrogate models in addition to our main line. In particular, we compare our approach with four baseline types of surrogate: Surr I, trained on the initial dataset \(\mathcal{D}\); Surr II, trained on \(\mathcal{D}_{\hat{f}}\); Surr III, trained on \(\mathcal{D}_{\hat{f}}\cup\mathcal{D}^{\prime}_{\hat{f}}\); and Surr IV, trained on \(\mathcal{D}\cup\mathcal{D}^{\prime}_{\hat{f}}\). In our experiments, \(|\mathcal{D}^{\prime}_{\hat{f}}|=20\), and noise is assumed in the targets for Surr II, Surr III and Surr IV, with variance computed by formula (3.4). Results on Forecasting Data are presented in Table 5; the critical difference diagram for the OLS base model is in Figure 8. Surr Ind outperforms the basic approach without additional points, Surr I, as well as the other approaches, with slightly worse results for the CatBoost base model.
### Selection of hyperparameters for Surr Ind

_C and L selection._ Our approach has two main hyperparameters: the weight \(C\) of the second term in the loss function and the number \(L\) of additional points in the sample \(\mathcal{D}^{\prime}_{\hat{f}}\). In this subsection, we investigate how they affect the performance of our models. We fix one hyperparameter and vary the other, producing the quality metric (miscalibration area) for each particular pair. We present results for Dataset A; results for the other datasets are similar. The results of the experiments are in Figure 6. We observe better metric values for moderate values of \(C\) and a high number of generated points \(L\), comparable to the initial sample size \(N\). On the other hand, after some point the performance stops improving. Thus, we recommend using \(C\) in the interval \([0.5,1]\) and \(L\approx N\).

_Choice of kernel for surrogate model._ Here we investigate which kernel is preferable for the GPR surrogate model. We train Surr Ind with linear and with RBF kernels and measure their uncertainty estimation quality on Forecasting Data. In Figure 7, the Miscalibration Area of the RBF and linear kernels is compared for different base models. One can see that the linear kernel performs slightly better with all base models, and particularly with OLS. Moreover, the linear kernel-based surrogate model has a better rank, computed on Forecasting Data, than the RBF kernel-based one. So, we select the linear kernel for the Surr Ind approach in the main study.

| Uncertainty estimate | Base model | RMSE | Miscal. Area | RMSCE | ENCE |
|---|---|---|---|---|---|
| Build-in | OLS | **0.028** | 0.35 | 0.384 | 1.801 |
| Naive BS | OLS | 0.029 | 0.363 | 0.406 | 2.922 |
| MEB BS | OLS | **0.028** | 0.408 | 0.452 | 2.507 |
| SBB BS | OLS | **0.028** | 0.363 | 0.399 | 1.913 |
| BSAP BS | OLS | 0.03 | 0.314 | 0.352 | 2.12 |
| Surr I | OLS | **0.028** | 0.058 | 0.067 | 0.416 |
| Surr Ind | OLS | **0.028** | **0.054** | **0.063** | **0.348** |
| Build-in | ARIMA | 0.063 | 0.374 | 0.435 | 0.326 |
| Naive BS | ARIMA | **0.029** | 0.41 | 0.451 | 1.49 |
| MEB BS | ARIMA | 0.039 | 0.272 | 0.295 | 1.057 |
| SBB BS | ARIMA | 0.065 | **0.085** | **0.094** | 0.462 |
| BSAP BS | ARIMA | 0.035 | 0.206 | 0.245 | 1.177 |
| Surr I | ARIMA | 0.063 | 0.37 | 0.407 | 1.561 |
| Surr Ind | ARIMA | 0.063 | 0.086 | 0.101 | **0.342** |
| Build-in | CatBoost | 0.029 | 0.455 | 0.355 | 286228 |
| Naive BS | CatBoost | 0.028 | 0.225 | 0.263 | 3.336 |
| MEB BS | CatBoost | 0.03 | 0.299 | 0.345 | 2.417 |
| SBB BS | CatBoost | 0.031 | 0.334 | 0.366 | 4.092 |
| BSAP BS | CatBoost | **0.026** | 0.178 | 0.215 | 1.954 |
| Surr I | CatBoost | 0.029 | **0.046** | **0.055** | **0.479** |
| Surr Ind | CatBoost | 0.029 | **0.045** | **0.055** | 0.486 |

Table 4. Quality metrics for regression and uncertainty estimation on Dataset B.

Figure 5. OLS base model comparison of RMSE on Forecasting data.

## 5. Discussion and Conclusions

We have shown that the surrogate-model-based approach produces accurate confidence interval predictions for different base models on different datasets. Calibration and regression metrics of surrogate models are comparable with classical bootstrap ensemble methods, and in most cases are better. This means that we can get accurate uncertainty estimates without the necessity of training many models using bootstrap and selecting a proper bootstrap technique among the existing ones.

The computational complexity of our method coincides with that of Gaussian process regression for the available training sample. From one point of view, this is a drawback, especially for a complex base model.
Figure 5. OLS base model comparison of RMSE on Forecasting data \begin{table} \begin{tabular}{l l l l l l} \hline \hline Uncertainty estimate & Base model & RMSE & Miscal. Area & RMSCE & ENCE \\ \hline Built-in & & **16362.093** & 0.167 & 0.184 & 0.824 \\ Naive BS & & 16062.77 & 0.204 & 0.229 & 1.189 \\ MEB BS & & 16512.081 & 0.32 & 0.358 & 2.213 \\ SBB BS & OLS & 17974.76 & 0.054 & 0.067 & 0.476 \\ BSAP BS & & 20228.63 & 0.224 & 0.276 & 2.396 \\ Surr I & & **16362.093** & **0.028** & **0.036** & **0.234** \\ Surr Ind & & **16362.093** & 0.084 & 0.089 & 0.384 \\ \hline Built-in & & 25534.59 & 0.269 & 0.313 & 0.659 \\ Naive BS & & **16295.585** & 0.144 & 0.16 & **0.61** \\ MEB BS & & 24127.47 & 0.45 & 0.513 & 4.075 \\ SBB BS & ARIMA & 32248.943 & 0.43 & 0.479 & 2.078 \\ BSAP BS & & **24193.585** & **0.099** & **0.115** & **0.12** \\ Surr I & & 25324.597 & 0.292 & 0.33 & 0.823 \\ Surr Ind & & 25534.539 & 0.353 & 0.34 & 1.317 \\ \hline Built-in & & 33524.62 & 0.489 & 0.563 & **0.087** \\ Naive BS & & 31155.19 & 0.456 & 0.535 & 14.201 \\ MEB BS & & 31919.12 & 0.466 & 0.538 & 9.344 \\ SBB BS & CatBoost & 336202.92 & 0.495 & 0.57 & 22.932 \\ BSAP BS & & **30293.746** & 0.469 & 0.54 & 13.866 \\ Surr I & & 33352.452 & **0.354** & **0.394** & 1.376 \\ Surr Ind & & 33522.452 & 0.373 & 0.413 & 1.437 \\ \hline \hline \end{tabular} \end{table} Table 3. Quality metrics for regression and uncertainty estimation on Dataset A Moreover, as most time series data have moderate sample sizes, this is not a burden in practice. On the other hand, the \(O(N^{3})\) cost can be prohibitive in some cases. We suggest using sparse Gaussian process regression (GPR) [37] or other large-data-set variants of GPR widely available in the modern literature [26] and used in industrial problems [3]. We would highlight the importance of selecting the set of inducing points in our approach: they are the points at which the model has zero uncertainty, and they can be chosen with this in mind. A detailed investigation of how to select a proper method, or a proper set of inducing points on which to condition the Gaussian process regression, is an interesting question for future research. We also limited ourselves to the time series forecasting problem. For time series forecasting, we have a ready benchmark with datasets of moderate size, which allows numerous experiments and the consideration of diverse base models. Moreover, Gaussian process regression works well for estimating the uncertainty of multiple-step-ahead forecasts [12], which makes it even more suitable. Other benchmarks like UCI [2] would also help to better understand the performance of the surrogate GPR, with our work being a good starting point. Another possible application is the still-unsolved problem of uncertainty estimation for deep learning, which arises in NLP problems [41], open-set face recognition [17], and OOD detection in computer vision [1]. We provide an example showing that our method works for InceptionTime, a state-of-the-art neural network for time series forecasting. Combining our ideas with large-scale GPs and deep kernel learning [34, 51], we should be able to fill this gap better. Last but not least are the limitations of GP models in terms of the types of uncertainty they model. Taking into account the uncertainty of parameter estimation for Gaussian process regression can help in some problems; see [48] for relevant ideas.
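Before concluding, it may help to make the surrogate idea concrete. The sketch below is a minimal illustration, not the paper's exact procedure (which additionally uses inducing points and the weighted loss discussed above): a Gaussian process with a linear (dot-product) kernel, as selected in the kernel study, is fitted to the predictions of a black-box base model, and its predictive standard deviation is read off as the uncertainty estimate. The toy data, the scikit-learn backend and all names are our own illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)

# Black-box base model (OLS here); the surrogate is fitted to its predictions,
# so the GP mean mimics the base model while the GP variance provides
# an uncertainty estimate around its forecasts.
base = LinearRegression().fit(X, y)

kernel = DotProduct() + WhiteKernel(noise_level=1e-2)  # linear kernel
surrogate = GaussianProcessRegressor(kernel=kernel).fit(X, base.predict(X))

mean, std = surrogate.predict(X[:5], return_std=True)
print(mean)  # surrogate point forecasts (close to the base model's)
print(std)   # per-point predictive uncertainty
```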
To sum up, we believe that the presented work will help practitioners construct uncertainty estimates for their models using a Gaussian process regression surrogate trained in the manner suggested in the paper. Some questions related to the presented method remain open; they are primarily of an engineering nature, yet attractive and worth exploring. ## Acknowledgments We thank Evgenya Romanenkova for helpful advice and a thorough reading of the paper.
2306.01612
Thrust force is tuned by the rigidity distribution in insect-inspired flapping wings
We study the aerodynamics of a flapping flexible wing with a two-vein pattern that mimics the elastic response of insect wings in a simplified manner. The experiments reveal a non-monotonic variation of the thrust force produced by the wings when the angle between the two veins is varied. An optimal configuration is consistently reached when the two veins are spaced at an angle of about 20 degrees. This value is in the range of what has been measured in the literature for several insect species. The deformation of the wings is monitored during the experiment using video recordings, which allows us to pinpoint the physical mechanism behind the non-monotonic behaviour of the force curve and the optimal distribution of the vein network in terms of propulsive force.
Roméo Antier, Benjamin Thiria, Ramiro Godoy-Diana
2023-06-02T15:21:00Z
http://arxiv.org/abs/2306.01612v2
# Thrust force is tuned by the rigidity distribution in insect-inspired flapping wings ###### Abstract We study the aerodynamics of a flapping flexible wing with a two-vein pattern that mimics the elastic response of insect wings in a simplified manner. The experiments reveal a non-monotonic variation of the thrust force produced by the wings when the angle between the two veins is varied. An optimal configuration is consistently reached when the two veins are spaced at an angle of about 20 degrees. This value is in the range of what has been measured in the literature for several insect species. The deformation of the wings is monitored during the experiment using video recordings, which allows us to pinpoint the physical mechanism behind the non-monotonic behaviour of the force curve and the optimal distribution of the vein network in terms of propulsive force. ## I Introduction Quoting Wootton [1]: _"In considering insect wings, whether for comparative illustration or aerodynamic analysis, some simplifications are inevitable. Two in particular are common: to regard the wing as essentially flat, and as effectively rigid. Neither is true, and the latter can be seriously misleading."_ And the same can be said for most flapping wings and fins, where the structural deformation that accompanies the back and forth motion is a fundamental element of the dynamical balance [2; 3]. In particular, the periodic stroke reversals of flapping wings and their associated cycle of acceleration and deceleration give rise to a rich variety of vortex structures that are crucial players in the unsteady aerodynamic mechanisms inherent to flapping flight--see e.g. [4; 5; 6; 7] for a review. Another noteworthy point is that these mechanisms are tuned with the deformation dynamics of the wings [8; 9], where specific features such as the passive wing pitch reversal observed in some insects [10] or the active camber control used by bats [11] are determinant in the cycle of aerodynamic force production. Several works [8; 12; 13] have addressed the problem of wing deformation (see e.g. [14] for a review) and models have usually decomposed the main deformation modes as a combination of spanwise and chordwise bending [15; 16; 17]. In the case of insects, a network of veins confers their wings an anisotropic rigidity [3; 18; 19], which governs the passive responses of the wings to aerodynamic, inertial and occasional impact forces [20; 21]. The structural function of veins is not straightforward, and it coexists with their other roles as transmission conduits of air and hemolymph, and as sensory elements [19]. However, a few main features are recurrent, such as the veins being thickest close to the wing root and tapering towards the tip and trailing edge of the wings [22; 23]. Another observation is that most insect wings present a zone near the leading edge stiffened by thick veins and relief (see e.g. [20]), with thinner veins elsewhere that let the wing membrane deform more easily during the flapping motion. A secondary stiffened axis oriented obliquely at an angle with the leading edge is also present in many insect wings. Interestingly, the angle between the leading edge and this oblique stiffened area is narrowly distributed, around \(15^{\circ}\) to \(30^{\circ}\) (see e.g. [18; 24] for dipteran wings); the stiffness of this zone is provided by the combined effect of veins and corrugation [24].
The details of the local venation are essential here, and it has been suggested that extant vein patterns have resulted from evolutionarily convergent vein fusions [25]. The goal of this paper is to examine the effect of different patterns of wing rigidity on the flapping wing aerodynamics by using a minimal model that mimics the elastic response of insect wings in a simplified manner. We use a two-vein pattern (see Fig. 1) with one main vein along the leading edge of the wing, and a secondary vein, also attached at the root of the wing, that extends obliquely at a specific angle with respect to the leading edge. The angle between the two veins is the main experimental parameter explored. The experiments reveal a non-monotonic variation of the average thrust force produced by the flapping wings, with a local optimum when the two veins are spaced at an angle of about \(20^{\circ}\), which is in the range of the typical angles observed in insects. An explanation of the physical mechanisms involved is proposed using observations of the instantaneous kinematics of the flapping wing deformation. ## II Experimental setup and methods ### Wings and flapping system A flapping system with two wings is used for the experiments. The wings are composed of a 3D-printed skeleton with two veins and a thin membrane. The model wing is a Zimmerman planform (see Fig. 1 C) [26; 27], which represents a simplified hummingbird or insect wing shape. The planform is composed of two quarter ellipses: the first has the wingspan as its semi-major axis and one quarter of the chord as its semi-minor axis, and the leading edge of the wing follows its curve. The second quarter ellipse is joined to the first along the span; its semi-major axis is again the wingspan, and its semi-minor axis is three quarters of the chord. The length of the wing is \(R=75\) mm, and the root chord measures \(c=25\) mm, the mean chord thus being \(c_{m}=19.6\) mm. The aspect ratio of the wing is \(\mathcal{R}=R/c_{m}=3.82\). The wing shape is cut on a polyethylene terephthalate (PET) film of thickness \(30\)\(\mu\)m. The cutting is done with a laser cutting machine (CO2 with infrared ray, Epilog Laser, type Helix). The Young's modulus of the membrane is \(4\) GPa. The membrane is supported by two reinforcements disposed respectively on the leading edge and along a direction making an angle \(\beta\) with the leading edge (see Fig. 1 C). The reinforcements play the role of veins. The material of the veins is polylactic acid (PLA) with a Young's modulus of \(4\pm 0.3\) GPa. In order to have a symmetric deformation, a vein skeleton is glued on each side of the membrane using a Teroson SB2444 rubber adhesive. This adhesive is very elastic and allows the membrane to slide between the veins without detaching. The veins are \(1\) mm wide. The thicknesses on each side of the membrane are \(480\)\(\mu\)m for the leading edge and \(240\)\(\mu\)m for the radial vein. The angle between the two veins, \(\beta\), varies between \(10^{\circ}\) and \(90^{\circ}\) in \(5^{\circ}\) steps. We also made a wing consisting of a membrane and a single vein at the leading edge; this wing is referenced as the case \(\beta=0^{\circ}\). The leading edge vein is extended by \(3.5\) mm toward the center of the ellipse so that it can be connected to the flapping mechanism.
The length of the radial vein, \(R_{v}\), evolves as a function of the angle \(\beta\) through the relationship: \[R_{v}=\frac{1}{\cos(\beta)}\left(\frac{1}{R^{2}}+\frac{\tan^{2}\beta}{c^{2}} \right)^{-1/2}\] Because \(R_{v}\) diminishes with increasing \(\beta\), the total mass of the wing also diminishes slightly. The difference in mass between the heaviest wing (\(\beta=5^{\circ}\)) and the lightest wing (\(\beta=90^{\circ}\)) is \(16\%\). Two wings are mounted on a flapping system based on the DelFly design [28], obtained by dismantling a commercially-available flapping-wing bird toy to keep only the motor and crank mechanism. The system, powered externally, makes it possible to generate a sinusoidal planar flapping motion with an amplitude of \(32^{\circ}\) for frequencies \(f\) ranging from \(5\) to \(20\) Hz. Front view and side view photos of the system are shown in Fig. 1 (D) and (E), respectively. ### Wings scaling analysis In order to assess how far, dynamically, our rudimentary model wings are from the case of an insect wing, it is convenient to examine a few dimensionless quantities. The main elements of the flapping wing problem are the aerodynamic force, the elastic bending rigidity of the wing, and its inertia [32; 33; 34]. We can use as a basic model a flexible plate (dimensions: mean chord \(c_{m}\), span \(R\) and thickness \(h\), density \(\rho_{s}\) --i.e. surface mass density \(\mu_{s}=\rho_{s}h\)--, and elastic modulus \(E\)) that will bend under the action of its own inertia and of the aerodynamic forces. For a flapping motion in hovering characterised by an angular amplitude \(\psi_{0}\) and frequency \(\omega=2\pi f\), before considering the wing deformation we can already define [2]: (i) the Reynolds number, written in terms of a reference flapping velocity \(U_{\text{ref}}=2\psi_{0}Rf\), the density \(\rho\) and dynamic viscosity \(\eta\) of air, and using the mean chord \(c_{m}\) as reference length scale \(L_{\text{ref}}\): \[Re=\frac{\rho U_{\text{ref}}L_{\text{ref}}}{\eta}=\frac{\rho 2\psi_{0}Rfc_{m}}{ \eta}\;, \tag{1}\] which governs the aerodynamic regime by setting the importance of fluid inertial versus viscous effects; and (ii) the reduced frequency \[k=\frac{\omega c_{m}}{U_{\text{ref}}}=\frac{\pi}{\psi_{0}\mathcal{R}}\;, \tag{2}\] which in the present hovering case does not depend explicitly on the physical frequency, because the reference velocity is the flapping velocity, itself proportional to the frequency. Now, to estimate the effects of aerodynamic loading and wing inertia measured against the elastic response of the wing, we can use, respectively, a Cauchy number [35; 36]: \[C_{Y}=\frac{\frac{1}{2}\rho U_{\text{ref}}^{2}L_{\text{ref}}^{3}}{\overline{EI}}= \frac{2\rho R^{2}\psi_{0}^{2}f^{2}c_{m}^{3}}{\overline{EI}}\, \tag{3}\] which characterizes the deformation of the wing under the effect of the fluid flow, and the elasto-inertial number [33]: \[\mathcal{N}_{ei}=\frac{\mu_{s}a_{\text{ref}}L_{\text{ref}}^{3}}{\overline{EI}} =\frac{4\pi^{2}\mu_{s}R\psi_{0}f^{2}c_{m}^{3}}{\overline{EI}}\, \tag{4}\] which characterizes the deformation of the wing under the effect of its own inertia. \(\mathcal{N}_{ei}\) is written in terms of a reference acceleration \(a_{\text{ref}}=R\psi_{0}\omega^{2}\). \(\overline{EI}\) in Eqs. 3 and 4 is an average plate bending rigidity. These dimensionless numbers can be used to give an indicative picture of the model wings in comparison with insect (or other) wings in a global parameter space, as shown in Table 1, and they support the idea of using the present experiment to examine the effect of the rigidity distribution of the wing on its aerodynamic performance. Figure 1: (A) and (B) Two examples of the rigidity distribution in insect wings (Figures from [1]). Supporting areas (stippled), deformable areas (unstippled) and flexion lines (dashed) in (A) _Syrphus ribesii_ (Diptera); (B) _Vespula germanica_ (Hymenoptera). m.f.l., median flexion line; cl.f., claval furrow; tr.f.l., transverse flexion line. Scale lines = 5 mm. (C) Model wing used in the present work. (D) and (E) Frontal and side views, respectively, of the system mounted on the force sensor. In (D) several snapshots are superposed to illustrate the flapping wing motion. ### Force sensor The system is mounted on a Schunk FT-Nano 17 6-axis force sensor as shown in Fig. 1 D, such that the average propulsive force produced by the flapping wings points towards the \(x\)-direction. In a right-handed Cartesian reference frame, the weight of the device is thus directed towards the negative \(z\)-direction. In what follows, we focus on the forward component of the force \(F_{x}\), which in the present setup is the most relevant concerning the aerodynamic force production, because the forward component is perpendicular to the stroke plane (as in the mercury-go-round setup of [33; 37]). The reciprocal motion of the wings and the symmetry of the setup determine that the \(F_{y}\) and \(F_{z}\) components of the force, as well as the \(M_{x}\) and \(M_{z}\) components of the moment, average to zero over each flapping period. The \(M_{y}\) component has a non-zero mean, but what can be learned from its dynamics in the tethered frame of the present experiment is redundant with what is obtained from the analysis of the forward force \(F_{x}\). A typical time series of the measured force signal is shown in Fig. 2 (A). The signal is noisy because the forces produced by the wings were close to the limit of the measurement range of our sensor. Nonetheless, the periodicity driven by the flapping motion is clearly visible, as highlighted in the figure by the running average also plotted. These time-resolved measurements were robustly repeatable. The Fourier transform of the signal--shown in Fig. 2 (B)--has its largest peak at twice the flapping frequency.
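This frequency doubling can be reproduced with a minimal synthetic check: in a quasi-steady caricature the thrust scales as the square of the flapping velocity, and the spectrum of such a signal is concentrated at \(2f\). The sketch below is our own illustration (the sampling rate and duration are arbitrary choices, not the experimental values).

```python
import numpy as np

f, fs = 17.5, 800.0                      # flapping frequency (Hz) and a sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)        # an integer number of flapping periods
# Quasi-steady caricature: thrust ~ (flapping velocity)^2, velocity ~ cos(2*pi*f*t)
force = np.cos(2.0 * np.pi * f * t) ** 2
spectrum = np.abs(np.fft.rfft(force - force.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(freqs[np.argmax(spectrum)])        # -> 35.0 Hz, i.e. twice the flapping frequency
```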
This is expected because two peaks of force are produced over one period: one during the upstroke and one during the downstroke, when the instantaneous wing velocity is highest. \begin{table} \begin{tabular}{l c c c c c c} \hline Parameter & units & Hawkmoth & Hoverfly & Bumblebee & European honey bee & Model wings \\ & & _(Manduca sexta)_ & _(Eristalis tenax)_ & _(Bombus terrestris)_ & _(Apis mellifera)_ & of the present study \\ \hline \(R\) & (m) & 0.049 & 0.009 & 0.016 & 0.0097 & 0.075 \\ \(S\) & (m\({}^{2}\)) & 8.91\(\times 10^{-4}\) & 3.7\(\times 10^{-5}\) & 1.1\(\times 10^{-4}\) & 4.2\(\times 10^{-5}\) & 0.0015 \\ \(c_{m}\) & (m) & 0.018 & 0.004 & 0.007 & 0.004 & 0.02 \\ \(m_{w}\) & (kg) & 4.7\(\times 10^{-5}\) & 6.0\(\times 10^{-7}\) & 1.25\(\times 10^{-6}\) & 5.0\(\times 10^{-7}\) & 1.5\(\times 10^{-4}\) \\ \(\mu_{s}\) & (kg m\({}^{-2}\)) & 0.0527 & 0.0162 & 0.0118 & 0.0119 & 0.1018 \\ \({}^{\dagger}EI_{\text{beam}}\) & (N m\({}^{2}\)) & 8.0\(\times 10^{-6}\) & 7.7\(\times 10^{-6}\) & 7.7\(\times 10^{-6}\) & 1.82\(\times 10^{-6}\) & [0.77 - 47.5]\(\times 10^{-6}\) \\ \({}^{\dagger}\overline{EI}\) & (N m) & 1.63\(\times 10^{-4}\) & 8.56\(\times 10^{-4}\) & 4.81\(\times 10^{-4}\) & 1.88\(\times 10^{-4}\) & [1 - 63]\(\times 10^{-5}\) \\ \(\psi_{0}\) & (\({}^{\circ}\); rad) & 57; 0.99 & 51; 0.90 & 60; 1.05 & 65; 1.13 & 16; 0.28 \\ \(f\) & (Hz) & 25 & 210 & 150 & 250 & [12 - 20] \\ \(Re\) & - & 2933 & 934 & 2204 & 1577 & [653 - 1089] \\ \(C_{Y}\) & - & 0.131 & 0.001 & 0.009 & 0.008 & [0.002 - 0.311] \\ \(\mathcal{N}_{ei}\) & - & 2.34 & 0.02 & 0.11 & 0.14 & [0.15 - 24.85] \\ \(k\) & - & 1.17 & 1.58 & 1.24 & 1.23 & 2.95 \\ \(\mathcal{R}\) & - & 2.69 & 3.10 & 2.18 & 2.41 & 3.82 \\ \hline \end{tabular} \({}^{\dagger}\)the values \(\overline{EI}\) of average plate bending rigidity were obtained as \(\overline{EI}=EI_{\text{beam}}/R\), where \(EI_{\text{beam}}\) is the chord-wise flexural stiffness. \(EI_{\text{beam}}\) values for insects [15] come from an indirect measurement of an equivalent beam performed by applying a point force to bend the wing and using the measured force \(F\) and wing displacement \(\delta\) to calculate an overall flexural stiffness \(EI_{\text{beam}}=Fl^{3}/3\delta\), \(l\) being the effective beam length. \end{table} Table 1: Wing morphological and material properties, kinematic parameters, and dimensionless numbers for a few insect species and for the model wings. Data from [2; 29; 15; 30; 31]. The average force, marked as a dashed horizontal line in Fig. 2 (A), is the main output used as a performance probe as the \((\beta,f)\)-parameter space is explored. The results are summarised in Fig. 2 (C), where this time-averaged force \(\bar{F}_{x}\) is plotted as a function of the radial vein angle \(\beta\) for several frequencies. ### Kinematics tracking In addition to the force measurements, the motion of the wings was tracked using a Phantom Miro M120 high-speed camera recording \(1920\times 1200\) pixel\({}^{2}\) images at 800 Hz. Fig. 3 shows time series of a side view (A) and a front view (B) to give a qualitative picture of the deformation of the wing during the flapping cycle. In order to quantify the wing deformation, four points of interest were tracked using ImageJ [38]: two at the leading edge (at the root and at the tip), and two at the trailing edge (at the point where the radial vein ends, and at the end of the largest chord section).
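As an aside, the kind of information these tracked points give access to can be illustrated with a toy computation: the phase lag between a leading-edge and a trailing-edge track, estimated by projecting each signal on the fundamental of the flapping motion. The signals and the 0.6 rad lag below are synthetic placeholders of our own, not measured data.

```python
import numpy as np

f, fs = 17.5, 800.0                        # flapping frequency and camera frame rate
t = np.arange(0.0, 2.0, 1.0 / fs)          # an integer number of flapping periods
leading = np.sin(2.0 * np.pi * f * t)            # tracked leading-edge position (a.u.)
trailing = np.sin(2.0 * np.pi * f * t - 0.6)     # trailing edge, delayed by deformation

def phase(signal):
    """Phase of the fundamental, by projection on exp(-i 2 pi f t)."""
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f * t)))

print(phase(trailing) - phase(leading))    # ~ -0.6 rad: the trailing edge lags
```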
Figure 2: Thrust force measurements. (A) Time series \(F_{x}(t)\) and (B) its frequency content obtained by FFT for a typical case (\(\beta=20^{\circ}\) and \(f=17.5\) Hz). In (A) a running mean of the signal is shown (solid blue line) as well as its average value (dashed line). (C) Average force \(\bar{F}_{x}\) as a function of the radial vein angle \(\beta\) for several frequencies (\(f=12.5\), \(14.5\), \(16.5\), \(17.5\) and \(18.5\) Hz, from darkest to lightest color respectively). Each point is the mean value of several runs with identical parameters and the corresponding standard deviations are represented as error bars. (D) Thrust coefficient \(C_{f}\) for the same experimental data computed using Eq. 5; in the inset, \(C_{f}\) is normalized by the maximum value \(C_{f\max}\) of each frequency series. ## III Results and discussion ### Thrust coefficient As mentioned above, the time average of the force \(\bar{F}_{x}\) presented in Fig. 2 (C) is the main performance indicator of the system as a function of the parameter space constituted by the radial vein angle \(\beta\) and the flapping frequency \(f\). A first step in the analysis is naturally to find a dimensionless representation of the data, which is plotted in physical units in Fig. 2. To do so, we define a thrust coefficient \[C_{f}=\frac{\bar{F}_{x}}{\frac{1}{2}\rho u_{\rm wing}^{2}S} \tag{5}\] where \(u_{\rm wing}=2\pi fA\) is the characteristic flapping speed defined by the frequency \(f\) and amplitude \(A\) of the flapping motion. We define a nominal amplitude \(A=R\sin(32^{\circ})\) based on the wing length and the flapping angular amplitude. The reference surface \(S\) is the area of the two wings. The thrust coefficient is presented in Fig. 2 (D). Two main observations can be made. On the one hand, the measurements for each frequency constitute a non-monotonic curve: as the angle \(\beta\) of the radial vein is increased from zero, a clear maximum occurs around \(\beta\approx 20^{\circ}\), followed by a minimum at \(\beta\approx 35^{\circ}\); further increasing \(\beta\) makes the propulsive force grow again until it reaches a value similar to that of the first maximum observed at \(\beta\approx 20^{\circ}\). The second observation is that when the frequency is increased, the performance curve is shifted to higher values whilst keeping a fairly similar shape. This can be seen clearly by normalising the curve corresponding to each flapping frequency by its maximum value \(C_{f\rm max}\), as shown in the inset of Fig. 2 (D). That the thrust force increases with the flapping frequency is of course an expected result, which has been reported for similar systems in the literature [39; 40]. In what follows we analyse these results in light of the wing deformation observations. ### Wing deformation kinematics An overview of the flapping motion captured from side and top views is presented in Fig. 3. The main point of this visualisation is to examine the typical behaviour of the wing and to identify the basic elements of its deformation dynamics. The phase lag between the leading edge and the trailing edge has been used in the literature [37] to explain the performance increase of flexible wings with respect to rigid wings. Figure 3: (A) Side and (B) front views of a flapping sequence of a wing with \(\beta=40^{\circ}\) at 17.5 Hz. Considering that the average main component of the aerodynamic force is perpendicular to the stroke plane, the advantage of the flexible wing comes from the redirection of the force in the useful direction.
A representation of this idea for a section of a flexible wing is shown in Fig. 4 (A), which defines the two parameters that will be used in the following: the projection of the wing surface on the stroke plane \(yz\), defined as \(S_{x}\) --see also Fig. 4 (B)--, and the trailing angle \(\theta\), both measured at the instant of maximum flapping velocity (i.e. when the wing passes the horizontal position). Note that the trailing angle \(\theta\) is different from the usual aircraft trailing edge angle, defined as the angle between the tangents of the upper and lower airfoil at the trailing edge, which indicates trailing edge sharpness. Now, \(S_{x}\) can be used as a measure of the aforementioned force redirection. If the wing is considered as a homogeneous plate bending under its first mode of deformation, this picture is sufficient to explain the basic physical mechanism driving the performance of a flexible wing [33; 8]. The radial vein complicates the picture because each section of the wing now behaves differently --see Fig. 5 (B) and (C). In particular, the phase lag of the trailing edge becomes different depending on the span-wise position. Nonetheless, we can still examine how \(S_{x}\) changes with \(\beta\). This is shown in Fig. 4 (C) for the cases of \(f=14.5\) and \(17.5\) Hz. As \(\beta\) increases from zero, the radial vein starts preventing part of the wing from bending and \(S_{x}\) diminishes. This trend saturates from \(\beta\approx 40^{\circ}\) until \(\approx 60^{\circ}\), after which \(S_{x}\) increases again to larger values. Recalling the force measurements of Fig. 2, where the thrust minima are observed around \(\beta\approx 40^{\circ}\), reinforces the idea of a larger projected surface \(S_{x}\) being a necessary feature for increasing thrust production. ### Projected wing surface and trailing angle To go further, it is useful to describe the wing in terms of its two sections: the first is the area comprised between the leading edge and the radial vein, and the second that between the radial vein and the trailing edge. We define these, respectively, as the _inter-vein area_ and the _trailing area_. Because the wing is built with portions of ellipses, the areas of these two sections can be expressed analytically as a function of \(\beta\), which gives the curves shown in Fig. 5 (A). Panels (B), (C), and (D) in Fig. 5 show three examples for different angles of the radial vein, when the wing passes the horizontal position, with the perimeters of the two areas highlighted. Since the snapshots are frontal views of the wing, the sum of the highlighted areas is actually \(S_{x}\). Figure 4: (A) and (B) Schematic representations of the flexible wing moving at speed \(u_{\text{wing}}\). In (A), a section in the \(xz\) plane is pictured; the thick blue arrow represents the total aerodynamic force, which points more in the forward direction the more the wing is bent. The trailing angle \(\theta\) and the projected surface \(S_{x}\) are also shown. (B) shows a three-dimensional sketch. (C) and (D) Measured values of the projected surface \(S_{x}\) (C) and the trailing angle (D) as a function of \(\beta\) for \(f=14.5\) and \(17.5\) Hz. For small angles between the veins, typically \(\beta<20^{\circ}\), the trailing area is larger than the inter-vein area; this implies that \(S_{x}\) is dominated by the deformation of the trailing area --see Fig. 5 (B). The larger this free surface, the larger its deformation.
On the contrary, for large \(\beta\), typically \(\beta>50^{\circ}\), the trailing area is very small and hardly deforms at all. Its influence on the generation of aerodynamic forces is then small. The surface between the veins is the largest, and its swelling is at the origin of the redistribution of the aerodynamic forces --see Fig. 5 (D) for the limit case of \(\beta=90^{\circ}\), where the trailing area has vanished and the whole wing is the inter-vein area. Summarising, the changes in the projected area as a function of the radial vein angle give us a first physical insight to explain the non-monotonic behaviour of the propulsive force observed in Fig. 2. For lower values of \(\beta\), the force production is dominated by the trailing area, whereas for higher values, typically above \(\beta\approx 40^{\circ}\), it is the inter-vein area that contributes the most. Now, considering first the lower angles, say \(\beta\lesssim 40^{\circ}\), the force measurements have a maximum value at \(\beta\approx 20^{\circ}\). This means that the thrust force does not solely depend on the surface (or projected surface) of the trailing area, which is largest at \(\beta=0^{\circ}\), but also on the way the wing is bent relative to the incident wind. Examining side views of the flapping wing (see Fig. 6) brings evidence of the reason for the suboptimal performance of the wings with very low values of \(\beta\). Because the radial vein also sets the maximum length of the trailing area that can bend, a small angle \(\beta\) means that the trailing surface is long enough to bend on itself, as shown in Fig. 6, thus losing the aerodynamic benefit of flexibility. In practice, the excessive bending of the trailing area determines that its orientation is suboptimal during large portions of the flapping cycle. This can be examined quantitatively by tracking the trailing angle--see Fig. 4 (A)--at the instant of maximum flapping velocity, as presented in Fig. 4 (D) as a function of \(\beta\) for two different frequencies. We use the dimensionless representation \(\theta/\phi\) introduced by [37], where the angle of the incoming wind \(\phi\) is in the present case equal to \(90^{\circ}\), since there is no incident velocity on the static wing because the system is fixed in the lab reference frame. The measurements of the trailing angle in Fig. 4 (D) provide a clear explanation for the maximum of aerodynamic force measured for the wing with the radial vein at \(\beta\approx 20^{\circ}\): as in [37], this optimum coincides with the best alignment of the trailing angle \(\theta\) and the angle \(\phi\) of the local wind seen by the translating wing. As \(\beta\) increases, the trailing angle goes to zero because it is measured at the tip, which is part of the trailing area: this area deforms less and less, and behaves as a rigid wing when \(\beta\) tends to \(90^{\circ}\). For these larger values of \(\beta\) the influence of the trailing area on thrust production diminishes, so the measurement of the trailing angle as in Fig. 4 (D) becomes irrelevant. As \(\beta\) becomes larger, the main part of the force production is ensured by the inter-vein area, which represents most of the total wing surface. Coming back to Fig. 4 (C), the increase of the projected area \(S_{x}\) for \(\beta>50^{\circ}\) is driven by the swelling of the inter-vein area, which can be seen in Fig. 5 (D) for the limit case of \(\beta=90^{\circ}\).
We can hypothesise that the physical mechanism enhancing thrust at these large values of the radial vein angle \(\beta\) should be similar to the case described for the trailing area. However, a quantitative picture would need the tracking of the whole trailing edge, and not just of a single point as we have done to produce Fig. 4 (D), which is out of reach of the experiments reported here. ## IV Concluding remarks We have examined the role of non-homogeneous wing stiffness in the aerodynamic force production by flapping wings, using a simple model with a two-vein skeleton. The main observation is that the thrust produced by the wings varies non-monotonically with changes in the angle \(\beta\) between the two veins that constitute the skeleton: the leading edge vein and the radial vein. A local optimum of the aerodynamic performance is observed for \(\beta\approx 20^{\circ}\), which is compatible with the typical angles observed in several insect wings [18]. The radial vein in the model used here is of course a crude simplification of the complex patterns found in real insect wings, but it serves the purpose of separating the wing surface in two areas that have a dynamic equivalence to what is observed in nature. Figure 5: (A) Areas of the inter-vein and trailing sections of the wing as a function of \(\beta\); (B), (C), and (D) frontal snapshots of the wing for different values of \(\beta\) with the wing sections highlighted. What we have called the inter-vein area functions in a similar manner to the part of the wing close to the leading edge in insects, which is rather stiff, while the trailing area deforms much more during the flapping cycle. Coupling thrust force measurements with visualisation of the wing deformation lets us explain the physical origin of the non-monotonic behaviour of the aerodynamic force production: increasing the radial vein angle makes the wing change from a regime dominated by the trailing area, with its associated strong deformations, to another regime where the inter-vein area pilots everything. It is the former case with lower angles that allows us to come back to the case of insects mentioned above: the observed optimum angle \(\beta\approx 20^{\circ}\) constitutes a trade-off between exploiting the aerodynamic benefit of deformation, which redirects the average force to produce a stronger thrust component, and avoiding an excessive folding of the flexible wing, which diminishes its effective surface. A word of caution should be said about the limitations of the present artificial flapping wings to represent the far more complex cases of real insect wings. A first point concerns the simple up-and-down movement of the wings used here, which does not involve any of the mechanisms insects use to control wing kinematics through their thoracic muscles and hinge joints [19]. One major feature that is thus missing is the wing rotation that accompanies flapping. In addition, the two-vein wing design does not tightly control the wing shape. This leads, firstly, to camber profiles that are not representative of real wings (see, e.g., [13] for the case of hoverflies), and, secondly, to an exaggerated lack of constraint on the bending of the trailing area. These issues should be considered in the design of future insect-inspired flapping robots. Ongoing work is concerned with the study of three-dimensional wing deformation dynamics with simultaneous measurement of aerodynamic forces. ###### Acknowledgements.
This work was supported by the Agence Nationale de la Recherche and the ASTRID program through the project ANR-19-ASTR-002 (NANOFLY).
2307.16047
On $Δ$-spaces
$\Delta$-spaces have been defined by a natural generalization of a classical notion of $\Delta$-sets of reals to Tychonoff topological spaces; moreover, the class $\Delta$ of all $\Delta$-spaces consists precisely of those $X$ for which the locally convex space $C_p(X)$ is distinguished. The aim of this article is to better understand the boundaries of the class $\Delta$, by presenting new examples and counter-examples. 1) We examine when trees considered as topological spaces equipped with the interval topology belong to $\Delta$. In particular, we prove that no Souslin tree is a $\Delta$-space. Other main results are connected with the study of 2) $\Psi$-spaces built on maximal almost disjoint families of countable sets; and 3) Ladder system spaces. It is consistent with CH that all ladder system spaces on $\omega_1$ are in $\Delta$. We show that in a forcing extension of ZFC obtained by adding one Cohen real, there is a ladder system space on $\omega_1$ which is not in $\Delta$. We resolve several open problems posed in the literature.
Arkady Leiderman, Paul Szeptycki
2023-07-29T18:28:54Z
http://arxiv.org/abs/2307.16047v1
# On \(\Delta\)-spaces ###### Abstract. \(\Delta\)-spaces have been defined by a natural generalization of a classical notion of \(\Delta\)-sets of reals to Tychonoff topological spaces; moreover, the class \(\Delta\) of all \(\Delta\)-spaces consists precisely of those \(X\) for which the locally convex space \(C_{p}(X)\) is distinguished [25]. The aim of this article is to better understand the boundaries of the class \(\Delta\), by presenting new examples and counter-examples. 1) We examine when trees considered as topological spaces equipped with the interval topology belong to \(\Delta\). In particular, we prove that no Souslin tree is a \(\Delta\)-space. Other main results are connected with the study of 2) \(\Psi\)-spaces built on maximal almost disjoint families of countable sets; and 3) Ladder system spaces. It is consistent with CH that all ladder system spaces on \(\omega_{1}\) are in \(\Delta\). We show that in a forcing extension of ZFC obtained by adding one Cohen real, there is a ladder system space on \(\omega_{1}\) which is not in \(\Delta\). We resolve several open problems posed in [12], [25], [31], [32]. Key words and phrases:\(\Delta\)-set, countably metacompact space, \(\omega_{1}\)-tree, \(\Psi\)-space, ladder system space 2010 Mathematics Subject Classification: 54C35, 54G12, 54H05, 46A03 ## 1. Introduction The reader is referred to subsequent sections for unexplained terminology. Throughout the article, all topological spaces are assumed to be Tychonoff and infinite. By \(C_{p}(X)\) we mean the space \(C(X)\) of real-valued continuous functions defined on \(X\) equipped with the pointwise convergence topology. In this paper we investigate the class of topological spaces \(X\) such that the locally convex space \(C_{p}(X)\) is distinguished. _Distinguished_ locally convex (metrizable) spaces were introduced by J. Dieudonné and L. Schwartz, and the reader may consult [13] (and references therein) for an introduction to and a brief history of the main advances in the area of general distinguished locally convex spaces. An intrinsic description of \(X\) such that \(C_{p}(X)\) is distinguished has been found recently in [25] (see also [12]). **Theorem 1.1**.: ([25]) _For a Tychonoff space \(X\), the following conditions are equivalent_: 1. \(C_{p}(X)\) _is distinguished._ 2. _Any countable disjoint collection of subsets of_ \(X\) _admits a point-finite open expansion in_ \(X\)_._ 3. \(X\) _is a_ \(\Delta\)_-space._ **Definition 1.2**.: ([25], [27]) _A topological space \(X\) is said to be a \(\Delta\)-space if for every decreasing sequence \(\{D_{n}:n\in\omega\}\) of subsets of \(X\) with empty intersection, there exists a decreasing sequence \(\{V_{n}:n\in\omega\}\) of open subsets of \(X\), also with empty intersection, such that \(D_{n}\subseteq V_{n}\) for every \(n\in\omega\)._ Recall that a set of reals \(X\) is called a \(Q\)-set if every subset of \(X\) is a relative \(G_{\delta}\); analogously, a topological space \(X\) is called a \(Q\)-space if every subset of \(X\) is \(G_{\delta}\) in \(X\). A topological space is \(\sigma\)-discrete if it is a countable union of discrete subspaces, and \(\sigma\)-closed discrete if it is a countable union of closed discrete subspaces. We mention the following simple facts about \(Q\)-spaces: * \(X\) is a \(Q\)-space implies that \(X\) is a \(\Delta\)-space (the proof is the same as for sets of reals, see e.g. [25, Proposition 4.1]). * If \(X\) is \(\sigma\)-closed discrete then \(X\) is a \(Q\)-space, therefore also a \(\Delta\)-space.
* If \(X\) is a \(Q\)-space then \(X\) is \(\sigma\)-discrete iff \(X\) is \(\sigma\)-closed discrete. Whether there is a \(Q\)-space that is not \(\sigma\)-discrete is quite non-trivial. This question was first asked by H. Junnila [24] and led Z. Balogh to define a topological space \(X\) to be a \(Q\)_-set space_ if \(X\) is a \(Q\)-space and \(X\) is not \(\sigma\)-discrete [1]. In that paper he gave a beautiful ZFC construction of a \(Q\)-set space, and later he improved this result to obtain a hereditarily paracompact, perfectly normal \(Q\)-set space \(X\) such that \(|X|=\mathfrak{c}\) [3]. It is worth remarking that Z. Balogh later adapted the techniques used in constructing \(Q\)-set spaces to construct several Dowker spaces in ZFC, including the aforementioned example of size \(\mathfrak{c}\). A systematic study of the class \(\Delta\) was originated in the paper [25] and continued in [26]. Among other results, it was proved that a \(\Delta\)-space \(X\) must be scattered if \(X\) is locally compact [25] or countably compact [26]. It has been shown that the ordered space of ordinals \([0,\omega_{1}]\) provides an example of a scattered compact space which is not in \(\Delta\) [25]. However, the collection of compact \(\Delta\)-spaces is quite rich and includes both scattered Eberlein compact spaces and one-point compactifications of Isbell-Mrowka \(\Psi\)-spaces (for details see [25]). The class \(\Delta\) is preserved under closed continuous images and it is invariant under the operation of taking countable unions of closed subspaces [26]. In particular, any topological space which can be represented as a countable union of scattered Eberlein compact spaces is in \(\Delta\). It is worth mentioning also that the class \(\Delta\) is invariant under taking arbitrary subspaces, and adding a finite set to a \(\Delta\)-space does not destroy the property of being a \(\Delta\)-space [25]. The aim of this article is to better understand the boundaries of the class \(\Delta\), by presenting new examples and counter-examples. We focus on three very well-known set-theoretic constructions producing locally compact first-countable topological spaces: \(\omega_{1}\)-trees equipped with the interval topology (Section 3); \(\Psi(D,\mathcal{A})\) spaces, where \(\mathcal{A}\) is an almost disjoint family of countable subsets of an infinite set \(D\) (Section 4); and the ladder system spaces \(X_{L}\) (Section 5). Summing up several known facts, we note that every almost Souslin tree is countably metacompact (Fact 3.5), and we observe that consistently there is an almost Souslin Aronszajn tree that is a \(Q\)-space (Fact 3.2). The following statement, proved in Section 3, is one of the main results of our paper: no Souslin tree is a \(\Delta\)-space (Theorem 3.6). In Section 4 we give a negative answer to [25, Problem 5.11]: there is a locally compact \(\Psi\)-space \(X\) such that its one-point compactification is a scattered compact space with a finite Cantor-Bendixson rank, but \(X\notin\Delta\) (Corollary 4.6). Also, we demonstrate that there exists an Isbell-Mrowka \(\Psi\)-space \(X\) such that the one-point extension \(X_{p}=X\cup\{p\}\) of \(X\) has uncountable tightness at the point \(p\), for some \(p\in\beta(X)\setminus X\) (Theorem 4.10). This result resolves [31, Problem 2.15] and [32, Question 4.1]. It is consistent with CH that all ladder system spaces on \(\omega_{1}\) are in \(\Delta\).
The main result presented in Section 5 says that in the forcing extension obtained by adding one Cohen real, there is a ladder system \(L\) on \(\omega_{1}\) whose corresponding space \(X_{L}\) is not countably metacompact and hence \(X_{L}\notin\Delta\) (Theorem 5.3). Also, assuming \(MA(\omega_{1})\), we prove that for any pair of ladder systems, \(L_{1}\) and \(L_{2}\), the product \(X_{L_{1}}\times X_{L_{2}}\) is hereditarily normal (Theorem 5.7). In the final Section 6 we substantially generalize several results from [13], and we also provide an example of a Lindelöf \(P\)-space \(X\notin\Delta\), answering [12, Problem 24] in the negative. ## 2. Auxiliary notions Our set-theoretic notation is standard and follows the text [30]. Topological terminology follows the book [11]. For the reader's convenience we recall some relevant notions. * A collection of sets \(\{U_{\gamma}:\gamma\in\Gamma\}\) is called an _expansion_ of a collection of sets \(\{X_{\gamma}:\gamma\in\Gamma\}\) in \(X\) if \(X_{\gamma}\subseteq U_{\gamma}\subseteq X\) for every index \(\gamma\in\Gamma\). A collection of sets \(\{U_{\gamma}\subseteq X:\gamma\in\Gamma\}\) is called _point-finite_ if for every \(x\in X\) there are at most finitely many indices \(\gamma\in\Gamma\) such that \(x\in U_{\gamma}\). * A topological space \(X\) is called _countably metacompact_ if every countable open cover of \(X\) has a point-finite open refinement, or, equivalently, if for every decreasing sequence \(\{D_{n}:n\in\omega\}\) of closed subsets of \(X\) with empty intersection, there is a decreasing expansion \(\{V_{n}:n\in\omega\}\) consisting of open subsets of \(X\), also with empty intersection. Immediately, every \(\Delta\)-space is hereditarily countably metacompact. * A topological space \(X\) is called _scattered_ if every closed \(A\subseteq X\) has an isolated (in \(A\)) point. * Let \(p\) be a point of a topological space \(X\). We say that \(X\) has countable _tightness_ at \(p\), and we write \(t(p,X)=\aleph_{0}\), if whenever \(p\) is in the closure \(cl\,Y\) of a set \(Y\subseteq X\), there is a countable \(B\subseteq Y\) such that \(p\) is in the closure of \(B\). The space \(X\) has countable tightness, \(t(X)=\aleph_{0}\), if \(t(p,X)=\aleph_{0}\) for each point \(p\in X\). * A topological space \(X\) is called _Fréchet-Urysohn_ if for every subset \(Y\subseteq X\) and each \(x\in cl\,Y\) there exists a sequence \(\{x_{n}:n\in\omega\}\) in \(Y\) which converges to \(x\). * A Tychonoff space \(X\) is called _pseudocompact_ if every continuous function \(f:X\to\mathbb{R}\) is bounded. * A topological space \(X\) is called a \(P\)_-space_ if countable intersections of open sets in \(X\) are open. * A compact space which can be embedded into a Banach space equipped with the weak topology is called an _Eberlein_ compact space. In the paper we identify the ordered space of ordinals \([0,\xi)\) with \(\xi\). ## 3. Trees A _tree_ is a well-founded partially ordered set such that the set of predecessors of any element is well-ordered. In the article, \(\operatorname{Lev}_{\alpha}(T)\) denotes the \(\alpha^{\text{th}}\) level of a tree \(T\) and \(T_{\alpha}=\bigcup_{\beta<\alpha}\operatorname{Lev}_{\beta}(T)\). An _Aronszajn tree_ is an \(\omega_{1}\)-tree with no uncountable branches. A _Souslin tree_ is an \(\omega_{1}\)-tree such that every branch and every antichain is at most countable.
An \(\omega_{1}\)-tree \(T\) is said to be _almost Souslin_ if whenever \(A\) is an antichain of \(T\), the set \(\{\alpha\in\omega_{1}:A\cap\operatorname{Lev}_{\alpha}(T)\neq\emptyset\}\) is not stationary. A tree is called _special_ if it can be represented as a countable union of antichains. While there are a number of interesting topologies that can be considered on a tree, we consider trees as topological spaces equipped with the interval topology. The _interval topology_ on a tree \(T\) is generated by the open base consisting of all intervals of the form \((s,t]=\{x\in T:s<x\leq t\}\), together with all singletons \(\{t\}\), where \(t\) is a minimal element of \(T\). Since \(\omega_{1}\notin\Delta\), if a tree is a \(\Delta\)-space then it does not have an uncountable branch. The following observations are based on known results from the literature. **Fact 3.1**.: _Every special tree is \(\sigma\)-closed discrete [37]. It follows that every special tree is a \(Q\)-space, and so every special tree is a \(\Delta\)-space._ **Fact 3.2**.: _It is consistent with ZFC that there is a non-special Aronszajn tree \(T\) that is a \(Q\)-space, hence a \(\Delta\)-space._ An almost Souslin tree clearly cannot be special. We claim that it is consistent to have an almost Souslin Aronszajn tree \(T\) such that every subset of \(T\) is \(F_{\sigma}\) (this claim appears in [47, page 273]). Let us give a few more details about this Aronszajn tree. We thank S. Todorcevic for pointing out that such an Aronszajn tree can be obtained as follows. \(\mathbb{Q}\) denotes the set of rationals endowed with the usual linear order. Let \(w\mathbb{Q}\) be the tree of well-ordered subsets of \(\mathbb{Q}\) ordered by end-extension. Clearly, \(w\mathbb{Q}\) is a tree of height \(\omega_{1}\). It is straightforward to check that the interval topology on \(w\mathbb{Q}\) is finer than the natural separable metric topology which \(w\mathbb{Q}\) inherits from the product topology on \(2^{\mathbb{Q}}\). Indeed, let \(U\) be a basic open subset of \(2^{\mathbb{Q}}\) determined by the finite partial function \(r\), and let \(t\in w\mathbb{Q}\) be an element of \(U\). Denote by \(q\) the smallest element of \(t\) such that \((-\infty,q)\cap\operatorname{dom}(r)=(-\infty,\sup(t))\cap\operatorname{dom}(r)\). Then for \(s=t\cap(-\infty,q)\), we have \((s,t]\subseteq U\). This means that the interval topology on \(w\mathbb{Q}\) admits a weaker separable metric topology. Recall that \(\mathfrak{p}\) is the minimal cardinality of a centered family \(\mathcal{F}\) of infinite subsets of \(\omega\) for which one cannot find an infinite set \(A\subseteq\omega\) such that \(A\setminus B\) is finite for all \(B\in\mathcal{F}\) (see [9]). By the Rothberger-Silver theorem, any separable metric space of size \(\omega_{1}\) is a \(Q\)-space under the assumption \(\mathfrak{p}>\omega_{1}\) (see [17, Corollary 23B]). Hence, assuming \(\mathfrak{p}>\omega_{1}\), any subspace of \(w\mathbb{Q}\) of size \(\omega_{1}\) is a \(Q\)-space, because it admits a weaker separable metric topology. Theorems 15 and 16 of [49] describe the required almost Souslin Aronszajn subtree \(T\) of \(w\mathbb{Q}\), whose existence is consistent with \(\mathfrak{p}>\omega_{1}\). **Fact 3.3**.: _W. Fleissner, assuming \(\Diamond^{+}\), proved that there exists an Aronszajn tree which is not countably metacompact [15]. In [18] it was shown that such an example can be constructed from the weaker assumption \(\Diamond\)._
Therefore, it is consistent with ZFC that there exists an Aronszajn tree which is not a \(\Delta\)-space._ **Fact 3.4**.: _If a tree \(T\) is a \(Q\)-space, then \(T\) is \(\mathbb{R}\)-embeddable (follows from [19, Theorem 2.1]), and since for any \(\mathbb{R}\)-embeddable \(\omega_{1}\)-tree \(T\) every uncountable subset of \(T\) must contain an uncountable antichain [7], no Souslin tree can be a \(Q\)-space._ **Fact 3.5**.: _Every almost Souslin tree is (hereditarily) countably metacompact (this result is due to P. Nyikos, see [15])._ Below we present one of the main results of our paper. **Theorem 3.6**.: _No Souslin tree is in the class \(\Delta\)._ Proof.: Let \(T\) be a Souslin tree. For a subset \(A\subseteq T\) and \(B\subseteq A\) we say that \(B\) is _predense in \(A\)_ if for every \(t\in A\) there is \(s\in B\) such that \(s\) and \(t\) are comparable. Note that if \(B\) is predense in \(A\) and if \(C\subseteq B\) is a maximal antichain in \(B\), then \(C\) is also a maximal antichain in \(A\). Note also that in a Souslin tree \(T\), for any \(A\subseteq T\) there is \(\alpha<\omega_{1}\) such that \(A\cap T_{\alpha}\) is predense in \(A\). **Lemma 3.7**.: _If \(S\subseteq\omega_{1}\) is stationary and \(A\subseteq T\) is uncountable, then there is some \(\gamma\in S\) such that \(\mathit{cl}\,A\cap\mathit{Lev}_{\gamma}(T)\neq\emptyset\)._ Proof.: Since the set \(T_{\beta}\) is countable for every \(\beta<\omega_{1}\) and \(A\) is uncountable, \(A\setminus T_{\beta}\) with the order inherited from \(T\) is a Souslin tree. Let \[C=\{\alpha\in\omega_{1}:\forall\beta<\alpha\;\;(A\cap T_{\alpha})\setminus T_{\beta}\text{ is predense in }A\setminus T_{\beta}\}.\] Observe first that \(C\subseteq\omega_{1}\) is closed. Second, \(C\) is unbounded. To see this, let \(f:\omega_{1}\to\omega_{1}\) be defined by \[f(\beta)=\min\{\alpha>\beta:(A\cap T_{\alpha})\setminus T_{\beta}\text{ is predense in }A\setminus T_{\beta}\}.\] Then, for every \(\beta<\omega_{1}\), the supremum of the increasing sequence \(\beta<f(\beta)<f^{2}(\beta)<\dots\) belongs to \(C\), so \(C\) is unbounded. Thus, \(C\) is a club set. Next we show that \(\mathit{cl}\,A\cap\mathrm{Lev}_{\alpha}(T)\neq\emptyset\) for every limit ordinal \(\alpha\in C\); since the stationary set \(S\) meets the club of limit points of \(C\), this completes the proof of the lemma. To see this, let \(\alpha_{n}\) be an increasing sequence cofinal in \(\alpha\) and for each \(n\) let \(A_{n}\) be a maximal antichain in \(A\setminus T_{\alpha_{n}}\) such that \(A_{n}\subseteq T_{\alpha}\) (we can do this since \(\alpha\in C\)). Then choosing \(t\in A\setminus T_{\alpha}\), and \(s\in\mathrm{Lev}_{\alpha}(T)\) such that \(s\leq t\), we get that \(\bigcup_{n}A_{n}\) is cofinal below \(s\) and so \(s\in\mathit{cl}\,A\) as required. Now we finish the proof that \(T\) cannot be a \(\Delta\)-space. Let \(\{S_{n}\subset\omega_{1}:n\in\omega\}\) be any countable family of pairwise disjoint stationary sets and let \(T_{n}=\bigcup_{\alpha\in S_{n}}\mathrm{Lev}_{\alpha}(T)\). By way of contradiction, suppose that there are open sets \(U_{n}\supseteq T_{n}\) such that the family \(\{U_{n}:n\in\omega\}\) is point-finite. For every \(t\in T\) define \(\mathrm{ord}(t)=\{k\in\omega:t\in U_{k}\}\). Choose \(n\) so that \(A=\{t:n\not\in\mathrm{ord}(t)\}\) is uncountable (such an \(n\) exists: otherwise all but countably many \(t\) would belong to every \(U_{k}\), contradicting point-finiteness). Now, using Lemma 3.7, we find \(\gamma\in S_{n}\) and \(s\in\mathrm{Lev}_{\gamma}(T)\) such that \(s\in\mathit{cl}\,A\). Thus, since \(s\in T_{n}\) and \(U_{n}\) is an open neighborhood of \(s\), we have that \(U_{n}\cap A\neq\emptyset\). This contradicts \(n\notin\mathrm{ord}(t)\) for all \(t\in A\).
Analyzing this proof we see that we used only the following property of a Souslin tree \(T\): if \(T\) is written as a union of countably many closed subsets, one of those subsets must intersect a club set of levels. We conjecture that there should be some clearer characterization of when trees are \(\Delta\)-spaces: **Problem 3.8**.: _Find a characterization of trees which are \(Q\)-spaces / \(\Delta\)-spaces._ For a partially ordered set \(E\), \(\sigma E\) denotes the set of all bounded well-ordered subsets of \(E\). The order on \(\sigma E\) is defined as usual: \(s\leq t\) iff \(s\) is an initial segment of \(t\) [48]. Our aim is to consider the above Problem 3.8 for trees of the form \(\sigma E\). In [37], the following statement is claimed: \((*)\) \(\sigma\mathbb{Q}\) is a ZFC example of an \(\mathbb{R}\)-embeddable tree which is not countably metacompact (therefore, \(\sigma\mathbb{Q}\) is not a \(\Delta\)-space). Unfortunately, the proof of this claim \((*)\) has still not been published, nor could we independently verify the validity of the statement. However, under the conjecture that \(\sigma\mathbb{Q}\) is not a \(\Delta\)-space, we would obtain the following (provisional) result. **Conjecture 3.9**.: _Let \(E\) be a linearly ordered set. Then the following conditions are equivalent_: 1. _Neither_ \(\omega_{1}\) _nor the rationals_ \(\mathbb{Q}\)_, as linearly ordered sets, are contained in_ \(E\)_._ 2. \(\sigma E\) _is a special tree._ 3. \(\sigma E\) _is a_ \(Q\)_-space._ 4. \(\sigma E\) _is a_ \(\Delta\)_-space._ (1) \(\Leftrightarrow\) (2) is known [48]. (2) \(\Rightarrow\) (3) \(\Rightarrow\) (4) have been mentioned before. Assume (4). Then \(E\) does not contain \(\omega_{1}\), because \(\omega_{1}\) is not a \(\Delta\)-space. And assuming that the claim \((*)\) is true, we would conclude that \(E\) could not contain a copy of \(\mathbb{Q}\). This finishes the proof of (4) \(\Rightarrow\) (1). **Problem 3.10**.: _Prove or disprove the claim \((*)\): \(\sigma\mathbb{Q}\) is not countably metacompact._ ## 4. MAD families Throughout this section \(D\) is an infinite set and \(\mathcal{A}\) is an almost disjoint (AD) family of countable subsets of \(D\). A topological space \(\Psi(D,\mathcal{A})\) is defined in a standard way: the underlying set of \(\Psi(D,\mathcal{A})\) is \(D\bigcup\mathcal{A}\), the points of \(D\) are isolated, and a base of neighborhoods of \(A\in\mathcal{A}\) is the collection of all sets of the form \(\{A\}\cup B\), where \(B\subseteq A\) and \(A\setminus B\) is finite. It is known that every first-countable locally compact space in which the derived set is discrete is homeomorphic to some \(\Psi(D,\mathcal{A})\). Every \(\Psi(D,\mathcal{A})\) is a Tychonoff space because it is Hausdorff and zero-dimensional. Note also that all spaces \(\Psi(D,\mathcal{A})\) are scattered with the Cantor-Bendixson rank equal to \(2\). Observe that every subset of \(\mathcal{A}\) is closed in \(\Psi(D,\mathcal{A})\). Further, since \(D\) consists of isolated points of \(\Psi(D,\mathcal{A})\) we obtain the following easy fact: **Proposition 4.1**.: _Let \(Y\) be any space \(\Psi(D,\mathcal{A})\). Then \(Y\in\Delta\) if and only if \(Y\) is a countably metacompact space._ If \(D\) is the set of natural numbers \(\mathbb{N}\) and \(\mathcal{A}\) is a maximal almost disjoint (MAD) family of subsets of \(\mathbb{N}\), then \(\Psi(D,\mathcal{A})\) is called the Isbell-Mrowka space and is denoted by \(\Psi(\mathcal{A})\).
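As an aside for readers less familiar with almost disjoint families, the classical branch construction can be illustrated concretely: distinct reals yield branches of the binary tree that share only finitely many initial segments, and coding initial segments by integers turns these branches into an almost disjoint family of subsets of \(\mathbb{N}\). The sketch below is a finite illustration of this standard construction, not a construction from the present paper.

```python
def branch(x: float, depth: int) -> set[int]:
    """Initial segments of the binary expansion of x, coded as integers:
    each segment gets the integer whose binary digits are a leading 1
    followed by the segment's bits, so distinct segments get distinct codes."""
    segments, code = set(), 1
    for _ in range(depth):
        x *= 2
        bit = int(x)
        x -= bit
        code = 2 * code + bit
        segments.add(code)
    return segments

# Two distinct reals: their branches agree only on a short common prefix,
# so the corresponding sets of codes have finite (here small) intersection.
A, B = branch(0.3, 50), branch(0.31, 50)
print(len(A), len(B), len(A & B))   # 50 50 and a small intersection
```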
**Theorem 4.2**.: ([25]) _Let \(\mathcal{A}\) be any uncountable almost disjoint family of subsets of \(\mathbb{N}\). Denote by \(X\) the one-point compactification of the space \(\Psi(\mathbb{N},\mathcal{A})\). Then \(X\in\Delta\), while \(X\) is not an Eberlein compact space._

Let \(D\) be \(\omega_{1}\) and \(\mathcal{A}\) be a MAD family of countable subsets of \(\omega_{1}\). It is unknown whether the space \(\Psi(D,\mathcal{A})\) in this case can be countably metacompact (equivalently, a \(\Delta\)-space) for some MAD family \(\mathcal{A}\) in some model of ZFC [5]. Recall that the cardinal invariant \(\mathfrak{a}\) is defined as the minimal size of a MAD family on \(\mathbb{N}\) and \(\mathfrak{b}\) denotes the bounding number (see [9]). The details about Theorems 4.3 and 4.4 below can be found in [5], [46].

**Theorem 4.3**.: ([5]) _Assume \(\mathfrak{a}=\mathfrak{c}\). Then for every MAD family of countable subsets of \(D=\omega_{1}\) the space \(\Psi(D,\mathcal{A})\) is not countably metacompact. Therefore, \(\Psi(D,\mathcal{A})\notin\Delta\)._

**Theorem 4.4**.: ([46]) _If \(\mathfrak{c}=\aleph_{2}\) or if \(\mathfrak{b}^{+}=\mathfrak{c}\), then for every MAD family of countable subsets of \(D=\omega_{1}\) the space \(\Psi(D,\mathcal{A})\) is not countably metacompact. Therefore, \(\Psi(D,\mathcal{A})\notin\Delta\)._

A sketch of the construction of the following example was given in [5].

**Example 4.5**.: ([5]) In ZFC there exists a MAD family of countable subsets of \(D=\omega_{1}\) such that the space \(\Psi(D,\mathcal{A})\) is not countably metacompact.

For the sake of completeness we describe the construction. Identify \(\omega_{1}\) with a subset of \([0,1]\) via an injection \(j:\omega_{1}\to[0,1]\). Let \(\mathcal{A}\) be a MAD family of countable subsets of \(\omega_{1}\) so that for each \(a\in\mathcal{A}\), \(j(a)\) is a convergent sequence in \([0,1]\). Note that for any subset \(S\subset\omega_{1}\), if \(\mathit{cl}\,j(S)\) in \([0,1]\) has size \(\mathfrak{c}\), then \(\mathit{cl}\,S\) in \(\Psi(\mathcal{A})\) also has size \(\mathfrak{c}\). Since any uncountable subset of \([0,1]\) has closure of size \(\mathfrak{c}\), it follows that the MAD family \(\mathcal{A}\) has the following key property: for any uncountable subset \(S\subseteq\omega_{1}\) there is a countable \(Y\subseteq S\) such that the set \(\{a\in\mathcal{A}:|a\cap Y|=\aleph_{0}\}\) has size \(\mathfrak{c}\).

We use this property to define a partition of \(\mathcal{A}=\bigcup_{n\in\omega}\mathcal{A}_{n}\) as follows. Enumerate all countable subsets of \(\omega_{1}\) that have uncountable closure in \(\Psi(\mathcal{A})\) as \(\{Y_{\alpha}:\alpha<\mathfrak{c}\}\). Recursively choose distinct elements \(\{a_{\alpha,n}:\alpha<\mathfrak{c},n\in\omega\}\subseteq\mathcal{A}\) such that for each \(\alpha<\mathfrak{c}\) and \(n\in\omega\), the set \(a_{\alpha,n}\cap Y_{\alpha}\) is infinite. Then fix the partition \(\mathcal{A}=\bigcup_{n\in\omega}\mathcal{A}_{n}\) so that the following inclusion holds: \(\{a_{\alpha,n}:\alpha<\mathfrak{c}\}\subseteq\mathcal{A}_{n}\) for all \(n\in\omega\).

Claim: _The partition \(\{\mathcal{A}_{n}:n\in\omega\}\) has no point-finite open expansion._

Proof.: By way of contradiction, suppose that there is a point-finite open family \(\{U_{n}:n\in\omega\}\) such that \(U_{n}\supseteq\mathcal{A}_{n}\). For each \(\gamma\in\omega_{1}\) let \(k_{\gamma}\) be the maximal index \(k\) with \(\gamma\in U_{k}\) (such a maximum exists by point-finiteness; set \(k_{\gamma}=0\) if there is no such \(k\)). Then there are an uncountable \(S\subset\omega_{1}\) and \(k\in\omega\) such that \(k_{\gamma}=k\) for all \(\gamma\in S\).
By the key property, there is a countable \(Y\subset S\) with uncountable closure in \(\Psi(\mathcal{A})\). This \(Y\) was enumerated as \(Y_{\alpha}\). Observe that \(Y_{\alpha}\cap U_{n}=\emptyset\) for all \(n>k\), by definition of \(S\). However, by choice of the partition, for every \(n\) there is \(a\in\mathcal{A}_{n}\) with \(a\cap Y_{\alpha}\) infinite. For \(n>k\) this contradicts that \(\{U_{n}:n\in\omega\}\) is an open expansion of \(\{\mathcal{A}_{n}:n\in\omega\}\): indeed, such an \(a\) belongs to \(\mathcal{A}_{n}\subseteq U_{n}\), and every neighborhood of \(a\) contains all but finitely many points of \(a\), hence meets \(Y_{\alpha}\). The obtained contradiction means that \(\Psi(D,\mathcal{A})\) is not countably metacompact.

As an immediate consequence we obtain the negative answer to open Problem 5.11 posed in [25].

**Corollary 4.6**.: _Denote by \(X\) the one-point compactification of the locally compact space \(\Psi(D,\mathcal{A})\) from the above Example 4.5. Then \(X\) is a scattered compact space with the Cantor-Bendixson rank equal to 3, but \(X\notin\Delta\)._

Below we reiterate D. Burke's question from [5]:

**Problem 4.7**.: _Does there exist in ZFC a MAD family of countable subsets of \(D\) with \(|D|\geq\aleph_{1}\) such that \(\Psi(D,\mathcal{A})\) is a \(\Delta\)-space?_

We close this section with a \(\Psi\)-space example answering questions from [31] and [32]. It has been shown in [26] that every compact \(\Delta\)-space has countable tightness, and assuming the Proper Forcing Axiom (PFA) every countably compact \(\Delta\)-space is compact and so has countable tightness as well. Whether any countably compact \(\Delta\)-space has countable tightness in ZFC is an open problem. In addition, it has been shown in [32] that pseudocompact \(\Delta\)-spaces of countable tightness must be scattered. This all obviously gives rise to the following natural problem posed in [32]:

**Problem 4.8**.: _[_32_, Question 4.1]_ _Suppose that \(X\) is a pseudocompact \(\Delta\)-space. Is it true that the tightness of \(X\) is countable?_

It has been noted in [31] that for any MAD family \(\mathcal{B}\) on \(\omega\), the one-point compactification of the Isbell-Mrowka space \(\Psi(\mathbb{N},\mathcal{B})\) is never Frechet-Urysohn. Also, there exists an Isbell-Mrowka \(\Psi\)-space \(X\) such that \(t(p,X_{p})=\aleph_{0}\) for every one-point extension \(X_{p}=X\cup\{p\}\) of \(X\), where \(p\in\beta(X)\setminus X\) and \(\beta(X)\) is the Stone-Cech compactification of \(X\) [31].

**Problem 4.9**.: _[_31_, Problem 2.15]_ _Does there exist an Isbell-Mrowka \(\Psi\)-space \(X\) such that \(t(p,X_{p})>\aleph_{0}\), for some point \(p\in\beta X\setminus X\)?_

Now we completely resolve Problems 4.8 and 4.9.

**Theorem 4.10**.: _There is an Isbell-Mrowka \(\Psi\)-space \(X\) with a point \(p\in\beta X\setminus X\) such that \(X_{p}\) is a pseudocompact \(\Delta\)-space and \(t(p,X_{p})>\aleph_{0}\)._

Proof.: In [10] a MAD family \(\mathcal{M}\subseteq[\omega]^{\omega}\) is constructed with the property that the remainder of the Stone-Cech compactification \(\beta\Psi(\mathcal{M})\setminus\Psi(\mathcal{M})\) is homeomorphic to the compact ordinal space \(\omega_{1}+1\). We set \(X=\Psi(\mathcal{M})\) and choose \(p\in\beta X\setminus X\) to be the point \(\omega_{1}\). Then \(X_{p}=X\cup\{p\}\) is the desired space. To see this, first note that since \(\mathcal{M}\) is maximal, \(\Psi(\mathcal{M})\) is pseudocompact. We conclude that \(X_{p}\) is pseudocompact, because \(X=\Psi(\mathcal{M})\) is dense in \(X_{p}\). Moreover, \(X_{p}\) is a \(\Delta\)-space, because adding finitely many points does not destroy the \(\Delta\)-space property.
In order to show that \(X_{p}\) has uncountable tightness at the point \(p\), we rely on some details of the construction. For the following definitions we refer to [9]. Define the quasi-order \(\subseteq^{*}\) on \(\mathscr{P}(\omega)\) by the rule: \(A\subseteq^{*}B\) if \(A\setminus B\) is finite. We say that \(A\) is a _pseudo-intersection_ of a family \(\mathscr{F}\) if \(A\subseteq^{*}F\) for each \(F\in\mathscr{F}\). We call \(\mathscr{T}\subseteq[\omega]^{\omega}\) a _tower_ if \(\mathscr{T}\) is well-ordered by \(\supseteq^{*}\) and has no infinite pseudo-intersection. The MAD family \(\mathcal{M}\) in [10] is constructed as a union of families of sets \(\mathcal{C}_{\alpha}\) along with a chain

\[\{T_{\alpha}:\alpha<\omega_{1}\}\subseteq[\omega]^{\omega}\]

that is \(\subseteq^{*}\)-increasing, and so that \(\mathcal{M}=\bigcup\{\mathcal{C}_{\alpha}:\alpha\leq\omega_{1}\}\) has the following properties:

1. For each \(\alpha<\omega_{1}\), \(\mathcal{C}_{\alpha}\) is a non-empty family of infinite subsets of \(\omega\) such that \(\mathcal{C}_{\alpha}\subseteq\{x\subseteq T_{\alpha}:x\cap T_{\beta}=^{*}\emptyset\ \text{ for all }\ \beta<\alpha\}\), and
2. If \(\{T_{\alpha}:\alpha<\omega_{1}\}\) is not a tower, then \(\mathcal{C}_{\omega_{1}}\subseteq\{x:x\cap T_{\beta}=^{*}\emptyset\ \text{ for all }\ \beta<\omega_{1}\}\) is not empty, and if \(\{T_{\alpha}:\alpha<\omega_{1}\}\) is a tower then \(\mathcal{C}_{\omega_{1}}\) is empty.
3. The closure of \(T_{\alpha}\) in \(\beta\Psi(\mathcal{M})\) is clopen and is equal to

\[T_{\alpha}\cup\left(\bigcup\{\mathcal{C}_{\beta}:\beta\leq\alpha\}\right)\cup[0,\alpha]\]

Let us comment on item (3). Indeed, the part of the closure of \(T_{\alpha}\) lying in \(\mathcal{M}\) is the set of \(x\in\mathcal{M}\) such that \(x\cap T_{\alpha}\) is infinite. If \(\gamma>\alpha\) then each \(x\in\mathcal{C}_{\gamma}\) has finite intersection with \(T_{\alpha}\). In the opposite case, if \(\gamma\leq\alpha\) then for each \(x\in\mathcal{C}_{\gamma}\) we have that \(x\subseteq^{*}T_{\alpha}\).

With this description, let us now define \(Y=\bigcup_{\alpha<\omega_{1}}\mathcal{C}_{\alpha}\) (by (2), \(Y\) may be all of \(\mathcal{M}\) if \(\mathcal{C}_{\omega_{1}}=\emptyset\)). Then \([0,\omega_{1})\) is contained in the closure of \(Y\) in \(\beta\Psi(\mathcal{M})\), and so \(\omega_{1}\) is in the closure of \(Y\) in \(X_{p}\). Moreover, any countable subset of \(Y\) is a subset of \(\bigcup_{\beta<\alpha}\mathcal{C}_{\beta}\) for some \(\alpha<\omega_{1}\) and hence, by (1), it is contained in the closure of \(T_{\alpha}\). By (3) the closure of \(T_{\alpha}\) in \(\beta\Psi(\mathcal{M})\) does not include the point \(\omega_{1}\). It follows that \(\omega_{1}\) is not in the closure of any countable subset of \(Y\). So, \(X_{p}\) is a pseudocompact \(\Delta\)-space with uncountable tightness at the point \(p=\omega_{1}\).

## 5. Ladder systems

Let \(L\) be a _ladder system_ over a stationary subset of limit ordinals \(D\subset\omega_{1}\). That is, \(L=\{s_{\alpha}:\alpha\in D\}\), where each \(s_{\alpha}\) is an \(\omega\)-sequence in \(\alpha\) cofinal in \(\alpha\). Any such family is an almost disjoint family of countable subsets of \(\omega_{1}\) and so we can form the space \(\Psi(D,\mathcal{A})\), where \(\mathcal{A}=L\). Traditionally, in this case the resulting space is described in a slightly different fashion.
Namely, if \(L\) is a ladder system on a stationary set \(S\subseteq Lim(\omega_{1})\), we denote by \(X_{L}\) the topological space \((\omega_{1}\times\{0\})\cup(S\times\{1\})\), where every point \((\alpha,0)\) is isolated and for each \(\alpha\in S\), a basic neighborhood of \((\alpha,1)\) consists of \(\{(\alpha,1)\}\) along with a cofinite subset of \(s_{\alpha}\times\{0\}\).

We ask the following natural question: under which conditions on the ladder system \(L\) is \(X_{L}\) a \(\Delta\)-space? This question, in relation to other covering and separation properties of \(X_{L}\), was actually studied already in [4]. As a matter of fact, \(X_{L}\in\Delta\) has been characterized by a certain type of uniformization property imposed on \(L\). It was S. Shelah who introduced the notion of a ladder system \(L\) being uniformizable. One says that a ladder system is \(2\)_-uniformizable_ if for any sequence of functions \(\eta_{\alpha}:s_{\alpha}\to 2\) there is a uniformizing function \(f:\omega_{1}\to 2\), meaning that \(f\upharpoonright s_{\alpha}=^{*}\eta_{\alpha}\) (i.e., equal for all but finitely many elements of \(s_{\alpha}\)) for all \(\alpha\in D\). While this notion was introduced in Shelah's work on Whitehead groups, it is straightforward to show that \(X_{L}\) is normal if and only if every sequence of constant functions admits a uniformizing function. Further, normality of \(X_{L}\) implies that \(X_{L}\) is countably metacompact (which in turn is characterized by an even weaker notion of uniformizability [4, Section 1]). \(MA(\omega_{1})\) implies the stronger property of being \(\omega\)-uniformizable, meaning that any sequence of functions \(\eta_{\alpha}:s_{\alpha}\to\omega\) can be uniformized.

It is also worth remarking that if a ladder system \(L\) is \(\omega\)-uniformizable, then the associated space \(X_{L}\) is \(\sigma\)-closed discrete, which of course implies that \(X_{L}\) is a \(\Delta\)-space. Indeed, if \(L=\{s_{\alpha}:\alpha\in Lim(\omega_{1})\}\) is any ladder system, then we can enumerate each \(s_{\alpha}\) in increasing order as \(\{s_{\alpha}(n):n\in\omega\}\). Let \(\eta_{\alpha}:s_{\alpha}\to\omega\) be given by \(\eta_{\alpha}(s_{\alpha}(n))=n\). If \(f\) uniformizes the family \(\{\eta_{\alpha}:\alpha\in Lim(\omega_{1})\}\), then it follows that the fibers \(f^{-1}(n)\) are closed discrete for all \(n\in\omega\) and therefore \(X_{L}\) is a \(\sigma\)-closed discrete space; such spaces are even \(Q\)-spaces. Summarizing these facts, we note the following

**Remark 5.1**.: \(MA(\omega_{1})\) implies that every ladder system space \(X_{L}\) is a normal \(\sigma\)-closed discrete space (see [4], [8]), hence it is consistent that all \(X_{L}\) are in the class \(\Delta\).

**Example 5.2**.: In ZFC there is a ladder system \(L\) on \(\omega_{1}\) such that the corresponding space \(X_{L}\) is countably metacompact [4, Section 1], hence \(X_{L}\in\Delta\). Indeed, any ladder system \(L\) with the property that the \(n\)-th element of each ladder \(s_{\alpha}\) is of the form \(\beta+n\), with \(\beta\) a limit ordinal, leads to a \(\sigma\)-closed discrete space \(X_{L}\). Therefore, the one-point compactification of the locally compact space \(X_{L}\) provides a new example of a scattered compact \(X\in\Delta\), while \(X\) is not an Eberlein compact space.

Now we present one of the main results of our paper. It shows that consistently there are \(X_{L}\notin\Delta\).
**Theorem 5.3**.: _In the forcing extension of ZFC obtained by adding one Cohen real, there is a ladder system \(L\) on \(\omega_{1}\) whose corresponding space \(X_{L}\) is not countably metacompact and hence \(X_{L}\notin\Delta\)._

Proof.: First we describe how to construct a ladder system coded by a single prescribed function \(g:\omega\to\omega\). For each limit countable ordinal \(\alpha\) fix a bijection \(e_{\alpha}:\alpha\to\omega\) and choose arbitrarily an increasing sequence

\[\alpha(0)<\dots<\alpha(n)<\dots<\alpha\text{ cofinal in }\alpha.\]

For each limit countable ordinal \(\alpha\) we define \(s_{\alpha}^{g}=\{\beta_{n}^{g}:n\in\omega\}\subset\omega_{1}\) as follows. Let \(n\in\omega\). Consider the set of all \(\beta\in[\alpha(n),\alpha(n+1))\) such that \(g(e_{\alpha}(\beta))>n\). If this set is not empty, we declare that \(\beta_{n}^{g}\) is the unique such \(\beta\) with \(e_{\alpha}(\beta)\) minimal. Otherwise we set \(\beta_{n}^{g}=0\). Put \(S_{g}=\{\alpha\in Lim(\omega_{1}):s_{\alpha}^{g}\) is cofinal in \(\alpha\}\). Then \(S_{g}\) is a stationary set and \(L_{g}=\{s_{\alpha}^{g}:\alpha\in S_{g}\}\) is a ladder system on \(S_{g}\).

Now we keep \(\{e_{\alpha}:\alpha\in Lim(\omega_{1})\}\) and the sequences \(\alpha(n)\) for each \(\alpha\in Lim(\omega_{1})\) fixed in the ground model \(V\) and force with \(\mathbb{P}=\operatorname{Fn}(\omega,\omega)\). Let \(G\) be \(\mathbb{P}\)-generic over \(V\) and let \(c=\bigcup G:\omega\rightarrow\omega\) be the generic function. The proof of the following claim is based on a standard density argument and only uses that for any infinite \(A\subseteq\omega\) in the ground model, the image \(c(A)\) is infinite.

**Lemma 5.4**.: \(S_{c}=\{\alpha\in Lim(\omega_{1}):s_{\alpha}^{c}\) _is cofinal in \(\alpha\}=Lim(\omega_{1})\)._

Our aim is to prove that \(\Vdash X_{L_{c}}\) is not countably metacompact. On the contrary, assume that \(X_{L_{c}}\) is countably metacompact. Choose in \(V\) any partition \(Lim(\omega_{1})=\bigcup_{n\in\omega}S_{n}\) into pairwise disjoint stationary sets. Form the corresponding disjoint family consisting of closed sets \(\{S_{n}\times\{1\}:n\in\omega\}\). Fix names \(U_{n}\) for the open sets expanding \(S_{n}\times\{1\}\) in the extension. It suffices to assume that

\[\Vdash\{U_{n}:n\in\omega\}\text{ is point-finite}\]

and obtain a contradiction. Assuming that this open expansion is forced to be point-finite, we may fix for each ordinal \(\alpha\in\omega_{1}\) an element \(q_{\alpha}\in\mathbb{P}\) and a finite set \(F_{\alpha}\subseteq\omega\) such that

\[q_{\alpha}\Vdash\text{``}(\alpha,0)\in U_{n}\text{ iff }n\in F_{\alpha}\text{''}\]

By the pigeonhole principle, there are an uncountable \(A\subseteq\omega_{1}\), an element \(q\in\mathbb{P}\) and a finite \(F\subset\omega\) such that \(F_{\alpha}=F\) and \(q_{\alpha}=q\) for all \(\alpha\in A\). Pick \(m\not\in F\) and consider the stationary set \(S_{m}\). In the extension, we have that for each \(\beta\in S_{m}\) there is a \(\gamma<\beta\) such that \((s_{\beta}^{c}\setminus\gamma)\times\{0\}\subseteq U_{m}\).
So, for each \(\beta\in S_{m}\) we may extend \(q\) to \(p_{\beta}\) and fix \(\gamma_{\beta}\) such that

\[p_{\beta}\Vdash\text{``}(s_{\beta}^{c}\setminus\gamma_{\beta})\times\{0\}\subseteq U_{m}\text{''}\]

And again, by the pigeonhole principle and Fodor's lemma, we may fix a single \(\gamma\), a single \(p\leq q\) and a stationary \(S_{m}^{\prime}\subseteq S_{m}\) such that for every \(\beta\in S_{m}^{\prime}\)

\[p\Vdash\text{``}(s_{\beta}^{c}\setminus\gamma)\times\{0\}\subseteq U_{m}\text{''}\]

Choose \(\beta\in S_{m}^{\prime}\) such that \(A\cap\beta\) is unbounded in \(\beta\) and choose \(k\in\omega\) large enough so that

1. \(\operatorname{dom}(p)\subseteq e_{\beta}((0,\beta(k)])\),
2. \(k\) is above the maximum of the range of \(p\),
3. \(A\cap[\beta(k),\beta(k+1))\neq\emptyset\), and
4. \(\beta(k)>\gamma\).

Fix \(\eta\in A\cap[\beta(k),\beta(k+1))\) and denote \(E=\{\xi\in[\beta(k),\beta(k+1)):e_{\beta}(\xi)<e_{\beta}(\eta)\}\). Then \(E\) is finite and \(e_{\beta}(E\cup\{\eta\})\) is disjoint from the domain of \(p\) (by (1)). So we may extend \(p\) to \(p^{\prime}\) by setting \(p^{\prime}(e_{\beta}(\xi))<k\) for all \(\xi\in E\) and \(p^{\prime}(e_{\beta}(\eta))>k\). Then, by the definition of \(s_{\beta}^{c}\), we have that \(\eta\in s_{\beta}^{c}\). And \(\eta>\gamma\) (by (4)). Putting this all together we have that

\[p^{\prime}\Vdash(\eta,0)\in U_{m}\]

However, by the definition of \(A\), we also have that \(p\Vdash(\eta,0)\not\in U_{m}\) since \(m\not\in F\). Finally, in view of \(p^{\prime}\leq p\) this is a contradiction, completing the proof.

It is natural to investigate whether the class \(\Delta\) is invariant under the basic topological operations. An account of the progress in this direction can be found in [26], [32]. In particular, it has been shown that the product of a \(\Delta\)-space with a \(\sigma\)-closed discrete space is a \(\Delta\)-space [26, Corollary 2.9]. But the general question whether the class \(\Delta\) is invariant under finite products remains open even for compact factors [25, Problem 5.10]. One may ask whether ladder system spaces, or more generally, the spaces \(\Psi(D,\mathcal{A})\) provide counterexamples to this question.

**Proposition 5.5**.: _Let \(Y\) be any \(\Delta\)-space. Then_

1. _The product_ \(\Psi(\mathbb{N},\mathcal{A})\times Y\) _is a_ \(\Delta\)_-space._
2. _Assume_ \(MA(\omega_{1})\)_. Then the product_ \(X_{L}\times Y\) _is a_ \(\Delta\)_-space for every ladder system_ \(L\) _on_ \(\omega_{1}\)_._

Proof.: Both \(\Psi(\mathbb{N},\mathcal{A})\) and, under \(MA(\omega_{1})\), \(X_{L}\) are \(\sigma\)-closed discrete spaces, so both items follow from [26, Corollary 2.9].

Similarly, we observe that all known examples of compact \(\Delta\)-spaces can be seen to be countable unions of scattered Eberlein compact spaces. Since the product of two scattered Eberlein compact spaces is a scattered Eberlein compact space, the finite products of all compact \(\Delta\)-spaces we are aware of remain \(\Delta\)-spaces. This motivates us to reiterate the following open problem.

**Problem 5.6**.: _[_32_, Question 4.6]_ _Is it true that every compact \(\Delta\)-space is the countable union of Eberlein compact spaces?_

Recall that for the ladder system space \(X_{L}\), normality of \(X_{L}\) implies that \(X_{L}\in\Delta\). What can be said regarding the normality of the product \(X_{L_{1}}\times X_{L_{2}}\) under \(MA(\omega_{1})\)?

**Theorem 5.7**.: _Assume \(MA(\omega_{1})\).
Then, for any pair of ladder systems \(L_{1}\) and \(L_{2}\), the product \(X_{L_{1}}\times X_{L_{2}}\) is hereditarily normal._

Proof.: To fix our notation, let \(L_{1}=\{x_{\alpha}:\alpha\in S\}\) and \(L_{2}=\{y_{\alpha}:\alpha\in T\}\) be two ladder systems on stationary sets \(S,T\subseteq\operatorname{Lim}(\omega_{1})\), respectively. Note that \(X_{L_{1}}\times X_{L_{2}}\) is scattered of height \(3\) with the closed discrete set \(L_{1}\times L_{2}\) on the top level in the Cantor-Bendixson decomposition. We first prove

**Lemma 5.8**.: _Assume \(MA(\omega_{1})\). Then for any partition \(H:L_{1}\times L_{2}\to 2\), the sets \(H^{-1}(0)\) and \(H^{-1}(1)\) can be separated by disjoint open sets._

Proof.: To this end, we take the natural partially ordered set \(\mathbb{P}\) of finite partial neighborhood assignments to the points of \(L_{1}\times L_{2}\) respecting the partition. Namely, \(\mathbb{P}\) is the set of \(p\in Fn(S\times T,\omega)\) such that

\[\left(U_{p(\alpha,\beta)}(x_{\alpha})\times U_{p(\alpha,\beta)}(y_{\beta})\right)\cap\left(U_{p(\delta,\gamma)}(x_{\delta})\times U_{p(\delta,\gamma)}(y_{\gamma})\right)=\emptyset,\]

whenever \((\alpha,\beta),(\delta,\gamma)\in\operatorname{dom}(p)\) and \(H(\alpha,\beta)\neq H(\delta,\gamma)\). It suffices to show that \(\mathbb{P}\) has the ccc. We will show, in fact, that \(\mathbb{P}\) has Property K. To this end, fix \(\{p_{\alpha}:\alpha\in\omega_{1}\}\subseteq\mathbb{P}\) and for each \(\alpha\), let

\[h(\alpha)=\max\{\eta<\alpha:\eta\text{ appears in the domain of }p_{\alpha}\}\]

By Fodor's lemma, there are a \(\xi^{\prime}\) and a stationary \(A\) such that \(h(\alpha)<\xi^{\prime}\) for all \(\alpha\in A\). Next define \(g:A\to\omega_{1}\) to be the maximum of the following two maxima

\[\max\left[\alpha\cap\bigcup\{U_{p_{\alpha}(\delta,\gamma)}(x_{\delta}):(\delta,\gamma)\in\operatorname{dom}(p_{\alpha}),\ \delta>\alpha\}\right]\]

and

\[\max\left[\alpha\cap\bigcup\{U_{p_{\alpha}(\delta,\gamma)}(y_{\gamma}):(\delta,\gamma)\in\operatorname{dom}(p_{\alpha}),\ \gamma>\alpha\}\right]\]

Again by Fodor's lemma, there are a \(\xi>\xi^{\prime}\) and a stationary \(B\subseteq A\) such that \(g(\alpha)<\xi\) for all \(\alpha\in B\). By the pigeonhole principle, we may assume that for all \(\alpha\) and \(\beta\) in \(B\), \(p_{\alpha}\) and \(p_{\beta}\) are isomorphic conditions in the following sense: there is a bijection \(j:\operatorname{dom}(p_{\alpha})\to\operatorname{dom}(p_{\beta})\) such that

1. \(j\) is an order isomorphism with respect to the lexicographic ordering on \(\omega_{1}\times\omega_{1}\),
2. For all \((\gamma,\delta)\in\operatorname{dom}(p_{\alpha})\), \(H(\gamma,\delta)=H(j(\gamma,\delta))\) and \(p_{\alpha}(\gamma,\delta)=p_{\beta}(j(\gamma,\delta))\),
3. Moreover, if \(j(\gamma,\delta)=(\gamma^{\prime},\delta^{\prime})\) then
   1. \(\gamma<\alpha\) iff \(\gamma^{\prime}<\beta\), in which case both are less than \(\xi\) and we require \(\gamma=\gamma^{\prime}\); and \(\delta<\alpha\) iff \(\delta^{\prime}<\beta\), in which case both are also less than \(\xi\) and we also require \(\delta=\delta^{\prime}\),
   2. \(\gamma=\alpha\) iff \(\gamma^{\prime}=\beta\), and \(\delta=\alpha\) iff \(\delta^{\prime}=\beta\),
   3. If \(\gamma>\alpha\) then \(x_{\gamma}\cap\alpha=x_{\gamma^{\prime}}\cap\beta\), and if \(\delta>\alpha\) then \(y_{\delta}\cap\alpha=y_{\delta^{\prime}}\cap\beta\) (note: in this clause if \(\gamma>\alpha\) then \(x_{\gamma}\cap\alpha=x_{\gamma}\cap\xi\), and similarly for \(\gamma^{\prime},\delta,\delta^{\prime}\)).

If \(\beta\in B\), let \(\beta^{+}\) denote the minimum of \(B\) above \(\beta\). Finally, we apply Fodor's lemma and the pigeonhole principle once again to \(B\). Take \(C\subset B\) stationary and \(F,G\) finite so that for all \(\beta\in C\), if \(\beta^{+}\in S\) then \(x_{\beta^{+}}\cap\beta=F\) and if \(\beta^{+}\in T\) then \(y_{\beta^{+}}\cap\beta=G\). Now, for any \(\alpha,\beta\in C\) we have that \(p_{\alpha^{+}}\) and \(p_{\beta^{+}}\) are compatible.

Next, we show

**Lemma 5.9**.: \(MA(\omega_{1})\) _implies that \(L_{1}\times\omega_{1}\) and \(\omega_{1}\times L_{2}\) can be separated by disjoint open sets._

Proof.: To this end, we define another partially ordered set \(\mathbb{Q}\) consisting of pairs \((p,q)\) with \(p\in Fn(S\times\omega_{1},\omega)\) and \(q\in Fn(\omega_{1}\times T,\omega)\) such that for all \((\alpha,\gamma)\in\operatorname{dom}(p)\) and all \((\delta,\beta)\in\operatorname{dom}(q)\), the open sets \(U_{p(\alpha,\gamma)}(x_{\alpha})\times\{\gamma\}\) and \(\{\delta\}\times U_{q(\delta,\beta)}(y_{\beta})\) are disjoint. Note that since any finite union of basic open sets is clopen, for any \((\alpha,\gamma)\in S\times\omega_{1}\), the set of \((p,q)\in\mathbb{Q}\) such that \((\alpha,\gamma)\in\operatorname{dom}(p)\) is dense. Similarly, for any \((\delta,\beta)\in\omega_{1}\times T\), the set of \((p,q)\in\mathbb{Q}\) such that \((\delta,\beta)\in\operatorname{dom}(q)\) is also dense. And any filter generic with respect to these dense sets defines a pair of open sets separating the two sets as required. So, it suffices to show that \(\mathbb{Q}\) has the ccc.

To this end, fix \(\{(p_{\xi},q_{\xi}):\xi\in\omega_{1}\}\subseteq\mathbb{Q}\); we will find an uncountable subset consisting of pairwise compatible elements. First, as we did before, we may thin out our sequence using Fodor's lemma and the pigeonhole principle to obtain a stationary set \(A\) and an \(\eta\in\omega_{1}\) so that the subset \(\{(p_{\xi},q_{\xi}):\xi\in A\}\) consists of pairwise isomorphic conditions in the following sense: for all \(\xi,\chi\in A\) there are bijections \(h:\operatorname{dom}(p_{\xi})\to\operatorname{dom}(p_{\chi})\) and \(g:\operatorname{dom}(q_{\xi})\to\operatorname{dom}(q_{\chi})\). Moreover, for every \((\alpha,\delta)\in\operatorname{dom}(p_{\xi})\), if we denote \(h(\alpha,\delta)=(\alpha^{\prime},\delta^{\prime})\) then

1. \(p_{\xi}(\alpha,\delta)=p_{\chi}(\alpha^{\prime},\delta^{\prime})\),
2. \(\alpha<\xi\) iff \(\alpha^{\prime}<\chi\), and in this case \(\alpha=\alpha^{\prime}<\eta\),
3. \(\delta<\xi\) iff \(\delta^{\prime}<\chi\), and in this case \(\delta=\delta^{\prime}<\eta\),
4. \(\alpha=\xi\) iff \(\alpha^{\prime}=\chi\), and \(\delta=\xi\) iff \(\delta^{\prime}=\chi\),
5. In the case that \(\alpha>\xi\) (and so also \(\alpha^{\prime}>\chi\)) we then have \(x_{\alpha}\cap\xi=x_{\alpha^{\prime}}\cap\chi\subseteq\eta\).

And, for every \((\gamma,\beta)\in\operatorname{dom}(q_{\xi})\), if we denote \(g(\gamma,\beta)=(\gamma^{\prime},\beta^{\prime})\) then

1. \(q_{\xi}(\gamma,\beta)=q_{\chi}(\gamma^{\prime},\beta^{\prime})\),
2. \(\beta<\xi\) iff \(\beta^{\prime}<\chi\), and in this case \(\beta=\beta^{\prime}<\eta\),
3. \(\gamma<\xi\) iff \(\gamma^{\prime}<\chi\), and in this case \(\gamma=\gamma^{\prime}<\eta\),
4. \(\beta=\xi\) iff \(\beta^{\prime}=\chi\), and \(\gamma=\xi\) iff \(\gamma^{\prime}=\chi\),
5. In the case that \(\beta>\xi\) (and so also \(\beta^{\prime}>\chi\)) we then have \(y_{\beta}\cap\xi=y_{\beta^{\prime}}\cap\chi\subseteq\eta\).

Now we do one final thinning out. For each \(\nu\in A\), let \(\nu^{+}\) denote the minimum of \(A\setminus(\nu+1)\). Applying Fodor's lemma one last time, we may fix \(B\subseteq A\) stationary, \(\eta^{\prime}\geq\eta\) and \(F,G\subseteq\eta^{\prime}\) finite such that for any \(\nu\in B\), if \(\nu^{+}\in S\) (where \(\nu^{+}\) still denotes the minimum of \(A\) above \(\nu\)), then \(x_{\nu^{+}}\cap\nu=F\) (and so \(x_{\nu^{+}}\cap[\eta^{\prime},\nu)=\emptyset\)), and if \(\nu^{+}\in T\) then \(y_{\nu^{+}}\cap\nu=G\) (and so \(y_{\nu^{+}}\cap[\eta^{\prime},\nu)=\emptyset\)). Now, by going to a subsequence, we may assume that

\((*)\) for \(\nu<\mu\) in \(B\), we have that \(\operatorname{dom}(p_{\nu^{+}})\cup\operatorname{dom}(q_{\nu^{+}})\subseteq\mu\times\mu\).

We now have the following claim, which completes the proof that \(\mathbb{Q}\) has the ccc:

Claim: _For all \(\nu,\mu\in B\), \((p_{\nu^{+}},q_{\nu^{+}})\) is compatible with \((p_{\mu^{+}},q_{\mu^{+}})\)_

Proof.: Suppose not, and that \((p_{\nu^{+}},q_{\nu^{+}})\) is incompatible with \((p_{\mu^{+}},q_{\mu^{+}})\), where \(\nu<\mu\). Then by symmetry, we may assume that there are an \((\alpha,\delta)\in\operatorname{dom}(p_{\nu^{+}})\setminus\operatorname{dom}(p_{\mu^{+}})\) and a \((\gamma,\beta)\in\operatorname{dom}(q_{\mu^{+}})\setminus\operatorname{dom}(q_{\nu^{+}})\) witnessing incompatibility. This means that

\[\gamma\in U_{p_{\nu^{+}}(\alpha,\delta)}(x_{\alpha})\text{ and }\delta\in U_{q_{\mu^{+}}(\gamma,\beta)}(y_{\beta})\]

Since \((\alpha,\delta)\in\operatorname{dom}(p_{\nu^{+}})\) we have by \((*)\) that \(\alpha<\mu\), and since \(\gamma\in x_{\alpha}\) we then have \(\gamma<\mu\). Therefore, by our thinning out we have that \(\gamma<\eta\). Now, by the isomorphism between \((p_{\nu^{+}},q_{\nu^{+}})\) and \((p_{\mu^{+}},q_{\mu^{+}})\), we have that there is a \(\beta^{\prime}\) such that \((\gamma,\beta^{\prime})\in\operatorname{dom}(q_{\nu^{+}})\), \(y_{\beta}\cap\eta=y_{\beta^{\prime}}\cap\eta\) and \(q_{\nu^{+}}(\gamma,\beta^{\prime})=q_{\mu^{+}}(\gamma,\beta)\). And now consider that \(\delta\in y_{\beta}\) and \(\delta<\mu\) (since it appears in the domain of \(p_{\nu^{+}}\)). Thus \(\delta<\eta\) and so also \(\delta\in y_{\beta^{\prime}}\). By the isomorphism of conditions, since \(\delta\in U_{q_{\mu^{+}}(\gamma,\beta)}(y_{\beta})\) we also have \(\delta\in U_{q_{\nu^{+}}(\gamma,\beta^{\prime})}(y_{\beta^{\prime}})\) (this is because \(q_{\mu^{+}}(\gamma,\beta)=q_{\nu^{+}}(\gamma,\beta^{\prime})\) and the initial segment of \(y_{\beta}\) up to \(\delta\) is the same as the initial segment of \(y_{\beta^{\prime}}\) up to \(\delta\)). So, we have that \((\gamma,\beta^{\prime})\in\operatorname{dom}(q_{\nu^{+}})\), \((\alpha,\delta)\in\operatorname{dom}(p_{\nu^{+}})\), \(\delta\in U_{q_{\nu^{+}}(\gamma,\beta^{\prime})}(y_{\beta^{\prime}})\) and \(\gamma\in U_{p_{\nu^{+}}(\alpha,\delta)}(x_{\alpha})\), contradicting that \((p_{\nu^{+}},q_{\nu^{+}})\) is an element of \(\mathbb{Q}\).

We need one more lemma.

**Lemma 5.10**.: _Assume \(MA(\omega_{1})\).
If \(H\subseteq L_{1}\times\omega_{1}\), \(K\subseteq L_{1}\times L_{2}\) and \(\mathit{cl}\,H\cap K=\emptyset\), then \(H\) and \(K\) can be separated by disjoint open sets._

Remark that by symmetry, the version of Lemma 5.10 where \(H\) is taken as a subset of \(\omega_{1}\times L_{2}\) also holds and has the same proof.

Proof.: We take the natural partially ordered set of finite partial neighborhood assignments to the points of \(H\cup K\) that approximate a disjoint open assignment. Namely, \(\mathbb{P}\) is the set of pairs \((p,q)\) with \(p\in Fn(H,\omega)\) and \(q\in Fn(K,\omega)\) such that

\[\left(U_{p(\alpha,\beta)}(x_{\alpha})\times\{\beta\}\right)\cap\left(U_{q(\delta,\gamma)}(x_{\delta})\times U_{q(\delta,\gamma)}(y_{\gamma})\right)=\emptyset,\]

whenever \((x_{\alpha},\beta)\in\operatorname{dom}(p)\) and \((x_{\delta},y_{\gamma})\in\operatorname{dom}(q)\). As above, it suffices to prove that \(\mathbb{P}\) has the ccc, and to this end we index an arbitrary uncountable subset of \(\mathbb{P}\) as \(\{(p_{\alpha},q_{\alpha}):\alpha\in\omega_{1}\}\). By the same thinning out we did in the previous lemmas, we obtain stationary sets \(C\subseteq B\subseteq\omega_{1}\), an ordinal \(\xi\), and finite sets \(G,J\subseteq\xi\) such that for each \(\alpha,\beta\in B\) the conditions \((p_{\alpha},q_{\alpha})\) and \((p_{\beta},q_{\beta})\) are isomorphic via a bijection \(j\), meaning

1. For each \((x_{\gamma},\eta)\in\operatorname{dom}(p_{\alpha})\), denoting \(j((x_{\gamma},\eta))\in\operatorname{dom}(p_{\beta})\) by \((x_{\gamma^{\prime}},\eta^{\prime})\), we have
   1. \(\gamma<\alpha\) iff \(\gamma^{\prime}<\beta\), in which case \(\gamma=\gamma^{\prime}<\xi\),
   2. \(\eta<\alpha\) iff \(\eta^{\prime}<\beta\), in which case \(\eta=\eta^{\prime}<\xi\),
   3. \(\gamma=\alpha\) iff \(\gamma^{\prime}=\beta\), and \(\eta=\alpha\) iff \(\eta^{\prime}=\beta\),
   4. \(\gamma>\alpha\) iff \(\gamma^{\prime}>\beta\), in which case \(x_{\gamma}\cap\alpha=x_{\gamma^{\prime}}\cap\beta\subseteq\xi\).
2. For each \((x_{\gamma},y_{\eta})\in\operatorname{dom}(q_{\alpha})\), denoting \(j((x_{\gamma},y_{\eta}))\in\operatorname{dom}(q_{\beta})\) by \((x_{\gamma^{\prime}},y_{\eta^{\prime}})\), we have the same type of isomorphism properties:
   1. \(\gamma<\alpha\) iff \(\gamma^{\prime}<\beta\), in which case \(\gamma=\gamma^{\prime}<\xi\),
   2. \(\eta<\alpha\) iff \(\eta^{\prime}<\beta\), in which case \(\eta=\eta^{\prime}<\xi\),
   3. \(\gamma=\alpha\) iff \(\gamma^{\prime}=\beta\), and \(\eta=\alpha\) iff \(\eta^{\prime}=\beta\),
   4. \(\gamma>\alpha\) iff \(\gamma^{\prime}>\beta\), in which case \(x_{\gamma}\cap\alpha=x_{\gamma^{\prime}}\cap\beta\subseteq\xi\),
   5. \(\eta>\alpha\) iff \(\eta^{\prime}>\beta\), in which case \(y_{\eta}\cap\alpha=y_{\eta^{\prime}}\cap\beta\subseteq\xi\).

And again, as before, for each \(\alpha\in C\), we denote by \(\alpha^{+}\) the minimum of \(B\) above \(\alpha\), so that

1. if \(\alpha^{+}\in S\) then \(x_{\alpha^{+}}\cap\alpha=G\),
2. if \(\alpha^{+}\in T\) then \(y_{\alpha^{+}}\cap\alpha=J\).

And now we can conclude that the family \(\{(p_{\alpha^{+}},q_{\alpha^{+}}):\alpha\in C\}\) is centered.

Finally, to prove that \(X_{L_{1}}\times X_{L_{2}}\) is hereditarily normal, fix subsets \(A\) and \(B\) of \(X_{L_{1}}\times X_{L_{2}}\) such that \(\mathit{cl}\,A\cap B=A\cap\mathit{cl}\,B=\emptyset\). By Lemma 5.8 we may fix disjoint \(U_{2}\) and \(V_{2}\) with \(A\cap(L_{1}\times L_{2})\subseteq U_{2}\) and \(B\cap(L_{1}\times L_{2})\subseteq V_{2}\).
Next, by Lemma 5.9 fix disjoint open sets \(W_{1}\) and \(W_{2}\) such that \(L_{1}\times\omega_{1}\subseteq W_{1}\) and \(\omega_{1}\times L_{2}\subseteq W_{2}\). Since ladder system spaces \(X_{L}\) are normal under \(MA(\omega_{1})\), so are the subspaces \(X_{L_{1}}\times\omega_{1}\) and \(\omega_{1}\times X_{L_{2}}\), being free sums of \(\omega_{1}\) copies of normal spaces. So we may find disjoint open sets \(U_{1}^{1}\subseteq W_{1}\) and \(V_{1}^{1}\subseteq W_{1}\) such that \(A\cap(X_{L_{1}}\times\omega_{1})\subseteq U_{1}^{1}\) and \(B\cap(X_{L_{1}}\times\omega_{1})\subseteq V_{1}^{1}\). By Lemma 5.10 we can shrink \(V_{2}\) and \(U_{2}\) and also assume that \(U_{1}^{1}\cap V_{2}=V_{1}^{1}\cap U_{2}=\emptyset\). By the same argument, we can also find \(U_{1}^{2}\) and \(V_{1}^{2}\) separating \(A\cap(\omega_{1}\times X_{L_{2}})\) and \(B\cap(\omega_{1}\times X_{L_{2}})\), and may also assume that \(U_{1}^{2}\cap V_{2}=V_{1}^{2}\cap U_{2}=\emptyset\). So then we have that \(U=U_{2}\cup U_{1}^{1}\cup U_{1}^{2}\cup(A\cap(\omega_{1}\times\omega_{1}))\) is an open set containing \(A\), \(V=V_{2}\cup V_{1}^{1}\cup V_{1}^{2}\cup(B\cap(\omega_{1}\times\omega_{1}))\) is an open set containing \(B\), and \(U\cap V=\emptyset\). This completes the proof that \(X_{L_{1}}\times X_{L_{2}}\) is hereditarily normal.

In our last problem concerning ladder system spaces, we consider ladder systems over a set of ordinals \(D\subseteq\kappa\) where \(\kappa>\mathfrak{c}\). It was shown in [4] that it is consistent with CH that all ladder systems \(L\) on any subset of \(\omega_{1}\) have \(X_{L}\) countably paracompact, and so it is consistent that all ladder systems on \(\mathfrak{c}\) determine \(\Delta\)-spaces. However, we don't know whether assuming only ZFC one can construct a ladder system on some \(\kappa\) giving a ladder system space that is not in the class \(\Delta\).

**Problem 5.11**.: _Does there exist in ZFC a ladder system \(L\) on some \(\kappa\) whose corresponding space \(X_{L}\) is not countably metacompact, hence \(X_{L}\notin\Delta\)?_

In other words, we ask for a ZFC example of a ladder system \(L\) on some uncountable cardinal that fails to have a relatively weak uniformization property (i.e. the property \(CM(L)\) as formulated in [4]).

## 6. Some examples of \(X\notin\Delta\)

It follows from Theorem 4.2 that separable compact spaces \(X\in\Delta\) with \(|X|=\mathfrak{c}\) do exist in ZFC. On the other hand, no \(\Delta\)-set of reals can have cardinality \(\mathfrak{c}\). Below we will extend the last claim to much more general classes of topological spaces. Denote by \(o(X)\) the cardinality of the family of all open sets in \(X\).

**Theorem 6.1**.: _If \(o(X)^{\aleph_{0}}\leq|X|\), then \(X\notin\Delta\)._

Proof.: Our proof is based on the argument which appears (implicitly) in [38]. Denote the cardinal \(o(X)^{\aleph_{0}}\) by \(\lambda\). Enumerate \(X=\{x_{\alpha}:\alpha<\tau\}\). On the contrary, assume that \(X\in\Delta\). Enumerate by \(\{\{U_{n}^{\alpha}:n\in\omega\}:\alpha<\lambda\}\) all countable sequences of open subsets of \(X\) with empty intersection. By assumption, \(\lambda\leq\tau\). For every \(\alpha<\lambda\) choose an \(n(\alpha)\in\omega\) such that \(x_{\alpha}\notin U_{n(\alpha)}^{\alpha}\). Define \(A_{n}=\{x_{\alpha}:n(\alpha)\geq n\}\). Clearly, \(\bigcap_{n\in\omega}A_{n}=\emptyset\).
Since \(X\in\Delta\), the decreasing sequence \(\{A_{n}:n\in\omega\}\) admits an open expansion with empty intersection, and this expansion appears in our enumeration as \(\{U_{n}^{\alpha}:n\in\omega\}\) for some \(\alpha<\lambda\), i.e., \(A_{n}\subset U_{n}^{\alpha}\) for each \(n\in\omega\). But then we would have \(x_{\alpha}\in A_{n(\alpha)}\subseteq U_{n(\alpha)}^{\alpha}\), which is a contradiction. This means that \(X\notin\Delta\).

The next Proposition 6.2 strengthens notably several results obtained in [13].

**Proposition 6.2**.: (a) _Let \(X\) be a hereditarily separable space. If \(|X|=\mathfrak{c}\), then \(X\notin\Delta\)._ (b) _Let \(X\) be a separable hereditarily Lindelof space. If \(|X|=\mathfrak{c}\), then \(X\notin\Delta\)._

Proof.: (a) For any \(X\), the inequality \(o(X)\leq|X|^{hd(X)}\) holds [21]. Since \(hd(X)=\aleph_{0}\) we get that \(o(X)^{\aleph_{0}}\leq\mathfrak{c}^{\aleph_{0}\cdot\aleph_{0}}=\mathfrak{c}=|X|\), and Theorem 6.1 applies.

(b) For any (regular) \(X\), the inequalities \(w(X)\leq 2^{d(X)}\) and \(o(X)\leq w(X)^{hL(X)}\) hold [21], so again \(o(X)^{\aleph_{0}}\leq\mathfrak{c}^{\aleph_{0}\cdot\aleph_{0}}=\mathfrak{c}=|X|\), and Theorem 6.1 applies.

Let \(\mathbb{S}\) denote the Sorgenfrey line. It has been shown in [13] that \(\mathbb{S}\notin\Delta\). Note that the proof presented in [13] is valid for any subset of \(\mathbb{S}\) containing a segment, but it does not work for more complicated subspaces of \(\mathbb{S}\). Proposition 6.2(a) immediately implies

**Corollary 6.3**.: _Let \(X\) be any subspace of the Sorgenfrey line \(\mathbb{S}\) with \(|X|=\mathfrak{c}\). Then \(X\notin\Delta\)._

**Remark 6.4**.: It is not clear whether the assumption of separability can be omitted in Proposition 6.2(b). Assume that \(X\) is a hereditarily Lindelof \(\Delta\)-space with \(|X|=\mathfrak{c}\). Then \(X\) cannot be separable by Proposition 6.2(b). So, \(X\) would be an \(L\)-space which is also a \(\Delta\)-space. A highly nontrivial example of an \(L\)-space in ZFC was constructed by J. T. Moore [36]. Recently it was shown [33] that Moore's \(L\)-space is not a \(Q\)-set space and, under the assumption that all Aronszajn trees are special, Moore's \(L\)-space is not a \(\Delta\)-space; but we don't know whether his construction can give an \(L\)-space which is also a \(\Delta\)-space. Every hereditarily Lindelof scattered space contains a countable dense set of isolated points, hence an \(L\)-space cannot be homeomorphically embedded into a compact \(\Delta\)-space.

As mentioned in the Introduction, Z. Balogh [3] constructed a hereditarily paracompact, perfectly normal \(Q\)-set space \(X\) such that \(|X|=\mathfrak{c}\). Dennis Burke informed the authors that there are handwritten notes by Z. Balogh where he started to outline a strategy for constructing a Lindelof \(Q\)-set space of cardinality continuum. Such a space would be evidently a hereditarily Lindelof \(\Delta\)-space. However, according to Dennis Burke, these notes are incomplete and seem to leave things hanging. So, the following apparently very challenging problem remains open.

**Problem 6.5**.: _Does there exist in ZFC an uncountable hereditarily Lindelof \(\Delta\)-space? or even an uncountable hereditarily Lindelof \(Q\)-set space?_

Recall that a space is _resolvable_ (\(\omega\)_-resolvable_) if it can be partitioned into \(2\) (countably many) dense subsets.

**Proposition 6.6**.: _If \(X\) is Baire and \(\omega\)-resolvable, then it is not a \(\Delta\)-space._

Proof.: Fix a partition of \(X\) into countably many dense sets \(D_{n}\).
If \(U_{n}\supseteq D_{n}\) are open sets, then each \(U_{n}\) is dense open, and since \(X\) is Baire, \(\bigcap_{n\in\omega}U_{n}\neq\emptyset\); any point of this intersection belongs to every \(U_{n}\), so \(\{D_{n}:n\in\omega\}\) has no point-finite open expansion.

Since it is well known that a Souslin line is Baire and \(\omega\)-resolvable, we obtain

**Corollary 6.7**.: _A Souslin line is not a \(\Delta\)-space._

More generally, any Lindelof regular space with all open sets uncountable is resolvable [14]. This result was improved in [23] to \(\omega\)-resolvable, and since Baire spaces without isolated points have all open subsets uncountable, we obtain

**Corollary 6.8**.: _A Lindelof Baire space without isolated points is not a \(\Delta\)-space._

It follows, for example, that the classical \(L\)-spaces constructed from CH (see e.g. [41]) are not \(\Delta\)-spaces. Note that Corollary 6.8 provides a strengthening of a result mentioned in the Introduction: compact \(\Delta\)-spaces must be scattered [25]. Whether there are more interesting examples of Baire spaces that are either \(\Delta\)-spaces or \(Q\)-set spaces is open, so we ask

**Problem 6.9**.: _Does every Baire \(\Delta\)-space have an isolated point? Are there uncountable Baire \(Q\)-set spaces?_

Corollary 6.8 also suggests a measure-theoretic analogue:

**Problem 6.10**.: _If \(X\) admits a strictly positive \(\sigma\)-additive measure vanishing on points, is it then true that \(X\not\in\Delta\)?_

As mentioned in the Introduction, \(\omega_{1}\) is an example of a first countable, locally compact, scattered space that is not a \(\Delta\)-space. We now characterize those subspaces of \(\omega_{1}\) which are \(\Delta\)-spaces (a fact that will be useful for what follows).

**Theorem 6.11**.: _For a subset \(X\subseteq\omega_{1}\), the following conditions are equivalent:_

1. \(X\) _is a nonstationary set._
2. \(X\) _is a_ \(Q\)_-space._
3. \(X\) _is a_ \(\Delta\)_-space._

Proof.: If \(X\subset\omega_{1}\) is nonstationary, then since \(\omega_{1}\setminus X\) contains a closed unbounded set, \(X\) is the free topological union of pairwise disjoint countable open sets. It is easily seen that any countable subspace of \(\omega_{1}\) is a \(Q\)-space and any free union of \(Q\)-spaces is a \(Q\)-space. This proves (1) \(\Rightarrow\) (2). (2) \(\Rightarrow\) (3) has been mentioned before.

It remains to prove (3) \(\Rightarrow\) (1). Assume \(X\) is stationary. Let \(L=\{\alpha\in\omega_{1}:\alpha\text{ is a limit point of }X\}\). Then \(X^{\prime}=X\cap L\) is also a stationary set. Represent \(X^{\prime}\) as a union of countably many pairwise disjoint stationary sets \(\{X_{n}:n\in\omega\}\). As an easy application of Fodor's Lemma, the family \(\{X_{n}:n\in\omega\}\) has no point-finite open expansion. Indeed, if \(Y\subseteq X^{\prime}\) is stationary and \(Y\subseteq U\), where \(U\) is open in \(X\), then \((\beta,\omega_{1})\cap X\subseteq U\) for some \(\beta\). Therefore, if \(U_{n}\) are open and \(X_{n}\subseteq U_{n}\) for each \(n\in\omega\), then there is an ordinal \(\gamma\in\omega_{1}\) such that \(X\setminus\gamma\subseteq\bigcap_{n\in\omega}U_{n}\), so the expansion \(\{U_{n}:n\in\omega\}\) is not point-finite. It follows that \(X\not\in\Delta\).

Now we give an example of a Lindelof \(P\)-space answering negatively the following question posed in [12, Problem 24]: if \(X\) is a \(P\)-space, must \(C_{p}(X)\) be distinguished?

**Example 6.12**.: The \(G_{\delta}\)-modification \(W_{\delta}\) of a topological space \(W\) is the space on the same underlying set generated by the family of all \(G_{\delta}\)-sets of \(W\).
It is known that if a compact space \(W\) is scattered then \(W_{\delta}\) is Lindelof [34]. For instance, if \(W\) is the compact scattered ordinal space \([0,\omega_{1}]\), then \(X=W_{\delta}\) is the one-point Lindelofication of a discrete set.

Let \(W\) be the compact scattered ordinal space \([0,\omega_{2}]\). Then \(X=W_{\delta}\) is a Lindelof \(P\)-space such that each ordinal \(\alpha\) with countable cofinality is isolated in \(X\), and each ordinal \(\alpha\) with uncountable cofinality is a limit point in \(X\). Note that the set of all ordinals in \(\omega_{2}\) with cofinality \(\omega_{1}\) is stationary. Therefore, repeating the argument of Theorem 6.11 above, again by Fodor's lemma, we show that \(W_{\delta}\notin\Delta\).

**Problem 6.13**.: _Find a characterization of those scattered compact spaces \(W\) such that \(W_{\delta}\in\Delta\)._

We end the paper with some observations about the relationship between \(\Delta\)-spaces and \(Q\)-set spaces. Recall that Balogh's definition of a \(Q\)-set space requires the space to be not \(\sigma\)-discrete. He was interested in the existence of non-trivial examples of spaces where every subset is \(G_{\delta}\). We have described in the paper a number of compact \(\Delta\)-spaces in ZFC that contain subsets which are not \(G_{\delta}\) (e.g., Theorem 4.2 and Example 5.2), but all of them are \(\sigma\)-discrete and have at least one point that is not a \(G_{\delta}\). In [6] J. Casas de la Rosa observed that the Alexandroff duplicate \(X\) of a \(Q\)-set space is a \(\Delta\)-space which is not \(\sigma\)-discrete and in which all singletons \(\{x\}\) are \(G_{\delta}\)-sets. But this space \(X\) does include a closed non-\(G_{\delta}\) subset. So we ask

**Problem 6.14**.: _Is there in ZFC a \(\Delta\)-space which is not \(\sigma\)-discrete, has all closed sets \(G_{\delta}\), but is not a \(Q\)-set space?_

Perhaps a modification of the Balogh-Rudin technique that Balogh used to construct ZFC examples of \(Q\)-set spaces and Dowker spaces could be used to give a positive answer. S. Shelah [44] showed that consistently there is a normal ladder system space \(X_{L}\) (hence \(X_{L}\) is a \(\Delta\)-space) such that the closed discrete set of non-isolated points is not \(G_{\delta}\). On the other hand, under PMEA, in a first-countable, countably metacompact \(T_{1}\) space every closed discrete subset is a \(G_{\delta}\)-set [5]. Moreover, as we have mentioned in Remark 5.1, \(MA(\omega_{1})\) implies that every subset of a ladder system space \(X_{L}\) is \(G_{\delta}\).

For a subclass of \(\Delta\)-spaces which includes all Tychonoff spaces embeddable into a scattered Eberlein compact space, we have a positive result in ZFC. A family \(\{\mathcal{N}_{x}:x\in X\}\) of subsets of a topological space \(X\) is called a _point-finite neighborhood assignment_ if each \(\mathcal{N}_{x}\) is an open neighborhood of \(x\) and for each \(u\in X\) the set \(\{x\in X:u\in\mathcal{N}_{x}\}\) is finite. It is proved in [13, Theorem 46] that if \(X\) admits a point-finite neighborhood assignment \(\{\mathcal{N}_{x}:x\in X\}\) then \(X\in\Delta\). Also, a compact space \(X\) admits a point-finite neighborhood assignment if and only if \(X\) is a scattered Eberlein compact space [13, Theorem 51].

**Theorem 6.15**.: _Assume that a topological space \(X\) admits a point-finite neighborhood assignment.
If \(F\) is any subset of \(X\) consisting of points which are \(G_{\delta}\) in \(X\), then \(F\) is a \(G_{\delta}\)-set in \(X\)._

Proof.: Let \(\{\mathcal{N}_{x}:x\in X\}\) be a point-finite neighborhood assignment in \(X\). For every point \(x\in F\) fix a sequence of open sets \(\{U_{n}(x):n\in\omega\}\) such that

\[U_{0}(x)\subseteq\mathcal{N}_{x},\quad U_{n+1}(x)\subseteq U_{n}(x)\quad\text{and}\quad\bigcap\{U_{n}(x):n\in\omega\}=\{x\}.\]

Define open sets \(V_{n}=\bigcup_{x\in F}U_{n}(x)\) for every \(n\in\omega\). We claim that \(\bigcap_{n\in\omega}V_{n}=F\). Indeed, let \(y\notin F\) be any point. There are at most finitely many points \(x_{1},x_{2},\ldots,x_{k}\) in \(F\) such that \(y\in\mathcal{N}_{x_{i}}\); for every other \(x\in F\) we have \(y\notin\mathcal{N}_{x}\supseteq U_{n}(x)\) for all \(n\). Since \(\bigcap_{n\in\omega}U_{n}(x_{i})=\{x_{i}\}\) for each \(i=1,2,\ldots,k\), there is an index \(n_{0}\) large enough that \(y\notin U_{n}(x_{i})\) for every \(n>n_{0}\) and every \(i\). Finally, \(y\notin V_{n}\) for every \(n>n_{0}\).

Since, as we have mentioned before, every scattered Eberlein compact space admits a point-finite neighborhood assignment, we immediately derive

**Corollary 6.16**.: _Let \(F\) be any subspace of a scattered Eberlein compact space \(X\) consisting of points which are \(G_{\delta}\) in \(X\). Then \(F\) is a \(Q\)-space._

Note that Theorem 6.15 generalizes [29, Theorem 1].

**Acknowledgements.** The authors are grateful to the referee for careful reading of the paper and valuable suggestions and comments.
2305.17520
USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called USIM-DAL that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed USIM-DAL and superiority over several dense regression active learning methods.
Vikrant Rangnekar, Uddeshya Upadhyay, Zeynep Akata, Biplab Banerjee
2023-05-27T16:33:43Z
http://arxiv.org/abs/2305.17520v1
USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution

###### Abstract

Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called _USIM-DAL_ that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed _USIM-DAL_ and superiority over several dense regression active learning methods.

## 1 Introduction

The paradigm of dense prediction is very important in computer vision, given that pixel-level regression tasks like super-resolution, restoration, depth estimation, etc., help in holistic scene understanding. A common example of a pixel-level (i.e., dense) regression task is _image super-resolution_ (SR), the process of recovering high-resolution (HR) images from their low-resolution (LR) versions. It is an important class of techniques in computer vision and image processing and offers a wide range of real-world applications, such as medical imaging [11], satellite imaging [23], surveillance [14] and security [15], and remote sensing [26], to name a few. The well-performing techniques for super-resolution often rely on deep learning-based methods that are trained in a supervised fashion, requiring high-resolution data as ground truth. However, the acquisition of high-resolution imaging data (to serve as labels) may be infeasible for many real-world applications. Consider the example of histopathology microscopy from medical imaging, where a typical digital microscope takes significantly longer to acquire a high-resolution (i.e., high-magnification) scan of a slide than a low-magnification one [1, 1]. Moreover, the acquired high-resolution scans also have a significantly larger memory footprint, leading to an increase in required storage resources [1]. Similarly, acquiring high spatial resolution images from satellites for remote sensing requires expensive sensors and hardware and has significantly higher operating costs [16, 20]. In such scenarios, generating a large volume of training samples is infeasible. As a remedy, concepts like zero-shot SR or single-image SR have been proposed. Nevertheless, zero-shot SR still requires ample supervision from the test image patches [22] to learn a transferrable model for novel scenarios with divergent distributions [27], and the performance of single-image SR models is still affected by the lack of sufficient labeled data [10].
Notwithstanding these approaches, there are situations where training samples may only be labeled within a pre-defined budget. For example, in histopathology microscopy, the constraint on available resources may allow high-resolution acquisition for only a limited number of patients/microscopy slides. One of the viable solutions in this regard is to select a subset of highly representative training samples from the available training set while respecting the budget, and to deploy them to train the SR model. This corresponds to the notion of active learning for subset selection. However, selecting the subset is challenging, as we need a quantitative measure of the eligibility of a given LR-HR training pair for selection. Many works have explored different _query functions_ to select a subset to label from a larger dataset [1, 11, 12]. However, most of them have been applied to classification or low-dimensional regression problems [13], and there still exists a gap in how to address this for dense regression tasks (e.g., super-resolution). The active learning strategy of labeling those points for which the current model is least certain has been studied well in the context of classification [14]. While there are recent advances in uncertainty estimation using neural networks for dense regression [15, 16], it is yet to be studied whether they can be leveraged in active learning for dense regression.

In summary, our contributions are as follows: (i) We show how statistical image models can help alleviate the need for a large volume of high-resolution imaging data. (ii) We show that probabilistic deep networks, along with statistical image models, can be used to learn informative priors about niche-domain datasets that may allow only limited access to high-resolution data. (iii) Our probabilistic deep network trained with statistical image models allows us to estimate the uncertainty for samples in a niche domain, which can be leveraged for active learning as illustrated in Figure 1.

## 2 Related Work

Active Learning. These are a set of techniques that involve selecting a minimal data subset to be annotated, representing the entire dataset, and providing maximum performance gains. Querying strategies for active learning can be broadly divided into three categories: heterogeneity-based, performance-based, and representativeness-based models. Uncertainty sampling [1, 11, 12, 13], a type of heterogeneity-based model, is a standard active learning strategy where the learner aims to label those samples which have the most uncertain labelings. Non-Bayesian approaches [1, 11] dealing with entropy, distance from the decision boundary, etc., also exist but are not scalable for deep learning [10]. Representation-based methods that aim at increasing the diversity in a batch [13] have also been studied. However, most of these works have been studied in the context of classification or low-dimensional regression problems, and the literature on dense regression is still sparse.

Statistical Image Models. The \(n\times n\) RGB images occupy the space \(\mathbb{R}^{3n^{2}}\). However, structured images occupy only a small region of that space. The statistical properties of the samples in this small structured space can be leveraged to generate synthetic data that have statistics similar to those of real-world structured images.
For instance, the observation that natural images follow a power law with respect to the magnitude of their Fourier Transform (FT) formed the basis for Wiener image denoising [15], Dead Leaves models [14], and fractals as image models [11, 12]. Similarly, works like [11, 13, 14] showed that the responses of zero-mean wavelets to natural images are sparse and follow a generalized Laplacian distribution. Works like [14, 15] showed statistical models capable of producing realistic-looking textures. The recent work [1] takes this research a step closer to realistic image generation by learning from procedural noise processes and using the generated samples for pre-training neural networks. However, it is applied only to classification. Figure 1: The proposed framework _USIM-DAL._ (Left-to-right) We train a probabilistic deep network for a dense regression task (e.g., super-resolution) on synthetic samples obtained from statistical image models as described in Section 3. The pre-trained model is used to identify the high-uncertainty samples from the domain-specific unlabeled set. Top-K highly uncertain samples are chosen for labeling, on which the pre-trained network is further fine-tuned. Super-resolution. This consists of CNN-based methods to enhance the resolution of the image [1, 22, 23, 24]. The attention mechanism has proven to be ubiquitous, with [25] introducing channel and spatial attention modules for adaptive feature refinement. Transformer-based endeavors such as [10] achieve state-of-the-art results using multi-head self-attention for SR. [11] uses a probabilistic diffusion model and performs SR through an iterative denoising process. Works like [23, 24] use internal and external recurrence of information to get superior SR performance during inference. However, these works do not consider the problem of super-resolution in the active learning context, leaving a gap in the literature. Uncertainty Estimation. Quantifying uncertainty in machine learning models is crucial for safety-critical applications [22, 25, 26, 1]. Uncertainty can be broadly categorized into two classes: (i) Epistemic uncertainty (i.e., uncertainty in model weights [1, 19, 18, 17]). (ii) Aleatoric uncertainty (i.e., noise inherent in the observations) [10, 23]. The dense predictive uncertainty may be considered a proxy for error and can be used for active learning purposes [11]. ## 3 Method We first formulate the problem in Section 3.1 and present preliminaries on active learning, statistical image models, and uncertainty estimation in Section 3.2. In Section 3.3, we describe the construction of _USIM-DAL_, which learns a prior via statistical image modeling that is later used to select the most informative samples from the unlabeled set for labeling and further improving the model. ### Problem Formulation Let \(\mathcal{D}_{U}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) be the unlabeled set of input images from domain \(\mathbf{X}\) (i.e., \(\mathbf{x}_{i}\in\mathbf{X}\ \forall i\)). We consider the task where images (\(\mathbf{x}\)) are to be mapped to another set of dense continuous labels (\(\mathbf{y}\), e.g., other images, such that \(\mathbf{y}_{i}\in\mathbf{Y}\ \forall i\)). We want to learn a mapping \(\mathbf{\Psi}\) for the same, i.e., \(\mathbf{\Psi}:\mathbf{X}\rightarrow\mathbf{Y}\). 
However, we want to learn it under the constraint that we do not have a sufficient _budget_ to "label" all the \(N\) samples in \(\mathcal{D}_{U}\) (i.e., acquire all the corresponding \(\mathbf{y}\)), but we do have a budget to label a significantly smaller subset of \(\mathcal{D}_{U}\) with \(K\ll N\) samples, say \(\mathcal{D}_{U}^{K}\). This is a real-world constraint, as discussed in Section 2. In this work, we focus on the problem of super-resolution, where the domain \(\mathbf{Y}\) consists of high-resolution images (corresponding to the low-resolution images in domain \(\mathbf{X}\)). We tackle the problem of choosing the set of \(K\ll N\) samples (\(\mathcal{D}_{U}^{K}\)) that are highly representative of the entire unlabeled training set \(\mathcal{D}_{U}\), such that the learned mapping \(\mathbf{\Psi}\) performs well on unseen data from a similar domain. ### Preliminaries Active Learning. As discussed above, given a set of \(N\) unlabeled images \(\mathcal{D}_{U}\), we want to choose a set of \(K\ll N\) samples (\(\mathcal{D}_{U}^{K}\)) that are highly representative of the entire unlabeled training set \(\mathcal{D}_{U}\). This is the problem of active learning, which consists of _query strategies_ that map the entire unlabeled set \(\mathcal{D}_{U}\) to its subset. That is, the query strategy (constrained to choose \(K\) samples and parameterized by \(\phi\)) is given by \(\mathcal{Q}_{K,\phi}:\mathcal{D}_{U}\rightarrow\mathcal{D}_{U}^{K}\). Many works explore designing the query strategy \(\mathcal{Q}_{K,\phi}\) [1, 19, 17]. However, they seldom attempt to design such a strategy for dense regression. Statistical Image Models (SIM). As discussed in [1], the statistical properties of RGB images can be exploited to generate synthetic images that can serve as an excellent pre-training learning signal. The generative model (based on statistical properties of RGB images) is described as \(\mathcal{G}(\cdot;\theta_{G}):\mathbf{z}\rightarrow\mathbf{x}\), where \(\mathbf{z}\) is a stochastic latent variable and \(\mathbf{x}\) is an image. The image generation is modelled as a hierarchical process in which, first, the parameters of a model are sampled. Then the image is sampled given these parameters and stochastic noise. Previous works [1] highlight the following statistical models. (i) **Spectrum:** based on the magnitude of the Fourier transform (FT). The FT of many natural images follows a power law, i.e., \(\frac{1}{|f|^{\alpha}}\), where \(|f|\) is the magnitude of frequency \(f\), and \(\alpha\) is a constant close to 1. For generative models, the sampled images are constrained to be random noise images whose FT magnitude follows \(\frac{1}{|f_{x}|^{a}+|f_{y}|^{b}}\), with \(a\) and \(b\) being two random numbers uniformly sampled as detailed in [1]. (ii) **Wavelet-marginal model (WMM):** generates textures by modeling their histograms of wavelet coefficients, as discussed in [14, 18]. (iii) **Color histograms:** as discussed in [1], this generative model follows the color distribution of the dead-leaves model [1]. Combining all these different models allows for capturing colour distributions, spectral components, and wavelet distributions that mimic those typical of natural images. Figure 2 shows examples of generated samples from such models. Figure 2: Samples generated from Statistical Image Models (combination of Spectrum + WMM + Color histogram). 
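To make the spectrum model concrete, the following is a minimal NumPy sketch of such a sampler; the exponent range, the per-channel independence, and the min-max normalization are illustrative assumptions rather than the exact prior used in [1].

```python
import numpy as np

def sample_spectrum_image(size=64, rng=None):
    """Sample a noise image whose FT magnitude follows 1/(|f_x|^a + |f_y|^b)."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = rng.uniform(0.5, 3.5, size=2)           # assumed exponent prior
    fx = np.fft.fftfreq(size)[None, :]             # horizontal frequencies
    fy = np.fft.fftfreq(size)[:, None]             # vertical frequencies
    denom = np.abs(fx) ** a + np.abs(fy) ** b
    denom[0, 0] = np.inf                           # drop the DC singularity
    mag = 1.0 / denom
    channels = []
    for _ in range(3):                             # independent RGB channels
        phase = rng.uniform(0.0, 2 * np.pi, (size, size))
        img = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)
        channels.append(img)
    return np.stack(channels, axis=-1)             # (size, size, 3) in [0, 1]
```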
Uncertainty Estimation. Various works [12, 13] have proposed different methods to model the uncertainty estimates in the predictions made by DNNs for different tasks. Interestingly, recent works [13, 14] have shown that for many real-world vision applications, modeling the aleatoric uncertainty allows for capturing erroneous predictions that may happen with out-of-distribution samples. To estimate the uncertainty for regression tasks using a deep network (say \(\mathbf{\Psi}(\cdot;\zeta):\mathbf{X}\rightarrow\mathbf{Y}\)), the model must capture the output distribution \(\mathcal{P}_{Y|X}\). This is often done by approximating \(\mathcal{P}_{Y|X}\) with a parametric distribution and learning the parameters of the said distribution using the deep network, which are then used to maximize the likelihood function. That is, for an input \(\mathbf{x}_{i}\), the model produces a set of parameters representing the output, given by \(\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\}:=\mathbf{\Psi}(\mathbf{x}_{i};\zeta)\), that characterizes the distribution \(\mathcal{P}_{Y|X}(\mathbf{y};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\), such that \(\mathbf{y}_{i}\sim\mathcal{P}_{Y|X}(\mathbf{y};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\). The likelihood \(\mathcal{L}(\zeta;\mathcal{D}):=\prod_{i=1}^{N}\mathcal{P}_{Y|X}(\mathbf{y}_{i};\{\hat{\mathbf{y}}_{i},\hat{\nu}_{i}\dots\hat{\rho}_{i}\})\) is then maximized to estimate the optimal parameters of the network. Typically, the parameterized distribution is chosen to be a _heteroscedastic_ Gaussian distribution, in which case \(\mathbf{\Psi}(\cdot;\zeta)\) is designed to predict the _mean_ and _variance_ of the Gaussian distribution, i.e., \(\{\hat{\mathbf{y}}_{i},\hat{\sigma}_{i}^{2}\}:=\mathbf{\Psi}(\mathbf{x}_{i};\zeta)\). The optimization problem becomes \[\zeta^{*}=\underset{\zeta}{\text{argmin}}\sum_{i=1}^{N}\frac{|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}}{2\hat{\sigma}_{i}^{2}}+\frac{\log(\hat{\sigma}_{i}^{2})}{2}\,, \tag{1}\] with \(\text{Uncertainty}(\hat{\mathbf{y}}_{i})=\hat{\sigma}_{i}^{2}\). An important observation from Equation 1 is that, ignoring the dependence through \(\zeta\), the solution to Equation 1 decouples the estimation of \(\hat{\mathbf{y}}_{i}\) and \(\hat{\sigma}_{i}\). That is, for minimizing with respect to \(\hat{\mathbf{y}}_{i}\) we need \[\frac{\partial\left(\sum_{i=1}^{N}\frac{|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}}{2\hat{\sigma}_{i}^{2}}+\frac{\log(\hat{\sigma}_{i}^{2})}{2}\right)}{\partial\hat{\mathbf{y}}_{i}}=0 \tag{2}\] \[\frac{\partial^{2}\left(\sum_{i=1}^{N}\frac{|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}}{2\hat{\sigma}_{i}^{2}}+\frac{\log(\hat{\sigma}_{i}^{2})}{2}\right)}{\partial\hat{\mathbf{y}}_{i}^{2}}>0 \tag{3}\] Equations 2 & 3 lead to \(\hat{\mathbf{y}}_{i}=\mathbf{y}_{i}\ \forall i\). Similarly, for minimizing with respect to \(\hat{\sigma}_{i}\) we need \[\frac{\partial\left(\sum_{i=1}^{N}\frac{|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}}{2\hat{\sigma}_{i}^{2}}+\frac{\log(\hat{\sigma}_{i}^{2})}{2}\right)}{\partial\hat{\sigma}_{i}}=0 \tag{4}\] \[\frac{\partial^{2}\left(\sum_{i=1}^{N}\frac{|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}}{2\hat{\sigma}_{i}^{2}}+\frac{\log(\hat{\sigma}_{i}^{2})}{2}\right)}{\partial\hat{\sigma}_{i}^{2}}>0 \tag{5}\] Equations 4 & 5 lead to \(\hat{\sigma}_{i}^{2}=|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}|^{2}\ \forall i\). That is, the estimate \(\hat{\sigma}_{i}^{2}\) should perfectly reflect the squared error. 
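In practice, Equation 1 is implemented as a per-pixel Gaussian negative log-likelihood. Below is a minimal PyTorch sketch; predicting the log-variance instead of \(\hat{\sigma}_{i}^{2}\) directly is an assumed implementation choice made for numerical stability, not necessarily the exact parameterization used here.

```python
import torch

def heteroscedastic_nll(y_hat, log_var, y):
    """Per-pixel Gaussian NLL of Equation 1: |y_hat - y|^2 / (2 sigma^2)
    + log(sigma^2) / 2, with log_var = log(sigma^2) predicted by the net."""
    return 0.5 * (((y_hat - y) ** 2) * torch.exp(-log_var) + log_var).mean()

# At the optimum, exp(log_var) tracks the squared error, so it serves as
# the per-pixel uncertainty map consumed by the active learner.
```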
Therefore, a higher \(\hat{\sigma}_{i}^{2}\) indicates higher error. We leverage this observation to design our dense active learning framework as described in Section 3.3. ### Constructing _USIM-DAL_ To tackle the problem mentioned in Section 3.1 (i.e., choosing a small subset), we leverage the fact that even before training the model with the labelled set, we can train a model based on samples obtained from statistical image models as described above, which can then be used to run inference on the unlabeled domain-specific dataset and identify the high-uncertainty samples. The high-uncertainty samples can then be labelled and used to fine-tune the model. We constrain the generative process for statistical image models as follows. Similar to [1], we treat image generation as a hierarchical process in which first the parameters of a model, \(\theta_{G}\), are sampled. Then the image is sampled given these parameters and stochastic noise, i.e., \[\theta_{G}\sim prior(\theta_{G})\text{ and }\mathbf{z}\sim prior(\mathbf{z}) \tag{6}\] \[\mathbf{x}=\mathcal{G}(\mathbf{z};\theta_{G}) \tag{7}\] In particular, for super-resolution, we create a large (synthetic) labelled dataset using the samples from the statistical image models, say \(\mathcal{D}_{SL}=\{(\texttt{low}(\mathbf{x}_{s,i}),\mathbf{x}_{s,i})\}_{i=1}^{M}\), where \(\mathbf{x}_{s,i}\) are samples generated from the statistical image model and \(\texttt{low}(\cdot)\) is the 4\(\times\) down-sampling operation. We then train the network \(\mathbf{\Psi}(\cdot;\zeta)\) on \(\mathcal{D}_{SL}\) using Equation 1, leading to the optimal parameter \(\zeta_{SL}^{*}\), as shown in Figure 1. The trained model \(\mathbf{\Psi}(\cdot;\zeta_{SL}^{*})\) is then run in inference mode on all the samples of the unlabeled set \(\mathcal{D}_{U}\) to gather the most uncertain samples for labeling, that is, \[\{\hat{\mathbf{y}}_{i},\hat{\sigma}_{i}\}:=\mathbf{\Psi}(\mathbf{x}_{i};\zeta_{SL}^{*})\ \forall\mathbf{x}_{i}\in\mathcal{D}_{U} \tag{8}\] \[\mathcal{D}_{U}^{K}:=\{\mathbf{x}_{j}\}\ \forall j\in\texttt{topK}\left(\{\langle\hat{\sigma}_{i}\rangle\}_{i=1}^{N}\right) \tag{9}\] where \(\langle\cdot\rangle\) represents the mean operation, and \(\texttt{topK}\big{(}\{\langle\hat{\sigma}_{i}\rangle\}_{i=1}^{N}\big{)}\) returns the indices of the "top-K" most uncertain samples (i.e., those with high mean uncertainty). We then acquire the labels for the samples in \(\mathcal{D}_{U}^{K}\), giving us \(\mathcal{D}_{UL}^{K}=\{(\mathbf{x}_{j},\mathbf{y}_{j})\}\). As discussed in Section 3.2, the input samples in \(\mathcal{D}_{UL}^{K}\) serve as a proxy for the set of \(K\) samples that would have the highest error between the prediction made by the model \(\mathbf{\Psi}(\cdot;\zeta_{SL}^{*})\) and the ground truth. That leads to better fine-tuning. The model \(\mathbf{\Psi}(\cdot;\zeta_{SL}^{*})\) is then fine-tuned on \(\mathcal{D}_{UL}^{K}\) via Equation 1, leading to the final state of the model \(\mathbf{\Psi}(\cdot;\zeta_{KL}^{*})\) (shown in Figure 1) that can be used for inference on new samples. _USIM-DAL_ models the aleatoric uncertainties in the prediction. Still, it is crucial to note that it leverages the Statistical Image Modeling (SIM)-based synthetic images for pre-training and learning important priors for color images that broadly capture different niche domains such as medical images, satellite images, etc. 
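A minimal sketch of the query step of Equations 8-9 follows; the assumption that the network returns a (mean, variance) pair per image, and the loop-based bookkeeping, are illustrative rather than the exact implementation.

```python
import torch

@torch.no_grad()
def select_topk_uncertain(model, unlabeled_images, k):
    """Rank unlabeled inputs by mean predicted variance and return the
    indices of the top-k samples to be labeled (Equations 8-9)."""
    scores = []
    for x in unlabeled_images:                     # x: (1, C, H, W) tensor
        _, sigma2 = model(x)                       # assumed output convention
        scores.append(sigma2.mean())               # <sigma_i>: mean uncertainty
    scores = torch.stack(scores)
    return torch.topk(scores, k).indices           # indices into D_U
```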
Therefore, the initial model, capable of estimating the aleatoric uncertainty (trained on SIM-based synthetic images), can reasonably capture the uncertainty as a proxy for reconstruction error for domain-specific images that are not necessarily out-of-distribution images. Moreover, picking samples with high reconstruction errors for subsequent fine-tuning of the model yields better performance on similar highly erroneous cases, iteratively improving the model. Furthermore, in high-dimensional regression cases, the aleatoric and epistemic uncertainty often influence each other and are not independent (Kendall and Gal, 2017; Upadhyay et al., 2022; Zhang et al., 2019). ## 4 Experiments and Results We provide an overview of the experiments performed and the results obtained. In Section 4.1, we describe the task and the various methods used for comparison. Section 4.3 analyzes the performance of various dense active learning algorithms for super-resolution and shows that our proposed method _USIM-DAL_ can greatly improve the performance when constrained by a limited budget. ### Tasks, Datasets, and Methods We present the results of all our experiments on the super-resolution task. We demonstrate our proposed framework using a probabilistic SRGAN model (an adaptation of SRGAN (Ledig et al., 2017) that estimates pixel-wise uncertainty as described in (Kendall and Gal, 2017)). We evaluate the performance of various models on a wide variety of domains: (i) Natural Images (with the Set5, Set14, BSD100, and Visual Genome datasets (Ledig et al., 2017; Martin et al., 2001; Krishna et al., 2017)). (ii) Satellite Images (with the PatternNet dataset (Zhou et al., 2018)). (iii) Histopathology Medical Images (with the Camelyon dataset (Litjens et al., 2018)). The evaluation protocol is designed to constrain each training-domain dataset to a small fixed number of images (also called the _training budget_). We used training budgets of 500, 1000, 2000, 3000 and 5000 images for the natural and satellite domains. For both natural and satellite images, the input image resolution was set to \(64\times 64\). For natural images, the training dataset was obtained from Visual Genome (separate from the test set). Similarly, for the histopathology medical images, the input image resolution was set to \(32\times 32\) and we used training budgets of 4000, 8000, 12000, and 16000. We compare the super-resolution performance in terms of the metrics MSE, MAE, PSNR, and SSIM (Wang et al., 2004) for the following methods on the respective test sets: (i) SRGAN model trained from scratch with a randomly chosen subset satisfying the training budget from the entire training data (called _Random_). (ii) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling (as described in Section 3.2). This model is called _SIM_. (iii) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling and then fine-tuned on a randomly chosen subset satisfying the training budget from the entire training data, called _SIM+Random_. (iv) SRGAN model trained from scratch on a large synthetically generated dataset via statistical image modeling and then fine-tuned on a subset chosen using uncertainty estimates, satisfying the training budget from the entire training data, called _USIM-DAL_. 
### Dense Active Learning via Uncertainty Estimation Our method proposes to utilize a probabilistic network that is learned from synthetic images sampled from statistical image models (i.e., \(\mathbf{\Psi}(\cdot;\zeta_{SL}^{*})\) mentioned in Section 3.3). Figure 3: Output of the pre-trained probabilistic deep network (which is trained using synthetic images sampled from statistical image models) on samples from _unseen_ natural image datasets. (a) LR input, (b) HR groundtruth, (c) Predicted output, SR, from the network, (d) Predicted uncertainty from the network, (e) Error between SR and groundtruth. Figure 3 shows the output of the probabilistic SRGAN trained on synthetic images, evaluated on samples from natural images. We observe that (i) the predicted super-resolved images (Figure 3-(c)) are still reasonable, and (ii) the uncertainty estimates (Figure 3-(d)) still resemble the structures in the images and are a reasonable proxy for the error maps (Figure 3-(e)) between the predictions and the ground truth, even though the model has never seen natural images. We use the predicted uncertainty from this model to identify the samples from the real-world domain that would lead to high errors. Figure 4 shows the distribution of mean uncertainty values for samples in (i) statistical image noise, (ii) natural, (iii) satellite, and (iv) medical image datasets. We notice that the model trained on synthetic images leads to a Gaussian distribution for the mean uncertainty values on the synthetic image datasets. We obtain similar distributions for the other datasets from different domains. This further emphasizes that uncertainty estimates obtained from \(\boldsymbol{\Psi}(\cdot;\zeta_{SL}^{*})\) can be used as a proxy to identify the highly uncertain (therefore erroneous) samples from different domains (i.e., the samples close to the right tail of the distributions). Figure 4: Distribution of mean uncertainty for samples in Statistical Image Noise, PatternNet (satellite), Camelyon (medical), Visual Genome (natural) datasets. ### _USIM-DAL_ for Super-Resolution Table 1 shows the performance of different methods on multiple natural image datasets, including Set5, Set14, BSD100, and Visual Genome (VG). We observe that with the smallest training budget of 500 images, _USIM-DAL_ performs the best with a PSNR/MAE of 25.174/0.035 (Table 1 shows the results with a scaling factor for compactness) compared to _SIM+Random_ with a PSNR/MAE of 25/0.039 and _SIM_ with a PSNR/MAE of 24.8/0.037. We also notice that at this budget, choosing a random subset of the training dataset to train the model from scratch performs the worst, with a PSNR/MAE of 23.36/0.043. As the budget increases (left to right in Table 1), the performance of all the methods also improves. However, a similar trend is observed where _USIM-DAL_ performs better than _SIM+Random_, _SIM_, and _Random_. We observe a similar trend for the other natural image datasets. This allows us to make the following observations: (i) Using a synthetic training image dataset (sampled from the statistical image model, discussed in Section 3.2) leads to better performance than using a small random subset of training images from the original domain (i.e., _SIM_ better than _Random_). (ii) Using the above synthetic training image dataset to train a model and later fine-tuning it with domain-specific samples leads to further improvements (i.e., both _USIM-DAL_ and _SIM+Random_ better than _SIM_). (iii) With a limited budget, fine-tuning a model (pre-trained on synthetic 
training image dataset) using high-uncertainty samples from the training set (as decided by _USIM-DAL_) is better than using random samples from the training set (i.e., _USIM-DAL_ better than _SIM+Random_). Figure 5: Evaluation of various methods on the histopathology medical domain (i.e., Camelyon dataset) and the satellite imaging domain (i.e., PatternNet dataset) at various fine-tuning budgets. The yellow curve is the _SIM_ baseline. The red curve is the SIM model fine-tuned with random samples (i.e., _SIM+Random_). The blue curve is the SIM model fine-tuned with the most uncertain samples (i.e., _USIM-DAL_). We perform a similar set of experiments with the other imaging domains, namely, (i) satellite imaging (using the PatternNet dataset) and (ii) medical imaging (using the Camelyon histopathology dataset). We observe a trend similar to that for natural images in these domains. Figure 5 shows the performance (measured using PSNR) for the different methods on these two domains, with varying training budgets. For satellite imaging, at the lowest training budget of 500 images, _USIM-DAL_ with a PSNR of 23.5 performs better than _SIM+Random_ with a PSNR of 23.4 and _SIM_ with a PSNR of 23.2. We observe that as the training budget increases to 2000 images, _USIM-DAL_ (with a PSNR of 23.6) outperforms _SIM+Random_ (with a PSNR of 23.35) by an even higher margin. As we increase the training budget further, the _SIM+Random_ model starts performing similarly to _USIM-DAL_. With a budget of 5000 samples, _USIM-DAL_ has a performance of 23.62, and _SIM+Random_ has a performance of 23.60. Given a sufficiently large (dataset-specific) training budget, the performance achieved by random sampling and active learning strategies will converge.

| Dataset | Method | 500 | 1000 | 2000 | 3000 | 5000 |
| --- | --- | --- | --- | --- | --- | --- |
| Set5 | Random | 4.129 / 3.854 / 24.784 / 7.232 | 3.398 / 3.720 / 24.957 / 7.319 | 3.660 / 3.588 / 25.271 / 7.422 | 3.586 / 3.529 / 25.334 / 7.465 | 3.500 / 3.420 / 25.514 / 7.539 |
| Set5 | SIM | 3.431 / 3.524 / 25.641 / 7.541 | 3.431 / 3.524 / 25.641 / 7.541 | 3.431 / 3.524 / 25.641 / 7.541 | 3.431 / 3.524 / 25.641 / 7.541 | 3.431 / 3.524 / 25.641 / 7.541 |
| Set5 | SIM+Random | 2.976 / 3.139 / 26.283 / 7.839 | 2.958 / 3.099 / 26.377 / 7.872 | 2.941 / 3.081 / 26.45 / 7.896 | 2.934 / 3.088 / 26.436 / 7.910 | 2.912 / 3.066 / 26.546 / 7.935 |
| Set5 | USIM-DAL | **2.926 / 3.088 / 26.484 / 7.869** | **2.884 / 3.009 / 26.550 / 7.894** | **2.848 / 3.027 / 26.619 / 7.931** | **2.843 / 3.029 / 26.644 / 7.944** | **2.831 / 3.025 / 26.699 / 7.943** |
| Set14 | Random | 6.254 / 4.750 / 22.535 / 6.333 | 6.111 / 4.669 / 22.576 / 6.382 | 5.942 / 4.564 / 22.701 / 6.468 | 5.862 / 4.539 / 22.616 / 6.488 | 5.800 / 4.450 / 22.886 / 5.994 |
| Set14 | SIM | 4.852 / 4.303 / 22.897 / 6.383 | 4.852 / 4.303 / 22.897 / 6.383 | 4.852 / 4.303 / 22.897 / 6.383 | 4.852 / 4.303 / 22.897 / 6.383 | 4.852 / 4.303 / 22.897 / 6.383 |
| Set14 | SIM+Random | 4.488 / 3.907 / 23.748 / **7.016** | 4.485 / 3.871 / 23.787 / **7.862** | 4.444 / 3.828 / 24.160 / 7.159 | 4.426 / 3.828 / 24.162 / 7.179 | 4.396 / 3.798 / 24.090 / 7.198 |
| Set14 | USIM-DAL | **4.376 / 3.836 / 23.810** / 6.5984 | **4.366 / 3.816 / 23.818 / 7.000** | **4.331 / 3.767 / 24.288 / 7.177** | **4.317 / 3.749 / 24.422 / 7.208** | **4.292 / 3.728 / 24.583 / 7.227** |
| BSD100 | Random | 4.857 / 4.338 / 23.357 / 6.072 | 4.778 / 4.294 / 23.427 / 6.098 | 4.670 / 4.226 / 23.583 / 6.160 | 4.630 / 4.207 / 23.598 / 6.187 | 4.600 / 4.160 / 23.703 / 6.214 |
| BSD100 | SIM | 3.526 / 3.738 / 24.805 / 6.713 | 3.526 / 3.738 / 24.805 / 6.713 | 3.526 / 3.738 / 24.805 / 6.713 | 3.526 / 3.738 / 24.805 / 6.713 | 3.526 / 3.738 / 24.805 / 6.713 |
| BSD100 | SIM+Random | 3.362 / 3.578 / 25.007 / 6.786 | 3.352 / 3.559 / 25.043 / 6.794 | 3.328 / 3.539 / 25.092 / 6.812 | 3.323 / 3.540 / 25.085 / 6.816 | 3.305 / 3.519 / 25.137 / 6.834 |
| BSD100 | USIM-DAL | **3.299 / 3.520 / 25.174 / 6.826** | **3.293 / 3.530 / 25.191 / 6.830** | **3.282 / 3.504 / 25.207 / 6.838** | **3.277 / 3.496 / 25.212 / 6.844** | **3.262 / 3.486 / 25.23 / 6.854** |
| VG | Random | 4.442 / 3.946 / 23.935 / 6.853 | 4.346 / 3.892 / 24.033 / 6.889 | 4.231 / 3.818 / 24.200 / 6.954 | 4.182 / 3.797 / 24.216 / 6.983 | 4.120 / 3.718 / 24.353 / 7.032 |
| VG | SIM | 4.310 / 3.963 / 24.055 / 6.826 | 4.310 / 3.963 / 24.055 / 6.826 | 4.310 / 3.963 / 24.055 / 6.826 | 4.310 / 3.963 / 24.055 / 6.826 | 4.310 / 3.963 / 24.055 / 6.826 |
| VG | SIM+Random | 4.038 / 3.721 / 24.396 / 7.036 | 4.026 / 3.690 / 24.423 / 7.056 | 3.993 / 3.663 / 24.96 / 7.088 | 3.977 / 3.661 / 24.515 / 7.101 | 3.943 / 3.631 / 24.563 / 7.126 |
| VG | USIM-DAL | **3.966 / 3.668 / 24.543 / 7.856** | **3.949 / 3.657 / 24.570 / 7.809** | **3.925 / 3.623 / 24.642 / 7.109** | **3.908 / 3.648 / 24.666 / 7.126** | **3.88…** |

Table 1: Super-resolution performance on the natural image test sets (Set5, Set14, BSD100, Visual Genome) at training budgets of 500-5000 images; each cell reports MSE / MAE / PSNR / SSIM (values scaled, as noted in the text), with the best values in bold.

For the Camelyon dataset, we use an input image resolution of 32\(\times\)32. We observe that _USIM-DAL_ performs the best across all budgets when compared to _SIM+Random_ and _SIM_. We also note that the high-frequency features that are typically present in high-resolution scans (i.e., obtained at 20\(\times\) or 40\(\times\) magnification from the histopathology microscope) make the super-resolution problem harder and require more data to achieve good performance. Figure 6 summarizes the performance gain (in terms of PSNR) from using _USIM-DAL_ (i.e., the uncertainty-based active learning strategy for dense regression) compared to _SIM+Random_ (i.e., no active learning, randomly choosing a subset from the real training domain), relative to _SIM_ (i.e., no real samples used from the domain), at the best-performing limited budgets. That is, the relative percentage boost in performance is reported as: \[\frac{(\text{PSNR}_{\text{USIM-DAL}}-\text{PSNR}_{\text{SIM+Random}})*100}{\text{PSNR}_{\text{SIM+Random}}-\text{PSNR}_{\text{SIM}}} \tag{10}\] We note that _USIM-DAL_ consistently performs better than _SIM+Random_, with relative percentage boosts in PSNR ranging from 26.14% for Set5 to 142.69% for PatternNet. Figure 7 shows the qualitative outputs of the different models on multiple datasets. On all the datasets, we notice that the output obtained by _USIM-DAL_ is better than that of _SIM+Random_, which in turn is better than _SIM_ and _Random_. ## 5 Discussion and Conclusion In this work, we presented a novel framework called _USIM-DAL_ that is designed to perform active learning for dense regression tasks, such as image super-resolution. 
Dense regression tasks, such as super-resolution, are an important class of problems for which deep learning offers a wide range of solutions applicable to medical imaging, security, and remote sensing. However, most of these solutions rely on supervision signals derived from high-resolution images. Due to the time-consuming acquisition of high-resolution images and the expensive sensors, hardware, and operational costs involved, it is not always feasible to generate large volumes of high-resolution imaging data. But in real-world scenarios, a limited budget for acquiring high-resolution data is often available. Figure 7: Qualitative results from different methods (performing 4\(\times\) super-resolution) including (b) _Random_, (c) _SIM_, (e) _SIM+Random_, (f) _USIM-DAL_ on (i) BSD100, (ii) Visual Genome, (iii) PatternNet, and (iv) Camelyon datasets. (a) LR input, and (d) HR groundtruth. Input resolution for BSD100, Visual Genome, and PatternNet is \(64\times 64\), and for Camelyon is \(32\times 32\). (f) _USIM-DAL_ produces the most visually appealing outputs. This calls for active learning, which chooses a subset from a large unlabeled set to label and use for training the models. While multiple querying strategies (in the context of active learning) exist for classification tasks, such strategies for dense regression tasks are seldom discussed. Our work paves the way for using modern uncertainty estimation techniques for active learning in dense regression tasks. We show that a large synthetic dataset acquired using statistical image models can be used to learn informative priors for various domains, including natural images, medical images, satellite images, and more. The learned prior can then be used to choose the subset consisting of high-uncertainty samples that can then be labeled and used to fine-tune the prior further. Through extensive experimentation, we show that our approach generalizes well to a wide variety of domains, including medical and satellite imaging. We show that active learning performed by the proposed querying strategy (i.e., _USIM-DAL_) leads to gains of up to 140% / 53% with respect to a random selection strategy (i.e., _SIM+Random_), relative to no dataset-specific fine-tuning (i.e., _SIM_), on satellite/medical imaging. **Acknowledgements.** This work has been partially funded by the ERC (853489 - DEXIM) and by the DFG (2064/1 - Project number 390727645). The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Uddeshya Upadhyay.
2304.08729
Quantum atmosphere effective radii for different spin fields from quantum gravity inspired black holes
Quantum atmosphere effective radii for the emission of spin-0, -1/2, -1, and -2 massless fields from Schwarzschild, Tangherlini, non-commutative geometry inspired, and polymeric black holes are calculated. The power observed from the black hole at spatial infinity taking greybody factors into account is compared to an equal-power black-body radiator of the same temperature but different effective radius. A large range of different radii are obtained for different spin fields and black holes. The equal-power black-body effective radius is not, in general, a good proxy for the location of the quantum atmosphere.
Douglas M. Gingrich
2023-04-18T04:52:52Z
http://arxiv.org/abs/2304.08729v1
###### Abstract Quantum atmosphere effective radii for the emission of spin-0, -1/2, -1, and -2 massless fields from Schwarzschild, Tangherlini, non-commutative geometry inspired, and polymeric black holes are calculated. The power observed from the black hole at spatial infinity taking greybody factors into account is compared to an equal-power black-body radiator of the same temperature but different effective radius. A large range of different radii are obtained for different spin fields and black holes. The equal-power black-body effective radius is not, in general, a good proxy for the location of the quantum atmosphere. **Quantum atmosphere effective radii for different spin fields from quantum gravity inspired black holes** Douglas M. Gingrich _Department of Physics, University of Alberta, Edmonton, AB T6G 2E1 Canada_ _TRIUMF, Vancouver, BC V6T 2A3 Canada_ e-mail: [email protected] ## 1 Introduction The Hawking radiation from evaporating black holes is thought to originate from quantum excitations near the horizon [1]. Giddings [2] has argued that the radiation originates from an effective radius \(r_{\rm A}\) outside the horizon radius \(r_{\rm H}\) called the quantum atmosphere: \(r_{\rm A}-r_{\rm H}\sim r_{\rm H}\). It is of interest to test the validity of Giddings' claim. The quantum atmosphere is the location where most of the Hawking radiation comes from. A few different arguments have been given for the location of the quantum atmosphere. The thermal wavelength of typical Hawking radiation is much larger than the horizon size. Heuristic arguments using a gravitational version of the Schwinger effect for particle production by tidal forces outside the horizon have been made [3, 4]. Another reasoning uses the \((1+1)\)-dimensional renormalized stress-energy tensor [2, 3, 5]. In addition, the radius can be given by an effective black-body emission surface [2, 6]. In this paper, we examine the latter of these definitions. Refs. [3, 5] have corroborated Giddings' conclusion by obtaining \(r_{\rm A}-r_{\rm H}\approx r_{\rm H}\) for the Schwarzschild black hole using gravitational Schwinger effect arguments and a more precise calculation using the stress-energy tensor. While the different arguments agree that the location of the quantum atmosphere is some distance from the horizon, they do not all give a common estimate for the numerical value. It is of interest to examine if Giddings' arguments are applicable to other types of black holes. Hod [6] showed that the quantum atmosphere radius for a massless scalar field from a Tangherlini black hole emitting radiation in the bulk is a decreasing function of the number of space dimensions; Hod finds \(r_{\rm A}-r_{\rm H}\ll r_{\rm H}\) for a high number of extra dimensions. The Reissner-Nordstrom black hole has also been considered in Ref. [4]. These metrics give conclusions contradicting Refs. [2, 3, 5]. In this paper, we calculate exact greybody factors numerically for all spin fields. The results are used to calculate the double-differential frequency spectrum, which is then integrated over all frequencies to obtain the power. By equating the power to that of a black body, we determine an effective emission surface of the quantum atmosphere. The potentials seen by different spin fields are different, so we could expect the quantum atmosphere to depend on the emitted field's spin. We find that the apparent radius should not be used, in general, as a proxy for the location of the quantum atmosphere. 
For example, \(r_{\rm A}\) cannot be used as a definition for the location of the quantum atmosphere for gravitons for most of the black hole metrics we consider. ## 2 Effective radius calculation The effective potential barrier around a black hole is commonly encoded in a set of transmission coefficients, or greybody factors, that depend on the properties of the black hole and on the frequency and modes of the emitted radiation. A physical observable that can be formed from the transmission coefficients is the absorption cross section, which is a sum of the transmission coefficients over all radiation modes divided by the frequency squared. Weighting the absorption cross section by a temperature-dependent statistical factor corresponding to the spin-statistics of the emitted radiation gives the radiation flux, or power per unit frequency. By integrating over all energies, the total radiated power or luminosity is obtained. In the absence of absorption - that is, for step-function transmission coefficients - the Stefan-Boltzmann law is obtained. For black holes, one can convert the temperature dependence into a dependence on the horizon radius, and in principle a dependence on the black hole parameters. The power thus allows a determination of the effects of the transmission coefficients integrated over all frequencies. By comparing the power generated by a black hole with the equivalent power from a black body seen at spatial infinity, one obtains an effective area for the black hole, or in the case of spherically symmetric black holes, an effective radius. The method of equal power thus infers the size of the radiating body. The calculated power emitted from a black hole as seen by an observer at spatial infinity is compared to the equivalent power \(P_{\rm B}\) from an idealized black-body radiator in flat space using the generalized Stefan-Boltzmann relation (see for example Ref. [7]) \[P_{\rm B}=\sigma A_{n+2}(r_{\rm A})T_{\rm B}^{n+4}\,, \tag{1}\] where \(T_{\rm B}\) is the black-body temperature, \(A_{n+2}(r_{\rm A})\) is the surface area of a \((n+4)\)-dimensional emitting body, and \(\sigma\) is the appropriate Stefan-Boltzmann constant for bosons or fermions in \(n\) extra dimensions. We use units of \(G=c=\hbar=k_{\rm B}=1\). Although Eq. (1) is written in the general form to allow comparison with higher-dimensional black holes, it reduces to the more familiar form of the Stefan-Boltzmann law when \(n=0\). For a black hole, once the greybody factors \(\Gamma_{s,\ell}(\omega)\) for a massless spin-\(s\) field emitted with spheroidal harmonic mode \(\ell\) and frequency \(\omega\) have been calculated, the absorption cross section in four spacetime dimensions is obtained: \[\sigma_{s}(\omega)=\frac{\pi}{\omega^{2}}\sum_{\ell\geq s}(2\ell+1)\Gamma_{s,\ell}(\omega)\,. \tag{2}\] The \((2\ell+1)\) factor is the degeneracy of the axial quantum number or angular momentum \(m\) modes. The total power in four spacetime dimensions is then given by \[P=\frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{\omega^{3}\sigma_{s}(\omega)}{\exp(\omega/T)-(-1)^{2s}}\mathrm{d}\omega\,, \tag{3}\] where \(T\) is the Hawking temperature as measured at spatial infinity. We define the effective radius \(r_{\mathrm{A}}\) of the black hole quantum atmosphere by equating the Hawking radiation power from the black hole, Eq. (3), with the corresponding Stefan-Boltzmann radiation power of a flat-space perfect black-body emitter, Eq. (1): \[P(r_{\mathrm{H}},T)=P_{\mathrm{B}}(r_{\mathrm{A}},T_{\mathrm{B}})\,. \tag{4}\]
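As a concrete illustration of Eqs. (2)-(4), the sketch below (Python, in units \(G=c=\hbar=k_{\rm B}=1\)) integrates the flux numerically; the greybody function, mode cutoff, and integration limits are placeholders for the exact numerical inputs used in this work.

```python
import numpy as np
from scipy.integrate import quad

def total_power(greybody, T, s, lmax=25):
    """Eqs. (2)-(3): total power at spatial infinity in four spacetime
    dimensions. `greybody(s, l, w)` is a user-supplied stand-in for the
    numerically computed transmission coefficients Gamma_{s,l}(w)."""
    stat = -((-1.0) ** int(2 * s))         # -1 for bosons, +1 for fermions

    def sigma_abs(w):
        # sum over l = s, s + 1, ... (half-integer l for fermion fields)
        return (np.pi / w**2) * sum(
            (2 * (s + j) + 1) * greybody(s, s + j, w) for j in range(lmax)
        )

    integrand = lambda w: w**3 * sigma_abs(w) / (np.exp(w / T) + stat)
    val, _ = quad(integrand, 1e-8, 40 * T, limit=300)
    return val / (2 * np.pi**2)

# The ratio of this power to the black-body power of Eq. (1) at the same
# temperature then yields the effective radius via the relation that follows.
```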
Equation (4) determines the effective radius assuming equal temperatures: \(T=T_{\mathrm{B}}\). One could likewise determine the effective temperature of the filtered radiation by assuming equal radii [8]. Using \(A_{n+2}\propto R^{n+2}\), we obtain the effective radius using \[\frac{r_{\mathrm{A}}}{r_{\mathrm{H}}}=\left[\frac{P(r_{\mathrm{H}},T)}{P_{\mathrm{B}}(r_{\mathrm{H}},T)}\right]^{\frac{1}{n+2}}\,, \tag{5}\] where \(P\) depends on the emitted field's spin and \(P_{\mathrm{B}}\) is different for bosons and fermions. The dimensionless radius \(r_{\mathrm{A}}/r_{\mathrm{H}}\) characterizes the black hole quantum atmosphere. As in Ref. [6], it is beneficial to characterize the effective quantum atmosphere using \[\bar{r}_{\mathrm{A}}=\frac{r_{\mathrm{A}}-r_{\mathrm{H}}}{r_{\mathrm{H}}}\,. \tag{6}\] Values of \(\bar{r}_{\mathrm{A}}\gtrsim 1\) validate Giddings' argument and negative values imply the quantum atmosphere is behind the horizon. ## 3 Black hole thermodynamics In this section, we write down the black-body power for the different metrics considered. The formulae contain only a single polarization for each spin field. We make no claim about the validity of the two quantum inspired black hole metrics considered here. They are partly chosen for their different black-body features and the ease of greybody calculation. ### Schwarzschild-Tangherlini black holes For the Schwarzschild-Tangherlini [9] black hole radiating into the bulk, the higher-dimensional \((n+4)\) black-body power is [7, 6] \[P_{\rm B}=\sigma A_{n+2}(R)T^{n+4}\,, \tag{7}\] where the higher-dimensional Stefan-Boltzmann constant is \[\sigma=\frac{(n+3)\Gamma((n+3)/2)\zeta(n+4)}{2\pi^{(n+3)/2+1}}\,, \tag{8}\] and \(\Gamma\) is the gamma function and \(\zeta\) is the Riemann zeta function. The higher-dimensional surface area of the emitting body of radius \(R\) is \[A_{n+2}(R)=\frac{2\pi^{(n+3)/2}}{\Gamma((n+3)/2)}R^{n+2}\,. \tag{9}\] We will also need the black hole temperature \[T=\frac{n+1}{4\pi r_{\rm H}}\,, \tag{10}\] where \[r_{\rm H}=\frac{1}{\sqrt{\pi}M_{*}}\left(\frac{M}{M_{*}}\right)^{1/(n+1)}\left[\frac{8\Gamma((n+3)/2)}{n+2}\right]^{1/(n+1)}\,. \tag{11}\] The above equations reduce to the familiar Stefan-Boltzmann law and Schwarzschild black hole when \(n=0\), and \(M_{*}=\sqrt{\hbar c/G}\) is the Planck mass. ### Non-commutative geometry inspired black holes Non-commutative geometry inspired black holes are interesting in that the form of the black-body area of the Schwarzschild-Tangherlini case remains unchanged but the temperature dependence is different [10, 11]. The temperature is given by \[T=\frac{n+1}{4\pi r_{\rm H}}\left[1-\frac{2}{n+1}\left(\frac{r_{\rm H}}{2\sqrt{\theta}}\right)^{n+3}\frac{{\rm e}^{-r_{\rm H}^{2}/(4\theta)}}{\gamma\left(\frac{n+3}{2},\frac{r_{\rm H}^{2}}{4\theta}\right)}\right]\,, \tag{12}\] where \(\gamma\) is the lower incomplete gamma function. The horizon radius is obtained by solving \[\frac{M}{M_{*}}=\frac{n+2}{8\gamma\left(\frac{n+3}{2},\frac{r_{\rm H}^{2}}{4\theta}\right)}(\sqrt{\pi}M_{*}r_{\rm H})^{n+1}\,. \tag{13}\] The minimum length parameter \(\sqrt{\theta}\) is taken to be a free parameter and could be well above the Planck length. As \(\theta\to 0\), the radius and temperature approach the Tangherlini values. The metric gives one, two, or no horizons. For a single horizon the temperature vanishes and a black hole remnant is expected to form. The temperature has a maximum but vanishes at the remnant radius. 
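For concreteness, the thermodynamic relations of Eqs. (10)-(13) can be evaluated as below (SciPy, units \(G=c=\hbar=k_{\rm B}=1\)); the root-finding bracket for the outer horizon is an illustrative assumption, and the root solve fails, as it should, for masses below the remnant mass.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc, gamma as Gamma

def tangherlini_horizon(M, n=0, M_star=1.0):
    """Horizon radius of Eq. (11); Eq. (10) then gives T = (n+1)/(4 pi r_H)."""
    return ((M / M_star) ** (1 / (n + 1)) / (np.sqrt(np.pi) * M_star)
            * (8 * Gamma((n + 3) / 2) / (n + 2)) ** (1 / (n + 1)))

def nc_mass(r, n=0, theta=1.0, M_star=1.0):
    """Mass-radius relation of Eq. (13); gammainc is the regularized lower
    incomplete gamma function, hence the extra factor Gamma((n+3)/2)."""
    lower_gamma = gammainc((n + 3) / 2, r**2 / (4 * theta)) * Gamma((n + 3) / 2)
    return (n + 2) / (8 * lower_gamma) * (np.sqrt(np.pi) * M_star * r) ** (n + 1)

def nc_horizon(M, n=0, theta=1.0):
    """Outer horizon from Eq. (13): nc_mass(r) has a minimum (the remnant
    mass), so the outer root is bracketed to the right of the minimizer."""
    rs = np.sqrt(theta) * np.linspace(0.1, 30.0, 3000)
    r_min = rs[np.argmin(nc_mass(rs, n, theta))]
    return brentq(lambda r: nc_mass(r, n, theta) - M, r_min, rs[-1])

def nc_temperature(r_h, n=0, theta=1.0):
    """Hawking temperature of Eq. (12); reduces to Eq. (10) as theta -> 0."""
    x = r_h**2 / (4 * theta)
    lower_gamma = gammainc((n + 3) / 2, x) * Gamma((n + 3) / 2)
    corr = ((2 / (n + 1)) * (r_h / (2 * np.sqrt(theta))) ** (n + 3)
            * np.exp(-x) / lower_gamma)
    return (n + 1) / (4 * np.pi * r_h) * (1 - corr)
```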
The non-commutative black hole is similar to the Tangherlini black hole for large masses. To model the effects of an effective ultra-violet cut-off in the frequency \(\omega\) of the emitted quanta, an additional factor [12] of \(\exp(-\theta\omega^{2}/2)\) should multiply Eq. (3). Although we have included this factor, it has a small effect. ### Polymeric black holes In loop quantum gravity, semi-classical corrections due to the effects of quantum gravity have been derived to give a so-called polymer Schwarzschild black hole [13, 14]. The model has two free parameters \(\epsilon\) and \(a_{0}\). The parameter \(a_{0}=8\pi A_{\rm min}\) is related to the minimum area of loop quantum gravity and is expected to be of the Planck scale. A positive deformation parameter \(\epsilon\) represents the typical scale of the geometry fluctuations in the Hamiltonian constraints of the theory as they get renormalized from the Planck scale to astrophysical scales. It is thought that \(\epsilon\ll 1\), and values of \(\epsilon\lesssim 0.8\) will have little effect on what follows. For large \(\epsilon\), deviations from the Schwarzschild metric are apparent for astronomical-size black holes. The horizon area does not take the usual form but is given by \[A=4\pi(2m)^{2}\left[1+\left(\frac{\sqrt{a_{0}}}{2m}\right)^{4}\right]\,. \tag{14}\] The temperature is given by \[T=\frac{1}{4\pi(2m)}(1-P(\epsilon)^{2})\left[1+\left(\frac{\sqrt{a_{0}}}{2m}\right)^{4}\right]^{-1}\,, \tag{15}\] where the polymerization function is \[P(\epsilon)=\frac{\sqrt{1+\epsilon^{2}}-1}{\sqrt{1+\epsilon^{2}}+1}\,. \tag{16}\] The total integrated power given by the Stefan-Boltzmann law is \[P=\frac{\sigma}{256\pi^{3}}m^{-2}(1-P(\epsilon)^{2})^{4}\left[1+\left(\frac{\sqrt{a_{0}}}{2m}\right)^{4}\right]^{-3}\,, \tag{17}\] where \(\sigma=\pi^{2}/120\) for bosons and \(\sigma=7\pi^{2}/960\) for fermions. In the above equations, \(m\) is a parameter that is related to the ADM mass \(M\) by \(M=m(1+P)^{2}\). ## 4 Results We calculate the quantum atmosphere effective radius for spin-0, -1/2, -1, and -2 massless fields from two quantum inspired black holes. Our calculations are numerical and follow the procedures used in Ref. [15], which are based on the general potentials in Ref. [16] and the path-ordered matrix exponentials in Ref. [17]. The procedure makes previously rather difficult calculations tractable. ### Schwarzschild black hole We consider the Schwarzschild black hole as a warm-up. Table 1 shows dimensionless effective radii for all spin fields from a Schwarzschild black hole. Our numerical calculations reproduce the results of Page [18] for spin-1, -1/2, and -2, and of Elster [19] for scalars. In terms of the quantum atmosphere, the case of spin-1 was first discussed in Ref. [2] and the spin-0 case in Ref. [6]. The case of spin-2 shows a breakdown of Giddings' principle (Ref. [2] restricted the discussion to \(s\leq 1\)). For the Schwarzschild black hole, the black-body power is well known to have a \(P\sim M^{-2}\) dependence. We find that, including greybody factors, this mass dependence is maintained, i.e. \(\Gamma\) does not introduce any additional \(M\) dependence. ### Tangherlini black hole To help validate our procedure, we reproduce a previous result of Ref. [6]. Table 2 shows dimensionless effective radii for scalars from a Tangherlini black hole radiating in the bulk. We have taken \(M=M_{*}=1\). To obtain these results, we have calculated the emission on the brane and used the bulk-to-brane emission ratios obtained in Ref. [20]. 
Our results agree with Ref. [6] to within the numerical accuracy of the calculations. We are now equipped to calculate something new. Table 3 shows dimensionless effective radii for all spin fields from a Tangherlini black hole radiating on the brane. Looking at the large values of \(\bar{r}_{\rm A}\) for brane emission, we reach a different conclusion than for bulk emission, one that supports Giddings' argument much better.

| \(n\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\bar{r}_{\rm A}\) | 0.99 | 0.71 | 0.59 | 0.50 | 0.44 | 0.39 | 0.33 |

Table 2: Dimensionless radii \(\bar{r}_{\rm A}\) for a massless scalar field from a \((n+4)\)-dimensional Tangherlini black hole radiating in the bulk.

| \(s\) | \(n=1\) | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 3.44 | 4.98 | 5.83 | 5.86 | 5.21 | 4.16 | 2.78 |
| 1/2 | 3.44 | 5.10 | 5.90 | 5.84 | 5.13 | 4.04 | 2.71 |
| 1 | 2.68 | 4.70 | 5.82 | 5.99 | 5.39 | 4.32 | 2.95 |
| 2 | 0.96 | 2.67 | 3.89 | 4.37 | 4.14 | 3.43 | 2.34 |

Table 3: Dimensionless radii \(\bar{r}_{\rm A}\) for massless fields of spin \(s\) from a \((n+4)\)-dimensional Tangherlini black hole radiating on the brane.

### Non-commutative geometry inspired black hole

The non-commutative geometry inspired black hole we consider has a minimum horizon radius at a finite mass (a black hole remnant), and a temperature that has a maximum before the temperature vanishes. Thus the power does not follow the \(M^{-2}\) dependence near the end of the black hole's lifetime and \(\bar{r}_{\rm A}\) depends on the black hole mass. For high \(M\sqrt{\theta}\), we reproduce the Schwarzschild results. For the black-body case, below about \(M\sqrt{\theta}<6\), the power dependence deviates from a pure \(M^{-2}\) dependence and vanishes as \(M\to 1.9/\sqrt{\theta}\). The black hole power falls faster with \(M\) than the black-body power, except for the spin-0 field. Table 4 shows dimensionless effective radii for all spin fields from a non-commutative geometry inspired black hole in higher dimensions radiating on the brane at the maximum temperature; we have taken \(\sqrt{\theta}=1\).

| \(s\) | \(n=0\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1.70 | \(-0.98\) | 5.85 | 7.35 | 7.93 | 7.62 | 6.60 | 5.21 |
| 1/2 | 1.03 | \(-0.98\) | 6.03 | 7.51 | 7.99 | 7.57 | 6.47 | 5.04 |
| 1 | 0.12 | \(-0.99\) | 5.33 | 7.22 | 8.04 | 7.83 | 6.83 | 5.42 |
| 2 | \(-0.68\) | \(-0.99\) | 2.75 | 4.50 | 5.55 | 5.77 | 5.25 | 4.28 |

Table 4: Dimensionless radii \(\bar{r}_{\rm A}\) for massless fields of spin \(s\) from a \((n+4)\)-dimensional non-commutative geometry inspired black hole radiating on the brane with the maximum temperature and \(\sqrt{\theta}=1\).

### Polymeric black hole

The polymeric black hole also has a maximum temperature, but the temperature vanishes at zero mass. Combined with the non-trivial area dependence, the power does not follow a \(M^{-2}\) dependence and \(\bar{r}_{\rm A}\) depends on the mass of the black hole. We have taken \(\epsilon=0.01\) and \(a_{0}=1\). For this value of \(\epsilon\), \(P=2.5\times 10^{-5}\), which gives a negligible contribution to the power and makes \(m\approx M\). For high \(2M/\sqrt{a_{0}}\), we reproduce the Schwarzschild results. For the 
black-body case, below about \(2M/\sqrt{a_{0}}<2\), the power dependence deviates from a pure \(M^{-2}\) dependence and vanishes as \(M\to 0\). The black hole power falls faster with \(M\) than the black-body power, except for the spin-0 field. Table 5 shows dimensionless effective radii for all spin fields from a polymeric black hole at the maximum temperature.

| \(s\) | 0 | 1/2 | 1 | 2 |
| --- | --- | --- | --- | --- |
| \(\bar{r}_{\rm A}\) | 1.57 | 0.62 | \(-0.22\) | \(-0.90\) |

Table 5: Dimensionless radii \(\bar{r}_{\rm A}\) for massless fields of spin \(s\) from a polymeric black hole with the maximum temperature, and \(\epsilon=0.01\) and \(a_{0}=1\).

## 5 Discussion We have calculated the quantum atmosphere for all massless spin fields for the first time. Two quantum gravity inspired metrics possessing different black-body power formulae have been compared with exact numerical calculations of the total power from the black hole including greybody factors. Giddings' argument of \(\bar{r}_{\rm A}\sim 1\) clearly depends on the spin of the emitted radiation, decreasing by a factor of about six when going from scalars to vectors, and in general does not apply to gravitons. Hod's [6] result \(\bar{r}_{\rm A}<1\) for higher-dimensional black holes is reproduced, but if the radiation is confined to our brane, the conclusion is very different. Values of \(\bar{r}_{\rm A}\sim 5\) are obtained for most spins and numbers of extra dimensions. The higher-dimensional form of the black-body formula plays a significant role beyond just the greybody factors. We have examined two quantum gravity inspired black holes in the regime where quantum effects are important and the radiation has its maximum intensity. The quantum atmosphere for scalar fields in four space-time dimensions appears similar regardless of the quantum inspired metric and is similar to that of Schwarzschild black holes. The power in the spin-0 field always has a quantum atmosphere radius of about 1.7 times the horizon radius in four space-time dimensions. We can see that, in general, the effective radius of an equivalent black-body radiator is not a good proxy for the quantum atmosphere. On the other hand, the effective radius \(\bar{r}_{\rm A}\) could be considered an intuitive measure of greybody effects on the total power received by an observer. The greybody factors themselves are of little interest until they are used to calculate physical observables. It is common to calculate the absorption cross section and compare the high-frequency limit against the geometric cross section and the low-frequency limit against the surface area. These limits allow an easily quantifiable measure of the greybody effects of different metrics. Perhaps a more measurable observable, someday, will be the total particle fluxes and energy spectra measured by a distant observer. First measurements of these quantities are likely to be integrated over the detecting instrument's acceptance and resolution to obtain single numbers for the number of particles per unit time and the energy per unit time (or power), before full spectra are measured. Expressing these measurements in terms of an effective black-body radius could prove to be a useful mnemonic for elucidating quantum gravity effects. ## 6 Acknowledgments We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). Nous remercions le Conseil de recherches en sciences naturelles et en genie du Canada (CRSNG) de son soutien.
2302.13939
SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks
As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverage sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement `SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train the proposed model on two model variants: 45M and 216M parameters. To the best of our knowledge, SpikeGPT is the largest backpropagation-trained SNN model to date, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self attention to reduce quadratic computational complexity O(N^2) to linear complexity O(N) with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on tested benchmarks, while maintaining 20x fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT.
Rui-Jie Zhu, Qihang Zhao, Guoqi Li, Jason K. Eshraghian
2023-02-27T16:43:04Z
http://arxiv.org/abs/2302.13939v5
# SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks ###### Abstract As the size of large language models continues to scale, so do the computational resources required to run them. Spiking neural networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have also proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and we are yet to see the effectiveness of SNNs in language generation. In this paper, inspired by the RWKV language model, we successfully implement 'SpikeGPT', a generative language model with purely binary, event-driven spiking activation units. We train the proposed model at three scales: 45M, 125M and 260M parameters. To the best of our knowledge, this is 4\(\times\) larger than any functional backprop-trained SNN to date. We achieve this by modifying the transformer block, replacing multi-head self-attention to reduce the quadratic computational complexity to linear with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). Our preliminary experiments show that SpikeGPT remains competitive with non-spiking models on the tested benchmarks, while maintaining 5\(\times\) lower energy consumption when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at [https://github.com/ridgerchu/SpikeGPT](https://github.com/ridgerchu/SpikeGPT). ## 1 Introduction Artificial Neural Networks (ANNs) have recently achieved widespread, public-facing impact in Natural Language Processing (NLP), but this has come with a significant computational and energy consumption burden across training and deployment. As an example, training GPT-3 was projected to use 190,000 kWh of energy [3; 9; 1]. Deploying ChatGPT into every modern word processor would see millions of users in need of on-demand inference of large language models [34]. SNNs, inspired by neuroscientific models of neuronal firing, offer a more energy-efficient alternative by using discrete spikes to compute and transmit information [25]. Spike-based computing combined with neuromorphic hardware holds great potential for low-energy AI [8; 30; 40], and its effectiveness in integration with deep learning has been demonstrated through numerous studies [38; 37; 15; 13]. At this stage, the performance of SNNs in NLP and generation tasks remains relatively under-investigated. While SNNs have shown competitiveness in computer vision tasks such as classification and object detection [2; 21; 5], they have yet to attain similar success in generation tasks. The parallelization of input tokens, a widely used and highly effective method in the transformer block, cannot be readily integrated with recurrent SNNs [42]. Although previous research has indicated that the conversion of ANNs to SNNs can lead to competitive performance in NLP tasks, direct training of SNNs results in a performance loss of approximately 20% compared to the conversion approach [24]. The sequential structure of linguistic data presents a unique advantage for the utilization of SNNs, notwithstanding the difficulties faced by recurrent networks in NLP. 
The benefits of SNNs are that they provide a more energy-efficient alternative to conventional models because of their sparsely active neurons, event-driven embedding of data, and binarized spiking activations. The drawbacks of SNNs in an NLP context include the vanishing gradient problem, where long-range dependencies can no longer be extracted, the total absence of learning in excessively sparsified models [11], and the extreme constraint on layer-to-layer bandwidth, where activations are binarized spikes [12]. These issues mean that training large-scale SNNs via error backpropagation is extremely challenging, leading to an absence of performant SNNs in language generation. Our proposed SpikeGPT language model provides solutions to these challenges, thus combining the high performance of large-scale language models with the computational efficiency of SNNs.

### Contributions

To the best of our knowledge, SpikeGPT is the first generative SNN language model and the largest SNN trained to date in terms of parameter count, with the largest version at 260M parameters (4\(\times\) more than the previous largest SNN) [45]. Our results demonstrate that a small-scale variant with 45M parameters performs competitively against similar transformer models, with approximately 22\(\times\) fewer synaptic operations that rely on expensive memory accesses. The implementation of SpikeGPT is based on integrating recurrence into the Transformer block such that it is compatible with SNNs and eliminates quadratic computational complexity, allowing for the representation of words as event-driven spikes. Combining recurrent dynamics with linear attention enables our network to stream incoming data word-by-word, and to commence computation before a sentence has been completed, while still retaining the long-range dependencies present in complex syntactic structures. Our experiments show that SpikeGPT achieves competitive performance on all tested datasets while consuming significantly less energy than traditional artificial neural network models. Our contributions in the field of NLP and language generation can be succinctly described as follows:

1. We provide the first demonstration of language generation using direct-SNN training;
2. We achieve performance comparable to that of ANNs, while preserving the energy efficiency of spike-based computations;
3. We have successfully combined the powerful Transformer architecture with SNNs, without the need for additional simulation time-steps, by utilizing linearization and recurrent Transformer blocks. This work can pave the way for effectively training large-scale SNNs.

## 2 Related Works

Although language generation has not previously been achieved with SNNs, this section provides an overview of how SNNs have been used in basic NLP tasks, and of the ways in which transformers have been adopted for SNNs.

### Spiking Neural Networks for Natural Language Processing

Ref. [43] proposes a bi-directional SNN for sentiment classification and machine translation tasks. Their approach uses spiking encoders, which replace costly multiplication operations with much cheaper additive operations to significantly reduce computational energy consumption. Similarly, Ref. [24] presents a two-step method to train SNNs for text classification, with a simple and effective way to encode pre-trained word embeddings as spike trains. Their results indicate that the converted SNNs achieve results comparable to their ANN counterparts and are more robust against adversarial attacks.
Furthermore, Ref. [10] demonstrates the train-and-constrain methodology that enables the mapping of machine-learned recurrent neural networks (RNNs) onto a substrate of spiking neurons. The authors achieve 74% accuracy on a question classification task using less than 0.025% of the cores on one TrueNorth chip [30], showcasing the potential for SNNs in classification tasks in NLP.

### Transformer in Spiking Neural Networks

The Transformer model, first introduced in [42], has shown significant success in various NLP tasks. However, the application of the Transformer model to spiking neural networks (SNNs) has been relatively limited. The first Spiking Transformer model was proposed by [45], which proposes spiking self-attention to model visual features using sparse Query, Key and Value matrices. Ref. [23] proposes another variant of Transformer-based SNNs, adopting spatio-temporal attention instead of spatial- or temporal-wise attention to better incorporate the attention mechanism within the Transformer.

While Transformers were initially proposed to solve NLP tasks, SNN-based Transformers have only been applied to vision tasks. We believe this is because the computational complexity of self-attention scales quadratically with sequence length (\(\mathcal{O}(T^{2})\)), and the extra temporal dimension further increases this to cubic order (\(\mathcal{O}(T^{3})\)). The additional challenges of extreme sparsity, non-differentiable operators, approximate gradients, and single-bit activations that are characteristic of SNNs make training convergence more challenging. The demonstrated image classification tasks have a far smaller number of output classes, which shrinks the scale of the demonstrated networks. Image classification also does not exploit the inherent long-range learning capacity of self-attention. Therefore, there is underexplored potential in the application of Transformer models in SNN-based applications beyond vision tasks. In the following sections, we demonstrate how we reduce this computational complexity to enable scaled-up models that are capable of language generation.

Figure 1: Model Architecture.

## 3 Methods

### Model Architecture

The high-level architecture of SpikeGPT is shown in Fig. 1. The following sections formalize the various components of the model.

### Binary Embedding

To maintain consistency with the binary activations of SNNs, we propose a binary embedding step to convert the continuous outputs of the embedding layer into binary spikes. The conversion is performed using a Heaviside function for feed-forward propagation, which maps the continuous values to binary spikes. As this is a non-differentiable function, the arctangent function (a sigmoid-like shape) is applied as a 'surrogate gradient' for backward propagation to provide a biased gradient estimator [15; 32], which can be represented as:

\[\sigma^{\prime}(x)=\frac{1}{\pi}\arctan(\pi x)+\frac{1}{2} \tag{1}\]

This allows us to convert continuous embedding values into spikes using non-differentiable functions, while still being able to perform backpropagation and update the weights of the embedding layer [32].
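For concreteness, the binary embedding can be sketched in PyTorch as a custom autograd function: the Heaviside step is used in the forward pass, and the derivative of the arctangent expression in Eq. 1, \(1/(1+(\pi x)^{2})\), is substituted in the backward pass. This is an illustrative sketch rather than the authors' released code; the class and function names are our own.

```python
import torch

class SpikeFunction(torch.autograd.Function):
    """Heaviside step in the forward pass; arctangent surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()  # binary spikes

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx [(1/pi) * arctan(pi * x) + 1/2] = 1 / (1 + (pi * x)^2)
        return grad_output / (1.0 + (torch.pi * x) ** 2)

def binary_embedding(tokens, embedding):
    """Embed tokens, then binarize the continuous embeddings into spikes."""
    return SpikeFunction.apply(embedding(tokens))

# usage sketch: a (batch, sequence) tensor of token ids -> binary spike embeddings
emb = torch.nn.Embedding(50_000, 512)
spikes = binary_embedding(torch.randint(0, 50_000, (8, 128)), emb)
```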
### Token Shift

Given an input \(X\), we perform a _token shift_ operation on it as follows:

\[\begin{split}& X_{s}=\text{ZeroPad}_{[0,0,-1,1]}(X)\\ & W_{\text{shift}}=[(\tfrac{i}{E})^{n/N}],\ i=1,\cdots,E\\ &\mathcal{X}=W_{\text{shift}}\odot X+(1-W_{\text{shift}})\odot X_{s}\end{split} \tag{2}\]

where \(E\) is the embedding size of each token, ZeroPad denotes the zero padding operation2, \(n\) is the current block, and \(N\) is the total number of blocks.

Footnote 2: The subscript \([0,0,-1,1]\) is written with PyTorch syntax in mind, where \(-1\) clips the top row and \(1\) zero-pads the bottom row.

The _token shift_ operator combines information from the global context with information of the original token to provide the token with better contextual information. This strengthens the connection between the token and its neighboring tokens, making it easier for the model to learn the token combinations that have appeared before. This is similar to the induction head [33]. To some extent, _token shift_ is a lightweight and inexpensive alternative to the attention mechanism.

### Spiking RWKV (SRWKV)

#### 3.4.1 Recall Self-Attention

The self-attention operation lies at the heart of Transformers. In Transformers, self-attention takes an input sequence \(X\) and applies a scaled dot-product attention. Formally, self-attention is defined as:

\[f(X)=\sigma\Big{(}\frac{QK^{T}}{\sqrt{d_{k}}}\Big{)}V,\quad\text{s.t.}\ Q=XM_{Q},\ K=XM_{K},\ V=XM_{V} \tag{3}\]

where \(M_{Q}\in\mathbb{R}^{d\times d_{k}}\), \(M_{K}\in\mathbb{R}^{d\times d_{k}}\), \(M_{V}\in\mathbb{R}^{d\times d_{v}}\) are linear transformations, and \(\sigma\) is the non-linearity function, by default set as the _softmax_ (applied to each row of a matrix). \(d_{k}\) and \(d_{v}\) are the dimensions for key and value, respectively. Self-attention enables the model to learn the dependencies between any two tokens in a sequence.

#### 3.4.2 Receptance Weighted Key Value (RWKV)

In this section, we introduce vanilla RWKV for natural language generation [36]. Inspired by the Attention Free Transformer [44], RWKV acts as a replacement for self-attention. It reduces computational complexity by swapping matrix-matrix multiplication with a convolution that sweeps along the time dimension. We subsequently modify this step to instead operate recurrently on input data. This modification enables compatibility with recurrent SNNs, thus making it more manageable to run on limited resources.

**Vanilla RWKV:** Given an input token-shifted embedding vector \(\mathcal{X}\), similar to self-attention, RWKV first applies a linear transform \(R=\mathcal{X}M_{R}\), \(K=\mathcal{X}M_{K}\), \(V=\mathcal{X}M_{V}\). \(\mathcal{X}\) is a time-varying embedding (varying over the sequence), and so \(R,K,V\) are also time-varying. Fig. 1 depicts the sequence unrolled into a set of 2-D matrices. \(M_{R}\), \(M_{K}\) and \(M_{V}\) consist of learnable parameters, where \(K\) and \(V\) can be likened to the key and value matrices of self-attention. \(R\) is referred to as the receptance matrix, where each element indicates the acceptance of past information. Next, the following operation is applied:3

Footnote 3: \(\{M_{R},M_{K},M_{V}\}\in\mathbb{R}^{E\times H}\), where \(H\) denotes the hidden size. In RWKV, we set \(E=H\).
\[Y_{t}=\sigma(R_{t})\odot\frac{\sum_{i=1}^{t}\exp(W_{(T-i+1)})\odot\exp(K_{i})\odot V_{i}}{\sum_{i=1}^{t}\exp(W_{(T-i+1)})\odot\exp(K_{i})} \tag{4}\]

where \(\odot\) is the element-wise product, \(T\) is the sequence length, and \(\sigma\) is the nonlinearity applied to \(R\), with the default being sigmoid; \(W\in\mathbb{R}^{T\times E}\) is the positional weight decay matrix. \(W\) encodes the sequential importance of a given word on subsequent words. It is not directly learnable, but is determined by other learnable parameters. Intuitively, as time \(t\) increases, the vector \(Y_{t}\) is dependent on a longer history, represented by the summation of an increasing number of terms. For the target position \(t\), RWKV performs a weighted summation over the positional interval \([1,t]\), and takes the Hadamard product of the weighted result with the receptance \(\sigma(R_{t})\). By taking the sigmoid of \(R_{t}\), the receptance acts as a 'forget gate' by eliminating unnecessary historical information.

**Similarity to Multi-Headed Self-Attention:** Distinct from the method of calculating the matching degree4 between tokens used by the self-attention mechanism, RWKV decomposes the calculation of the matching degree into: \(\alpha_{ij}=\sigma(R_{i})\odot\exp(W_{T-i+1})\odot\exp(K_{j})\), where \(\alpha_{ij}\in\mathbb{R}^{E}\) is a vector. Each element \(\alpha_{ijk}\) of \(\alpha_{ij}\) represents the matching degree at the \(k\)-th position of the embeddings of the \(i\)-th and \(j\)-th tokens. In other words, it can be seen as a multi-headed RWKV with \(E\) heads, each of which has a hidden size of 1, which is similar to the multi-headed self-attention (MHA) mechanism.

Footnote 4: A scalar in self-attention, \(\alpha_{ij}=Q_{i}K_{j}^{T}\).

**Positional Weight Decay:** The positional weight bias \(W\) is a function of both learnable parameters and pre-calculated matrices, formalized below. In general, for a given word, the elements of \(W\) decay over the sequence. When this rule of thumb does not hold, it likely means the model is embedding long-range dependencies across a sequence. The positional weight bias matrix \(W\) is determined by three matrices, \(W_{d}\), \(W_{c}\) and \(W_{f}\):

\[W_{d}=\ln(W_{s}),\quad W_{s}\in\mathbb{R}^{E\times 1} \tag{5}\]

\[W_{c}=[(-T+2)\quad(-T+3)\quad(-T+4)\quad\cdots\quad-1\quad 0]\in\mathbb{R}^{1\times(T-1)} \tag{6}\]

\[W_{f}=[\ln(0.3)\ \ln(0.3)\ \cdots\ \ln(0.3)]\in\mathbb{R}^{E\times 1} \tag{7}\]

where \(W_{s}\) is a pre-calculated matrix dependent on the layer and the size of \(E\), \(W_{d}\) and \(W_{f}\) are both learnable, and \(W_{c}\) is a static, pre-calculated matrix based on a decay prior. The final matrix \(W\) is calculated below:

\[W=\exp(\mathrm{concat}(W_{d}\times\exp(W_{c}),\,W_{f})),\quad W\in\mathbb{R}^{E\times T} \tag{8}\]

where \(\mathrm{concat}\) denotes the concatenation of two tensors in the temporal dimension, and the operator '\(\times\)' is the outer product of two vectors.

**RWKV as a 1-D Convolution:** Eq. 4 only calculates the weighted summation for target position \(t\). On the basis of Eq. 4, the values of all target positions can be represented as a 1-D convolution:

\[Y=\sigma(R)\odot\frac{\exp(W)\otimes(\mathrm{LeftPad}(\exp(K)\odot V))}{\exp(W)\otimes\mathrm{LeftPad}(\exp(K))} \tag{9}\]

where \(\otimes\) denotes the 1-D convolution operation, and LeftPad applies zero-padding to all columns preceding the \((T-1)^{\mathrm{th}}\) position.
Consider \(W\) to be a large convolutional kernel, performing a convolution with the matrix \(\exp(K)\) (or \(\exp(K)\odot V\)). The computational complexity of the complete convolution is \(\mathcal{O}(ET^{2})\) (assuming the number of filters matches the sequence length, where \(E\) is the embedding size). This can be further optimized by adopting the Fast Fourier Transform (FFT) to reduce the time complexity of the whole convolution operation to \(\mathcal{O}(ET\log T)\).

**Compatibility with SNNs:** The behavior of individual spiking neurons in an SNN is often described using differential equations, which cannot be solved analytically in closed-form expressions. In the context of recurrent networks, these equations must be solved numerically, which typically requires iterative methods that calculate the system's behavior step-by-step over time. Fortunately, from Eq. 4, we are able to derive a recurrent form of RWKV, which is perfectly compatible with recurrent SNNs.

#### 3.4.3 RWKV-Enabled SNN

The serial RNN form of RWKV is expressed as follows:

\[Y[t+1]=\sigma(RX[t])\cdot\frac{\exp(KX[t])\cdot(VX[t])+\exp(W)\cdot A[t]}{\exp(KX[t])+\exp(W)\cdot B[t]} \tag{10}\]

where \(t\) represents the time step index, and the variables \(R,W,K,V\) are the same as in Eq. 9. The hidden states \(A\) and \(B\) are given by

\[A[t]=\exp(KX[t-1])\cdot(VX[t-1])+\exp(W)\cdot A[t-1] \tag{11}\]

and

\[B[t]=\exp(KX[t-1])+\exp(W)\cdot B[t-1] \tag{12}\]

Finally, we integrate the spiking neuron model into the Spiking-RWKV module. As RWKV has been serialized, not only does the computational complexity decrease from \(\mathcal{O}(ET\log T)\) to \(\mathcal{O}(ET)\), but the output of RWKV can be passed sequentially and directly to spiking neurons without having to unsqueeze dimensionality for the feed-forward pass. This is in stark contrast to prior SNN-based Transformer methods, which combine matrix-matrix multiplications with recurrence. This leads to computational complexity scaling cubically with sequence length, without enhancing the network's ability to learn sequential information. Consequently, we achieve a more streamlined approach in our feed-forward process, allowing us to effectively process data in a streaming manner. We employ the Leaky Integrate-and-Fire (LIF) neuron as the default spiking neuron of our model, a widely used model for SNNs often trained via error backpropagation [25]. The formula is represented as follows:

\[\left\{\begin{array}{l}U[t]=H[t-1]+\beta(Y[t]-(H[t-1]-U_{\text{reset}}))\\ S[t]=\Theta(U[t]-U_{\text{threshold}})\\ H[t]=U[t]\cdot(1-S[t])\end{array}\right. \tag{13}\]

where \(\beta\) is a decay factor, \(U\) is the membrane potential (or hidden state) of the neuron, \(S\) is the spiking tensor with binarized elements, \(Y\) denotes the output of the previous serial RWKV block (see Eq. 10), \(\Theta(\cdot)\) denotes the Heaviside function, and \(H\) represents the reset process after spike emission. We set \(U_{\text{threshold}}=1\) and \(U_{\text{reset}}=0\) as done in Refs. [46; 27; 28]. To overcome the non-differentiability of the Heaviside step function \(\Theta(\cdot)\) during back-propagation, we employ the surrogate gradient approach. As with the binary embedding in Sec. 3.2, we utilize the arctangent surrogate function (Eq. 1) during the backward pass.
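To illustrate how Eqs. 10-13 fit together, here is a minimal per-token PyTorch sketch of one Spiking-RWKV step followed by the LIF neuron. It assumes single-vector states of shape \((E,)\) with \(E=H\), a per-channel decay vector `w` standing in for a column of \(W\), and element-wise operations throughout; the variable names are illustrative and this is not the authors' implementation.

```python
import torch

def srwkv_step(x, A, B, M_R, M_K, M_V, w):
    """One recurrent Spiking-RWKV step (Eqs. 10-12) for a single token x of shape (E,).

    A and B are the running numerator/denominator hidden states; w is a
    per-channel decay vector. Returns the output y and the updated states.
    """
    r, k, v = M_R @ x, M_K @ x, M_V @ x
    ek, ew = torch.exp(k), torch.exp(w)
    y = torch.sigmoid(r) * (ek * v + ew * A) / (ek + ew * B)  # Eq. 10
    A = ek * v + ew * A  # Eq. 11
    B = ek + ew * B      # Eq. 12
    return y, A, B

def lif_step(y, H, beta=0.9, u_threshold=1.0, u_reset=0.0):
    """Leaky integrate-and-fire neuron (Eq. 13) with hard reset after a spike."""
    U = H + beta * (y - (H - u_reset))  # membrane potential update
    S = (U > u_threshold).float()       # Heaviside; surrogate gradient in training
    H = U * (1.0 - S)                   # reset where a spike was emitted
    return S, H
```

Because each step touches only \(\mathcal{O}(E)\) state, streaming a sequence of length \(T\) costs \(\mathcal{O}(ET)\), matching the complexity claim above.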
### Spiking Receptance Feed-Forward Networks (SRFFN)

Each block in our model contains a fully connected feed-forward network with a gating mechanism (SRFFN), which is applied to the normalized and token-shifted output of each Spiking-RWKV module. This SRFFN module consists of three linear transformations with \(ReLU^{2}\) activations as follows:

\[Y^{\prime}[t]=\sigma(M_{P}X[t])\odot M_{S}(ReLU^{2}(M_{G}X[t])) \tag{14}\]

where \(Y^{\prime}[t]\) denotes the output of the SRFFN at time-step \(t\), which is then passed to the spiking neuron (Eq. 13). \(\{M_{P},M_{G},M_{S}\}\in\mathbb{R}^{E\times H}\) are learnable parameters of the linear transformations. The SRFFN is a variant of the Gated Linear Unit (GLU) [7], which can control the degree of information flowing into the model via \(\sigma(M_{P}X[t])\). In order to maintain consistency between the SRFFN and GEGLU [39] parameters, we set the size of \(H\) in the SRFFN to \(4E\).

### Training & Inference

This section will be updated with details for each model once all SpikeGPT variants have completed training. Until then, we refer the reader to the SpikeGPT repository.

## 4 Experiments

We conduct a series of experiments to optimize the performance of our SpikeGPT model by training it at three parameter scales: 45 million, 125 million, and 260 million parameters. Our code is based on PyTorch [35] and SpikingJelly [14].

### Datasets

We test two variants of the 45 million parameter model: one where \(T=1024\) and another where \(T=3072\). We used the Enwik8 dataset to conduct both training and testing. The findings of this experiment are presented in Table 1. To explore the efficiency of our 125 million parameter scale, we trained our model using the BookCorpus [47] dataset; generated text samples are provided in Fig. 3. Our most extensive model, with 260 million parameters, was trained using the OpenWebText2 [17] dataset. Text samples from this experiment are shown in Fig. 2. At present, we are conducting additional experiments on the larger models and will update this preprint once completed. All experiments were conducted on four NVIDIA V100 graphics cards. The 45M, 125M and 260M models were trained for 12, 24 and 48 hours, respectively.

| Method | Binary | L | d | T | Train bpc | Test bpc | SynOps |
|---|---|---|---|---|---|---|---|
| Transformer | ✗ | 12 | 512 | 1024 | 0.977 | 1.137 | \(9.6\times 10^{10}\) |
| Transformer | ✗ | 24 | 256 | 1024 | 1.039 | 1.130 | - |
| Reformer | ✗ | 12 | 512 | 1024 | 1.040 | 1.195 | - |
| Synthesizer | ✗ | 12 | 512 | 1024 | 0.994 | 1.298 | - |
| Linear Transformer | ✗ | 12 | 512 | 1024 | 0.981 | 1.207 | - |
| Performer | ✗ | 12 | 512 | 1024 | 1.002 | 1.199 | - |
| Stacked LSTM | ✗ | 7 | - | - | 1.420 | 1.670 | - |
| SHA-LSTM (no attention) | ✗ | 4 | 1024 | 1024 | - | 1.330 | - |
| SpikeGPT 45M | ✓ | 12 | 512 | 1024 | 1.113 | 1.283 | \(4.35\times 10^{9}\) |
| SpikeGPT 45M | ✓ | 12 | 512 | 3072 | 0.864 | 1.262 | \(1.30\times 10^{10}\) |

Table 1: Enwik8 results, measured in bits per character (bpc): the lower the better. Baseline comparisons are made with Reformer [22], Synthesizer [41] (the best performing dense version), Linear Transformer [20], Performer [4], Stacked LSTM [18] and SHA-LSTM [29]. \(L\), \(d\), and \(T\) denote the number of blocks (network depth), dimension of features, and sequence length, respectively.
Both Linear Transformer and Performer are implemented with customized CUDA kernels (github.com/idiap/fast-transformers), and all other models are implemented in native PyTorch. (**Note: Interim results. Still in training; to be updated.**)

### Comparisons

With the Enwik8 dataset, we use the same training/test splits and pre-processing conventions as Ref. [6]. A 12-layer, 512-dimensional, 8-head architecture with a 1024-neuron dense layer serves as our Transformer benchmark. We also compare with a number of effective, similarly sized Transformer baselines, including Reformer [22], Synthesizer [41], Linear Transformer [20] and Performer [4]. In addition, as we are using recurrent structures, we included representative LSTM variants [19]: Stacked LSTM [18] and SHA-LSTM [29]. From Tab. 1, we see that with the \(L=12,d=512,T=3072\) architecture, the proposed model achieves the lowest training bits per character (bpc), which is an indicator of high model capacity.

### Results

While our model's test performance is slightly below that of the standard Transformer and several other Transformer variations, it nonetheless remains similar in performance with 22\(\times\) fewer synaptic operations (SynOps; \(9.6\times 10^{10}/4.35\times 10^{9}\approx 22\)). SynOps is a metric that accounts for activation sparsity, where only multiply-accumulate operations using non-zero activations are counted. The Transformer is measured using full-precision (fp32) SynOps, whereas SpikeGPT uses binarized SynOps. Therefore, a given SynOp for SpikeGPT is substantially cheaper in terms of energy consumption than a SynOp of the Transformer. Neuromorphic hardware is able to exploit activation sparsity by skipping memory access and computation when no spikes are emitted [8; 31; 26; 16]. We continue to optimize the larger-scale models and will update this preprint as we develop more detailed performance metrics.

Figure 2: Example of text generated by SpikeGPT 260M. The model is trained on OpenWebText2.

Figure 3: Example of text generated by SpikeGPT 125M. The model is trained on BookCorpus.

## 5 Conclusion

Our preliminary results demonstrate that event-driven spiking activations are not only capable of language generation, but that they can do so with fewer high-cost operations. We develop techniques that promote lightweight models for the NLP community, and make large-scale models for the neuromorphic and SNN community more effective. We demonstrate how large SNNs can be trained in a way that harnesses advances in transformers and our own serialized version of the attention mechanism. In the meantime, we continue to test and validate our larger-scale models, and will continue to update this preprint and provide our code implementation here: [https://github.com/ridgerchu/SpikeGPT](https://github.com/ridgerchu/SpikeGPT).

## Acknowledgements

We are grateful to Bo Peng for his fruitful comments, corrections and inspiration in making large-language models more accessible.
2309.00389
Building and Managing a Tropical Fish Facility: A Do-It-Yourself Guide
At the core of most research in zoological disciplines, ranging from developmental biology to genetics to behavioral biology, is the ability to keep animals in captivity. While facilities for traditional model organisms often benefit from well-established designs, construction of a facility for less commonly studied organisms can present a challenge. Here, we detail the process of designing, constructing, and operating a specialized 10,000-liter aquatic facility dedicated to housing cichlid fishes for research purposes. The facility, comprising 42 aquaria capable of division into up to 126 compartments, a flow-through rack for juveniles, egg tumblers for eggs and embryos, and a microinjection setup, provides a comprehensive environment for all life stages of cichlid fishes. We anticipate that a similar design can also be used for other tropical teleost fishes. This resource is designed to promote increased efficiency and success in cichlid fish breeding and research, thereby offering significant insights for aquatic research labs seeking to build or optimize their own infrastructures.
Claudius F. Kratochwil, Muktai Kuwalekar, Jan Haege, Nidal Karagic
2023-09-01T11:06:10Z
http://arxiv.org/abs/2309.00389v1
# Building and Managing a Tropical Fish Facility: A Do-It-Yourself Guide

###### Abstract

At the core of most research in zoological disciplines, ranging from developmental biology to genetics to behavioral biology, is the ability to keep animals in captivity. While facilities for traditional model organisms often benefit from well-established designs, construction of a facility for less commonly studied organisms can present a challenge. Here, we detail the process of designing, constructing, and operating a specialized 10,000-liter aquatic facility dedicated to housing cichlid fishes for research purposes. The facility, comprising 42 aquaria capable of division into up to 126 compartments, a flow-through rack for juveniles, egg tumblers for eggs and embryos, and a microinjection setup, provides a comprehensive environment for all life stages of cichlid fishes. We anticipate that a similar design can also be used for other tropical teleost fishes. This resource is designed to promote increased efficiency and success in cichlid fish breeding and research, thereby offering significant insights for aquatic research labs seeking to build or optimize their own infrastructures.

* **Timeline:** Determine your timeline for the fish facility's partial or full operation. Consider the option of building in stages to meet urgent research needs first and gradually expand the facility. Phased construction allows for effective financial management, flexibility in design and functionalities, and the opportunity to refine and optimize future expansions based on lessons learned from the initial phase.
* **Research Objectives:** Clarify your research goals and determine the number of fish required in the short and long term. Consider factors such as the frequency of experiments, breeding requirements, and sample sizes needed for your research.
* **Collaboration and Decision Making:** Engage in discussions with relevant people within your university to determine who needs to be involved in the decision-making process regarding the fish facility. Deans, directors, veterinarians, university architects, or electricians may have valuable input, potential concerns, or disagreements that could influence your plans. Seek their expertise and address any potential issues early on to ensure a smooth implementation.
* **Additional Room Requirements:** Consider the need for additional space within the room, beyond the fish tanks. Depending on your research objectives, it may be advantageous to allocate space for equipment such as microinjection setups, microscopes, computers for data recording, worktables, video or photography setups, or 2D shakers for raising embryos. Assess the requirements of your specific research projects and plan the room layout accordingly.
* **Heating:** Evaluate the heating situation of the room. If you plan to house tropical fish species, maintaining warm temperatures is crucial. Determine if the room is already heated or if additional heating measures are required. Consider whether precise temperature control is necessary or if a certain temperature range is acceptable. Additionally, decide whether you prefer to heat individual tanks using heaters or if alternative heating methods are available and suitable for your needs.
* **Other External Factors:** It is important to consider various external factors that can impact the facility's functionality.
If the room has windows, managing light-dark rhythms and controlling algae growth under direct sunlight can present challenges. Ideally, the facility would be sheltered from external light and illuminated with broad-spectrum LED lamps that operate automatically on a natural light-dark cycle. Ineffective ventilation can also affect the room's humidity due to the presence of a large water surface. To address these issues, it may be necessary to incorporate a dehumidifier into the facility. This helps maintain optimal humidity levels, ensuring a better working environment, preventing mold development, and safeguarding the proper functioning of technical equipment, especially electronic and optical devices.

Figure 1: **The process of building a facility.** Building a facility from scratch can be a challenging task, especially for a young research team. **(A)** It starts with the decision of what to keep from former users of the room and getting everything cleaned up. **(B)** However, the challenges do not diminish with an empty room, as one must make long-term decisions about how the room will be configured. **(C)** Establishing the facility is a milestone, but it is only the beginning of the next journey, which is to maintain a functioning and organized facility and a productive working environment capable of addressing unforeseen challenges.

* **Water Supply:** Assess the availability and quality of the water supply in the room or building. Consider factors such as water source, water pressure, water temperature, water hardness, pH levels, and any potential contaminants such as chlorine or dissolved metals that may be harmful to the fish. Consider water treatment systems if necessary. Sufficient water pressure is particularly important to assess for some aquarium systems, including the one described here. In case the water pressure is not sufficient, a pumping system and an elevated reservoir might be needed.
* **Water Change Frequency:** Decide on the desired frequency of water changes based on the fish species and number, facility size, and available resources. Water changes can range from constant change (flow-through systems) to regular exchanges in large-volume sump systems or manual changes in individual aquaria. Consider the impact on fish well-being and requirements, start-up funding, and the time you can invest in maintenance.
* **Drainage System:** Evaluate the facility's drainage system thoroughly to ensure effective water management and prevent potential flooding. Adequate drainage is vital for maintaining a healthy environment. The drainage rate will also play a role in determining the organization of water changes and the consideration of implementing a flow-through system for efficient water circulation.
* **Experiment Types:** Determine the nature of the experiments you plan to conduct in the facility. Consider whether you will focus solely on maintenance or if you will also conduct behavioral experiments, breeding, or genetic crossing. This will influence the size and type of tanks required.
* **Labeling System:** In a facility that accommodates numerous users, species, and strains, an efficient labeling system is essential. The ideal system should allow for easy application, repositioning, and removal of labels while minimizing the risk of loss of information. Additionally, having the capability to label both the quantity and type of food is advantageous.
Electronic systems offer the benefit of data storage, but they lead to a significant increase in the time needed for data entry.

* **Disease Prevention:** Carefully consider where to buy fish from and evaluate the need to regularly introduce new fish into the facility. Consider the potential risks associated with disease transmission and implement appropriate quarantine protocols to minimize the introduction of pathogens or contaminants. Are there any particularly troublesome fish diseases for your species (e.g., Tilapia Lake Virus for cichlid fishes), and in what regions are they found? Unfortunately, for many tropical fish species little information is available regarding specific disease susceptibility. It can make sense to use the more extensive literature on related commercially used species. Major diseases of relevance to cichlids can, for example, be found in the Nile Tilapia fact sheet from the Cultured Aquatic Species Information Program of the Food and Agriculture Organization of the United Nations (FAO) (Rakocy 2006).
* **Power Supply and Utilities:** Assess the availability of power plugs in the facility for lighting, heaters, and other equipment. Especially during water changes with cold water, heaters draw a significant amount of power, which can lead to circuit overloads if the heaters are not distributed over different circuits. Additionally, consider whether compressed air piping is necessary for aeration and sponge filters. Ensure that the facility's utilities can support the operational requirements.
* **Maintenance Responsibilities:** Determine who will be responsible for maintaining the fish facility. Assess whether technical staff will be available or if lab members will take care of the facility, and how weekends, holidays and holiday seasons will be handled. Consider the workload required per week and ensure it aligns with the available resources and the long-term sustainability of the facility.
* **Consideration of Hazards:** Finally, but very importantly, assess potential hazards to humans, animals, the building, and the environment within the fish facility. Contemplate scenarios such as broken aquaria, tubing failures, fires, natural disasters, power outages, compressed air system malfunctions, or technical failures in any operational systems within the room. Evaluate the effects of both shorter and longer durations of these incidents. Implement safety measures and develop contingency plans to mitigate risks and minimize the impact of such events on the well-being of the fish, the facility, and the surrounding environment. This, for example, also includes getting confirmation on the proper sealing of the floor, as floor renovations will be challenging once the facility is in place.

These initial considerations will help guide the design, construction, and operation of your fish facility, setting the stage for a successful and productive research environment. It is also recommended to consult with experts or experienced researchers to ensure all necessary aspects are considered.

## 3 | Thinking it through: A case study of a cichlid fish facility

In this chapter, we share our thought process of designing a specialized facility for our Integrative Evolutionary Biology lab at the University of Helsinki, Finland. In our research we focus on the Genetics, Genomics, and Evolutionary Developmental Biology of East African cichlids.
Our primary objectives for our research were to create a space that would enable us to maintain 10-15 species, to perform hybrid crosses and raise \(\mathrm{F_{1}}\) and \(\mathrm{F_{2}}\) generations in sufficient numbers, and to independently raise clutches of embryos for developmental analyses. These considerations heavily influenced our design choices. To do so, we had to create an optimal research environment aligned with our objectives and needs. We encountered various considerations that we already discussed in the second chapter, from space availability and funding to long-term maintenance plans and potential hazards. Collaboration and open communication with colleagues and experts played a vital role in guiding our decisions. Our personal journey should serve as a relatable account, offering practical guidance for fellow researchers. By sharing what worked for us and what lessons we learned, we hope to assist others in making informed choices and navigating the complexities of building their own specialized research facilities.

The decision about the general setup necessitates consideration of several interdependent factors. Most important among these is the availability of physical space, which shapes the facility's design and capacity, influencing tank numbers, equipment layout, and workflow design. Moreover, funding constraints, both for building and running costs, might impact the construction, maintenance, and operation of the facility, as well as personal involvement in its construction and maintenance. Accommodating the distinct requirements of fish species, such as water quality parameters, is crucial for their well-being and successful breeding. Furthermore, the facility's design must align with the needs of various research experiments, necessitating adaptable configurations to cater to tanks for raising, maintaining, and isolating fish, as well as experimental setups that eventually include specifically controlled environments. Throughout the entire process, there must be communication with the necessary authorities and involved parties, whether it be veterinarians, ethics boards, university architects, neighboring labs and facilities, electricians, directors, or deans.

**Space and Funding.** In our case, we had access to ca. 32 m\({}^{2}\) (ca. 5.3 x 6 m) of space to construct our fish facility and related infrastructure. Our primary goal was to maximize the available space for housing the fish, while still allowing comfortable movement within the facility. Additionally, we allocated an area for a table equipped with a microinjector and a microscope to perform microinjections, ensuring we had the necessary tools for our research activities. The inclusion of a sink in the room was a nice addition, as it is essential for cleaning filters and other equipment. At the beginning it is important to make a floor plan to determine precisely where aquaria and other equipment will go. Funding-wise, we luckily had access to flexible start-up funding, which allowed us to dynamically set priorities for lab equipment and the fish facility. After exploring commercial solutions that would have cost at least 150,000€ for a very limited aquarium space, we opted for a custom solution that we built ourselves, reducing the cost to roughly 30,000€ while being much more functional and spacious. This decision carried the risk of potential challenges and failures, but it provided us with the opportunity to build the facility to meet our specific needs.
**Fish species and their space, water, and substrate requirements.** Our goal was to accommodate around 10-15 species of East African Haplochromine cichlids from Lake Malawi and Lake Victoria. These fish require relatively large aquaria, ideally exceeding 150 liters. Among the critical environmental considerations for these fish, water hardness took precedence. In Lake Malawi, water hardness can range from 10 to 20 degrees of hardness (dH), which significantly surpasses the average tap water hardness in Finland, usually ranging from 2 to 4 dH. Consequently, we needed to manipulate water parameters to establish a suitable environment for our cichlids. This simultaneously ruled out the possibility of employing a flow-through system (as salt would be washed out) and of using smaller aquaria (owing to difficulties in maintaining stable hardness levels). Additionally, the water temperature couldn't be controlled. To enhance water hardness, we introduce buffering substances (sodium hydrogen carbonate, NaHCO\({}_{3}\), and magnesium sulphate, MgSO\({}_{4}\) x 7H\({}_{2}\)O). This helped stabilize the pH at 8.0 and elevate hardness levels (GH 7, KH 10) within the desired range for East African cichlids (pH: 7.6 - 8.4; GH (General Hardness): 4 - 12 dGH; KH (Carbonate Hardness): ...).

Figure 3: **Shelf and piping design.** **(A)** The panel presents a simplified illustration of our aquarium facility featuring four interconnected racks (2m height; 1m width) housing eight 230-liter aquaria. The system is designed to expedite water changes while mitigating the risks associated with broken aquaria and disease. The aquaria are semi-isolated and can also be fully isolated for optimal control. **(B)** A more detailed schematic demonstrates the system's functionality. During the water change process, OutB1 and OutB2 are opened, allowing the removal of 20% of water from all eight aquaria. OutC can adjust the flow rate to the drainage if needed. To introduce new water, InA is opened, and the InB and InC valves are adjusted to ensure uniform water inflow. Minor discrepancies are balanced out as the aquaria are semi-connected. OutD can be utilized at the end to clear the tubes of any remaining water.

A disadvantage of the PMMA sheets used for lids and dividers is that they can warp slightly, but the sheets can be turned from time to time to make them return to their original shape. Among the most expensive items were the shelves to carry the aquaria. Due to the substantial weight they must support (a single aquarium filled with water weighs nearly 250 kg), careful consideration is necessary. As a solution, we have chosen to procure heavy-duty shelves that offer the capability of being interconnected. These shelves not only provide the necessary strength to accommodate significant loads but also allow for the simultaneous installation of cables, lighting, pipes, and tubes. Additionally, they offer storage space on the top surface.

**Finding creative solutions to reduce maintenance time and costs.** When it comes to maintaining a facility with multiple aquaria, the most time-consuming tasks are water changes and cleaning of the aquaria. Our objective was to develop a system that is safe, easy to use, and minimizes risks related to material damage, human error, and disease. We excluded a flow-through system due to issues with water hardness, temperature, and costs. While a sump system was another option, we opted against it due to its complexity, space requirements, and the potential for leaks and overflows.
However, we wanted to avoid an isolated aquarium system that is time-intensive to maintain. Thus, we designed a system where aquaria could be simultaneously drained to 80% using a valve (**Figure 3**). This design also allowed for individual aquarium isolation and was failsafe, ensuring that a single aquarium breakage would not lead to complete emptying of the system. Similarly, we aimed for an efficient water inflow system, enabling all aquaria to be refilled using a valve. The only alteration required to the aquarium units and their lids was to order aquaria featuring a pre-drilled hole at one of the lower corners to facilitate outflow (**Figure 2A**). We divided the lid into two sections: a smaller, non-opening part which included the hole for water inflow, preventing water spillage, and a larger part that can be easily opened without any hindrance (**Figure 4A**).

**Lighting and heating.** The last decisions to be taken at the beginning pertained to heating, lighting, filtration, and aeration. Regarding heating, we opted for standard sera 150W heaters (**Figure 4C, F**). An alternative consideration was room heating; however, this option is not only more costly but also exhibits greater variability. The temperature can be monitored with thermometers. We recommend non-digital options, as the batteries of digital options have to be frequently replaced. In terms of lighting, a crucial aspect was the capacity to modulate intensity and on/off timing while minimizing energy consumption. Consequently, we selected NICREW LED lights (**Figure 4D, E**), which boast high efficiency (180 LEDs), low power usage, water resistance, and extendable brackets to accommodate varying aquarium widths. Moreover, these lights offer the flexibility to control the on/off and brightness settings of both the white and blue LEDs either independently or simultaneously.

**Filtration and oxygenation.** The final deliberation encompassed aeration and filtration. The top criteria here were good filtration, easy maintenance, and low risk of outage. As we had a compressed air system installed in the room, we decided to have two large sponge filters per aquarium (**Figure 4C, F**). Sponge filters offer a range of advantages that make them a practical choice for aquarium filtration. They excel in mechanical filtration by effectively trapping larger debris and particles from the water column. Furthermore, their porous structure provides ample surface area for beneficial bacteria, supporting biological filtration and the conversion of harmful ammonia and nitrite into less toxic nitrate. These filters also function as aeration devices, generating gentle water movement and surface agitation to promote oxygen exchange. This makes them suitable for aquaria with delicate or slow-moving fry, as they create a mild current that does not harm them. Additionally, their reliance on compressed air for operation means that they are less affected by power outages, as compressed air systems are often restored early due to their involvement in critical processes. Another benefit lies in their ease of maintenance: a simple rinsing of the sponge during regular water changes suffices for cleaning. However, it is important to note that sponge filters do have disadvantages, as they might not be sufficient for heavily stocked aquaria. Such aquaria might need additional mechanical filtration to prevent the clogging of the sponge material, which would compromise biofiltration. Yet, in our experience, two large sponge filters (height of sponge material 10cm, diameter 11cm), in combination with our substrate, which also contributes to biofiltration, and a biweekly filter cleaning regime turned out to be sufficient for a 230-liter aquarium with medium densities of fish. It should be noted that biofiltration efficiency, and thus the amount of internal surface needed in the filtration unit, depends strongly on factors such as water flow, salinity, pH, oxygen and, most importantly, temperature. Hence our practice as described above might not be applicable under different conditions, especially if lower water temperatures are used. The amount of biofilter media needed given a certain daily feed input can, for example, be calculated using freely available resources like the FAO handbook on small-scale aquaponic food production (Carruthers, 2015); see the sizing sketch further below. Very importantly, when new aquaria are used for the first time, an initial system cycling has to be performed in order to establish bacteria populations. This process may take 3-5 weeks and needs some regular effort in terms of adding an ammonia source and monitoring the levels of different nitrogen compounds.

Figure 4: **Aquaria and their equipment.** **(A)** Acrylic sheet lid with inflow system. **(B)** Self-drilled hole (using a 20mm hole saw) in the acrylic sheet lid to be able to easily open lids. **(C)** Acrylic sheet divider with self-drilled holes (8mm) and heater, artificial plants, and a slightly smaller sponge filter that we tried at the beginning. **(D)** Lighting system with LED lights over every aquarium. **(E)** Lights are simply fixed with cable ties. **(F)** A fully running aquarium with sponge filters, heater, sand, artificial plants and hiding places.

## 4 Making it happen: decisions, costs and the story of building our cichlid fish facility

### Deciding on and starting the implementation.

The next step was to start building and testing the facility. We implemented a two-phase construction approach for our fish facility. The first phase involved building a quarter of the facility, which allowed us to begin operations promptly and test the facility on a smaller scale. Interestingly, the first phase did not necessitate significant changes (a few are listed below) or optimization of our original plan, indicating the soundness of our initial designs and operations. However, the greatest value derived from this phase was the confirmation that our general approach worked. Having witnessed the functionality and success of our strategy on a smaller scale significantly reduced the stress and uncertainty associated with the large investment of constructing the remaining three-quarters of the facility in the second phase. Hence, the phased approach not only got us up and running quickly, but it also gave us the confidence to move forward with the construction of the whole facility.

Figure 5: **List of items and where they are installed.** This figure provides a summary of all items that are built into the facility.

Figure 6: **Piping construction.** **(A)** Inflow system of the upper level. **(B)** Inflow at one aquarium. **(C)** Side view of the aquarium feed-through and water level control system. **(D)** Bottom view of the aquarium feed-through and water level control system. **(E)** Water change outflow control. **(F)** Pipes can be fixed to the rack using polyethylene blocks and pipe clamps. **(G)** The system can also be extended across racks. The only important thing is that the piping is horizontal.
Using couplings at regular intervals eases building the system and allows parts to be replaced if necessary.

**Installation of the aquaria.** The installation of the facility proceeded surprisingly smoothly. Initially, shelves were set up, and the aquaria were carefully positioned on green camping mats to mitigate the risk of glass fractures from stress. Following this, outflow and inflow pipes were introduced into the setup. For this purpose, we utilized PVC pipes, fittings, and valves (**Figures 5 and 6**). Particularly for valves, it is important to ensure high quality, which means that they not only effectively hold the water pressure, but can also be smoothly opened and closed. To ensure accurate lengths, pipes were cut using a reliable pipe cutter, which saved time and simplified the task. To ensure seamless pipe connections, a deburring tool was employed on both the inside and outside ends of each pipe. This step created smooth, chamfered edges, facilitating even distribution of solvent for effective pipe joining. Before the gluing process, we applied PVC cleaner to eliminate any residues and impurities from areas where glue would be used. The components were then bonded using PVC adhesive, a process known as solvent welding. The elements are carefully joined together, and during the subsequent five minutes, they must remain undisturbed. This brief waiting period is crucial for the adhesive to establish a strong initial bond between the components. After the five-minute interval, the connected elements are left to dry for a minimum of 24 hours. This extended drying duration is essential to ensure the adhesive undergoes a thorough curing process. Over this time, the adhesive undergoes a chemical reaction that leads to the fusion of the PVC material. As a result, a robust and long-lasting connection is formed. For the cleaning and gluing of the elements, the safety instructions should be read. Ensuring proper ventilation is essential, along with using suitable protective gear like gloves and protective clothing. The setup's overall arrangement is illustrated in **Figures 5 and 6**. Incorporating PVC screw connections with adhesive sockets at intervals simplifies future installation, replacement, and maintenance tasks. For secure placement, pipe clamps are utilized (**Figure 6F**), attaching them to the shelf's vertical bars using wooden or high-density polyethylene blocks. After a minimum of 24 hours to allow the adhesive to set, the system can be tested. Minor leaks, if any, can be resolved by draining the pipes and applying underwater adhesive (e.g., Hobby Fix aquarium glue). This process can be repeated until all leaks are fixed. Following this, all aquaria should be filled and tested to ensure both the tanks and the interconnected pipes hold water effectively.

**Equipping the aquaria.** Next on the list is to add filters, lights, heaters, substrate, aquarium enrichment like plastic plants and hiding places, thermometers, and lids. This process should be relatively straightforward. Filters can be connected via valve manifolds and adapters to the compressed air system. Lights are mounted onto the horizontal metal bars, and the light level and on/off cycle are set up. Heaters are simply added to the back of the aquarium. For electric plugs, it is important that they are placed above the aquaria to prevent water from reaching them. The substrate, plastic plants, and hiding places are then added to the aquarium.
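To make the biofilter sizing mentioned in the filtration section above concrete, the sketch below estimates the biologically active surface area needed from the daily feed input. The constants (TAN production ≈ feed x protein fraction x 0.092; an areal nitrification rate of roughly 0.2-1 g TAN per m\({}^{2}\) per day) are common rules of thumb from recirculating-aquaculture references, not measurements from our facility.

```python
def biofilter_area_m2(feed_g_per_day, protein_fraction=0.40,
                      tan_per_g_protein=0.092, nitrification_g_per_m2_day=0.5):
    """Estimate the biofilter surface area needed to oxidize the daily TAN load.

    TAN (total ammonia nitrogen) production is approximated as
    feed * protein fraction * 0.092, a common rule of thumb.
    """
    tan_g_per_day = feed_g_per_day * protein_fraction * tan_per_g_protein
    return tan_g_per_day / nitrification_g_per_m2_day

# e.g., 20 g of 40%-protein feed per day in one aquarium:
print(f"{biofilter_area_m2(20):.1f} m^2")  # -> ~1.5 m^2 of active surface
```

Given the large internal surface of sponge material, two 10 x 11 cm sponge filters plus the substrate can plausibly provide an area of this order at tropical temperatures, which is consistent with our experience described above.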
Figure 7: **The overall design of a facility.** After designing the individual racks, it becomes crucial to contemplate the most efficient utilization of the available space. Providing sufficient room for additional tasks, equipment, water supply, and drainage connections is of paramount importance. In our case, all aquaria located on the same level within racks of the same color are semi-connected. This implies that water changes across the entire facility are streamlined by merely opening four valves (two from the upper level and two from the lower level of the green and orange systems) to remove 20% of the water. Subsequently, by closing these valves and opening two others (green and orange system), water is replenished in all aquaria simultaneously.

Figure 8: **Integration of a regular zebrafish rack.** **(A-D)** Zebrafish racks (**A**) are often used to raise young fish larvae. We integrated an available rack in our facility by adding it to the outflow of the aquaria (**B**). A fraction of the water during water exchange goes into the zebrafish rack (**C**). Old water of the zebrafish rack thereby flows into the pipes of the water change system (**D**).

Figure 9: **Additional items in the facility.** **(A)** A stereomicroscope with attached LCD screen and microinjection setup. **(B)** Egg tumbler for raising cichlid eggs. **(C)** Manifolds make managing pressurized air for sponge filters and tumblers easier. **(D, E)** Self-constructed lid holders make cleaning easier. **(F, G)** A good investment is a cleaner that uses water pressure to generate suction (JBL ProClean Aqua In-Out Water Change Set). **(H)** In some facilities a dehumidifier might be needed to reduce the humidity that can quickly rise in a room with many aquaria. **(I)** An additional external filter might help to keep critical aquaria (e.g., for egg incubation) clean.

The lids are ideally already ordered in two pieces. One hole is added to the small piece for water inflow, while another hole is added to the larger piece to allow for easy opening (this also serves as a convenient way to feed the fish later; **Figure 4A, B**). On the next day, the first salt is added (necessary in our case). The aquaria need to run for four weeks before the first fish can be introduced. During this time, water changes should be monitored and tested. Temperature and water parameters should be checked to confirm that they are within the correct range. Lastly, one must decide on fish food for differently sized fish, as well as for fish fry and juveniles. Already during the initial cycling, fish food should be added from time to time to allow the build-up of ammonia- and nitrite-oxidizing bacteria.

**Adding fish to the aquaria.** After a month, a few fish can be added to one of the aquaria. If they appear healthy after a few days, more can be gradually introduced. In our case the fish were evidently happy, as they showed no signs of discomfort and even started to breed after a few days. Therefore, we also acquired ten cichlid egg tumblers and added them all to one aquarium to be able to raise eggs outside of the mouthbrooding females. Moreover, we added catfish (_Ancistrus spp._) to all aquaria to reduce algae growth on the glass. During this period there might be fluctuations of water parameters and algae growth, so close monitoring is important.

**Learning from what is not working and further optimization.** While there were no major issues in this phase, there were a few things we realized needed optimization.
The biggest problem concerned the fact that the drainage system could not take that much water, causing mild flooding (which, however, was no issue in this waterproof room). To circumvent this problem, we added another valve to the outflow that limited the water flow to a rate the drainage could handle (**Figure 3B**). The second problem was some mild algae growth that we managed to get under control by reducing the light intensity of the LED lights, while maintaining physiologically relevant light conditions. As this was a particular problem in the aquaria with the egg tumblers, we added an external filtration unit with a UV filter to this aquarium. Third, we realized later during the summer months that the humidity in the room would increase quite drastically, which is why we decided to install an air dehumidifier that kept humidity around 50%.

### Expanding the facility.

**Adding more shelves and aquaria.** Expanding the facility should be much easier than the first step, as there is confidence that the design works in practice. Challenges might only arise from certain equipment no longer being available, which happened in our case for some of the fittings and valves. Costs for some items might also increase, as was the case for our aquaria. It might therefore be beneficial to expand the facility as soon as possible after the first phase has been completed successfully. Something else that is important to consider is how much space is needed for safe working in the facility during daily work as well as maintenance, and how much space is needed for the additional tasks that are conducted in the room; at this stage, making a clear room plan is of great importance (**Figure 7**).

**Incorporating an existing zebrafish rack in the system.** A typical situation to be confronted with is whether and how to incorporate existing material and premanufactured units into the system. In our case we inherited a fully functional zebrafish rack. These racks are expensive (one usually costs around 15,000€) and are excellent for keeping individuals separated during genotyping or for raising juveniles. Our challenge was that this rack needed water with the same parameters as the rest of the facility. Therefore, we decided to reroute some of the water released during water changes into the zebrafish rack; the excess water is then released over the overflow of the rack (**Figure 8**). While this is not fresh water, we expected, and could confirm, that the water parameters (e.g., nitrate and nitrite) are excellent in the zebrafish rack, due to the regular water changes we conduct in the facility and the amount of water that is always replaced in the zebrafish rack (flow-through of ca. 500 liters). This was a nice way to make sure that the water parameters of all systems align. It carries a risk of infections spreading from the main aquaria to the zebrafish rack, but as aquaria with sick fish can be blocked off via the valves, these effects can be at least reduced. Also, the zebrafish rack has an additional UV filter next to mechanical and biological filtration that can further mitigate the risks. Still, it might be advisable not to use such a rack for important fish.
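A simple dilution model shows why a fixed schedule of partial water changes keeps dissolved waste such as nitrate bounded, both in the aquaria and in the rack fed from them. With a fraction \(f\) of the water replaced each week and \(P\) mg/L of nitrate produced per week, the concentration follows \(C_{t+1}=(1-f)\,C_{t}+P\) and converges to \(C^{*}=P/f\). The production figure below is a placeholder for illustration, not a measured value from our facility.

```python
def nitrate_steady_state(production_mg_per_l_week, change_fraction):
    """Steady-state concentration under C_next = (1 - f) * C + P, i.e. C* = P / f."""
    return production_mg_per_l_week / change_fraction

# e.g., 5 mg/L of nitrate produced per week with 20% weekly water changes:
print(nitrate_steady_state(5.0, 0.20))  # -> 25.0 mg/L at equilibrium
```

Halving the change fraction would double the equilibrium concentration, consistent with our observation below that a two-week interval is too infrequent.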
**Adding working spaces for additional tasks.** Additional tasks that might be conducted in the fish facility are observations and manipulations on a microscope, microinjections for transgenesis and CRISPR-Cas9, outreach events with demonstrations (e.g., of embryos at higher magnifications), but also storage space (and means to reach it) and cleaning equipment. At the same time, one probably wants to maximize the space for aquaria. We decided to add one small table with a microscope and a microinjector (**Figure 9**). To aid training and outreach events, we obtained a microscope with an external monitor mounted to it, as well as the possibility to record videos and images on an SD card. For storage space we use a shelf above the sink as well as the space on top of the aquarium shelves. To reach this space easily and organize it, we bought a three-step ladder and several 40-liter wash baskets to conveniently store additional equipment. ### Establishing maintenance workflows **Feeding.** The choice of food is highly important, as it can significantly impact water quality. We utilize high-quality granule food. This food is available in three sizes (<0.5 mm, 0.5-1 mm, 1-2 mm). We employ a color-coding system for our labeled aquaria to ensure the right-sized food is provided. For fry, we directly feed decapsulated brine shrimp eggs. Feeding eggs directly offers an advantage as they do not need to be hatched. They can be conveniently frozen in aliquots at -20°C. Before feeding, they need to be placed in a small cup with aquarium water for 15 minutes. As the eggs are stored in highly saline water, they expand when introduced into the less salty aquarium water. This expansion before ingestion is crucial, as ingesting unexpanded brine shrimp eggs could be fatal. Despite the higher cost of decapsulated eggs, this method significantly reduces egg usage compared to hatching, preventing overproduction when only a few larvae need to be fed. This method also saves time and space, and mitigates the risk of forgetting to set up a hatchery. **Daily Maintenance.** Ensuring the facility's functionality and the well-being of the fish demands daily checkups. This can be seamlessly integrated with feeding routines. Clear protocols and training are vital to identify sick or injured fish and to know what actions to take in such cases. Filters, lights, temperature, water levels, leakage (water on the ground), water flow in zebrafish rack systems, humidity, and any signs of animal or technical issues should be diligently checked. These findings should be reported through a designated system, whether addressed or not, allowing others to review them and providing an avenue for feedback. This ensures that relevant parties are informed of the situation. A report sheet might even be required by certain authorities. Daily maintenance usually takes 15-30 minutes, though it may take more time if issues need to be addressed. **Weekly Maintenance.** Efficient water changes and maintenance are essential for conserving time and resources, while simultaneously upholding the facility's functionality and the well-being of the fish. For our facility, which consists of 42 x 230-liter aquaria and a zebrafish rack, we conduct a weekly water change of 20%. This amount may seem substantial, but a two-week interval is too infrequent, and a non-fixed schedule is more challenging to organize. In return, this weekly schedule ensures good water quality for heavily stocked aquaria and the zebrafish rack.
During the water change, water is first released by opening the outflow valves. Emptying does not require close monitoring due to the built-in water level control. Subsequently, the outflow valves are closed, and the inflow valves are opened. While refilling the aquaria, salt is added to reach the desired hardness and pH levels. For effective dissolution, the salt can be pre-dissolved in bottles placed in each aquarium, preventing oversights and enabling quicker dissolution. While continuous presence is not necessary, a timer is essential to prevent overflow. It is vital to check that everything is working properly one hour and one day after the water change, including water levels, temperature, and fish behavior. It is important to pay special attention to water temperature, as colder water is often used (warm water can affect fish well-being, since copper is often used for warm-water pipes). A minimal sketch of the water-change arithmetic is given after this paragraph.

\begin{table}
\begin{tabular}{l l}
\hline
**Item** & **Approx. cost** \\
\hline
**Shelves and Aquaria** & \\
10 x Heavy duty shelves (2 units, one for 6 and one for 4 aquaria) & \(\sim\)3000€ \\
10 x Aquarium 99 x 50 x 47cm, 8mm wall with drilled hole (Pavlica, Czech Republic) & \(\sim\)1100€ \\
1 x Delivery of aquaria (price of one palette within Europe, \(\sim\)450€) & \(\sim\)900€ \\
10 x Two-part acryl sheet cover and dividers & \\
40 x Suction cup divider supports (e.g., Atyhao) & \(\sim\)20€ \\
10 x Camping mats to put under aquaria & \(\sim\)50€ \\
\hline
**Water inflow system** & \\
14x PVC ball valves with adhesive sockets (Ø 20mm, e.g., Cpepe) \#Val20 & \(\sim\)130€ \\
8x Coupling (screwable) with adhesive sockets (Ø 20mm) \#Cou20 & \(\sim\)10€ \\
10x T-piece with adhesive sockets (Ø 20mm) \#Tpi20 & \(\sim\)4€ \\
6x PVC 90° elbow piece with adhesive sockets (Ø 20mm) \#Elb20 & \(\sim\)20€ \\
20x PVC 1m pipes (Ø 20mm) & \(\sim\)20€ \\
20x Pipe clamps (Ø 20mm) & \(\sim\)6€ \\
Optional depending on water supply: 1x Ball valve (variable size and connection) \#ValX & \(<\)10€ \\
Optional depending on water supply: 1x Hose \#HosX & \(<\)10€ \\
Optional depending on water supply: 1x Hose tail with adhesive socket (Ø 20mm) \#HotX & \(<\)10€ \\
\hline
**Water drainage system** & \\
14x PVC ball valves with adhesive sockets (Ø 32mm, e.g., Cpepe) \#Val32 & \(\sim\)190€ \\
10x PVC aquarium feed-through with adhesive sockets (Ø 32mm) and 40mm outer thread (to fix the feed-through); compatibility with the pre-drilled hole has to be checked & \(\sim\)50€ \\
8x Coupling (screwable) with adhesive socket (Ø 32mm) \#Cou32 & \(\sim\)16€ \\
10x T-piece with adhesive sockets (Ø 32mm) \#Tpi32 & \(\sim\)7€ \\
23x 90° elbow piece with adhesive sockets (Ø 32mm) \#Elb32 & \(\sim\)14€ \\
10x PVC end piece with adhesive socket (Ø 32mm) \#End32 & \(\sim\)5€ \\
10x Pipe transition nipple with outer thread (1″) and adhesive socket (Ø 32mm) \#Nip32 & \(\sim\)7€ \\
10x 90° elbow piece with inner thread (1″) and adhesive socket (Ø 32mm) \#Elb32IT & \(\sim\)10€ \\
25x PVC 1m pipes (Ø 32mm) & \(\sim\)42€ \\
25x Pipe clamps (Ø 32mm) & \(\sim\)9€ \\
1x Pipe reduction ring with adhesive sockets (Ø 20mm x Ø 32mm) \#Red2032 & \(\sim\)1€ \\
\hline
\end{tabular}
\end{table}
Table 1: Costs of establishing a fictional facility with 10 aquaria.* For most items it is advised to buy one or more extra items in case they break (depending on storage space, how likely a failure is, and what the collateral damage would be). Total cost for this setup with 10 aquaria without optional items would be around 9000€. *Costs are roughly given for a facility with 10 aquaria.
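The weekly routine lends itself to a quick script. The following is a minimal sketch (not part of the original facility documentation): the tank count, volume, and 20% change follow the text, while the salt dosages are hypothetical placeholders that must be replaced with values appropriate for the species kept.

```python
# Minimal sketch of the weekly water-change arithmetic for this facility.
# Tank count, volume, and the 20% change follow the text; the salt dosages
# below are HYPOTHETICAL placeholders, not recommendations.

N_AQUARIA = 42
TANK_VOLUME_L = 230
CHANGE_FRACTION = 0.20                     # weekly 20% water change

SALT_G_PER_L = {                           # grams per liter of NEW water
    "Epsom salt (MgSO4 * 7H2O)": 0.10,     # placeholder value
    "Natron (NaHCO3)": 0.05,               # placeholder value
}

per_tank_l = TANK_VOLUME_L * CHANGE_FRACTION
total_l = N_AQUARIA * per_tank_l
print(f"Water exchanged per week: {total_l:.0f} L ({per_tank_l:.0f} L per tank)")

for salt, dose in SALT_G_PER_L.items():
    print(f"{salt}: {per_tank_l * dose:.1f} g per tank, "
          f"{total_l * dose / 1000:.2f} kg facility-wide")
```

Pre-weighing such per-tank amounts into the dissolution bottles mentioned above is one way to put these numbers to use in practice.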
A moderate temperature drop (not below 20°C) is acceptable and may even stimulate mating. However, it is essential that the temperature swiftly returns to the normal range. Avoid conducting water changes before holidays or weekends, as issues are less likely to be detected during those times. Cleaning is done bi-weekly, involving vacuuming sand debris and washing sponge filters in a bucket with aquarium water. A tap/faucet-based system is recommended for vacuuming, using tap water flow pressure to create suction that directly draws debris into the drainage (**Figure 9F, G**). Cleaning can be done at the same time as the water release. The combination of cleaning and water change takes approximately 1/2 to 2 hours, and having two people around can expedite the process, reduce errors, and make it more manageable. The time remains the same for water changes without cleaning, but there is a more extended break during water release and refill.

\begin{table}
\begin{tabular}{l l l}
\hline
**Item** & **Cost** & **Total cost*** \\
\hline
_Food_ & & \\
Fine food (<0.5mm; Nurtiamare) & 25€ per kg & 25€ \\
Medium food (0.5-1mm; Nurtiamare) & 25€ per kg & 25€ \\
Large food (1-2mm; Nurtiamare) & 25€ per kg & 25€ \\
Decapsulated artemia eggs (ArtemiaVita, 250ml = 40 million eggs) & 30€ for 250ml & 30€ \\
\hline
_Salt_ & & \\
Epsom salt (magnesium sulphate, MgSO4 · 7H2O) & 24€ per 5kg & 24€ \\
Natron salt (sodium hydrogen carbonate, NaHCO3) & 14€ per 5kg & 28€ \\
\hline
_Water parameter measurement_ & & \\
Strips to check water parameters (e.g., Tetra Test 6-in-1) & & \\
\hline
\end{tabular}
\end{table}
Table 2: Running costs for a fictional facility (excluding rent, water, and electricity). *Costs are roughly given for one year for a facility with 10 aquaria. For some items (fine and medium food, artemia), less than the given amount is needed, so the total cost was reduced slightly.

**Monthly and yearly maintenance.** Regular, more thorough checks are recommended. Monthly checks should encompass essential water parameters, including nitrate and nitrite levels. Additionally, parameters such as pH, hardness, chlorine, chloramine, copper, and heavy metals should also be checked, depending on the quality of the water used for refilling the aquaria. Some algae growth is tolerable, but if it becomes problematic because of the type or amount of algae, adjustments in light levels, using an external filter with UV, performing more extensive water changes, adding more algae-eating catfish, or even resorting to chemical treatments can help control it. Keeping track of fish and knowing when to breed them to maintain stocks can become challenging. Conducting an inventory once or twice a year can help reduce chaos without being overly time-consuming. It also aids in fair space distribution and accommodating new projects. **Establishing safety measures and contingency plans.** Even when everything seems well-controlled, unexpected events can occur. Aquaria might break, equipment can malfunction, and power outages, heatwaves, or disease outbreaks can happen.
It is prudent to have a written contingency plan outlining the steps to take when something goes wrong. This is especially critical in severe situations where quick action is necessary. Furthermore, it is worth maintaining backups of filters, heaters, external filters, and air pumps, the latter providing an alternative means of oxygenation. ## 5 Costs ### Costs of building a facility. Costs of building a facility (**Table 1**) will be influenced by many things, including the available space, needs, expertise, and time, as well as where you are located. Still, as a reference, we wanted to provide a summary of the approximate costs per item for a facility with 10 aquaria to allow easy calculation. ### Costs of running a facility. For the cost of running a facility (**Table 2**) there will be a wide range of items needed, depending on the specific requirements. Here too, we provide costs for a fictional facility with 10 aquaria to allow easy calculation. ## 6 Conclusion In summary, we have detailed the construction of a purpose-built facility designed specifically for tropical fish. We trust that our thought processes and decision-making can offer valuable insights to researchers who are contemplating the establishment of their own facilities. While variations might exist from one location to another, having a tangible example of how it can be achieved can provide guidance and complementary advice, enriching one's personal experience. Undertaking this endeavor ourselves, despite occasional moments of stress, proved to be an immensely rewarding and enjoyable process. At present, our facility accommodates 15 distinct species, housing over 1000 fish, with nearly all species engaging in regular breeding activities. The simplicity of maintenance translates to significant time savings, while we still had the luxury of customizing our setup according to our preferences. ## Acknowledgments CFK would like to thank Ralf Schneider for his invaluable expertise in aquaria design and construction, as well as Jan Gerwin and Ralf Schneider for their comprehensive teachings on fish keeping and maintenance. CFK is grateful for the exceptional advice and thoughts about fishes and fish facilities provided by (sorted in alphabetical order) Karoliina Alm, Ehsan Pashay Ahi, Ingo Braasch, Ulrika Candolin, Nicolas Kolm, Axel Meyer, Nikolai Piavchenko, Craig Primmer, Emilia Santos, Walter Salzburger, and Jukka-Pekka Verta. We also extend our thanks to the Association of Finnish Cichlid Hobbyists "Ciklidistit ry" ([https://www.ciklidistitr.fi](https://www.ciklidistitr.fi)), particularly Jari Nyman, for their engaging discussions and a recent aquarium donation. Importantly, we also want to express our gratitude to the Institute of Biotechnology, HiLIFE for their financial support of our facility and the Faculty for Biological and Environmental Sciences of the University of Helsinki for the space, both of which have been crucial to the realization of the facility presented here and to our research. CFK acknowledges the use of AI language models, specifically ChatGPT, for proofreading and improving the writing of this manuscript as well as the use of Adobe Illustrator software for creating and refining the figures used in this publication and Adobe InDesign to layout the manuscript. ## Conflicts of interest The authors declare that there are no conflicts of interest. ## Author contributions CFK designed the facility and wrote the manuscript.
MK, JH, and NK were instrumental at different stages of the process in making the facility operational, from helping to build it (MK) to establishing, improving, and maintaining workflows (MK, JH, NK). Everybody read and commented on the manuscript. ## Data availability statement All data (i.e., engineering data, cost estimates, and biological data about water parameters) are directly presented in the manuscript and its accompanying tables.
2310.15181
Poisson structure and Integrability of a Hamiltonian flow for the inhomogeneous six-vertex model
We compute the action-angle variables for a Hamiltonian flow of the inhomogeneous six-vertex model, from a formulation introduced in a 2022 work due to Keating, Reshetikhin, and Sridhar, hence confirming a conjecture of the authors as to whether the Hamiltonian flow is integrable. To demonstrate that such an integrability property of the Hamiltonian holds from the action-angle variables, we make use of previous methods for studying Hamiltonian systems, implemented by Faddeev and Takhtajan, in which it was shown that integrability of a Hamiltonian system holds for the nonlinear Schrodinger's equation by computing action-angle variables from the Poisson bracket, which is connected to the analysis of entries of the monodromy and transfer matrices. For the inhomogeneous six-vertex model, an approach for computing the action-angle variables is possible through formulating several relations between entries of the quantum monodromy, and transfer, matrices, which can not only be further examined from the structure of $L$ operators, but also from computing several Poisson brackets parameterized from entries of the monodromy matrix.
Pete Rigas
2023-10-20T04:00:57Z
http://arxiv.org/abs/2310.15181v2
# Poisson structure and Integrability of a Hamiltonian flow for the inhomogeneous six-vertex model ###### Abstract We compute the action-angle variables for a Hamiltonian flow of the inhomogeneous six-vertex model, from a formulation introduced in a 2022 work due to Keating, Reshetikhin, and Sridhar, hence confirming a conjecture of the authors as to whether the Hamiltonian flow is integrable. To demonstrate that such an integrability property of the Hamiltonian holds from the action-angle variables, we make use of previous methods for studying Hamiltonian systems, implemented by Faddeev and Takhtajan, in which it was shown that integrability of a Hamiltonian system holds for the nonlinear Schrodinger's equation by computing action-angle variables from the Poisson bracket, which is connected to the analysis of entries of the monodromy and transfer matrices. For the inhomogeneous six-vertex model, an approach for computing the action-angle variables is possible through formulating several relations between entries of the quantum monodromy, and transfer, matrices, which can not only be further examined from the structure of \(L\) operators, but also from computing several Poisson brackets parameterized from entries of the monodromy matrix. 1 Footnote 1: _Keywords_: Six-vertex model, integrability, inhomogeneity, Hamiltonian flow, action-angle variables, Poisson bracket ## 1 Introduction ### Overview The six-vertex model, originally introduced by chemists in [9], has emerged as the subject of several studies in probability theory, with recent efforts devoted towards rigorous descriptions of the localization, and delocalization, phases of the height function [3], the free energy [4], the transition between disordered and antiferroelectric phases [6], and other related aspects pertaining to Russo-Seymour-Welsh results and crossing probabilities of the height function over strips of the square lattice, and the interior of the cylinder, [10], in addition to classical work at the beginning of the seventies for computing the residual free entropy of the model on the torus [8]. To answer a conjecture raised in [7] regarding the integrability of the Hamiltonian flow for the six-vertex model, we make use of approaches described in [5] for determining the action-angle variables of the Hamiltonian that is introduced in the next section. With such an approach, we resolve the question raised by the authors of [7] as to whether the Hamiltonian flow for the six-vertex model that they introduce is integrable.
### Six-vertex model objects Over the torus \(\mathbf{T}_{N}\equiv\big{(}V\big{(}\mathbf{T}_{N}\big{)},E\big{(}\mathbf{T}_{N}\big{)}\big{)}\), the six-vertex model can be defined through the probability measure, \[\mathbf{P}_{\mathbf{T}_{N}}\big{[}\omega\big{]}\equiv\mathbf{P}\big{[}\omega\big{]}\equiv\frac{w\big{(}\omega\big{)}}{Z_{\mathbf{T}_{N}}}\ \,\] where \(\omega\) is a _six-vertex configuration_ determined by the six possible vertex types (see Figure 1 and Figure 2), with the weight in the numerator of the probability measure taking the form, \[w_{\text{6V}}\big{(}\omega\big{)}\equiv w\big{(}\omega\big{)}\equiv a_{1}^{n_{1}}a_{2}^{n_{2}}b_{1}^{n_{3}}b_{2}^{n_{4}}c_{1}^{n_{5}}c_{2}^{n_{6}}\ \,\] for \(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}\geq 0\), with the partition function, \[Z_{\mathbf{T}_{N}}\big{(}\omega,\Omega\big{)}\equiv Z_{\mathbf{T}_{N}}=\sum_{\omega\in\Omega(\mathbf{T}_{N})}w\big{(}\omega\big{)}\ \.\] Besides \(\mathbf{P}_{\mathbf{T}_{N}}\big{[}\cdot\big{]}\), the disorder parameter of the six-vertex model is of the form, \[\Delta\equiv\frac{a_{1}a_{2}+b_{1}b_{2}-c_{1}c_{2}}{2\sqrt{a_{1}a_{2}b_{1}b_{2}}}\ \.\] For non-symmetric weights, the weights of the six-vertex model admit the parametrization, \[a_{1}\equiv a\ \text{exp}\big{(}H+V\big{)}\ \,\] \[a_{2}\equiv a\ \text{exp}\big{(}-H-V\big{)}\ \,\] \[b_{1}\equiv b\ \text{exp}\big{(}H-V\big{)}\ \,\] \[b_{2}\equiv b\ \text{exp}\big{(}-H+V\big{)}\ \,\] \[c_{1}\equiv c\lambda\ \,\] \[c_{2}\equiv c\lambda^{-1}\ \,\] built from symmetric weights \(a_{1}\equiv a_{2}\equiv a\), \(b_{1}\equiv b_{2}\equiv b\), \(c_{1}\equiv c_{2}\equiv c\) (recovered when \(H\equiv V\equiv 0\) and \(\lambda\equiv 1\)), for \(\lambda\geq 1\) and external fields \(H,V\). From such a parametrization of the weights as given above, one can form the so-called \(R\)-matrix, for the standard basis of \(\mathbf{C}^{2}\), with, \[R\equiv R\big{(}u,H,V\big{)}\equiv\begin{bmatrix}a\ \text{exp}\big{(}H+V\big{)}&0&0&0\\ 0&b\ \text{exp}\big{(}H-V\big{)}&c&0\\ 0&c&b\ \text{exp}\big{(}-H+V\big{)}&0\\ 0&0&0&a\ \text{exp}\big{(}-H-V\big{)}\end{bmatrix}\] in the tensor product basis \(e_{1}\otimes e_{1}\), \(e_{1}\otimes e_{2}\), \(e_{2}\otimes e_{1}\), \(e_{2}\otimes e_{2}\), for \(e_{1}\equiv\big{[}1\ 0\big{]}^{\text{T}}\) and \(e_{2}\equiv\big{[}0\ 1\big{]}^{\text{T}}\). For vanishing external fields \(H\equiv V\equiv 0\), if we denote \(R\big{(}u\big{)}\equiv R\big{(}u,0,0\big{)}\), the matrix above admits the identity, \[R\big{(}u,H,V\big{)}=\big{(}D^{H}\otimes D^{V}\big{)}R\big{(}u\big{)}\big{(}D^{H}\otimes D^{V}\big{)}\ \,\qquad D^{H}\equiv\text{diag}\big{[}\text{exp}\big{(}\tfrac{H}{2}\big{)},\text{exp}\big{(}-\tfrac{H}{2}\big{)}\big{]}\ \,\ D^{V}\equiv\text{diag}\big{[}\text{exp}\big{(}\tfrac{V}{2}\big{)},\text{exp}\big{(}-\tfrac{V}{2}\big{)}\big{]}\ \,\] viewed as an operator over \(\mathbf{C}^{2}\otimes\mathbf{C}^{2}\). Figure 1: One depiction of each possible vertex for the six-vertex model, adapted from [1]. Figure 2: Another depiction of each possible vertex for the six-vertex model, adapted from [5].
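As a quick numerical sanity check of the conjugation identity above (a sketch for illustration, not taken from [7]; the weight and field values are arbitrary), one can build \(R\big{(}u,H,V\big{)}\) entrywise from the parametrized weights and compare it with \(\big{(}D^{H}\otimes D^{V}\big{)}R\big{(}u\big{)}\big{(}D^{H}\otimes D^{V}\big{)}\):

```python
import numpy as np

# Six-vertex R-matrix with external fields H, V, in the basis
# e1@e1, e1@e2, e2@e1, e2@e2, following the parametrization above.
def R(a, b, c, H=0.0, V=0.0):
    return np.array([
        [a*np.exp(H+V), 0,              0,              0],
        [0,             b*np.exp(H-V),  c,              0],
        [0,             c,              b*np.exp(-H+V), 0],
        [0,             0,              0,              a*np.exp(-H-V)],
    ])

a, b, c = 0.7, 1.3, 0.9      # arbitrary test weights
H, V = 0.4, -0.2             # arbitrary external fields

DH = np.diag([np.exp(H/2), np.exp(-H/2)])
DV = np.diag([np.exp(V/2), np.exp(-V/2)])
D = np.kron(DH, DV)

# R(u,H,V) = (D^H (x) D^V) R(u) (D^H (x) D^V)
assert np.allclose(R(a, b, c, H, V), D @ R(a, b, c) @ D)
print("field conjugation identity verified")
```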
In widely celebrated work, [1], Baxter demonstrated that the \(R\) matrix satisfies the Yang-Baxter equation, \[R_{12}\big{(}u\big{)}R_{13}\big{(}u+v\big{)}R_{23}\big{(}v\big{)}=R_{23}\big{(}v\big{)}R_{13}\big{(}u+v\big{)}R_{12}\big{(}u\big{)}\ \,\] over \(\mathbf{C}^{2}\otimes\mathbf{C}^{2}\otimes\mathbf{C}^{2}\). For \(\Delta<-1\), Baxter's parametrization, for the weights, given \(0<u<\eta\), \[a\equiv\sinh\big{(}\eta-u\big{)}\ \,\ b\equiv\sinh\big{(}u\big{)}\ \,\ c\equiv\sinh\big{(}\eta\big{)}\ \,\] allows one to classify the eigenvalues of the transfer matrix, which is related to the Bethe equations. Besides this fact, equipped with the spectral parameter \(u\) along the horizontal lines of the square lattice and with two external fields \(H\) and \(V\), the partition function over the \(N\times M\) torus, \(\mathbf{T}_{NM}\), can be expressed with the summation, \[Z_{\mathbf{T}_{MN}}\big{(}u,H\big{)}\equiv Z_{\mathbf{T}_{MN}}=\sum_{n=0}^{N}\!\!\exp\!\big{(}M\big{(}N-2n\big{)}V\big{)}Z_{\mathbf{T}_{MN}}^{n}\big{(}u,H\big{)}\ \,\] for the semigrand canonical partition function, \[Z_{\mathbf{T}_{MN}}^{n}\big{(}u,H\big{)}\equiv\sum_{\begin{subarray}{c}i\geq 1\\ \{\alpha_{i}\}\end{subarray}}\!\!\big{(}\Lambda_{\{\alpha_{i}\}}\big{(}u,H\big{)}\big{)}^{M}\ \,\] for countably many solutions \(\big{\{}\alpha_{i}\big{\}}\) to the Bethe equation, \[\prod_{k=1}^{N}\!\frac{\sinh\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}}{\sinh\big{(}\frac{\eta}{2}-i\alpha_{j}+v_{k}\big{)}}=\exp\!\big{(}2HN\big{)}\prod_{m=1,m\neq j}^{n}\!\!\frac{\sinh\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}}{\sinh\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}}\ \,\] and for the eigenvalue \(\Lambda_{\{\alpha_{i}\}}\), \[\Lambda\equiv\Lambda_{\{\alpha_{i}\}}\equiv\exp\!\big{(}NH\big{)}\!\prod_{k=1}^{N}\!\!\sinh\!\big{(}\eta-u+v_{k}\big{)}\!\prod_{j=1}^{n}\!\!\frac{\sinh\!\big{(}\frac{\eta}{2}+u-i\alpha_{j}\big{)}}{\sinh\!\big{(}\frac{\eta}{2}-u+i\alpha_{j}\big{)}}+\exp\!\big{(}-NH\big{)}\!\prod_{k=1}^{N}\!\!\sinh\!\big{(}u-v_{k}\big{)}\!\prod_{j=1}^{n}\!\frac{\sinh\!\big{(}\frac{3\eta}{2}-u+i\alpha_{j}\big{)}}{\sinh\!\big{(}u-\frac{\eta}{2}-i\alpha_{j}\big{)}}\ \,\] with corresponding eigenvector, \[\prod_{i=1}^{n}\!\!B\big{(}\alpha_{i}\big{)}\,|\Downarrow\rangle\equiv\prod_{i=1}^{n}\!\!B\big{(}\alpha_{i}\big{)}\!\bigg{(}\bigotimes_{k=1}^{N}|\!\downarrow\rangle\bigg{)}\equiv\prod_{i=1}^{n}\!\!B\big{(}\alpha_{i}\big{)}\!\left(\binom{0}{1}\otimes\cdots\otimes\binom{0}{1}\right)\ \,\] of the transfer matrix, \[t\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}:\big{(}{\bf C}^{2}\big{)}^{\otimes N}\longrightarrow\big{(}{\bf C}^{2}\big{)}^{\otimes N}\ \,\] which, explicitly, is proportional to the trace of the quantum monodromy matrix, as, \[t\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\equiv\prod_{i=1}^{N}\!D_{i}^{2V}\!\mbox{Tr}_{a}\big{[}T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,0\big{)}\big{]}\ \,\] for the quantum monodromy matrix, \[T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,0\big{)}:{\bf C}^{2}\otimes\big{(}{\bf C}^{2}\big{)}^{\otimes N}\longrightarrow{\bf C}^{2}\otimes\big{(}{\bf C}^{2}\big{)}^{\otimes N}\mapsto\prod_{i=1}^{N}\!\mbox{diag}\big{(}\exp\!\big{(}2H\big{)},\exp\!\big{(}2H\big{)}\big{)}R_{ia}\big{(}u-v_{i}\big{)}\ \,\] where each \(v_{i}\) is chosen so that each \(u-v_{i}\) is a spectral parameter given at site \(i\).
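Both Baxter's parametrization and the Yang-Baxter equation above are easy to probe numerically. The sketch below is an illustration under the stated parametrization (not code from the references): it checks that \(\Delta\) is constant along the parametrization, namely \(\Delta=-\cosh\eta\), and verifies \(R_{12}\big{(}u\big{)}R_{13}\big{(}u+v\big{)}R_{23}\big{(}v\big{)}=R_{23}\big{(}v\big{)}R_{13}\big{(}u+v\big{)}R_{12}\big{(}u\big{)}\) on \(\mathbf{C}^{2}\otimes\mathbf{C}^{2}\otimes\mathbf{C}^{2}\):

```python
import numpy as np

eta = 0.8  # anisotropy; here Delta = -cosh(eta) < -1

def weights(u):
    return np.sinh(eta - u), np.sinh(u), np.sinh(eta)

def Rmat(u):
    a, b, c = weights(u)
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

# Delta is constant along the parametrization: (a^2+b^2-c^2)/(2ab) = -cosh(eta)
u, v = 0.3, -0.45
a, b, c = weights(u)
assert np.isclose((a*a + b*b - c*c) / (2*a*b), -np.cosh(eta))

def two_site(M, which):
    """Embed a 4x4 two-site operator on the indicated pair of the 3 spaces."""
    I = np.eye(2)
    if which == "12":
        return np.kron(M, I)
    if which == "23":
        return np.kron(I, M)
    # "13": insert an identity on the middle tensor factor
    T = M.reshape(2, 2, 2, 2)                 # (o1, o3, i1, i3)
    out = np.einsum('xyuv,zw->xzyuwv', T, I)  # (o1, o2, o3, i1, i2, i3)
    return out.reshape(8, 8)

R12 = two_site(Rmat(u), "12")
R13 = two_site(Rmat(u + v), "13")
R23 = two_site(Rmat(v), "23")
assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
print("Yang-Baxter equation verified; Delta =", -np.cosh(eta))
```

The same check passes for generic \(u,v\), which is the algebraic mechanism behind the commuting transfer matrices used below.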
In the next section, to apply the formalism first introduced for Hamiltonian systems in the nonlinear Schrodinger's equation to the quantum monodromy and transfer matrices of the six-vertex model, observe that the product of diagonal matrices with each \(R_{ia}\) is equivalent to, \[\mbox{diag}\big{(}\mbox{exp}\big{(}2H\big{)},\mbox{exp}\big{(}2H\big{)}\big{)}R_{1a}\big{(}u-v_{1}\big{)}\cdots R_{(N-1)a}\big{(}u-v_{N-1}\big{)}\mbox{diag}\big{(}\mbox{exp}\big{(}2H\big{)},\mbox{exp}\big{(}2H\big{)}\big{)}R_{Na}\big{(}u-v_{N}\big{)}\ \.\] To formulate the Hamiltonian flow for the six-vertex model, from the statement of the Bethe equations, and their eigenvalues, introduce the functions \(\psi_{u}^{\pm}\big{(}\alpha+iu\big{)}\equiv\psi_{\pm}\big{(}\alpha+iu\big{)}\), which are given by, [7], \[\psi_{+}\big{(}\alpha+iu\big{)}=\log\Big{[}\Big{|}\frac{\mbox{exp}\big{(}2\eta+2i\alpha\big{)}-\mbox{exp}\big{(}2u-\eta\big{)}}{\mbox{exp}\big{(}2u\big{)}-\mbox{exp}\big{(}\eta+2i\alpha\big{)}}\Big{|}\Big{]}=\log\Big{[}\Big{|}\frac{\sinh\big{(}\frac{3\eta}{2}-u+i\alpha\big{)}}{\sinh\big{(}u-\frac{\eta}{2}-i\alpha\big{)}}\Big{|}\Big{]}\equiv\log\Big{[}\frac{\sinh\big{(}\frac{3\eta}{2}-u+i\alpha\big{)}}{\sinh\big{(}u-\frac{\eta}{2}-i\alpha\big{)}}\Big{]}\ \,\] where the second equality follows from the factorizations \(\mbox{exp}\big{(}2\eta+2i\alpha\big{)}-\mbox{exp}\big{(}2u-\eta\big{)}=2\,\mbox{exp}\big{(}u+\frac{\eta}{2}+i\alpha\big{)}\sinh\big{(}\frac{3\eta}{2}-u+i\alpha\big{)}\) and \(\mbox{exp}\big{(}2u\big{)}-\mbox{exp}\big{(}\eta+2i\alpha\big{)}=2\,\mbox{exp}\big{(}u+\frac{\eta}{2}+i\alpha\big{)}\sinh\big{(}u-\frac{\eta}{2}-i\alpha\big{)}\), whose prefactors cancel, as well as the relation, \[\Theta\big{(}\alpha-\beta\big{)}\equiv\frac{1+\mbox{exp}\big{(}ip\big{(}\alpha\big{)}+ip\big{(}\beta\big{)}\big{)}-2\mbox{exp}\big{(}ip\big{(}\alpha\big{)}\big{)}}{1+\mbox{exp}\big{(}ip\big{(}\alpha\big{)}+ip\big{(}\beta\big{)}\big{)}-2\mbox{exp}\big{(}ip\big{(}\beta\big{)}\big{)}}=-\frac{\sinh\big{(}i\alpha-i\beta+\eta\big{)}}{\sinh\big{(}i\alpha-i\beta-\eta\big{)}}\ \,\] for, \[p\big{(}\alpha\big{)}=\log\big{[}\big{|}\frac{\sinh\big{(}\frac{\eta}{2}+i\alpha\big{)}}{\sinh\big{(}\frac{\eta}{2}-i\alpha\big{)}}\big{|}\big{]}\equiv\log\big{[}\frac{\sinh\big{(}\frac{\eta}{2}+i\alpha\big{)}}{\sinh\big{(}\frac{\eta}{2}-i\alpha\big{)}}\big{]}\ \.\] From these quantities, the Bethe equations can be rewritten in terms of a parameterization of \(\psi_{-}\big{(}\alpha+v_{k}\big{)}\) and \(\mbox{exp}\big{(}\Theta\big{(}\alpha_{j}-\alpha_{m}\big{)}\big{)}\), while the eigenvalue of the transfer matrix can be rewritten, given some \(\big{\{}\alpha_{i}\big{\}}\), in terms of a parameterization of \(\psi_{\pm}\big{(}\alpha+iu\big{)}\), \[\Lambda\big{(}\psi_{\pm}\big{(}\alpha+iu\big{)}\big{)}\equiv\Lambda_{\{\alpha_{i}\}}\equiv\mbox{exp}\big{(}NH\big{)}\!\prod_{k=1}^{N}\!\!\sinh\!\big{(}\eta-u+v_{k}\big{)}\!\prod_{j=1}^{n}\!\!\exp\!\big{(}\psi_{+}\big{(}\alpha_{j}+iu\big{)}\big{)}+\mbox{exp}\big{(}-NH\big{)}\!\prod_{k=1}^{N}\!\!\sinh\!\big{(}u-v_{k}\big{)}\!\prod_{j=1}^{n}\!\!\exp\!\big{(}\psi_{-}\big{(}\alpha_{j}+iu\big{)}\big{)}\ \.\] To introduce the Hamiltonian formulation of the six-vertex model, one is given a height function representation: from the set of all possible asymptotic height functions \(\mathcal{H}_{L,q}\), one samples a Lipschitz \(h\sim\mathcal{H}_{L,q}\), with, for \(x,x^{\prime}\in\big{[}0,L\big{]}\), \[|h\big{(}x\big{)}-h\big{(}x^{\prime}\big{)}|<|x-x^{\prime}|\ \,\] which is periodic, with, \[h\big{(}L,y\big{)}=h\big{(}0,y\big{)}+q\ \,\] for \(h:\big{[}0,L\big{]}\longrightarrow\mathbf{R}\), and fixed \(0<q<1\).
From such a sampling of the height function, as well as another periodic function \(\pi\big{(}x\big{)}\) in \(x\), the pair \(\big{(}\pi\big{(}x\big{)},h\big{(}x\big{)}\big{)}\) can be identified with the cotangent space \(T^{*}\mathcal{H}_{L,q}\), while the flow of the Hamiltonian can be identified with the pair \(\big{(}\pi\big{(}x,y\big{)},h\big{(}x,y\big{)}\big{)}\), in which, \[H_{u}\big{(}\pi\big{(}x,y\big{)},h\big{(}x,y\big{)}\big{)}\equiv H_{u}\big{(}\pi,h\big{)}=\int_{\,[0,L]}\mathcal{H}_{u-v(x)}\big{[}\partial_{x}h\big{(}x\big{)},\pi\big{(}x\big{)}\big{]}\mathrm{d}x\ \,\] over \(T^{*}\mathcal{H}_{L,q}\), for a solution \(h\big{(}x,y\big{)}\) to the Euler-Lagrange equations. Under the integral over \(x\) provided above, \(\mathcal{H}_{u}\) is the semigrand canonical free energy, \[\mathcal{H}_{u}\big{(}q,H\big{)}\equiv\mathcal{H}_{u}=\log\big{[}Z^{n}_{\mathbf{T}_{MN}}\big{(}u,H\big{)}\big{]}\ \,\] which can alternatively be expressed as the maximum over \(\pm\), with, \[\mathcal{H}_{u}\big{(}q,H\big{)}\equiv\max_{\pm}\,\mathcal{H}^{\pm}_{u}\big{(}q,H\big{)}\equiv\max_{\pm}\big{\{}\pm H+l_{\pm}+\int_{C}\psi^{\pm}_{u}\big{(}\alpha\big{)}\rho\big{(}\alpha\big{)}\ \mathrm{d}\alpha\big{\}}\ \,\] (H) with \(l_{-}\equiv\log\,\sinh\big{(}\eta-u\big{)}\), \(l_{+}\equiv\log\,\sinh\,u\), and the density, \[\rho\big{(}\alpha\big{)}\equiv\#\big{\{}\text{Bethe roots along the contour }C\big{\}}\equiv\Big{|}\big{\{}\alpha:\alpha\cap C\neq\emptyset\big{\}}\Big{|}\ \.\] To connect the cotangent space \(T^{*}_{\phi_{1}}\mathcal{H}_{q,l}\) with \(T^{*}_{\phi_{2}}\mathcal{H}_{q,l}\) at time \(T\) given the initial flow line \(\big{(}\pi_{0},h_{0}\big{)}\), one determines the unique critical point of the functional, \[S\big{(}\pi,h\big{)}\equiv S=\int_{\,[0,L]}\,\int_{\,[0,T]}\big{[}\pi\big{(}x,y\big{)}\partial_{y}h\big{(}x,y\big{)}-\mathcal{H}_{u-v(x)}\big{(}\partial_{x}h\big{(}x,y\big{)},\pi\big{(}x,y\big{)}\big{)}\big{]}\mathrm{d}y\ \mathrm{d}x\ \,\] which is \(\big{(}\pi_{0},h_{0}\big{)}\). Altogether, \[Z^{n}_{\mathbf{T}_{MN}}=\exp\bigl{(}NM\mathcal{H}_{u}\big{(}q,H\big{)}\big{)}\big{(}1+\mathrm{o}\big{(}1\big{)}\big{)}\ \,\] asymptotically, for \(M\gg N\), \(n>0\), and \(q=\frac{n}{N}\) as \(N\longrightarrow+\infty\). ### Paper organization In the remaining subsection before _2_, we provide an overview of the Hamiltonian formulation discussed in [5], from which computations are carried out using the Poisson bracket, either on functionals or on matrix functionals, to determine the action-angle variables. Such variables coincide with ones which have nonzero Poisson bracket. The quantities that are included in the Poisson bracket are taken from entries of the monodromy matrix as \(x\longrightarrow\pm\infty\), \(y\longrightarrow\pm\infty\), or as \(x\longrightarrow\pm\infty\), \(y\longrightarrow\pm\infty\) simultaneously. To make use of a similar approach for the Hamiltonian formulation described in the previous section from [7] in the six-vertex model, one must identify the dependency of the Hamiltonian formulation on the parametrization introduced for the Bethe equations, as well as for the eigenvalues of the Bethe equation. In _2_ we analyze properties of the quantum monodromy, and transfer, matrices, from L operators.
The L operator allows us to obtain expressions for entries of the monodromy matrix that is introduced for the Hamiltonian flow of the six-vertex model, hence allowing us to deduce the action-angle variables of the Hamiltonian flow. In the next section, _2_, we begin discussing inhomogeneities of the six-vertex model, and their connection with computing action-angle variables for showing that the Hamiltonian flow is integrable. In _2.2_, we introduce the L operator, and make use of a specific case of the L operator for computing each entry of the monodromy matrix raised to an arbitrary power. These relations are of utmost importance in performing computations with the Poisson bracket; in [5] it was shown, for the nonlinear Schrodinger's equation, that there are sixteen relations involving the Poisson bracket. Each relation is obtained from the fact that the Poisson bracket of the tensor product of two reduced monodromy matrices can be expressed in terms of a Poisson bracket between different entries of the reduced monodromy matrices. From the monodromy matrix defined in [7] for studying inhomogeneities of the six-vertex model, sixteen relations from the Poisson bracket of the tensor product of reduced monodromy matrices can also be formulated in the context of the Hamiltonian flow and the quantum monodromy matrix. As a result, one can determine the action-angle variables for the inhomogeneous six-vertex model by determining which Poisson brackets vanish, from the set of sixteen relations. **Theorem 1** (_precursor to the main result, integrability of the Hamiltonian flow for the inhomogeneous six-vertex model from nine Poisson brackets parametrized in entries of the monodromy matrix_). For the nine terms, * First term, \(\mathcal{P}_{1}\): \[\boxed{\mathcal{P}_{1}}\equiv\left\{\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}\approx\big{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\big{]}^{2}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}-\frac{A_{3}\big{(}u^{\prime}\big{)}B_{3}\big{(}u\big{)}}{u^{\prime}-u}\bigg{]}\enspace,\] * Second term, \(\mathcal{P}_{2}\): \[\boxed{\mathcal{P}_{2}}\equiv\left\{\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\right\}\approx-\big{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\big{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}-\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u^{\prime}-u}\bigg{]}\enspace,\] * Third term, \(\mathcal{P}_{3}\): \[\boxed{\mathcal{P}_{3}}\equiv\left\{\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\right\}\approx\sum_{n^{\prime}:m+n^{\prime}=n-3}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{1\leq i\leq m}\frac{\partial}{\partial u}\bigg{(}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{)}\bigg{]}\times\cdots\]
* Fifth term continued, \(\mathcal{P}_{5}\): \[\bigg{[}\prod_{1\leq j\neq i\leq n-3}(\mathscr{C}_{2})_{i}\bigg{]}-\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\big{)}\bigg{[}\prod_{1\leq i\leq n-3}(\mathscr{C}_{2})_{i}\bigg{]}\bigg{[}\sum_{1\leq j\leq n-3}\bigg{[}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\frac{\partial(\mathscr{C}_{2})_{i}}{\partial u}-\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u}\frac{\partial(\mathscr{C}_{2})_{i}}{\partial u}\bigg{]}\bigg{]}\times\cdots\] \[\bigg{[}\prod_{1\leq j\neq i\leq n-3}(\mathscr{C}_{2})_{j}\bigg{]}\ \,\] * Sixth term, \(\mathcal{P}_{6}\): \[\boxed{\mathcal{P}_{6}}\equiv\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}\approx\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{)}\ \big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}(\mathscr{C}_{1})_{j}\bigg{)}\bigg{]}\ \bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\] \[\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{)}\ \big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}(\mathscr{C}_{1})_{j}\bigg{)}\bigg{]}\ \bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq j\leq n^{\prime}}(\mathscr{C}_{2})_{i}\bigg{)}\big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{]}\bigg{]}\times\cdots\] \[\bigg{[}\bigg{(}\prod_{1\leq j\neq i\leq m}(\mathscr{C}_{2})_{j}\bigg{)}\bigg{(}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}+\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{)}+\bigg{(}\frac{\partial(\mathscr{C}_{2})_{i}}{\partial u}\bigg{)}\bigg{(}\prod_{1\leq j\neq i\leq m}(\mathscr{C}_{2})_{j}\bigg{)}\bigg{(}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}+\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\bigg{)}\bigg{]}\ \,\] * Seventh term, \(\mathcal{P}_{7}\): \[\boxed{\mathcal{P}_{7}}\equiv\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\big{(}\sin(2\eta)\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}\approx\bigg{[}\big{(}\sin(2\eta)\big{)}^{n-3}\mathscr{C}_{1}\bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\]
\[\bigg{[}\big{(}\sin(2\eta)\big{)}^{n-3}\mathscr{C}_{1}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\] \[\bigg{[}\sum_{1\leq j\leq n^{\prime}}\big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{]}\bigg{[}B_{3}\big{(}u\big{)}\bigg{[}\ \sum_{1\leq i\leq m}\bigg{[}\ \frac{\partial}{\partial u}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}-\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u}\bigg{]}-\cdots\] \[\bigg{[}\frac{\partial}{\partial u^{\prime}}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{]}\bigg{[}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u}-\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\bigg{]}\ \bigg{]}\ \bigg{]}\ \,\] * Eighth term, \(\mathcal{P}_{8}\): \[\boxed{\mathcal{P}_{8}}\equiv\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}\approx-\bigg{[}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}(\mathscr{C}_{2})_{i}\bigg{)}\big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{]}\bigg{)}\bigg{(}\prod_{1\leq i\leq n-3}(\mathscr{C}_{2})_{i}\bigg{)}\bigg{]}\times\cdots\] \[\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[\Big{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{[}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{\varepsilon}\bigr{)}\ \bigg{]}\ \bigg{]}\ \Big{]}\ \,\] taking the summation of Poisson brackets, \[\sum_{1\leq i\leq 9}\mathcal{P}_{i}\ \,\] equals, \[\left\{\left(A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\right)\left[\left(\sin\big{(}2\eta\big{)}\right)^{n-3}\mathscr{A}_{1}+\mathscr{A}_{2}+\mathscr{A}_{3}\right],\left(A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\right)\left[\left(\sin\big{(}2\eta\big{)}\right)^{n-3}\mathscr{A}_{1}^{\prime}+\mathscr{A}_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\right]\right\}\ \.\] To demonstrate that the desired conclusion above holds, and that it can be extended, through very similar manipulations of the Poisson bracket, to the fifteen other relations that are also parametrized in entries of the monodromy matrix, we perform several computations with L operators in _2_. From performing computations with the L operators, we obtain, explicitly, expressions for \(\mathscr{A}_{1}\), \(\mathscr{A}_{2}\), \(\mathscr{A}_{3}\), \(\mathscr{A}_{1}^{\prime}\), \(\mathscr{A}_{2}^{\prime}\), and \(\mathscr{A}_{3}^{\prime}\). Below, we introduce the action-angle variables and the result that the Poisson brackets of the action-angle variables, in canonical coordinates, vanish. **Definition** (_action-angle variables for the inhomogeneous six-vertex model_). Define, \[\varphi\big{(}u\big{)}=-\mathrm{arg}\big{(}B\big{(}u\big{)}\big{)}\ \,\] \[\rho\big{(}u\big{)}=\frac{1}{\pi}\mathrm{log}\big{[}1+\epsilon\big{|}B\big{(}u\big{)}\big{|}^{2}\big{]}\ \,\] corresponding to action-angle variables for the inhomogeneous six-vertex model. To determine the second action-angle variable \(\rho\big{(}u\big{)}\) from \(\varphi\big{(}u\big{)}\), we introduce the result below for determining the canonically conjugate variable. **Lemma** (_determining, from a computation with the Poisson bracket, canonically conjugate variables to \(\varphi\big{(}u\big{)}\)_).
There exists a function \(f\) for which, \[\left\{f\big{(}\big{|}B\big{(}u\big{)}\big{|}^{2}\big{)},\varphi\big{(}u^{\prime}\big{)}\right\}=f^{\prime}\big{(}\big{|}B\big{(}u\big{)}\big{|}^{2}\big{)}\big{[}1+\epsilon\big{|}B\big{(}u\big{)}\big{|}^{2}\big{]}\Longleftrightarrow f\big{(}x\big{)}=\frac{1}{\pi}\mathrm{log}\big{[}1+\epsilon x\big{]}\ \.\] _Proof of Lemma_. Refer to the computation, with the Poisson bracket, of the canonically conjugate variable to \(\varphi\big{(}u\big{)}\) provided in _Chapter III, 7_ of [5]. A very similar computation provides the desired function above, from which we conclude the argument. **Theorem 2** (_main result, vanishing of the Poisson bracket of the action-angle variables in canonical coordinates_). In canonical coordinates, the two Poisson brackets, \[\left\{\varphi\big{(}u_{1}\big{)},\varphi\big{(}u_{2}\big{)}\right\}\ \,\] \[\left\{\rho\big{(}u_{1}\big{)},\rho\big{(}u_{2}\big{)}\right\}\ \,\] vanish. ### From the reduced monodromy, and monodromy, matrices to a Hamiltonian system A use of the Hamiltonian approach from [5] is summarized below. #### 1.4.1 Previous application of the Hamiltonian formulation for the nonlinear Schrodinger's equation With \(T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\), which besides the previous diagonal representation can also be expressed as, \[T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\equiv\begin{bmatrix}A\big{(}u\big{)}&B\big{(}u\big{)}\\ C\big{(}u\big{)}&D\big{(}u\big{)}\end{bmatrix}\] one may proceed to determine the action-angle variables of the Hamiltonian system, as described in [5]. To implement the computations for this approach for the nonlinear Schrodinger's equation, introduce, under slightly different parameters, the reduced monodromy matrix, from the limit as \(x\longrightarrow+\infty\), and as \(y\longrightarrow-\infty\), (_Chapter III_, [5]), \[\underset{N\longrightarrow+\infty}{\lim}T_{N}\big{(}\lambda\big{)}\equiv T\big{(}\lambda\big{)}=\underset{\begin{subarray}{c}x\longrightarrow+\infty\\ y\longrightarrow-\infty\end{subarray}}{\lim}E\big{(}-x,\lambda\big{)}T\big{(}x,y,\lambda\big{)}E\big{(}y,\lambda\big{)}\ \,\] for the matrix exponential, \[E\big{(}x,\lambda\big{)}\equiv\exp\!\big{(}\lambda xU_{1}\big{)}\ \,\] and also from the path-ordered exponential, \[T_{L}\big{(}\lambda\big{)}\equiv\overset{\curvearrowleft}{\exp}\bigg{[}\int_{[-L,L]}U\big{(}x,\lambda\big{)}\,\mathrm{d}x\bigg{]}\ \,\] for a real parameter \(\lambda\), with the integrand \(U\), a \(2\times 2\) matrix, and \(N\geq L\), which satisfies the first-order equation, \[\frac{\partial F}{\partial x}=U\big{(}x,\lambda\big{)}F\ \,\] for the vector valued function \(F\big{(}x,t\big{)}\equiv F\equiv\big{[}f_{1},f_{2}\big{]}^{\text{T}}\). The nonconstant prefactor for \(F\) takes the form of the linear combination, \[U=U_{0}+\lambda U_{1}\ \,\] with, \[U_{0}\equiv\sqrt{\chi}\begin{bmatrix}0&\psi\\ \bar{\psi}&0\end{bmatrix}\ \,\] \[U_{1}\equiv\frac{1}{2i}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\ \.\] The coordinates appearing in the definition of \(U_{0}\) satisfy the second order PDE, the nonlinear Schrodinger's equation, with solutions \(\psi\), \[i\frac{\partial\psi}{\partial t}=-\frac{\partial^{2}\psi}{\partial x^{2}}+2\chi\big{|}\psi\big{|}^{2}\psi\ \,\] for real \(\chi\), a coupling constant, with \(\big{|}\psi\big{|}^{2}=\psi\bar{\psi}\).
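The auxiliary linear problem \(\partial F/\partial x=U\big{(}x,\lambda\big{)}F\) can also be explored numerically. The following sketch is an illustration with an arbitrary test profile \(\psi\) (not a computation from [5]): it approximates the transition matrix by an ordered product of matrix exponentials and checks that \(\det T=1\), which follows from \(\mathrm{tr}\,U=0\):

```python
import numpy as np
from scipy.linalg import expm

# Transition matrix for dF/dx = U(x, lam) F, with U = U0 + lam*U1 as above,
# approximated by an ordered product of matrix exponentials over small steps.
# The sample field psi below is an arbitrary choice, for illustration only.
chi, lam = 1.0, 0.7
psi = lambda x: 0.8 * np.exp(-x**2) * np.exp(1j * x)

def U(x):
    U0 = np.sqrt(chi) * np.array([[0, psi(x)], [np.conj(psi(x)), 0]])
    U1 = np.array([[1, 0], [0, -1]]) / 2j
    return U0 + lam * U1

y, x, n = -5.0, 5.0, 4000
h = (x - y) / n
T = np.eye(2, dtype=complex)
for k in range(n):
    T = expm(h * U(y + (k + 0.5) * h)) @ T   # ordered product, later x on the left

assert abs(np.linalg.det(T) - 1) < 1e-8      # tr U = 0, so det T = 1 (Liouville)
print("transition matrix computed; det T =", np.linalg.det(T))
```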
In the following infinite limits, the solutions satisfy, \[\underset{x\longrightarrow\pm\infty}{\lim}\psi\big{(}x\big{)}=\rho_{1}^{(\pm)}\ \,\] \[\underset{x\longrightarrow\pm\infty}{\lim}\bar{\psi}\big{(}x\big{)}=\rho_{2}^{(\pm)}\ \,\] for real \(\rho_{1}^{(\pm)},\rho_{2}^{(\pm)}\) with \(\rho_{1}^{(\pm)}\neq\rho_{2}^{(\pm)}\). Equipped with solutions to the nonlinear PDE above, the Hamiltonian admits the following integral representation over the real line, (_Chapter III_, [5]), \[H\big{(}x,\psi,\bar{\psi},\chi\big{)}\equiv H\equiv\int_{\mathbf{R}}\big{(}\frac{\partial\psi}{\partial x}\frac{\partial\bar{\psi}}{\partial x}+\chi\bar{\psi}^{2}\psi^{2}\big{)}\ \text{d}x\ \.\] Under the matrix representation for the reduced monodromy matrix, (_Chapter I_, [5]), \[T_{L}\big{(}\lambda\big{)}\equiv T\big{(}\lambda\big{)}\equiv\begin{bmatrix}a(\lambda)&\epsilon\bar{b}(\lambda)\\ b(\lambda)&\bar{a}(\lambda)\end{bmatrix}\] the transition coefficients for discrete spectra occur iff \(\epsilon\equiv-1\), and are given by, \[T_{-}^{(1)}\big{(}x,\lambda_{j}\big{)}=\gamma_{j}T_{+}^{(2)}\big{(}x,\lambda_{j}\big{)}\ \,\] for \(1\leq j\leq n\), where the \(\lambda_{j}\) parameter appearing in \(T_{-}^{(1)}\big{(}x,\cdot\big{)}\), and in \(T_{+}^{(2)}\big{(}x,\cdot\big{)}\), runs over the collection of zeros, \(\big{\{}\lambda_{j}\big{\}}\), of \(a(\lambda)\) in the upper half plane, and \(\gamma_{j}\) denotes another parameter, the _transition amplitude_, between \(T_{-}^{(1)}\big{(}x,\cdot\big{)}\) and \(T_{+}^{(2)}\big{(}x,\cdot\big{)}\). For another real parameter \(\mu\), the identity \[\big{\{}T\big{(}x,y,\lambda\big{)}\overset{\otimes}{,}T\big{(}x,y,\mu\big{)}\big{\}}=\big{[}r\big{(}\lambda-\mu\big{)},T\big{(}x,y,\lambda\big{)}\otimes T\big{(}x,y,\mu\big{)}\big{]}\ \,\] of the Poisson bracket \(\big{\{}\cdot,\cdot\big{\}}\), which satisfies the antisymmetry and bilinearity properties, Leibniz's rule, and the Jacobi identity, holds for \(y<x\), where, \[r_{\pm}\big{(}\lambda-\mu\big{)}=\lim_{y\to\pm\infty}\!\!E\big{(}y,\mu-\lambda\big{)}\otimes E\big{(}y,\lambda-\mu\big{)}r\big{(}\lambda-\mu\big{)}\ \,\] under the convention for the tensor product, between \(A\) and \(B\), with respect to the Poisson bracket, (_Chapter III_, [5]), \[\big{\{}A\overset{\otimes}{,}B\big{\}}_{jk,mn}\equiv\big{\{}A_{jm},B_{kn}\big{\}}\ \.\] #### 1.4.2 Poisson bracket and the action-angle variables The action of the Poisson bracket that is implemented for demonstrating that the Hamiltonian flow for the nonlinear Schrodinger's equation is integrable depends upon whether the arguments of the Poisson bracket are functionals, or matrix functionals. In the first case, the bracket takes the form, (_Chapter III_, [5]), \[\big{\{}F,G\big{\}}\equiv i\int_{[-L,L]}\big{(}\frac{\delta F}{\delta\psi}\frac{\delta G}{\delta\bar{\psi}}-\frac{\delta F}{\delta\bar{\psi}}\frac{\delta G}{\delta\psi}\big{)}\ \mathrm{d}x\ \,\] for functionals \(F\) and \(G\), while for the second case, the bracket takes the form, \[\big{\{}A\overset{\otimes}{,}B\big{\}}\equiv i\int_{[-L,L]}\left(\frac{\delta A}{\delta\psi}\bigotimes\frac{\delta B}{\delta\bar{\psi}}-\frac{\delta A}{\delta\bar{\psi}}\bigotimes\frac{\delta B}{\delta\psi}\right)\ \mathrm{d}x\ \,\] for matrix functionals \(A\) and \(B\), and with \(\psi\equiv\psi\big{(}x\big{)}\) and \(\bar{\psi}\equiv\bar{\psi}\big{(}x\big{)}\) having support over \(\big{(}-L,L\big{)}\).
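For orientation, the first bracket above regenerates the nonlinear Schrodinger flow from the Hamiltonian \(H\); this is a standard computation in the spirit of [5], recorded here since it is used implicitly below. With \(\delta\psi(y)/\delta\psi(x)=\delta(x-y)\) and \(\delta\psi(y)/\delta\bar{\psi}(x)=0\),

\[\frac{\partial\psi}{\partial t}=\big{\{}H,\psi\big{\}}=-i\,\frac{\delta H}{\delta\bar{\psi}}\ \,\qquad\frac{\delta H}{\delta\bar{\psi}}=-\frac{\partial^{2}\psi}{\partial x^{2}}+2\chi\big{|}\psi\big{|}^{2}\psi\ \,\]

so that \(i\,\partial_{t}\psi=-\partial_{x}^{2}\psi+2\chi\big{|}\psi\big{|}^{2}\psi\), recovering the equation stated in _1.4.1_.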
Similarly, from \(T\big{(}x,y,\lambda\big{)}\), \[T_{\pm}\big{(}x,\lambda\big{)}=\lim_{y\to\pm\infty}T\big{(}x,y,\lambda\big{)}E\big{(}y,\lambda\big{)}\ \.\] From the first identity involving the Poisson bracket, one similarly has, in the limits as \(y\longrightarrow-\infty\) or \(y\longrightarrow+\infty\), \[\big{\{}T_{-}\big{(}x,\lambda\big{)}\overset{\otimes}{,}T_{-}\big{(}x,\mu\big{)}\big{\}}=r\big{(}\lambda-\mu\big{)}T_{-}\big{(}x,\lambda\big{)}\otimes T_{-}\big{(}x,\mu\big{)}-T_{-}\big{(}x,\lambda\big{)}\otimes T_{-}\big{(}x,\mu\big{)}r_{-}\big{(}\lambda-\mu\big{)}\ \,\] \[\big{\{}T_{+}\big{(}x,\lambda\big{)}\overset{\otimes}{,}T_{+}\big{(}x,\mu\big{)}\big{\}}=T_{+}\big{(}x,\lambda\big{)}\otimes T_{+}\big{(}x,\mu\big{)}r_{+}\big{(}\lambda-\mu\big{)}-r\big{(}\lambda-\mu\big{)}T_{+}\big{(}x,\lambda\big{)}\otimes T_{+}\big{(}x,\mu\big{)}\ \,\] while, between \(T_{-}\big{(}x,\lambda\big{)}\) and \(T_{+}\big{(}x,\mu\big{)}\), \[\big{\{}T_{-}\big{(}x,\lambda\big{)}\overset{\otimes}{,}T_{+}\big{(}x,\mu\big{)}\big{\}}=0\ \.\] From the quantities introduced thus far, the action-angle variables, given a Hamiltonian flow, are determined by which Poisson brackets vanish; the only nonvanishing bracket pairs the function, \[\Phi\big{(}\lambda\big{)}\equiv\sqrt{\rho\big{(}\lambda\big{)}}\text{exp}\big{(}-i\phi\big{(}\lambda\big{)}\big{)}\ \,\] with its complex conjugate, \(\bar{\Phi}\big{(}\lambda\big{)}\), while, \[\big{\{}\Phi\big{(}\lambda\big{)},\Phi\big{(}\mu\big{)}\big{\}}=0\ \,\] for, \[\rho\big{(}\lambda\big{)}\equiv\frac{1}{2\pi\chi}\text{log}\big{[}1+\epsilon\big{|}b\big{(}\lambda\big{)}\big{|}^{2}\big{]}\ \,\] \[\phi\big{(}\lambda\big{)}\equiv-\text{arg}\big{(}b\big{(}\lambda\big{)}\big{)}\ \.\] With the transfer, and monodromy, matrices of the previous section, which were respectively denoted with \(t\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\) and \(T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,0\big{)}\), in the next section we state the key property that the action-angle variables satisfy. ## 2 Action-angle variables for the Hamiltonian flow in the \(\Delta<-1\) regime of the six-vertex model To make use of the Poisson bracket for determining which variables with respect to it vanish, it suffices to demonstrate that the Poisson bracket vanishes for certain entries of the quantum monodromy matrix. We recall the representation presented in _1.4.1_ with \(T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\) of the quantum monodromy matrix, which was introduced alongside the definition of the transfer matrix, \(t\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\). ### Inhomogeneities in the horizontal direction In the setting of the six-vertex model, the action-angle variables are determined by the structure of the quantum monodromy matrix, and hence of the transfer matrix.
From the background provided in the previous section on the Hamiltonian structure of the nonlinear Schrodinger's equation, instead of sending \(x\longrightarrow\pm\infty\) and \(y\longrightarrow-\infty\), we send each inhomogeneity \(v_{i}\) to \(-\infty\), in which, \[u-v_{1}\longrightarrow+\infty,\cdots,u-v_{N}\longrightarrow+\infty\ \,\] from each term of the R-matrix appearing in the quantum monodromy matrix, \[T_{a}\big{(}u,H,V\big{)}=\lim_{v_{k}\longrightarrow-\infty}T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}=\lim_{v_{i}\longrightarrow-\infty}\ \bigg{[}\prod_{i=1}^{N}\big{(}\text{diag}\big{(}\text{exp}\big{(}2H\big{)},\text{exp}\big{(}2H\big{)}\big{)}\big{)}R_{ia}\big{(}u-v_{i}\big{)}\bigg{]}\ \.\] As \(v_{1}\longrightarrow-\infty,\cdots,v_{N}\longrightarrow-\infty\), the above limit is equivalent to, \[\lim_{v_{1}\longrightarrow-\infty}\bigg{[}\big{(}\text{diag}\big{(}\text{exp}\big{(}2H\big{)},\text{exp}\big{(}2H\big{)}\big{)}\big{)}R_{1a}\big{(}u-v_{1}\big{)}\cdots\big{(}\text{diag}\big{(}\text{exp}\big{(}2H\big{)},\text{exp}\big{(}2H\big{)}\big{)}\big{)}R_{Na}\big{(}u-v_{N}\big{)}\bigg{]}\.\] From \(T_{a}\) above, in comparison to the monodromy matrix \(T\big{(}\lambda\big{)}\) for the nonlinear Schrodinger's equation, the terms dependent on \(u\), \(B\big{(}u\big{)}\) and \(C\big{(}u\big{)}\), are not complex conjugates of one another as \(b\big{(}\lambda\big{)}\) and \(\bar{b}\big{(}\lambda\big{)}\) are, as provided in the definition of \(T\big{(}\lambda\big{)}\). In order to demonstrate that the six-vertex model is integrable from properties of the Hamiltonian flow, and to also make sense of the multidimensional limit shown above, it suffices to show that the Poisson brackets of the generators of the motion integrals vanish. First, recall, from (H), \[\mathcal{H}_{u}\big{(}q,H\big{)}\equiv\max_{\pm}\,\mathcal{H}_{u}^{\pm}\big{(}q,H\big{)}\equiv\max_{\pm}\big{\{}\pm H+l_{\pm}+\int_{C}\psi_{u}^{\pm}\big{(}\alpha\big{)}\rho\big{(}\alpha\big{)}\ \mathrm{d}\alpha\big{\}}\ \,\] which, explicitly in terms of \(\psi\), is equivalent to, \[\mathcal{H}_{u}^{\pm}\big{(}q,H\big{)}\equiv\ \ \left\{\begin{array}{l}+\Rightarrow\max_{+}\big{\{}H+\log\,\sinh\,u+\int_{C}\log\big{[}\frac{\sinh(\frac{\eta}{2}+u-i\alpha)}{\sinh(\frac{\eta}{2}-u+i\alpha)}\big{]}\rho\big{(}\alpha\big{)}\ \mathrm{d}\alpha\big{\}}\ \,\\ -\Rightarrow\max_{-}\big{\{}-H+\log\,\sinh(\eta-u)+\int_{C}\log\big{[}\frac{\sinh(\frac{3\eta}{2}-u+i\alpha)}{\sinh(u-\frac{\eta}{2}-i\alpha)}\big{]}\rho\big{(}\alpha\big{)}\ \mathrm{d}\alpha\big{\}}\ \.\end{array}\right.\] With each motion integral introduced above for \(+\) and for \(-\), we proceed to identify the action-angle variables for the Hamiltonian flow of the six-vertex model from the motion integrals above.
To make such an identification of which entries of the monodromy matrix appear in expressions for the generating functions of the motion integrals, observe that the quantum monodromy matrix, denoted earlier with \(T_{a}\big{(}u,\big{\{}v_{k}\big{\}},H,V\big{)}\), upon setting \(u\equiv\lambda_{\alpha}\) for the spectral parameter of the quantum monodromy matrix, can be expressed as, \[\begin{bmatrix}A\big{(}\lambda_{\alpha}\big{)}&B\big{(}\lambda_{\alpha}\big{)}\\ C\big{(}\lambda_{\alpha}\big{)}&D\big{(}\lambda_{\alpha}\big{)}\end{bmatrix}\] which admits a decomposition in terms of \(L\) operators, from the product, \[\prod_{i=0}^{N-1}L_{\alpha,N-i}\big{(}\lambda_{\alpha},v_{N-i}\big{)}\ \,\] where the \(L\) operator is defined as, [2], \[L_{\alpha,k}\big{(}\lambda_{\alpha},v_{k}\big{)}\equiv\begin{bmatrix}\sin\big{(}\lambda_{\alpha}-v_{k}+\eta\sigma_{k}^{z}\big{)}&\sin\big{(}2\eta\big{)}\sigma_{k}^{-}\\ \sin\big{(}2\eta\big{)}\sigma_{k}^{+}&\sin\big{(}\lambda_{\alpha}-v_{k}-\eta\sigma_{k}^{z}\big{)}\end{bmatrix}\ \,\] corresponding to the \(k\) th horizontal line, and \(\alpha\) th vertical line, with \(L_{\alpha,k}\curvearrowright\) vertical space \(\otimes\) horizontal space. From the block decomposition above in terms of \(A\), \(B\), \(C\) and \(D\), one also has that, \[\mathrm{tr}\big{(}T_{a}\big{(}\lambda_{\alpha},\big{\{}v_{k}\big{\}},H,V\big{)}\big{)}=A\big{(}\lambda_{\alpha}\big{)}+D\big{(}\lambda_{\alpha}\big{)}\ \.\] From the product expansion into \(N\) \(L\) operators for \(T_{a}\), one expands the product, \[\prod_{i=0}^{N-1}\begin{bmatrix}\sin\big{(}\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big{)}&\sin\big{(}2\eta\big{)}\sigma_{N-i}^{-}\\ \sin\big{(}2\eta\big{)}\sigma_{N-i}^{+}&\sin\big{(}\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big{)}\end{bmatrix}=\begin{bmatrix}\sin\big{(}\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big{)}&\sin\big{(}2\eta\big{)}\sigma_{N}^{-}\\ \sin\big{(}2\eta\big{)}\sigma_{N}^{+}&\sin\big{(}\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big{)}\end{bmatrix}\cdots\] To determine the action-angle variables of the Hamiltonian formulation for the inhomogeneous six-vertex model, we present the arguments for the result below. First, we determine the form of each expression from carrying out the multiplication of two by two L operators. Specifically, we compute the series of matrices, \[\left\{\begin{bmatrix}A_{i}\big{(}\lambda_{\alpha}\big{)}&B_{i}\big{(}\lambda_{\alpha}\big{)}\\ C_{i}\big{(}\lambda_{\alpha}\big{)}&D_{i}\big{(}\lambda_{\alpha}\big{)}\end{bmatrix}\right\}_{1\leq i\leq n}\ \,\] where each entry of the \(n\) th partial product of \(L\) operators is determined by computing \(n-1\) matrix multiplications. ### Computing L operators **Lemma 1** (_collecting terms from the product of two by two \(L\) operators_). The first entry of the \(L\) operator product, \[\prod_{i=0}^{3}\!\!L_{\alpha,N-i}\big{(}\lambda_{\alpha},v_{N-i}\big{)}\ \,\] has an expansion of the form, \[A_{3}\big{(}\lambda_{\alpha}\big{)}\equiv\prod_{0\leq i\leq 3}\sin\!\big{(}\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big{)}+\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{2}\bigg{(}\prod_{\begin{subarray}{c}0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+\end{subarray}}\sigma_{N-i}^{-+}\bigg{)}\times\cdots\]
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof of Lemma 1.: We collect terms from the first two terms in the product of \(L\) operators, \[\begin{bmatrix}\sin\bigl{(}\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\bigr{)}&\sin \bigl{(}2\eta\bigr{)}\sigma_{N}^{-}\\ \sin\bigl{(}2\eta\bigr{)}\sigma_{N}^{+}&\sin\bigl{(}\lambda_{\alpha}-v_{N}- \eta\sigma_{N}^{z}\bigr{)}\end{bmatrix}\begin{bmatrix}\sin\bigl{(}\lambda_{ \alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\bigr{)}&\sin\bigl{(}2\eta\bigr{)}\sigma_{N -1}^{-}\\ \sin\bigl{(}2\eta\bigr{)}\sigma_{N-1}^{+}&\sin\bigl{(}\lambda_{\alpha}-v_{N-1}- \eta\sigma_{N-1}^{z}\bigr{)}\end{bmatrix}\] from which one obtains a resultant matrix of the form, \[\begin{bmatrix}\mathbf{1}&\mathbf{2}\\ \mathbf{3}&\mathbf{4}\end{bmatrix}\enspace,\] from the following expression for the first and second entries, \[\mathbf{1}^{0}\equiv\mathbf{1}\equiv\sin\bigl{(}\lambda_{\alpha}-v_{N}+\eta \sigma_{N}^{z}\bigr{)}\,\sin\bigl{(}\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^ {z}\bigr{)}+\sin\bigl{(}2\eta\bigr{)}\sigma_{N}^{-}\!\!\sin\bigl{(}2\eta\bigr{)} \sigma_{N-1}^{+}\enspace,\] As we proceed with the computations, to determine the entries of each entry of the matrix of the \(L\) operator, we denote the entries of each such two by two matrix with \(\mathbf{1}^{i}\), \(\mathbf{2}^{i}\), \(\mathbf{3}^{i}\), and \(\mathbf{4}^{i}\), where \(0\leq i\leq N-1\). 
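Because \(\sigma^{z}\) squares to the identity, \(\sin\big(x+\eta\sigma_{k}^{z}\big)=\sin(x)\cos(\eta)\,\mathrm{Id}+\cos(x)\sin(\eta)\,\sigma_{k}^{z}\), so the block multiplications carried out by hand in these lemmas can be checked numerically. The following is a minimal sketch under our own conventions; the helpers `site_op`, `L_block` and `block_mul` are illustrative names of ours, not taken from [2].

```python
import numpy as np
from functools import reduce

sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma^-

def site_op(op, k, N):
    """Embed a single-site 2x2 operator at site k of an N-site chain."""
    factors = [np.eye(2, dtype=complex)] * N
    factors[k] = op
    return reduce(np.kron, factors)

def L_block(lam, v, eta, k, N):
    """2x2 auxiliary-space block form of L_{alpha,k}(lam, v_k); each block
    is a 2^N x 2^N operator.  Uses sin(x + eta*sz) = sin(x)cos(eta) Id
    + cos(x)sin(eta) sz, valid since sz^2 = Id."""
    I, Z = np.eye(2 ** N, dtype=complex), site_op(sz, k, N)
    x = lam - v
    a = np.sin(x) * np.cos(eta) * I + np.cos(x) * np.sin(eta) * Z
    d = np.sin(x) * np.cos(eta) * I - np.cos(x) * np.sin(eta) * Z
    b = np.sin(2 * eta) * site_op(sm, k, N)
    c = np.sin(2 * eta) * site_op(sp, k, N)
    return [[a, b], [c, d]]

def block_mul(X, Y):
    """Multiply two 2x2 block matrices with operator-valued entries;
    this is exactly the step producing the entries 1^i, 2^i, 3^i, 4^i."""
    return [[X[i][0] @ Y[0][j] + X[i][1] @ Y[1][j] for j in range(2)]
            for i in range(2)]
```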
For the remaining terms in the product of \(L\) operators,

\[\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-2}^{-}\\ \sin\big(2\eta\big)\sigma_{N-2}^{+}&\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\end{bmatrix}\cdots\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{1}+\eta\sigma_{1}^{z}\big)&\sin\big(2\eta\big)\sigma_{1}^{-}\\ \sin\big(2\eta\big)\sigma_{1}^{+}&\sin\big(\lambda_{\alpha}-v_{1}-\eta\sigma_{1}^{z}\big)\end{bmatrix}\;,\]

continuing the computation along similar lines implies that the product of operators,

\[\begin{bmatrix}\mathbf{1}&\mathbf{2}\\ \mathbf{3}&\mathbf{4}\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-2}^{-}\\ \sin\big(2\eta\big)\sigma_{N-2}^{+}&\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\end{bmatrix}\cdots\;,\]

takes the form,

\[\begin{bmatrix}\mathbf{1}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\mathbf{2}\sin\big(2\eta\big)\sigma_{N-2}^{+}&\mathbf{1}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\mathbf{2}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\\ \mathbf{3}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\mathbf{4}\sin\big(2\eta\big)\sigma_{N-2}^{+}&\mathbf{3}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\mathbf{4}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\end{bmatrix}\cdots\]

Distributing terms entrywise in the matrix with terms \(\mathbf{1}\), \(\mathbf{2}\), \(\mathbf{3}\), and \(\mathbf{4}\) above yields the two by two matrix with entries respectively given by \(\mathbf{1}^{1}\), \(\mathbf{2}^{1}\), \(\mathbf{3}^{1}\), and \(\mathbf{4}^{1}\),

\[\mathbf{1}^{1}\equiv\mathbf{1}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\mathbf{2}\sin\big(2\eta\big)\sigma_{N-2}^{+}\;,\]

corresponding to the first term, \(\mathbf{1}^{1}\), which has the product representation,

\[\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-1}-\eta\sigma_{N-1}^{z}\big)\;,\]

as well as the following product representation for \(\mathbf{2}^{1}\), in which,

\[\mathbf{2}^{1}\equiv\big(\sin\big(2\eta\big)\sigma_{N-2}^{-}\big)\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{3}\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 1,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)+\big(\sin\big(2\eta\big)\sigma_{N-1}^{-}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\eta\\ i\equiv 2,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\sigma_{N}^{-}\big)\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;,\]

from the expansion,

\[\mathbf{2}^{1}\equiv\mathbf{1}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\mathbf{2}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\;.\]

From the two by two matrix with entries \(\mathbf{1}^{1}\), \(\mathbf{2}^{1}\), \(\mathbf{3}^{1}\) and \(\mathbf{4}^{1}\), the remaining terms of the product take the form,

\[\begin{bmatrix}\mathbf{1}^{1}&\mathbf{2}^{1}\\ \mathbf{3}^{1}&\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-3}^{-}\\ \sin\big(2\eta\big)\sigma_{N-3}^{+}&\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\end{bmatrix}\cdots\;,\]

which implies, for,

\[\begin{bmatrix}\mathbf{1}^{1}&\mathbf{2}^{1}\\ \mathbf{3}^{1}&\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-3}^{-}\\ \sin\big(2\eta\big)\sigma_{N-3}^{+}&\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\end{bmatrix}\equiv\begin{bmatrix}\mathbf{1}^{2}&\mathbf{2}^{2}\\ \mathbf{3}^{2}&\mathbf{4}^{2}\end{bmatrix}\;,\]

a first entry that is given by,

\[\mathbf{1}^{2}\equiv\mathbf{1}^{1}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)+\mathbf{2}^{1}\sin\big(2\eta\big)\sigma_{N-3}^{+}\;.\]
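The intermediate entries \(\mathbf{1}^{1}\) and \(\mathbf{2}^{1}\) (and likewise \(\mathbf{3}^{1}\), \(\mathbf{4}^{1}\)) can be read off numerically from the three-fold block product, using the illustrative helpers above; the parameter values below are hypothetical, and the site labeling \(0,\dots,N-1\) stands in for \(N,N-1,\dots\)

```python
# Illustrative parameters for a short chain; the three-fold block
# product exposes the intermediate entries 1^1 and 2^1.
N, eta, lam = 4, 0.37, 0.81
vs = [0.10, 0.25, 0.40, 0.55]
M = block_mul(block_mul(L_block(lam, vs[0], eta, 0, N),
                        L_block(lam, vs[1], eta, 1, N)),
              L_block(lam, vs[2], eta, 2, N))
one_1, two_1 = M[0][0], M[0][1]   # the entries denoted 1^1 and 2^1 above
```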
The first entry of the resultant matrix above is given by,

\[\mathbf{1}^{2}=\bigg[\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\cdots\bigg]\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)+\cdots\]
\[\bigg[\big(\sin\big(2\eta\big)\sigma_{N-2}^{-}\big)\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{3}\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 1,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)+\cdots\bigg]\sin\big(2\eta\big)\sigma_{N-3}^{+}\;,\]

which has the following equivalent product representation, after collecting terms,

\[\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{2\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\]
\[\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{i\text{ odd}:\,1\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\big)^{3}\bigg(\prod_{\substack{0\leq i\leq 3\\ i\equiv 0,-\\ i\equiv 1,+\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)+\cdots\]

From the superposition above, grouping together terms depending upon whether there is a product of an even, or odd, number of factors of,

\[\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\;,\]

or upon whether there is a product of factors of,

\[\sigma_{N-i}^{-,+}\;,\]

implies that the expression for \(A\big(\lambda_{\alpha}\big)\), after repeating the computation for the two by two matrix multiplication \(n\) times, can be obtained by grouping together the terms under the \(\big(\sin\big(2\eta\big)\big)^{2}\) prefactor, in which,

\[\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{2\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)\]

corresponds to a first group of terms, with the second through fourth groups of terms collected analogously,

\[\sin\big(2\eta\big)\bigg(\prod_{\substack{0\leq i\leq 3\\ i\equiv 0,-\\ i\equiv 1,+\\ i\equiv 2,-\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)\]

corresponding to a fifth group of terms, and,

\[\bigg(\prod_{\substack{i\text{ odd}:\,1\leq i\leq 3\\ i\equiv 1,-\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\;,\]

corresponding to a sixth group of terms, hence implying the desired form for the first coefficient, \(A_{3}\big(\lambda_{\alpha}\big)\), as stated in Lemma 1, from which we conclude the argument.
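As a sanity check on Lemma 1, under the same illustrative conventions as the sketches above, one more block multiplication gives the four-fold product, whose first auxiliary block is \(A_{3}\big(\lambda_{\alpha}\big)\):

```python
# One more multiplication yields the four-fold product of Lemma 1; its
# (0,0) auxiliary block is A_3(lam), its (0,1) block B_3(lam), and so on.
T = block_mul(M, L_block(lam, vs[3], eta, 3, N))
A3 = T[0][0]   # compare against the six-group expansion of Lemma 1
```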
**Lemma 2** (_collecting terms from the product of two by two \(L\) operators_). The second entry of the \(L\) operator product,

\[\prod_{i=0}^{3}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[B_{3}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\big)\,\sigma_{N-3}^{-}\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 1,+}}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\big)^{3}\bigg[\big(\sin\big(2\eta\big)\big)^{-1}\bigg[\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{\substack{2\leq i\leq 3\\ i\equiv 2,+\\ i\equiv 3,-}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\]
\[\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 1,-\\ i\equiv 3,+}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\bigg]\bigg]\;,\]

while the second entry of the \(L\) operator product,

\[\prod_{i=0}^{2}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[B_{2}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\sigma_{N-2}^{-}\big)\bigg(\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\sigma_{N-1}^{-}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\eta\\ i\equiv 2,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\sigma_{N}^{-}\big)\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;.\]

_Proof of Lemma 2._ Proceeding from the same product of \(L\) operators as in the proof of Lemma 1, the relevant entry after the first multiplications is the second term, \(\mathbf{2}^{1}\), which has the product representation,

\[\big(\sin\big(2\eta\big)\sigma_{N-2}^{-}\big)\bigg(\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\sigma_{N-1}^{-}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\eta\\ i\equiv 2,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\sigma_{N}^{-}\big)\bigg(\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\;.\]

From the two by two matrix with entries \(\mathbf{1}^{1}\), \(\mathbf{2}^{1}\), \(\mathbf{3}^{1}\) and \(\mathbf{4}^{1}\), the remaining terms of the product take the form,

\[\begin{bmatrix}\mathbf{1}^{1}&\mathbf{2}^{1}\\ \mathbf{3}^{1}&\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-3}^{-}\\ \sin\big(2\eta\big)\sigma_{N-3}^{+}&\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\end{bmatrix}\cdots\;,\]

which implies, for,

\[\begin{bmatrix}\mathbf{1}^{1}&\mathbf{2}^{1}\\ \mathbf{3}^{1}&\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-3}^{-}\\ \sin\big(2\eta\big)\sigma_{N-3}^{+}&\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\end{bmatrix}\equiv\begin{bmatrix}\mathbf{1}^{2}&\mathbf{2}^{2}\\ \mathbf{3}^{2}&\mathbf{4}^{2}\end{bmatrix}\;,\]

a second entry that is given by,

\[\mathbf{2}^{2}\equiv\mathbf{1}^{1}\sin\big(2\eta\big)\sigma_{N-3}^{-}+\mathbf{2}^{1}\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\;.\]

The superposition above has the equivalent product representation, after collecting terms,

\[\mathbf{2}^{2}=\big[\mathbf{1}\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\mathbf{2}\sin\big(2\eta\big)\sigma_{N-2}^{+}\big]\sin\big(2\eta\big)\sigma_{N-3}^{-}+\big[\mathbf{1}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\mathbf{2}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\big]\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\;,\]

which is equivalent to,

\[\bigg[\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)+\cdots\bigg]\sin\big(2\eta\big)\sigma_{N-3}^{-}+\cdots\]
\[\big(\sin\big(2\eta\big)\big)\,\sigma_{N-2}^{-}\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-1}^{-}\bigg(\prod_{\substack{0\leq i\leq 2,\,i\equiv 3\\ i\equiv 0,+\eta\\ i\equiv 2,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\]

Under the \(\big(\sin\big(2\eta\big)\big)^{3}\) prefactor, further grouping terms under \(\big(\sin\big(2\eta\big)\big)^{-1}\) yields,

\[\big(\sin\big(2\eta\big)\big)^{-1}\bigg[\sigma_{N-2}^{-}\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,-\\ i\equiv 1,+}}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\bigg(\prod_{\substack{0\leq i\leq 1,\,i\equiv 3\\ i\equiv 0,1,+\eta\\ i\equiv 3,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\bigg]\;,\]

while further grouping terms under \(\big(\sin\big(2\eta\big)\big)\) yields,

\[\big(\sin\big(2\eta\big)\big)\bigg[\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)+\sigma_{N-2}^{-}\bigg(\prod_{\substack{0\leq i\leq 1,\,i\equiv 3\\ i\equiv 0,1,+\eta\\ i\equiv 3,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\bigg]\;.\]

Besides the group of terms above, under the \(\big(\sin\big(2\eta\big)\big)^{3}\) prefactor, grouping terms under \(\big(\sin\big(2\eta\big)\big)\) yields,

\[\big(\sin\big(2\eta\big)\big)\bigg[\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 1,+}}\sigma_{N-i}^{-,+}\bigg)+\bigg(\prod_{\substack{0\leq i\leq 1,\,i\equiv 3\\ i\equiv 0,1,+\eta\\ i\equiv 3,-\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-2}^{-}\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)+\cdots\bigg]\;.\]

Lastly, one has other terms, which are of the form,

\[\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,+\\ i\equiv 1,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)+\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)\;,\]

corresponding to a third group of terms. Altogether,

\[B_{3}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\big)\,\sigma_{N-3}^{-}\bigg(\prod_{\substack{0\leq i\leq 2\\ i\equiv 0,+\\ i\equiv 2,-}}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\big)^{3}\bigg[\big(\sin\big(2\eta\big)\big)^{-1}\big[\cdots\big]+\cdots\]
\[\bigg(\prod_{\substack{0\leq i\leq 1\\ i\equiv 0,+\\ i\equiv 1,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}+\eta\sigma_{N-2}^{z}\big)+\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,-\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)+\bigg(\prod_{\substack{0\leq i\leq 3\\ i\equiv 0,+\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)\bigg]\;,\]

from which we conclude the argument.
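Since the stated aim of these computations is integrability, a standard numerical consistency check (again under the illustrative conventions of the sketches above, and without the diagonal twist) is that the traces \(A\big(\lambda\big)+D\big(\lambda\big)\) at two different spectral parameters commute:

```python
def transfer(lam_, vs_, eta_, N_):
    """Transfer matrix tr_a T_a(lam) = A(lam) + D(lam), no diagonal twist."""
    T_ = L_block(lam_, vs_[0], eta_, 0, N_)
    for k in range(1, N_):
        T_ = block_mul(T_, L_block(lam_, vs_[k], eta_, k, N_))
    return T_[0][0] + T_[1][1]

t1, t2 = transfer(0.81, vs, eta, N), transfer(1.23, vs, eta, N)
print(np.max(np.abs(t1 @ t2 - t2 @ t1)))   # ~1e-15: transfer matrices commute
```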
**Lemma 3** (_collecting terms from the product of two by two \(L\) operators_). The third entry of the \(L\) operator product,

\[\prod_{i=0}^{3}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[C_{3}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\big)^{3}\sin\big(\lambda_{\alpha}-v_{N-n}-\eta\sigma_{N-n}^{z}\big)\bigg(\prod_{\substack{0\leq i\leq 3\\ i\equiv 0,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)+\big(\sin\big(2\eta\big)\big)^{2}\bigg[\sigma_{N-i}^{+}\bigg[\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 3\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)+\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg(\prod_{\substack{i\text{ odd}:\,1\leq i\leq 3\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)\bigg]+\cdots\]
\[\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\bigg(\prod_{\substack{1\leq i\leq 3\\ i\equiv 1,+\\ i\equiv 2,-\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)+\sigma_{N-(n-3)}^{+}\bigg(\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\bigg]+\cdots\]
\[\sin\big(2\eta\big)\bigg[\sigma_{N}^{+}\bigg(\prod_{1\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\sin\big(\lambda_{\alpha}-v_{N-n}+\eta\sigma_{N-n}^{z}\big)\bigg[\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 3\\ i\equiv 0,-\\ i\equiv 2,+}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-1}^{+}\bigg(\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\bigg]\bigg]\;,\]

while the third entry of the \(L\) operator product,

\[\prod_{i=0}^{2}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[C_{2}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\sigma_{N}^{+}\big)\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,-\eta\\ i\equiv 2,+\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{3}\prod_{i\equiv 1}\big(\sigma_{N-i}^{-}\big)^{2}\sigma_{N-i-1}^{+}+\big(\sin\big(2\eta\big)\sigma_{N-2}^{+}\big)\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;.\]

_Proof of Lemma 3._ Proceeding as in the previous lemmas, the third entry after the first multiplications, \(\mathbf{3}^{1}\), has the product representation given by \(C_{2}\big(\lambda_{\alpha}\big)\) above. The fourth term, \(\mathbf{4}^{1}\), has the product representation,

\[\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{i\equiv 1}\sigma_{N-i}^{-}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)+\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;.\]

From the two by two matrix with entries \(\mathbf{1}^{1}\), \(\mathbf{2}^{1}\), \(\mathbf{3}^{1}\) and \(\mathbf{4}^{1}\), multiplying by the next \(L\) operator as before, the resultant matrix \(\begin{bmatrix}\mathbf{1}^{2}&\mathbf{2}^{2}\\ \mathbf{3}^{2}&\mathbf{4}^{2}\end{bmatrix}\) has a third entry that is given by,

\[\mathbf{3}^{2}\equiv\mathbf{3}^{1}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)+\mathbf{4}^{1}\sin\big(2\eta\big)\sigma_{N-3}^{+}\;.\]

Distributing terms further implies,

\[\big(\sin\big(2\eta\big)\sigma_{N}^{+}\big)\bigg(\prod_{1\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\big(\sin\big(2\eta\big)\big)\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,-\eta\\ i\equiv 2,+\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\]
\[\big(\sin\big(2\eta\big)\big)^{3}\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)\bigg(\prod_{i\equiv 1}\big(\sigma_{N-i}^{-}\big)^{2}\sigma_{N-i-1}^{+}\bigg)+\big(\sin\big(2\eta\big)\sigma_{N-2}^{+}\big)\sin\big(\lambda_{\alpha}-v_{N-3}+\eta\sigma_{N-3}^{z}\big)\bigg(\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)+\cdots\]

Grouping the resulting terms by powers of \(\sin\big(2\eta\big)\), the terms under the \(\sin\big(2\eta\big)\) prefactor yield,

\[\sin\big(2\eta\big)\bigg[\sigma_{N}^{+}\bigg(\prod_{1\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\sin\big(\lambda_{\alpha}-v_{N-n}+\eta\sigma_{N-n}^{z}\big)\bigg[\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 3\\ i\equiv 0,-\\ i\equiv 2,+}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-1}^{+}\bigg(\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\bigg]\bigg]\;.\]

Similarly, the terms under the \(\big(\sin\big(2\eta\big)\big)^{2}\) prefactor yield,

\[\big(\sin\big(2\eta\big)\big)^{2}\bigg[\sigma_{N-i}^{+}\bigg[\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 3\\ i\equiv 2,+}}\sigma_{N-i}^{-,+}\bigg)+\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg(\prod_{\substack{i\text{ odd}:\,1\leq i\leq 3\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)\bigg]+\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\bigg(\prod_{\substack{1\leq i\leq 3\\ i\equiv 1,+\\ i\equiv 2,-\\ i\equiv 3,+}}\sigma_{N-i}^{-,+}\bigg)+\sigma_{N-(n-3)}^{+}\bigg(\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\bigg]\;,\]

while rearranging terms under the \(\big(\sin\big(2\eta\big)\big)^{3}\) prefactor yields,

\[\big(\sin\big(2\eta\big)\big)^{3}\sin\big(\lambda_{\alpha}-v_{N-n}-\eta\sigma_{N-n}^{z}\big)\bigg(\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\;.\]

Hence, \(C_{3}\big(\lambda_{\alpha}\big)\) takes the form stated in Lemma 3, from which we conclude the argument.
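The structure of Lemmas 2 and 3 is also visible numerically: every group of terms in \(B_{3}\) carries one unpaired \(\sigma^{-}\), and every group in \(C_{3}\) one unpaired \(\sigma^{+}\), so the \(B\) and \(C\) entries shift the total magnetization \(S^{z}=\sum_{k}\sigma_{k}^{z}\) by \(-2\) and \(+2\) respectively. A quick check, reusing the illustrative objects defined above:

```python
# [S^z, B] = -2B and [S^z, C] = +2C: one net sigma^- (resp. sigma^+) per term.
Sz = sum(site_op(sz, k, N) for k in range(N))
B3, C3 = T[0][1], T[1][0]
print(np.max(np.abs(Sz @ B3 - B3 @ Sz + 2 * B3)))  # ~0
print(np.max(np.abs(Sz @ C3 - C3 @ Sz - 2 * C3)))  # ~0
```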
**Lemma 4** (_collecting terms from the product of two by two \(L\) operators_). The fourth entry of the \(L\) operator product,

\[\prod_{i=0}^{3}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[D_{3}\big(\lambda_{\alpha}\big)\equiv\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg[\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\bigg(\prod_{\substack{i\text{ odd}:\,1\leq i\leq 3\\ i\equiv 1,+\\ i\equiv 3,-}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\cdots\bigg]\;,\]

while the fourth entry of the \(L\) operator product,

\[\prod_{i=0}^{2}L_{\alpha,N-i}\big(\lambda_{\alpha},v_{N-i}\big)\;,\]

has an expansion of the form,

\[D_{2}\big(\lambda_{\alpha}\big)\equiv\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{\substack{1\leq i\leq 2\\ i\equiv 1,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{i\equiv 1}\sigma_{N-i}^{-}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)+\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;.\]

_Proof of Lemma 4._ We collect terms from the first two terms in the product of \(L\) operators,

\[\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N}+\eta\sigma_{N}^{z}\big)&\sin\big(2\eta\big)\sigma_{N}^{-}\\ \sin\big(2\eta\big)\sigma_{N}^{+}&\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\end{bmatrix}\begin{bmatrix}\sin\big(\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\big)&\sin\big(2\eta\big)\sigma_{N-1}^{-}\\ \sin\big(2\eta\big)\sigma_{N-1}^{+}&\sin\big(\lambda_{\alpha}-v_{N-1}-\eta\sigma_{N-1}^{z}\big)\end{bmatrix}\;,\]

from which one obtains a resultant matrix of the form,

\[\begin{bmatrix}\mathbf{1}&\mathbf{2}\\ \mathbf{3}&\mathbf{4}\end{bmatrix}\;,\]

which has the following expressions for the third and fourth entries,

\[\mathbf{3}^{0}\equiv\mathbf{3}\equiv\sin\big(2\eta\big)\sigma_{N}^{+}\sin\big(\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\big)+\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\sin\big(2\eta\big)\sigma_{N-1}^{+}\;,\]
\[\mathbf{4}^{0}\equiv\mathbf{4}\equiv\sin\big(2\eta\big)\sigma_{N}^{+}\sin\big(2\eta\big)\sigma_{N-1}^{-}+\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\sin\big(\lambda_{\alpha}-v_{N-1}-\eta\sigma_{N-1}^{z}\big)\;.\]

Performing rearrangements for the final term yields,

\[\mathbf{4}^{1}\equiv\mathbf{3}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\mathbf{4}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\]
\[\equiv\sin\big(2\eta\big)\sigma_{N}^{+}\sin\big(\lambda_{\alpha}-v_{N-1}+\eta\sigma_{N-1}^{z}\big)\sin\big(2\eta\big)\sigma_{N-2}^{-}+\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\sin\big(2\eta\big)\sigma_{N-1}^{+}\sin\big(2\eta\big)\sigma_{N-2}^{-}+\cdots\]
\[\sin\big(2\eta\big)\sigma_{N}^{+}\sin\big(2\eta\big)\sigma_{N-1}^{-}\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)+\sin\big(\lambda_{\alpha}-v_{N}-\eta\sigma_{N}^{z}\big)\sin\big(\lambda_{\alpha}-v_{N-1}-\eta\sigma_{N-1}^{z}\big)\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)\;,\]

corresponding to the fourth term, \(\mathbf{4}^{1}\), which has the product representation given by \(D_{2}\big(\lambda_{\alpha}\big)\) above. As demonstrated in the arguments for the previous lemma, the third term has the product expansion,

\[\bigg[\big(\sin\big(2\eta\big)\big)^{2}\bigg(\prod_{i\equiv 1}\sigma_{N-i}^{-}\bigg)\sin\big(\lambda_{\alpha}-v_{N-2}-\eta\sigma_{N-2}^{z}\big)+\cdots+\prod_{0\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg]\sin\big(\lambda_{\alpha}-v_{N-3}-\eta\sigma_{N-3}^{z}\big)\;.\]

Proceeding further from the superposition above, we group terms under the \(\big(\sin\big(2\eta\big)\big)^{2}\) prefactor, from which,

\[\big(\sin\big(2\eta\big)\big)^{2}\bigg[\sigma_{N}^{+}\sigma_{N-3}^{+}\bigg(\prod_{1\leq i\leq 2}\sin\big(\lambda_{\alpha}-v_{N-i}+\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-3}^{-}\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,-\eta\\ i\equiv 2,+\eta}}\sin\big(\lambda_{\alpha}-v_{N-i}\pm\eta\sigma_{N-i}^{z}\big)\bigg)+\sigma_{N-2}^{+}\sigma_{N-3}^{-}\bigg(\prod_{0\leq i\leq 1}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\bigg)\bigg]\;,\]

while the remaining term, for the first group of terms, under the \(\big(\sin\big(2\eta\big)\big)^{3}\) prefactor, is,

\[\big(\sin\big(2\eta\big)\big)^{3}\bigg(\prod_{i\equiv 1}\big(\sigma_{N-i}^{-}\big)^{2}\bigg)\sigma_{N-i-1}^{+}\;.\]

In the second group of terms, collecting terms under the \(\big(\sin\big(2\eta\big)\big)^{2}\) prefactor yields analogous contributions, while the remaining term is,

\[\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)\;.\]

The desired expression for the final entry of the monodromy matrix, \(D_{3}\big(\lambda_{\alpha}\big)\), takes the form,

\[D_{3}\big(\lambda_{\alpha}\big)\equiv\prod_{0\leq i\leq 3}\sin\big(\lambda_{\alpha}-v_{N-i}-\eta\sigma_{N-i}^{z}\big)+\big(\sin\big(2\eta\big)\big)^{2}\bigg[\bigg(\prod_{\substack{i\text{ even}:\,0\leq i\leq 2\\ i\equiv 0,+\\ i\equiv 2,-}}\sigma_{N-i}^{-,+}\bigg)\times\cdots\bigg]\;,\]

as stated in Lemma 4, from which we conclude the argument.
\sigma_{n-i}^{z}\big{)}\bigg{)}\ \left(\sin\!\big{(}2\eta\big{)}\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\ \,\] corresponding to the first and third entries, \[\mathscr{B}_{1}\equiv\mathscr{D}_{1}\equiv\prod_{1\leq i\leq n-3} \left(\mathscr{B}_{1}\right)_{i}\equiv\prod_{2\leq i\leq n-3}\sigma_{n-i}^{-,+ }\ \,\] \[\mathscr{B}_{2}\equiv\mathscr{D}_{2}\equiv\prod_{2\leq i\leq n-3 }\left(\mathscr{B}_{2}\right)_{i}\equiv\prod_{2\leq i\leq n-3}\left(\mathscr{D} _{2}\right)_{i}\equiv\prod_{2\leq i\leq n-3}\sin\!\left(\lambda_{\alpha}-v_{n-i }+\eta\sigma_{n-i}^{z}\right)\ \,\] \[\mathscr{B}_{3}\equiv\mathscr{D}_{3}\equiv\sum_{ 2\leq i\leq m\atop 2\leq j\leq n^{\prime}\atop m+n^{\prime}=n-3}\left( \mathscr{D}_{3}\right)_{i,j}\equiv\sum_{ 2\leq i\leq m\atop 2\leq j\leq n^{\prime}\atop m+n^{\prime}=n-3}\bigg{[} \bigg{(}\prod_{2\leq i\leq m}\sin\!\big{(}\lambda_{\alpha}-v_{n-i}\pm\eta \sigma_{n-i}^{z}\big{)}\bigg{)}\ \left(\sin\!\big{(}2\eta\big{)}\right)^{n^{\prime}-1}\!\bigg{(}\prod_{2\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\ \,\] corresponding to the second and fourth entries. _Proof of Lemma 4._ By direct computation, we begin by computing the product of L operators, \[\begin{bmatrix}\mathbf{1}^{0}&\mathbf{2}^{0}\\ \mathbf{3}^{0}&\mathbf{4}^{0}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{1}&\mathbf{ 2}^{1}\\ \mathbf{3}^{1}&\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{2}&\mathbf{ 2}^{2}\\ \mathbf{3}^{2}&\mathbf{4}^{2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{3}&\mathbf{ 2}^{3}\\ \mathbf{3}^{3}&\mathbf{4}^{3}\end{bmatrix}\] after which we compute the product of three arbitrary L operators, \[\begin{bmatrix}\mathbf{1}^{i-3}&\mathbf{2}^{i-3}\\ \mathbf{3}^{i-3}&\mathbf{4}^{i-3}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-2}& \mathbf{2}^{i-2}\\ \mathbf{3}^{i-2}&\mathbf{4}^{i-2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-1}& \mathbf{2}^{i-1}\\ \mathbf{3}^{i-1}&\mathbf{4}^{i-1}\end{bmatrix}\ \,\] from the \(n\) fold product, \[\prod_{1\leq i\leq n}\begin{bmatrix}\mathbf{1}^{i}&\mathbf{2}^{i}\\ \mathbf{3}^{i}&\mathbf{4}^{i}\end{bmatrix}\ \.\] For the first product of two by two L operators, for \(i\equiv 0\), \(i\equiv 1\), \(i\equiv 2\), and \(i\equiv 3\), by brute force one has, \[\begin{bmatrix}\mathbf{1}^{0}\mathbf{1}^{1}+\mathbf{2}^{0}\mathbf{3}^{1}& \mathbf{1}^{0}\mathbf{2}^{1}+\mathbf{2}^{0}\mathbf{4}^{1}\\ \mathbf{1}^{1}\mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0}&\mathbf{2}^{1}\mathbf{3}^ {0}+\mathbf{4}^{0}\mathbf{4}^{1}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{2}& \mathbf{2}^{2}\\ \mathbf{3}^{2}&\mathbf{4}^{2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{3}&\mathbf{ 2}^{3}\\ \mathbf{3}^{3}&\mathbf{4}^{3}\end{bmatrix}\] is equivalent to, \[\begin{bmatrix}\left(\mathbf{1}^{0}\mathbf{1}^{1}+\mathbf{2}^{0}\mathbf{3}^{1} \right)\mathbf{1}^{2}+\left(\mathbf{1}^{0}\mathbf{2}^{1}+\mathbf{2}^{0}\mathbf{ 4}^{1}\right)\mathbf{3}^{2}&\left(\mathbf{1}^{0}\mathbf{1}^{1}+\mathbf{2}^{0} \mathbf{3}^{1}\right)\mathbf{2}^{2}+\left(\mathbf{1}^{0}\mathbf{2}^{1}+ \mathbf{2}^{0}\mathbf{4}^{1}\right)\mathbf{4}^{2}\\ \left(\mathbf{1}^{1}\mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0}\right)\mathbf{1 }^{2}+\left(\mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0}\mathbf{4}^{1}\right) \mathbf{3}^{2}&\left(\mathbf{1}^{1}\mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0 }\right)\mathbf{2}^{2}+\left(\mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0} \mathbf{4}^{1}\right)\mathbf{4}^{2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{3} &\mathbf{2}^{3}\\ \mathbf{3}^{3}&\mathbf{4}^{3}\end{bmatrix}\.\] 
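The bookkeeping in these brute-force multiplications can also be checked symbolically. The following is a minimal sympy sketch in which the noncommutative placeholder symbols `o1_k`, `o2_k`, `o3_k`, `o4_k` (hypothetical names) stand in for the entries \(\mathbf{1}^{k}\), \(\mathbf{2}^{k}\), \(\mathbf{3}^{k}\), \(\mathbf{4}^{k}\); it confirms the entrywise readout of \(A_{3}\big(\lambda_{\alpha}\big)\) and \(B_{3}\big(\lambda_{\alpha}\big)\) performed below.

```python
# Minimal sympy sketch: check the entrywise readout of the product of three
# 2x2 operator-valued matrices.  The symbols o1_k, ..., o4_k are hypothetical
# stand-ins for the entries 1^k, 2^k, 3^k, 4^k of the k-th L operator; they
# are declared noncommutative since the true entries are spin operators.
import sympy as sp

def L_op(k):
    return sp.Matrix(2, 2, sp.symbols(f"o1_{k} o2_{k} o3_{k} o4_{k}",
                                      commutative=False))

L0, L1, L2 = L_op(0), L_op(1), L_op(2)
o = [list(L_op(k)) for k in range(3)]        # o[k] = [1^k, 2^k, 3^k, 4^k]
prod = sp.expand(L0 * L1 * L2)

# A_3 = (1^0 1^1 + 2^0 3^1) 1^2 + (1^0 2^1 + 2^0 4^1) 3^2
A3 = (o[0][0]*o[1][0] + o[0][1]*o[1][2])*o[2][0] \
   + (o[0][0]*o[1][1] + o[0][1]*o[1][3])*o[2][2]
# B_3 = (1^0 1^1 + 2^0 3^1) 2^2 + (1^0 2^1 + 2^0 4^1) 4^2
B3 = (o[0][0]*o[1][0] + o[0][1]*o[1][2])*o[2][1] \
   + (o[0][0]*o[1][1] + o[0][1]*o[1][3])*o[2][3]

assert sp.expand(prod[0, 0] - A3) == 0
assert sp.expand(prod[0, 1] - B3) == 0       # note the closing factor 4^2
```

Declaring the symbols noncommutative matters: the true entries are operator valued, so the check also confirms that the readout preserves the correct operator ordering.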
From previous arguments given in **Lemmas 1**-4, the fact that, \[\begin{bmatrix}A_{3}\big{(}\lambda_{\alpha}\big{)}&B_{3}\big{(}\lambda_{\alpha }\big{)}\\ C_{3}\big{(}\lambda_{\alpha}\big{)}&D_{3}\big{(}\lambda_{\alpha}\big{)}\end{bmatrix}\] equals, \[\begin{bmatrix}\left(\mathbf{1}^{0}\mathbf{1}^{1}+\mathbf{2}^{0}\mathbf{3}^{1} \right)\mathbf{1}^{2}+\left(\mathbf{1}^{0}\mathbf{2}^{1}+\mathbf{2}^{0} \mathbf{4}^{1}\right)\mathbf{3}^{2}&\left(\mathbf{1}^{0}\mathbf{1}^{1}+ \mathbf{2}^{0}\mathbf{3}^{1}\right)\mathbf{2}^{2}+\left(\mathbf{1}^{0} \mathbf{2}^{1}+\mathbf{2}^{0}\mathbf{4}^{1}\right)\mathbf{4}^{2}\\ \left(\mathbf{1}^{1}\mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0}\right)\mathbf{1 }^{2}+\left(\mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0}\mathbf{4}^{1}\right) \mathbf{3}^{2}&\left(\mathbf{1}^{1}\mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0 }\right)\mathbf{2}^{2}+\left(\mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0} \mathbf{4}^{1}\right)\mathbf{4}^{2}\end{bmatrix}\,\] implies, entry by entry, that, \[A_{3}\big{(}\lambda_{\alpha}\big{)}\equiv\left(\mathbf{1}^{0} \mathbf{1}^{1}+\mathbf{2}^{0}\mathbf{3}^{1}\right)\mathbf{1}^{2}+\left( \mathbf{1}^{0}\mathbf{2}^{1}+\mathbf{2}^{0}\mathbf{4}^{1}\right)\mathbf{3}^{2 }\,\] \[B_{3}\big{(}\lambda_{\alpha}\big{)}\equiv\left(\mathbf{1}^{0} \mathbf{1}^{1}+\mathbf{2}^{0}\mathbf{3}^{1}\right)\mathbf{2}^{2}+\left( \mathbf{1}^{0}\mathbf{2}^{1}+\mathbf{2}^{0}\mathbf{4}^{1}\right)\,\] \[C_{3}\big{(}\lambda_{\alpha}\big{)}\equiv\left(\mathbf{1}^{1} \mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0}\right)\mathbf{1}^{2}+\left( \mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0}\mathbf{4}^{1}\right)\mathbf{3}^{2 }\ \,\] \[D_{3}\big{(}\lambda_{\alpha}\big{)}\equiv\left(\mathbf{1}^{1} \mathbf{3}^{0}+\mathbf{3}^{1}\mathbf{4}^{0}\right)\mathbf{2}^{2}+\left( \mathbf{2}^{1}\mathbf{3}^{0}+\mathbf{4}^{0}\mathbf{4}^{1}\right)\mathbf{4}^{2 }\ \.\] In terms of \(A_{3}\big{(}\lambda_{\alpha}\big{)}\), \(B_{3}\big{(}\lambda_{\alpha}\big{)}\), \(C_{3}\big{(}\lambda_{\alpha}\big{)}\) and \(D_{3}\big{(}\lambda_{\alpha}\big{)}\), performing the final multiplication of L operators, \[\begin{bmatrix}A_{3}\big{(}\lambda_{\alpha}\big{)}&B_{3}\big{(}\lambda_{\alpha} \big{)}\\ C_{3}\big{(}\lambda_{\alpha}\big{)}&D_{3}\big{(}\lambda_{\alpha}\big{)}\end{bmatrix} \begin{bmatrix}\mathbf{1}^{3}&\mathbf{2}^{3}\\ \mathbf{3}^{3}&\mathbf{4}^{3}\end{bmatrix}\] equals, \[\begin{bmatrix}A_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{1}^{3}+B_{3}\big{(} \lambda_{\alpha}\big{)}\mathbf{3}^{3}&A_{3}\big{(}\lambda_{\alpha}\big{)} \mathbf{2}^{3}+B_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{4}^{3}\\ C_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{1}^{3}+D_{3}\big{(}\lambda_{\alpha} \big{)}\mathbf{3}^{3}&C_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{2}^{3}+D_{3} \big{(}\lambda_{\alpha}\big{)}\mathbf{4}^{3}\end{bmatrix}\.\] Hence, \[\prod_{0\leq i\leq 3}\begin{bmatrix}\mathbf{1}^{i}&\mathbf{2}^{i} \\ \mathbf{3}^{i}&\mathbf{4}^{i}\end{bmatrix}\equiv\begin{bmatrix}A_{3}\big{(} \lambda_{\alpha}\big{)}\mathbf{1}^{3}+B_{3}\big{(}\lambda_{\alpha}\big{)} \mathbf{3}^{3}&A_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{2}^{3}+B_{3}\big{(} \lambda_{\alpha}\big{)}\mathbf{4}^{3}\\ C_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{1}^{3}+D_{3}\big{(}\lambda_{\alpha} \big{)}\mathbf{3}^{3}&C_{3}\big{(}\lambda_{\alpha}\big{)}\mathbf{2}^{3}+D_{3} \big{(}\lambda_{\alpha}\big{)}\mathbf{4}^{3}\end{bmatrix}\.\] With such an expression, we make use of the following relations, \[A_{3}\big{(}\lambda_{\alpha}\big{)} =A_{2}\big{(}\lambda_{\alpha}\big{)}\mathbf{1}^{2}+B_{2}\big{(} 
\lambda_{\alpha}\big{)}\mathbf{3}^{2}\equiv A_{2}\big{(}\lambda_{\alpha}\big{)} \mathrm{sin}\big{(}\lambda_{\alpha}-v_{2}+\eta\sigma_{2}^{z}\big{)}+B_{2}\big{(} \lambda_{\alpha}\big{)}\mathrm{sin}\big{(}2\eta\big{)}\sigma_{2}^{+}\,\] \[B_{3}\big{(}\lambda_{\alpha}\big{)} =A_{2}\big{(}\lambda_{\alpha}\big{)}\mathbf{2}^{2}+B_{2}\big{(} \lambda_{\alpha}\big{)}\mathbf{4}^{2}\equiv A_{2}\big{(}\lambda_{\alpha}\big{)} \mathrm{sin}\big{(}2\eta\big{)}\sigma_{2}^{-}+B_{2}\big{(}\lambda_{\alpha}\big{)} \mathrm{sin}\big{(}\lambda_{\alpha}-v_{2}-\eta\sigma_{2}^{z}\big{)}\ \,\] \[C_{3}\big{(}\lambda_{\alpha}\big{)} =C_{2}\big{(}\lambda_{\alpha}\big{)}\mathbf{1}^{2}+D_{2}\big{(} \lambda_{\alpha}\big{)}\mathbf{4}^{2}\equiv C_{2}\big{(}\lambda_{\alpha} \big{)}\mathrm{sin}\big{(}\lambda_{\alpha}-v_{2}+\eta\sigma_{2}^{z}\big{)}+D_{2} \big{(}\lambda_{\alpha}\big{)}\mathrm{sin}\big{(}2\eta\big{)}\sigma_{2}^{+}\,\] \[D_{3}\big{(}\lambda_{\alpha}\big{)} =C_{2}\big{(}\lambda_{\alpha}\big{)}\mathbf{2}^{2}+D_{2}\big{(} \lambda_{\alpha}\big{)}\mathbf{4}^{2}\equiv C_{2}\big{(}\lambda_{\alpha} \big{)}\mathrm{sin}\big{(}2\eta\big{)}\sigma_{2}^{-}+D_{2}\big{(}\lambda_{\alpha} \big{)}\mathrm{sin}\big{(}\lambda_{\alpha}-v_{2}-\eta\sigma_{2}^{z}\big{)}\ \.\] To finish the arguments for **Lemma 4**, we introduce the result below. **Lemma 5** (_iteratively obtaining the entries of the n th L operator from the entries of the third L operator_). The first entry of the \(n\) th L operator can be expressed in terms of, \[A_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}A_{3}\big{(}\lambda_{ \alpha}\big{)}+B_{3}\big \[B_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_ {3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\big{(}\text{sin}\big{(}2\eta\big{)} \big{)}^{n-4}\bigg{(}\prod_{2\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\cdots\] \[\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\prod_{2\leq i\leq n-3}\text{sin}\big{(}\lambda_{\alpha}-v_{n-i }+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}+\cdots\] \[\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\sum_{\begin{subarray}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{2\leq i\leq m}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(} \text{sin}\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{2\leq j\leq n ^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\enspace.\] The third entry of the \(n\) th L operator can be expressed in terms of, \[C_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}C_{3}\big{(}\lambda_{\alpha} \big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\Big{(}\text{sin}\big{(}2 \eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\cdots\] \[\bigg{(}C_{3}\big{(}\lambda_{\alpha}\big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(} \text{sin}\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n ^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\enspace.\] The fourth entry of the \(n\) th L operator can be expressed in terms of, \[D_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}C_{3}\big{(}\lambda _{\alpha}\big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\Big{(}\text{sin} 
\big{(}2\eta\big{)}\big{)}^{n-4}\bigg{(}\prod_{2\leq i\leq n-3}\sigma_{n-i}^{-,+ }\bigg{)}+\cdots\] \[\bigg{(}C_{3}\big{(}\lambda_{\alpha}\big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\sum_{\begin{subarray}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{2\leq i\leq m}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(} \text{sin}\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{2\leq j\leq n ^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\enspace.\] Proof of Lemma 5.: Observe, for the first entry of the \(n\) th L operator, \[\prod_{1\leq i\leq n}\begin{bmatrix}\mathbf{1}^{i}&\mathbf{2}^{i}\\ \mathbf{3}^{i}&\mathbf{4}^{i}\end{bmatrix}\enspace,\] that, \[A_{n}\big{(}\lambda_{\alpha}\big{)}=A_{n-1}\big{(}\lambda_{\alpha}\big{)} \mathbf{1}^{n-1}+B_{n-1}\big{(}\lambda_{\alpha}\big{)}\mathbf{3}^{n-1}\equiv A _{n-1}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}\lambda_{\alpha}-v_{n-1}+ \eta\sigma_{n-1}^{z}\big{)}+B_{n-1}\big{(}\lambda_{\alpha}\big{)}\text{sin} \big{(}2\eta\big{)}\sigma_{n-1}^{+}\enspace. \tag{1}\] From the equality above, making the substitution for \(A_{n-1}\big{(}\lambda_{\alpha}\big{)}\) in terms of \(A_{n-2}\big{(}\lambda_{\alpha}\big{)}\), implies, \[A_{n-1}\big{(}\lambda_{\alpha}\big{)}=A_{n-2}\big{(}\lambda_{\alpha}\big{)} \mathbf{1}^{n-2}+B_{n-2}\big{(}\lambda_{\alpha}\big{)}\mathbf{3}^{n-2}\equiv A _{n-2}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}+ \eta\sigma_{n-2}^{z}\big{)}+B_{n-2}\big{(}\lambda_{\alpha}\big{)}\text{sin} \big{(}2\eta\big{)}\sigma_{n-2}^{+}\enspace.\] Similarly, \[B_{n-1}\big{(}\lambda_{\alpha}\big{)}=A_{n-2}\big{(}\lambda_{\alpha}\big{)} \mathbf{2}^{n-2}+B_{n-2}\big{(}\lambda_{\alpha}\big{)}\mathbf{4}^{n-2}\equiv A _{n-2}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2\eta\big{)}\sigma_{n-2}^{- }+B_{n-2}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}- \eta\sigma_{n-2}^{z}\big{)}\enspace.\] ( 1 \[{}^{\ast}\] ) Rewriting (1) with the expression for \(A_{n-1}\big{(}\lambda_{\alpha}\big{)}\) and \(B_{n-1}\big{(}\lambda_{\alpha}\big{)}\) implies, \[(1)\equiv\bigg{(}A_{n-2}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(} \lambda_{\alpha}-v_{n-2}+\eta\sigma^{z}_{n-2}\big{)}+B_{n-2}\big{(}\lambda_{ \alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-2}\bigg{)}\!\sin\!\big{(} \lambda_{\alpha}-v_{n-1}+\eta\sigma^{z}_{n-1}\big{)}+\cdots\] \[\bigg{(}A_{n-2}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta \big{)}\sigma^{-}_{n-2}+B_{n-2}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(} \lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)}\bigg{)}\!\sin\!\big{(}2 \eta\big{)}\sigma^{+}_{n-1}\ \.\] Grouping together like terms from the expression above, \[A_{n-2}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\prod_{1\leq i\leq 2}\sin\! \big{(}\lambda_{\alpha}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}+\big{(}\!\sin\! \big{(}2\eta\big{)}\big{)}^{2}\prod_{1\leq i\leq 2}\sigma^{-,+}_{n-i}\bigg{)}+\cdots\] \[B_{n-2}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\!\sin\!\big{(}2 \eta\big{)}\sigma^{+}_{n-2}\!\sin\!\big{(}\lambda_{\alpha}-v_{n-1}+\eta\sigma ^{z}_{n-1}\big{)}+\sin\!\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2} \big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-1}\bigg{)}\ . \tag{2}\] Continuing along similar lines, in which we substitute for \(A_{n-2}\big{(}\lambda_{\alpha}\big{)}\) and \(B_{n-2}\big{(}\lambda_{\alpha}\big{)}\) to rewrite (2) implies, \[(2)\equiv\bigg{(}A_{n-3}\big{(}\lambda_{\alpha}\big{)}\!\sin\! 
\big{(}\lambda_{\alpha}-v_{n-3}+\eta\sigma^{z}_{n-3}\big{)}+B_{n-3}\big{(} \lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-3}\bigg{)} \bigg{(}\prod_{1\leq i\leq 2}\sin\!\big{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma^{z}_{n-i}\big{)}+\cdots\] \[\big{(}\!\sin\!\big{(}2\eta\big{)}\big{)}^{2}\prod_{1\leq i\leq 2 }\sigma^{-,+}_{n-i}\bigg{)}+\cdots\] \[\bigg{(}A_{n-3}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta \big{)}\sigma^{-}_{n-3}+B_{n-3}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(} \lambda_{\alpha}-v_{n-3}-\eta\sigma^{z}_{n-3}\big{)}\bigg{)}\times\cdots\] \[\bigg{(}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-2}\!\sin\!\big{(} \lambda_{\alpha}-v_{n-1}+\eta\sigma^{z}_{n-1}\big{)}+\sin\!\big{(}\lambda_{ \alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma ^{+}_{n-1}\bigg{)}\ \,\] from which performing additional rearrangements implies, for the first term, that, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\prod_{1\leq i\leq 3} \sin\!\big{(}\lambda_{\alpha}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}+B_{n- 3}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-3} \bigg{(}\prod_{1\leq i\leq 2}\sin\!\big{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma^{z}_{n-i}\big{)}\bigg{)}+\cdots\] \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\big{(}\!\sin\!\big{(}2\eta \big{)}\big{)}^{2}\!\sin\!\big{(}\lambda_{\alpha}-v_{n-3}+\eta\sigma^{z}_{n-3} \big{)}\bigg{(}\prod_{1\leq i\leq 2}\sigma^{-,+}_{n-i}\bigg{)}+B_{n-3}\big{(} \lambda_{\alpha}\big{)}\big{(}\!\sin\!\big{(}2\eta\big{)}\big{)}^{3}\bigg{(} \prod_{1\leq i\leq 3}\sigma^{-,+}_{n-i}\bigg{)}\ \,\] ( \[2^{*}\] ) and for the second term, that, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\big{(}\!\sin\!\big{(}2\eta \big{)}\big{)}^{2}\!\sin\!\big{(}\lambda_{\alpha}-v_{n-1}+\eta\sigma^{z}_{n-1} \big{)}\bigg{(}\prod_{2\leq i\leq 3}\sigma^{-,+}_{N-i}\bigg{)}+B_{n-3}\big{(} \lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-2}\times\cdots\] \[\bigg{(}\prod_{i\text{ odd : }1\leq i\leq 3}\sin\!\big{(}\lambda_{ \alpha}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}+A_{n-3}\big{(}\lambda_{ \alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\big{)}^{2}\!\sin\!\big{(}\lambda_{ \alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)}\times\cdots\] \[\bigg{(}\prod_{i\text{ odd : }1\leq i\leq 3}\sigma^{-,+}_{N-i}\bigg{)}+B_{n- 3}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-1}\bigg{(} \prod_{2\leq i\leq 3}\sin\!\big{(}\lambda_{\alpha}-v_{n-i}-\eta\sigma^{z}_{n-i}\big{)} \bigg{)}\ \.\] ( \[2^{**}\] ) To extrapolate the formula to \(A_{3}\big{(}\lambda_{\alpha}\big{)}\), from the previous two terms above, substitute for \(A_{n-3}\big{(}\lambda_{\alpha}\big{)}\) and \(B_{n-3}\big{(}\lambda_{\alpha}\big{)}\), in which, from (\(2^{*}\)), \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\prod_{1\leq i\leq 3}\sin\!\big{(} \lambda_{\alpha}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\!\sin\!\big{(}\lambda_{ \alpha}-v_{n-4}+\eta\sigma^{z}_{n-4}\big{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\! 
\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-4}\bigg{)}\bigg{(}\prod_{1\leq i\leq 3}\sin\!\big{(} \lambda_{\alpha}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\prod_{1\leq i\leq 4}\sin\bigl{(} \lambda_{\alpha}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}+B_{n-4}\big{(} \lambda_{\alpha}\big{)}\sin\bigl{(}2\eta\big{)}\sigma_{n-4}^{+}\bigg{(}\prod_{1 \leq i\leq 3}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)} \bigg{)}\ \.\] (2\[{}^{*}-1\] ) From \((2^{*})\), the second term with \(A_{n-3}\big{(}\lambda_{\alpha}\big{)}\), \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\bigl{(}\sin\bigl{(}2\eta\big{)}\big{)}^{2 }\sin\bigl{(}\lambda_{\alpha}-v_{n-3}+\eta\sigma_{n-3}^{z}\bigr{)}\bigg{(}\prod _{1\leq i\leq 2}\sigma_{n-i}^{-+}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}\lambda_{\alpha}-v_{ n-4}+\eta\sigma_{n-4}^{z}\bigr{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin \bigl{(}2\eta\big{)}\sigma_{n-4}^{+}\bigg{)}\bigl{(}\sin\bigl{(}2\eta\big{)} \big{)}^{2}\sin\bigl{(}\lambda_{\alpha}-v_{n-3}+\eta\sigma_{n-3}^{z}\big{)} \bigg{(}\prod_{1\leq i\leq 2}\sigma_{n-i}^{-,+}\bigg{)}\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\bigl{(}\sin\bigl{(}2\eta\big{)}\bigr{)}^{ 2}\bigg{(}\prod_{3\leq i\leq 4}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{(}\prod_{1\leq i\leq 2}\sigma_{n-i}^{-,+} \bigg{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\bigl{(}\sin\bigl{(}2\eta\big{)} \big{)}^{3}\bigg{(}\prod_{1\leq i\leq 2}\sigma_{n-i}^{-,+}\bigg{)}\times\cdots\] From \((2^{*})\), the third entry with \(B_{n-3}\big{(}\lambda_{\alpha}\big{)}\), \[B_{n-3}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}2\eta\big{)}\sigma_{n-3}^{+} \bigg{(}\prod_{1\leq i\leq 2}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}2\eta\big{)}\sigma_{ n-4}^{-}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}\lambda_{\alpha}-v_{ n-4}-\eta\sigma_{n-4}^{z}\bigr{)}\bigg{)}\sin\bigl{(}2\eta\big{)}\sigma_{n-3}^{+} \bigg{(}\prod_{1\leq i\leq 2}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{)}\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\bigl{(}\sin\bigl{(}2\eta \big{)}\big{)}^{2}\bigg{(}\prod_{3\leq i\leq 4}\sigma_{n-i}^{-,+}\bigg{)} \bigg{(}\prod_{1\leq i\leq 2}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{)}+\cdots\] \[B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}2\eta\big{)} \sigma_{n-3}^{+}\sin\bigl{(}\lambda_{\alpha}-v_{n-4}-\eta\sigma_{n-4}^{z}\big{)} \bigg{(}\prod_{1\leq i\leq 2}\sin\bigl{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{)}\ \.\] (2\[{}^{*}-3\] ) From \((2^{*})\), the fourth entry with \(B_{n-3}\big{(}\lambda_{\alpha}\big{)}\), \[B_{n-3}\big{(}\lambda_{\alpha}\big{)}\bigl{(}\sin\bigl{(}2\eta\big{)}\big{)}^{ 3}\bigg{(}\prod_{1\leq i\leq 3}\sigma_{n-i}^{-,+}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}2\eta\big{)}\sigma_{n-4 }^{-}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\bigl{(}\lambda_{\alpha}-v_{n-4}- \eta\sigma_{n-4}^{z}\big{)}\bigg{)}\bigl{(}\sin\bigl{(}2\eta\big{)}\big{)}^{3} \bigg{(}\prod_{1\leq i\leq 3}\sigma_{n-i}^{-,+}\bigg{)}\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{2} \sigma^{+}_{n-2}\sigma^{-}_{n-4}\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sin\!\big{(}\lambda_{\alpha}-v_{n-i}\pm\eta 
\sigma^{z}_{n-i}\big{)}\bigg{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\times\cdots\] From \((2^{**})\), the third term, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{2 }\sin\!\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)}\bigg{(}\prod _{i\text{ odd }:\,1\leq i\leq 3}\sigma^{-,+}_{N-i}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\!\big{(}\lambda_{\alpha}-v_ {n-4}+\eta\sigma^{z}_{n-4}\big{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin \!\big{(}2\eta\big{)}\sigma^{+}_{n-4}\bigg{)}\big{(}\sin\!\big{(}2\eta\big{)} \big{)}^{2}\sin\!\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)} \times\cdots\] From \((2^{**})\), the second term, \[B_{n-3}\big{(}\lambda_{\alpha}\big{)}\sin\!\big{(}2\eta\big{)}\sigma^{+}_{n-2 }\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sin\!\big{(}\lambda_{\alpha}-v_{n-i} \pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\!\big{(}2\eta\big{)}\sigma^ {-}_{n-4}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\!\big{(}\lambda_{\alpha} -v_{n-4}-\eta\sigma^{z}_{n-4}\big{)}\bigg{)}\sin\!\big{(}2\eta\big{)}\sigma^{ +}_{n-2}\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sin\!\big{(}\lambda_{\alpha}-v_{n-i} \pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\ \,\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{ 2}\sigma^{+}_{n-2}\sigma^{-}_{n-4}\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sin\!\big{(} \lambda_{\alpha}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}+B_{n-4}\big{(} \lambda_{\alpha}\big{)}\times\cdots\] From \((2^{**})\), the third term, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{ 2}\sin\!\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)}\bigg{(} \prod_{i\text{ odd }:\,1\leq i\leq 3}\sigma^{-,+}_{N-i}\bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\!\big{(}\lambda_{\alpha}-v_ {n-4}+\eta\sigma^{z}_{n-4}\big{)}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\sin\! 
\big{(}2\eta\big{)}\sigma^{+}_{n-4}\bigg{)}\big{(}\sin\!\big{(}2\eta\big{)} \big{)}^{2}\sin\!\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma^{z}_{n-2}\big{)} \times\cdots\] \[\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sigma^{-,+}_{N-i}\bigg{)}\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\big{(}\text{sin}\big{(}2 \eta\big{)}\big{)}^{2}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma_{n-2}^ {z}\big{)}\text{sin}\big{(}\lambda_{\alpha}-v_{n-4}+\eta\sigma_{n-4}^{z}\big{)} \bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sigma_{N-i}^{-,+}\bigg{)}+\cdots\] \[B_{n-4}\big{(}\lambda_{\alpha}\big{)}\big{(}\text{sin}\big{(}2 \eta\big{)}\big{)}^{3}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma_{n- 2}^{z}\big{)}\sigma_{n-4}^{+}\bigg{(}\prod_{i\text{ odd }:\,1\leq i\leq 3}\sigma_{N-i}^{-,+}\bigg{)}\ \.\] ( \[2^{**}-3\] ) From (\(2^{**}\)), the fourth term, \[B_{n-3}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2\eta\big{)}\sigma_{n- 1}^{+}\bigg{(}\prod_{2\leq i\leq 3}\text{sin}\big{(}\lambda_{\alpha}-v_{n-i}- \eta\sigma_{n-i}^{z}\big{)}\ \bigg{)}\ \,\] equals, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2 \eta\big{)}\sigma_{n-4}^{-}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin} \big{(}\lambda_{\alpha}-v_{n-4}-\eta\sigma_{n-4}^{z}\big{)}\bigg{)}\text{sin} \big{(}2\eta\big{)}\sigma_{n-1}^{+}\bigg{(}\prod_{2\leq i\leq 3}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}-\eta\sigma_{n-i}^{z}\big{)}\ \bigg{)}\ \,\] is equivalent to, \[A_{n-4}\big{(}\lambda_{\alpha}\big{)}\big{(}\text{sin}\big{(}2 \eta\big{)}\big{)}^{2}\sigma_{n-1}^{+}\sigma_{n-4}^{+}\bigg{(}\prod_{2\leq i \leq 3}\text{sin}\big{(}\lambda_{\alpha}-v_{n-i}-\eta\sigma_{n-i}^{z}\big{)} \ \bigg{)}+\cdots\] \[B_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2\eta \big{)}\sigma_{n-1}^{+}\bigg{(}\prod_{2\leq i\leq 4}\text{sin}\big{(}\lambda_{ \alpha}-v_{n-i}-\eta\sigma_{n-i}^{z}\big{)}\ \bigg{)}\.\] ( \[2^{**}-4\] ) Extrapolating the formulas from (1) and (\(2^{*}\)) yields, \[A_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}A_{3}\big{(}\lambda_ {\alpha}\big{)}+B_{3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\big{(}\text{sin} \big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{ -,+}\bigg{)}+\cdots\] \[\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_{3}\big{(}\lambda_ {\alpha}\big{)}\bigg{)}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\text{sin} \big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime }}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \.\] Along similar lines, repeating the computations since the beginning of the proof yields another formula for \(C_{n}\big{(}\lambda_{\alpha}\big{)}\), in which, \[C_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}C_{3}\big{(}\lambda_ {\alpha}\big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\big{(}\text{sin} \big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+} \bigg{)}+\cdots\] \[\bigg{(}C_{3}\big{(}\lambda_{\alpha}\big{)}+D_{3}\big{(}\lambda_ {\alpha}\big{)}\bigg{)}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\text{sin} \big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ 
\.\] To obtain the desired formula for \(B_{n-1}\big{(}\lambda_{\alpha}\big{)}\), we pursue the following steps. Rewriting (\(1^{*}\)), in terms of \(A_{n-2}\big{(}\lambda_{\alpha}\big{)}\) and \(B_{n-2}\big{(}\lambda_{\alpha}\big{)}\), implies, \[(1^{*})\equiv\bigg{(}A_{n-3}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(} \lambda_{\alpha}-v_{n-3}+\eta\sigma_{n-3}^{z}\big{)}+B_{n-3}\big{(}\lambda_{ \alpha}\big{)}\text{sin}\big{(}2\eta\big{)}\sigma_{n-3}^{+}\bigg{)}\text{sin} \big{(}2\eta\big{)}\sigma_{n-2}^{-}+\cdots\] which is equivalent to the superposition, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2\eta\big{)}\sigma_{n-2}^ {-}\text{sin}\big{(}\lambda_{\alpha}-v_{n-3}+\eta\sigma_{n-3}^{z}\big{)}+B_{n- 3}\big{(}\lambda_{\alpha}\big{)}\big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{2 }\bigg{(}\prod_{2\leq i\leq 3}\sigma_{n-i}^{+,-}\bigg{)}\ \,\] for the first term, and, \[A_{n-3}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2\eta\big{)}\sigma_{n-3 }^{-}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma_{n-2}^{z}\big{)}+B_ {n-3}\big{(}\lambda_{\alpha}\big{)}\bigg{(}\prod_{2\leq i\leq 3}\text{sin} \big{(}\lambda_{\alpha}-v_{n-i}-\eta\sigma_{n-i}^{z}\big{)}\ \,\] for the second term. From the two terms above, substituting for \(A_{n-3}\big{(}\lambda_{\alpha}\big{)}\) and \(B_{n-3}\big{(}\lambda_{\alpha}\big{)}\) yields, for the first term, while for the second term, \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2 \eta\big{)}\sigma_{n-4}^{-}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin} \big{(}2\eta\big{)}\sigma_{n-4}^{+}\bigg{)}\text{sin}\big{(}2\eta\big{)} \sigma_{n-3}^{-}\text{sin}\big{(}\lambda_{\alpha}-v_{n-2}-\eta\sigma_{n-2}^{z} \big{)}+\cdots\] \[\bigg{(}A_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin}\big{(}2 \eta\big{)}\sigma_{n-4}^{-}+B_{n-4}\big{(}\lambda_{\alpha}\big{)}\text{sin} \big{(}\lambda_{\alpha}-v_{n-4}-\eta\sigma_{n-4}^{z}\big{)}\bigg{)}\bigg{(} \prod_{2\leq i\leq 3}\text{sin}\big{(}\lambda_{\alpha}-v_{n-i}-\eta\sigma_{n-i}^{z} \big{)}\bigg{)}\ \.\] (1 \[{}^{*}-2\] Performing rearrangements from \((1^{*}-1)\) implies, while performing rearrangements from \((1^{*}-2)\) of terms implies, Extrapolating the formulas from previous computations yields, \[B_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_ {3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\big{(}\sin(2\eta)\big{)}^{n-4}\bigg{(} \prod_{2\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\cdots\] \[\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\prod_{2\leq i\leq n-3}\sin\big{(}\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\big{)}\bigg{)}+\cdots\] \[\bigg{(}A_{3}\big{(}\lambda_{\alpha}\big{)}+B_{3}\big{(}\lambda_{\alpha}\big{)} \bigg{)}\bigg{(}\sum_{\begin{array}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{array}}\bigg{[}\bigg{(}\prod_{2\leq i\leq m}\sin\big{(} \lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin(2\eta)\big{)}^{n^{\prime}-1}\bigg{(}\prod_{2\leq j\leq n^{\prime}} \sigma_{n-i}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \.\] Along similar lines, repeating the computations since the beginning of the proof yields another formula for \(D_{n}\big{(}\lambda_{\alpha}\big{)}\), in which, \[D_{n}\big{(}\lambda_{\alpha}\big{)}=\bigg{(}C_{3}\big{(}\lambda_{\alpha} \big{)}+D_{3}\big{(}\lambda_{\alpha}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta \big{)}\big{)}^{n-4}\bigg{(}\prod_{2\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\cdots\] 
\[\bigg{(}C_{3}\big{(}\lambda_{\alpha}\big{)}+D_{3}\big{(}\lambda_{\alpha} \big{)}\bigg{)}\bigg{(}\prod_{\begin{array}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{array}}\bigg{[}\bigg{(}\prod_{2\leq i\leq m}\sin\big{(} \lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{2\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \,\] from which we conclude the argument. With the expressions from **Lemma 5** for \(A_{n}\big{(}\lambda_{\alpha}\big{)}\), \(B_{n}\big{(}\lambda_{\alpha}\big{)}\), \(C_{n}\big{(}\lambda_{\alpha}\big{)}\) and \(D_{n}\big{(}\lambda_{\alpha}\big{)}\), we continue with the following computation. Recall, before **Lemma 5**, that we demonstrated that the product of L operators, \[\prod_{0\leq i\leq 3}\ \begin{bmatrix}\mathbf{1}^{i}&\mathbf{2}^{i}\\ \mathbf{3}^{i}&\mathbf{4}^{i}\end{bmatrix}\] has the expansion, To make use of the expressions in the matrix product above, we perform additional computations below, in which, \[\begin{bmatrix}\mathbf{1}^{i-2}&\mathbf{2}^{i-2}\\ \mathbf{3}^{i-2}&\mathbf{4}^{i-2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-1}& \mathbf{2}^{i-1}\\ \mathbf{3}^{i-1}&\mathbf{4}^{i-1}\end{bmatrix}\equiv\begin{bmatrix}\mathbf{1}^ {i-2}\mathbf{1}^{i-1}+\mathbf{2}^{i-2}\mathbf{3}^{i-1}&\mathbf{1}^{i-2} \mathbf{2}^{i-1}+\mathbf{2}^{i-2}\mathbf{4}^{i-1}\\ \mathbf{3}^{i-2}\mathbf{1}^{i-1}+\mathbf{4}^{i-2}\mathbf{3}^{i-1}&\mathbf{3}^{ i-2}\mathbf{2}^{i-1}+\mathbf{4}^{i-1}\mathbf{4}^{i-2}\end{bmatrix}\ \.\] Substituting the expression for the product above with the \(i-3\) th L operator implies, \[\begin{bmatrix}\mathbf{1}^{i-3}&\mathbf{2}^{i-3}\\ \mathbf{3}^{i-3}&\mathbf{4}^{i-3}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-2} \mathbf{1}^{i-1}+\mathbf{2}^{i-2}\mathbf{3}^{i-1}&\mathbf{1}^{i-2}\mathbf{2}^ {i-1}+\mathbf{2}^{i-2}\mathbf{4}^{i-1}\\ \mathbf{3}^{i-2}\mathbf{1}^{i-1}+\mathbf{4}^{i-2}\mathbf{3}^{i-1}&\mathbf{3}^{ i-2}\mathbf{2}^{i-1}+\mathbf{4}^{i-1}\mathbf{4}^{i-2}\end{bmatrix}\ \,\] is equal to, \[\begin{bmatrix}\prod_{1\leq j\leq 3}\mathbf{1}^{i-j}+\mathbf{1}^{i-3} \mathbf{2}^{i-2}\mathbf{3}^{i-1}+\mathbf{1}^{i-1}\mathbf{2}^{i-3}\mathbf{3}^{i- 2}+\mathbf{2}^{i-3}\mathbf{3}^{i-1}\mathbf{4}^{i-2}&\cdots\\ \mathbf{3}^{i-3}\prod_{1\leq j\leq 2}\mathbf{1}^{i-j}+\mathbf{2}^{i-2}\prod_{j \text{ odd }:1\leq j\leq 3}\mathbf{3}^{i-j}+\mathbf{1}^{i-1}\mathbf{3}^{i-2} \mathbf{4}^{i-3}+\mathbf{3}^{i-1}\prod_{2\leq j\leq 3}\mathbf{4}^{i-j}&\cdots\end{bmatrix}\] \[\begin{bmatrix}\prod_{2\leq j\leq 3}\mathbf{1}^{i-j}\mathbf{2}^{i-1}+\mathbf{1}^{i-3} \mathbf{2}^{i-2}\mathbf{4}^{i-1}+\prod_{j\text{ odd }:1\leq j\leq 3}\mathbf{2}^{i-j}\mathbf{3}^{i-2}+ \mathbf{2}^{i-3}\prod_{1\leq j\leq 3}\mathbf{4}^{i-j}\\ \mathbf{1}^{i-2}\mathbf{2}^{i-1}\mathbf{3}^{i-3}+\mathbf{2}^{i-2}\mathbf{3}^{i- 3}\mathbf{4}^{i-1}+\mathbf{2}^{i-1}\mathbf{3}^{i-2}\mathbf{4}^{i-3}+\prod_{1\leq j \leq 3}\mathbf{4}^{i-j}\end{bmatrix}\ \,\] from the fact that, each entry of the matrix above is respectively given by, \[\begin{array}{l}{\bf 1}^{i-3}\big{(}\prod_{1\leq j\leq 3}{\bf 1}^{i-j}+{\bf 2}^{i-2}{ \bf 3}^{i-1}\big{)}+{\bf 2}^{i-3}\big{(}{\bf 3}^{i-2}{\bf 1}^{i-1}+{\bf 4}^{i-2}{ \bf 3}^{i-1}\big{)}\ \,\\ \\ {\bf 1}^{i-3}\big{(}{\bf 1}^{i-2}{\bf 2}^{i-1}+{\bf 2}^{i-2}{\bf 4}^{i-1}\big{)}+{ \bf 2}^{i-3}\big{(}{\bf 3}^{i-2}{\bf 2}^{i-1}+\prod_{1\leq j\leq 2}{\bf 4}^{i-j} \big{)}\ \,\\ \\ {\bf 3}^{i-3}\big{(}\prod_{1\leq j\leq 2}{\bf 1}^{i-j}+{\bf 2}^{i-2}{\bf 3}^{i-1} \big{)}+{\bf 
4}^{i-3}\big{(}{\bf 3}^{i-2}{\bf 1}^{i-1}+{\bf 4}^{i-2}{\bf 3}^{i-1} \big{)}\ \,\\ \\ {\bf 3}^{i-3}\big{(}{\bf 1}^{i-2}{\bf 2}^{i-1}+{\bf 2}^{i-2}{\bf 4}^{i-1} \big{)}+{\bf 4}^{i-3}\big{(}{\bf 3}^{i-2}{\bf 2}^{i-1}+\prod_{1\leq j\leq 2}{ \bf 4}^{i-j}\big{)}\ \.\end{array}\] Iterating the computation, by making use of the expression above, implies that the matrix product, \[\begin{array}{l}\begin{bmatrix}\prod_{1\leq j\leq 3}{\bf 1}^{i-j}+{\bf 1}^{i-3}{ \bf 2}^{i-2}{\bf 3}^{i-1}+{\bf 1}^{i-1}{\bf 2}^{i-3}{\bf 3}^{i-2}+{\bf 2}^{i-3}{ \bf 3}^{i-1}{\bf 4}^{i-2}&\cdots\\ {\bf 3}^{i-3}\prod_{1\leq j\leq 2}{\bf 1}^{i-j}+{\bf 2}^{i-2}{\bf 1}\prod_{j\ \rm odd \ :1\leq j\leq 3}{\bf 3}^{i-j}+{\bf 1}^{i-1}{\bf 3}^{i-2}{\bf 4}^{i-3}+{\bf 3}^{i-1}\prod_{2\leq j\leq 3 }{\bf 4}^{i-j}&\cdots\end{bmatrix}\\ \begin{bmatrix}\prod_{2\leq j\leq 3}{\bf 1}^{i-j}{\bf 2}^{i-1}+{\bf 1}^{i-3}{ \bf 2}^{i-2}{\bf 4}^{i-1}+\prod_{j\ \rm odd\ :1\leq j\leq 3}{\bf 2}^{i-j}{\bf 3}^{i-2}+{\bf 2}^{i-3} \prod_{1\leq j\leq 2}{\bf 4}^{i-j}\end{bmatrix}\begin{bmatrix}{\bf 1}^{i-1}&{ \bf 2}^{i-1}\\ {\bf 3}^{i-1}&{\bf 4}^{i-1}\end{bmatrix}\ \,\end{array}\] is equivalent to the matrix with entries that are respectively given by, \[\begin{array}{l}\mbox{First entry}\equiv\ \ \left\{\begin{array}{l}{\bf 1}^{i-1} \big{(}\prod_{1\leq j\leq 3}{\bf 1}^{i-j}+{\bf 1}^{i-3}{\bf 2}^{i-2}{\bf 3}^{i-1}+{ \bf 1}^{i-1}{\bf 2}^{i-3}{\bf 3}^{i-2}+{\bf 2}^{i-3}{\bf 3}^{i-1}{\bf 4}^{i-2} \big{)}+\cdots\\ {\bf 3}^{i-1}\big{(}\prod_{2\leq j\leq 3}{\bf 1}^{i-j}{\bf 2}^{i-1}+{\bf 1}^{i-3}{ \bf 2}^{i-2}{\bf 4}^{i-1}+\prod_{j\ \rm odd\ :1\leq j\leq 3}{\bf 2}^{i-j}{\bf 3}^{i-2}+{ \bf 2}^{i-3}\prod_{1\leq j\leq 2}{\bf 4}^{i-j}\big{)}\end{array}\right.\ \,\end{array}\right.\ \ \[\begin{bmatrix}\mathbf{1}^{i-3}&\mathbf{2}^{i-3}\\ \mathbf{3}^{i-3}&\mathbf{4}^{i-3}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-2}& \mathbf{2}^{i-2}\\ \mathbf{3}^{i-2}&\mathbf{4}^{i-2}\end{bmatrix}\begin{bmatrix}\mathbf{1}^{i-1}& \mathbf{2}^{i-1}\\ \mathbf{3}^{i-1}&\mathbf{4}^{i-1}\end{bmatrix}\equiv\begin{bmatrix}\text{ First entry}&\text{Second entry}\\ \text{Third entry}&\text{Fourth entry}\end{bmatrix}\enspace,\] from which the desired form for the product of \(n\) L operators, \[\prod_{1\leq i\leq n}\begin{bmatrix}\mathbf{1}^{i}&\mathbf{2}^{i}\\ \mathbf{3}^{i}&\mathbf{4}^{i}\end{bmatrix}\enspace,\] would take the desired form, in which, \[\begin{bmatrix}\text{First entry}&\text{Second entry}\\ \text{Third entry}&\text{Fourth entry}\end{bmatrix}\] is equal to, \[\begin{bmatrix}\begin{pmatrix}A_{i-3}\big{(}\lambda_{\alpha}\big{)}+B_{i-3} \big{(}\lambda_{\alpha}\big{)}\end{pmatrix}\begin{pmatrix}\left(\sin\!\left(2 \eta\right)\right)^{n-\left(i-3\right)}\!\mathscr{A}_{1}^{\prime}+\mathscr{A }_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\end{pmatrix}&\cdots\\ \begin{pmatrix}\left(C_{i-3}\big{(}\lambda_{\alpha}\big{)}+D_{i-3}\big{(} \lambda_{\alpha}\big{)}\right)\end{pmatrix}\begin{pmatrix}\left(\sin\!\left(2 \eta\right)\right)^{n-\left(i-3\right)}\!\mathscr{C}_{1}^{\prime}+\mathscr{C}_ {2}^{\prime}+\mathscr{C}_{3}^{\prime}\end{pmatrix}&\cdots\end{bmatrix}\] for, \[\mathscr{A}_{1}^{\prime}\equiv\mathscr{C}_{1}^{\prime}\equiv \prod_{1\leq i\leq n-\left(i-3\right)}\!\sigma_{n-i}^{-,+}\enspace,\] \[\mathscr{A}_{2}^{\prime}\equiv\mathscr{C}_{2}^{\prime}\equiv \prod_{1\leq i\leq n-\left(i-3\right)}\sin\!\left(\lambda_{\alpha}-v_{n-i}+\eta \sigma_{n-i}^{z}\right)\enspace,\] \[\mathscr{A}_{3}^{\prime}\equiv\mathscr{C}_{3}^{\prime}\equiv \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ 
m+n^{\prime}=n-(i-3)\end{subarray}}\left[\bigg(\prod_{1\leq i\leq m}\sin\big(\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\bigg)\,\big(\sin(2\eta)\big)^{n^{\prime}-1}\bigg(\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg)\right]\ ,\]
and,
\[\mathscr{B}_{1}^{\prime}\equiv\mathscr{D}_{1}^{\prime}\equiv\prod_{2\leq i\leq n-(i-3)}\sigma_{n-i}^{-,+}\ ,\]
\[\mathscr{B}_{2}^{\prime}\equiv\mathscr{D}_{2}^{\prime}\equiv\prod_{2\leq i\leq n-(i-3)}\sin\big(\lambda_{\alpha}-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\ ,\]
\[\mathscr{B}_{3}^{\prime}\equiv\mathscr{D}_{3}^{\prime}\equiv\sum_{\begin{subarray}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-(i-3)\end{subarray}}\left[\bigg(\prod_{2\leq i\leq m}\sin\big(\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\bigg)\,\big(\sin(2\eta)\big)^{n^{\prime}-1}\bigg(\prod_{2\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg)\right]\ .\]
Iterating the previous arguments for computing the entries of each L operator, from the product,
\[\begin{bmatrix}\big(A_{i-3}\big(\lambda_{\alpha}\big)+B_{i-3}\big(\lambda_{\alpha}\big)\big)\Big(\big(\sin(2\eta)\big)^{n-(i-3)}\mathscr{A}_{1}^{\prime}+\mathscr{A}_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\Big)&\cdots\\ \big(C_{i-3}\big(\lambda_{\alpha}\big)+D_{i-3}\big(\lambda_{\alpha}\big)\big)\Big(\big(\sin(2\eta)\big)^{n-(i-3)}\mathscr{C}_{1}^{\prime}+\mathscr{C}_{2}^{\prime}+\mathscr{C}_{3}^{\prime}\Big)&\cdots\end{bmatrix}\ ,\]
one obtains the analogous expressions for \(\{\mathscr{A}_{i}\}_{1\leq i\leq 3}\), \(\{\mathscr{B}_{i}\}_{1\leq i\leq 3}\), \(\{\mathscr{C}_{i}\}_{1\leq i\leq 3}\) and \(\{\mathscr{D}_{i}\}_{1\leq i\leq 3}\) from \(\{\mathscr{A}_{i}^{\prime}\}_{1\leq i\leq 3}\), \(\{\mathscr{B}_{i}^{\prime}\}_{1\leq i\leq 3}\), \(\{\mathscr{C}_{i}^{\prime}\}_{1\leq i\leq 3}\) and \(\{\mathscr{D}_{i}^{\prime}\}_{1\leq i\leq 3}\). We conclude the argument. \(\qed\)

### Returning to the quantum monodromy matrix

From the expressions obtained for the entries of the monodromy matrix,
\[\begin{bmatrix}A\big(\lambda_{\alpha}\big)&B\big(\lambda_{\alpha}\big)\\ C\big(\lambda_{\alpha}\big)&D\big(\lambda_{\alpha}\big)\end{bmatrix}\ ,\]
in terms of \(A_{3}\big(\lambda_{\alpha}\big)\), \(B_{3}\big(\lambda_{\alpha}\big)\), \(C_{3}\big(\lambda_{\alpha}\big)\) and \(D_{3}\big(\lambda_{\alpha}\big)\), below we perform computations with respect to the Poisson bracket and restate the main result of the paper, which was provided earlier in _1.3_.
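As a preliminary numerical check, the product structure above can be realized concretely. The sketch below builds the entries \(\mathbf{1}^{k}=\sin\big(\lambda-v_{k}+\eta\sigma_{k}^{z}\big)\), \(\mathbf{2}^{k}=\sin\big(2\eta\big)\sigma_{k}^{-}\), \(\mathbf{3}^{k}=\sin\big(2\eta\big)\sigma_{k}^{+}\), \(\mathbf{4}^{k}=\sin\big(\lambda-v_{k}-\eta\sigma_{k}^{z}\big)\) as matrices on \(\big(\mathbf{C}^{2}\big)^{\otimes N}\), assembles the monodromy matrix through the block recursion from the proof of **Lemma 5**, and verifies that the transfer matrices \(A\big(\lambda\big)+D\big(\lambda\big)\) commute at distinct spectral parameters. The convention \(\sigma^{z}=\mathrm{diag}\big(1,-1\big)\), the site ordering, and all parameter values are assumptions made for illustration.

```python
# Numerical sketch (illustrative conventions): realize the L-operator entries
# on (C^2)^{tensor N}, build the monodromy matrix by the block recursion
# A_n = A_{n-1} 1^{n-1} + B_{n-1} 3^{n-1}, etc., and check that the transfer
# matrices t(lambda) = A(lambda) + D(lambda) commute for distinct lambda.
import numpy as np

N, eta = 4, 0.23
v = np.array([0.11, 0.42, -0.31, 0.05])       # assumed inhomogeneities v_k

def site_op(op, k):
    """Embed a 2x2 single-site operator at site k of an N-site chain."""
    out = np.array([[1.0]])
    for m in range(N):
        out = np.kron(out, op if m == k else np.eye(2))
    return out

s_z = np.diag([1.0, -1.0])
s_plus = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^+
s_minus = s_plus.T                            # sigma^-

def monodromy(lam):
    """Ordered product of L operators, kept as four blocks A, B, C, D."""
    A = B = C = D = None
    for k in range(N):
        d = np.diag(site_op(s_z, k))          # eigenvalues of sigma^z_k
        e1 = np.diag(np.sin(lam - v[k] + eta * d))    # entry 1^k
        e2 = np.sin(2 * eta) * site_op(s_minus, k)    # entry 2^k
        e3 = np.sin(2 * eta) * site_op(s_plus, k)     # entry 3^k
        e4 = np.diag(np.sin(lam - v[k] - eta * d))    # entry 4^k
        if A is None:
            A, B, C, D = e1, e2, e3, e4
        else:  # block recursion from the proof of Lemma 5
            A, B, C, D = (A @ e1 + B @ e3, A @ e2 + B @ e4,
                          C @ e1 + D @ e3, C @ e2 + D @ e4)
    return A, B, C, D

A1, _, _, D1 = monodromy(0.37)
A2, _, _, D2 = monodromy(-0.58)
t1, t2 = A1 + D1, A2 + D2
print(np.allclose(t1 @ t2, t2 @ t1))          # transfer matrices commute
```

The commutation of the transfer matrices is the quantum counterpart of the vanishing Poisson brackets sought below among the sixteen relations.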
Before stating the final result that needs to be proved, we remind the reader of relationships obtained in the expressions for the monodromy matrix of the nonlinear Schrodinger's equation provided in _1.4.1_, in which, as \(x\longrightarrow+\infty\) and as \(y\longrightarrow-\infty\), independently of \(x\) and \(y\), the monodromy matrix, [5]
\[T\big(\lambda\big)\equiv\begin{bmatrix}A\big(\lambda\big)&B\big(\lambda\big)\\ C\big(\lambda\big)&D\big(\lambda\big)\end{bmatrix}\ ,\]
can be expressed through the infinite-volume limit,
\[\lim_{N\longrightarrow+\infty}T_{N}\big(\lambda\big)\equiv T\big(\lambda\big)=\lim_{\begin{subarray}{c}x\longrightarrow+\infty\\ y\longrightarrow-\infty\end{subarray}}E\big(-x,\lambda\big)T\big(x,y,\lambda\big)E\big(y,\lambda\big)\ ,\]
where \(E\) is the matrix exponential proportional to the Pauli matrix \(U_{1}\),
\[E\equiv E\big(x,\lambda\big)\equiv\exp\big(\lambda xU_{1}\big)\ .\]
For the inhomogeneous six-vertex model, we take the Poisson bracket of the tensor product between \(T_{a}\big(u,\{v_{k}\}\big)\) and \(T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)\),
\[T_{a}\big(u,\{v_{k}\},H,V\big)\equiv T_{a}\big(u,\{v_{k}\}\big)\equiv\begin{bmatrix}A\big(u\big)&B\big(u\big)\\ C\big(u\big)&D\big(u\big)\end{bmatrix}\ ,\]
with,
\[T_{a}\big(u^{\prime},\{v^{\prime}_{k}\},H,V\big)\equiv T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)\equiv\begin{bmatrix}A\big(u^{\prime}\big)&B\big(u^{\prime}\big)\\ C\big(u^{\prime}\big)&D\big(u^{\prime}\big)\end{bmatrix}\ ,\]
in which,
\[\big\{T_{a}\big(u,\{v_{k}\}\big)\otimes T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)\big\}=\Big(r_{a,+}\big(v_{k}-v^{\prime}_{k}\big)T_{a}\big(u,\{v_{k}\}\big)\Big)\otimes T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)-T_{a}\big(u,\{v_{k}\}\big)\otimes\Big(T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)r_{a,-}\big(v_{k}-v^{\prime}_{k}\big)\Big)\ ,\]
corresponding to the Poisson bracket of the tensor product of \(T_{a}\big(u,\{v_{k}\}\big)\) with \(T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)\), where,
\[r_{a,-}\big(v_{k}-v^{\prime}_{k}\big)=\lim_{y\longrightarrow-\infty}\bigg(E^{6V}\big(u^{\prime},v^{\prime}_{k}-v_{k}\big)\otimes\Big(E^{6V}\big(u^{\prime},v^{\prime}_{k}-v_{k}\big)r_{a}\big(v_{k}-v^{\prime}_{k}\big)\Big)\bigg)\ ,\]
for the spectral parameter \(v^{\prime}_{k}\) at \(u^{\prime}\), and,
\[r_{a}\big(v_{k}-v^{\prime}_{k}\big)\equiv E^{6V}\big(u^{\prime},v^{\prime}_{k}-v_{k}\big)\otimes\Big(E^{6V}\big(u^{\prime},v^{\prime}_{k}-v_{k}\big)r_{a}\big(v_{k}-v^{\prime}_{k}\big)\Big)\ .\]
To analyze the expression,
\[E^{6V}\big(x-v_{k},x\big)=\exp\Big[\coth\big(\tfrac{\eta}{2}+i\alpha_{j}-v_{k}\big)\Big]=\exp\big[\coth^{j}\big]\ ,\]
we first record the following result.

**Lemma** _BE 1_ (_mapping of the Bethe equations into a higher-dimensional space_). Fix solutions to the Bethe equations, \(\alpha_{j}\).
For each \(j\), there exists functions \(\mathscr{U}_{1}^{j},\mathscr{U}_{2}^{j},\mathscr{U}_{3}^{j}\) and \(\mathscr{U}_{4}^{j}\), satisfying the following conditions, for functions, \[F_{1}\equiv F_{1}\big{(}\frac{1}{2},\alpha_{j},-v_{k}\big{)}\equiv\sinh\bigl{(} \frac{\eta}{2}+i\alpha_{j}-v_{k}\bigr{)}\ \,\] and, \[F_{2}\equiv F_{2}\big{(}\frac{1}{2},-\alpha_{j},v_{k}\big{)}\equiv\sinh\bigl{(} \frac{\eta}{2}-i\alpha_{j}+v_{k}\bigr{)}\ \,\] from which one obtains the system, \[\frac{\partial F_{1}}{\partial v_{k}}=\mathscr{U}_{1}^{j}\biggl{(}\frac{\eta }{2},\alpha_{j},-v_{k}\biggr{)}F_{1}\ \,\] for terms on the left hand side of the Bethe equations, under the product from \(k=1\) to \(k=N\), while, for terms on the right hand side of the Bethe equations, introduce, \[F_{3}\equiv F_{3}\big{(}1,\alpha_{j}-\alpha_{m},0\big{)}\equiv\sinh\bigl{(}i \bigl{(}\alpha_{j}-\alpha_{m}\bigr{)}+\eta\bigr{)}\bigr{)}\ \,\] and, \[F_{4}\equiv F_{4}\big{(}-1,\alpha_{j}-\alpha_{m},0\big{)}\equiv\sinh\bigl{(}i \bigl{(}\alpha_{j}-\alpha_{m}\bigr{)}-\eta\bigr{)}\bigr{)}\ \,\] from which one obtains the system, \[\frac{\partial F_{3}}{\partial\alpha_{m}}=\mathscr{U}_{3}^{j} \biggl{(}1,\alpha_{j}-\alpha_{m},0\biggr{)}F_{3}\bigl{(}1,\alpha_{j}-\alpha_{m },0\bigr{)}\ \,\] \[\frac{\partial F_{4}}{\partial\alpha_{m}}=\mathscr{U}_{4}^{j} \biggl{(}-1,\alpha_{j}-\alpha_{m},0\biggr{)}F_{4}\bigl{(}-1,\alpha_{j}-\alpha_ {m},0\bigr{)}\ \,\] under the product from \(m=1\) to \(m=n\), except for \(m=j\). For any collection of solutions \(\bigl{\{}\alpha_{j}\bigr{\}}_{j\in\mathbf{N}}\), \[\mathscr{U}_{1}^{\mathbf{N}}\biggl{(}\frac{\eta}{2},\alpha_{j},- v_{k}\biggr{)} =\bigcup_{j\in\mathbf{N}}\mathscr{U}_{1}^{j}\biggl{(}\frac{\eta}{ 2},\alpha_{j},-v_{k}\biggr{)}\ \,\] \[\mathscr{U}_{2}^{\mathbf{N}}\biggl{(}\frac{\eta}{2},-\alpha_{j},v _{k}\biggr{)} =\bigcup_{j\in\mathbf{N}}\mathscr{U}_{2}^{j}\biggl{(}\frac{\eta}{ 2},-\alpha_{j},v_{k}\biggr{)}\ \,\] \[\mathscr{U}_{3}^{\mathbf{N}}\biggl{(}1,\alpha_{j}-\alpha_{m},0 \biggr{)} =\bigcup_{j\in\mathbf{N}}\mathscr{U}_{3}^{j}\biggl{(}1,\alpha_{j}- \alpha_{m},0\biggr{)}\ \,\] \[\mathscr{U}_{4}^{\mathbf{N}}\biggl{(}-1,\alpha_{j}-\alpha_{m},0 \biggr{)} =\bigcup_{j\in\mathbf{N}}\mathscr{U}_{4}^{j}\biggl{(}-1,\alpha_{j}- \alpha_{m},0\biggr{)}\ \.\] _Proof of Lemma BE 1._ Observe, from, \[\frac{\partial}{\partial v_{k}}\bigl{(}\sinh\bigl{(}\frac{\eta}{2}+i\alpha_{j} -v_{k}\bigr{)}\bigr{)}\ \,\] that one can write, \[\frac{\partial}{\partial\alpha_{m}}\big{(}\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{ m}\big{)}-\eta\big{)}\big{)}\ \,\] that one can write, \[\frac{\partial}{\partial v_{k}}\big{(}\sinh\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v _{k}\big{)}\big{)}\ \,\] that one can write, \[\frac{\bigg{(}\frac{\sinh\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k} \big{)}}{\sinh\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k}\big{)}}\bigg{)}\ \bigg{(}\frac{\cosh\! 
\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k}\big{)}}{\cosh\!\big{(}\frac{\eta}{2}+i \alpha_{j}+v_{k}\big{)}}\bigg{)}\!\cosh\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{ k}\big{)}=\bigg{(}\coth\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k}\big{)}\bigg{)} \!\sinh\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k}\big{)}\ \,\] implying, \[\mathscr{U}_{1}^{j}=\coth\!\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}\ \,\] and, \[\mathscr{U}_{2}^{j}=\coth\!\big{(}\frac{\eta}{2}+i\alpha_{j}+v_{k}\big{)}\ \.\] For the remaining two functions, similarly observe, from, \[\frac{\partial}{\partial\alpha_{m}}\big{(}\sinh\!\big{(}i\big{(}\alpha_{j}- \alpha_{m}\big{)}+\eta\big{)}\big{)}\ \,\] that one can write, \[-\bigg{(}\frac{\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)} }{\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}}\bigg{)} \bigg{(}\frac{\cosh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}} {\cosh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}}\bigg{)}\! \cosh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}=-\bigg{(}\coth \!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}\bigg{)}\times\cdots\] \[\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}\ \.\] Also, observe from, \[\frac{\partial}{\partial\alpha_{m}}\big{(}\sinh\!\big{(}i\big{(}\alpha_{j}- \alpha_{m}\big{)}-\eta\big{)}\big{)}\ \,\] that one can write, \[-\bigg{(}\frac{\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)} }{\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}}\bigg{)}\bigg{(} \frac{\cosh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}}{\cosh\! \big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}}\bigg{)}\!\cosh\! \big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}=-\bigg{(}\coth\!\big{(} i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}\bigg{)}\times\cdots\] \[\sinh\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}\ \,\] implying, \[\mathscr{U}_{3}^{j}=-\coth\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta \big{)}\ \.\] and, \[\mathscr{U}_{4}^{j}=\coth\!\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta \big{)}\ \.\] One can directly obtain the functions \(\mathscr{U}_{1}^{j},\mathscr{U}_{2}^{j},\mathscr{U}_{3}^{j}\) and \(\mathscr{U}_{4}^{j}\) for any \(j\), which take the form as unions over \(\mathbf{N}\) provided at the end of the statement for **Lemma _BE 1_, from which we conclude the argument. **Lemma _BE 2_** (_mapping the Bethe equations to the four functions obtained in the previous result_). From the functions obtained in the previous result, the Bethe equations can be mapped to the relation, \[\prod_{k=1}^{N}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! from which we conclude the argument. \(\qed\) From the previous two results, in the next result below we express \(\frac{F_{1}}{F_{2}}\) and \(\frac{F_{3}}{F_{4}}\), in terms of basis functions that are dependent on each solution \(\alpha_{k}\) to the Bethe equations. **Lemma** _BE 3_ (_expressing \(F_{1},F_{2},F_{3},\) and \(F_{4}\) in the solution space for the Bethe equation_). There exists functions \(\mathscr{S}_{1}\equiv\mathscr{S}_{1}\big{(}\eta,\alpha_{j},v_{k}\big{)}\) and \(\mathscr{S}_{2}\equiv\mathscr{S}_{2}\big{(}\eta,\alpha_{j},v_{k}\big{)}\), for which, \[F_{1,2}=\mathscr{S}_{1}F_{1}+\mathscr{S}_{2}F_{2}\ \,\] in the basis spanned by \(F_{1}\) and \(F_{2}\), as well as functions \(\mathscr{S}_{3}\equiv\mathscr{S}_{3}\big{(}\eta,\alpha_{j},v_{k}\big{)}\) and \(\mathscr{S}_{4}\equiv\mathscr{S}_{4}\big{(}\eta,\alpha_{j},v_{k}\big{)}\), for which, \[F_{3,4}=\mathscr{S}_{3}F_{3}+\mathscr{S}_{4}F_{4}\ \,\] in the basis spanned by \(F_{3}\) and \(F_{4}\). _Proof of Lemma BE 3_. By the definition of hyperbolic sine, write, \[\frac{F_{1}\big{(}\frac{1}{2},\alpha_{j},-v_{k}\big{)}}{F_{2}\big{(}\frac{1}{ 2},-\alpha_{j},v_{k}\big{)}}=\frac{\sinh\big{(}\frac{\eta}{2}+i\alpha_{j}-v_ {k}\big{)}}{\sinh\big{(}\frac{\eta}{2}-i\alpha_{j}+v_{k}\big{)}}=\frac{\frac{1 }{2}}{\frac{1}{2}}\bigg{(}\frac{-\exp\big{(}-\big{(}\frac{\eta}{2}+i\alpha_{j }-v_{k}\big{)}\big{)}+\exp\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}}{- \exp\big{(}-\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}\big{)}+\exp\big{(} \frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}}\bigg{)}\ \,\] which is equivalent to, \[\frac{-\exp\big{(}-\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}\big{)}}{- \exp\big{(}-\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}\big{)}+\exp\big{(} \frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}}+\frac{\exp\big{(}\frac{\eta}{2}+i \alpha_{j}-v_{k}\big{)}}{-\exp\big{(}-\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k} \big{)}\big{)}+\exp\big{(}\frac{\eta}{2}+i\alpha_{j}-v_{k}\big{)}}\ \,\] after having separated exponentials in the numerator of the expression, which is in turn also equivalent to, \[\frac{F_{3}\big{(}1,\alpha_{j}-\alpha_{m},0\big{)}}{F_{4}\big{(}-1,\alpha_{j }-\alpha_{m},0\big{)}}=\frac{\sinh\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+ \eta\big{)}}{\sinh\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}}= \frac{\frac{1}{2}}{\frac{1}{2}}\bigg{(}\frac{\exp\big{(}i\big{(} \alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}-\exp\big{(}\big{(}i\big{(}\alpha_{m} -\alpha_{j}\big{)}-\eta\big{)}\big{)}}{\exp\big{(}i\big{(}\alpha_{j}-\alpha_{ m}\big{)}-\eta\big{)}-\exp\big{(}i\big{(}\alpha_{m}-\alpha_{j}\big{)}+\eta \big{)}}\bigg{)}\ \,\] which is equivalent to, \[\frac{\exp\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}+\eta\big{)}}{\exp\big{(} i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)}-\exp\big{(}i\big{(}\alpha_{m} -\alpha_{j}\big{)}+\eta\big{)}}-\frac{\exp\big{(}i\big{(}\alpha_{m}-\alpha_{j} \big{)}-\eta\big{)}}{\exp\big{(}i\big{(}\alpha_{j}-\alpha_{m}\big{)}-\eta\big{)} -\exp\big{(}i\big{(}\alpha_{m}-\alpha_{j}\big{)}+\eta\big{)}}\ \,\] after having separated exponentials in the numerator of the expression, which in turn 
is also equivalent to,
\[\bigg[\frac{\exp\big(i\big(\alpha_{j}-\alpha_{m}\big)+\eta\big)}{\sinh\big(i\big(\alpha_{j}-\alpha_{m}\big)+\eta\big)\Big(\exp\big(i\big(\alpha_{j}-\alpha_{m}\big)-\eta\big)-\exp\big(i\big(\alpha_{m}-\alpha_{j}\big)+\eta\big)\Big)}\bigg]F_{3}\big(1,\alpha_{j}-\alpha_{m},0\big)-\cdots\]
\[\bigg[\frac{\exp\big(i\big(\alpha_{m}-\alpha_{j}\big)-\eta\big)}{\sinh\big(i\big(\alpha_{j}-\alpha_{m}\big)-\eta\big)\Big(\exp\big(i\big(\alpha_{j}-\alpha_{m}\big)-\eta\big)-\exp\big(i\big(\alpha_{m}-\alpha_{j}\big)+\eta\big)\Big)}\bigg]F_{4}\big(-1,\alpha_{j}-\alpha_{m},0\big)\ ,\]
for the basis spanned by \(F_{3}\) and \(F_{4}\). Taking \(\mathscr{S}_{1},\mathscr{S}_{2},\mathscr{S}_{3}\), and \(\mathscr{S}_{4}\) from the quantities above provides the desired form of the linear combination for \(F_{1,2}\) and \(F_{3,4}\), from which we conclude the argument. \(\qed\)

**Lemma** _BE 4_ (_spanning set of the entire solution space of the Bethe equations_). For all solutions to the Bethe equations, the spanning set is equivalent to the union of the spanning sets for each \(j\).

_Proof of Lemma BE 4._ It suffices to determine one basis, \(\big\{F_{1}^{j},F_{2}^{j}\big\}\), and another basis, \(\big\{F_{3}^{j},F_{4}^{j}\big\}\), for each \(j\). The computation from the previous result can be repeated countably many times for any solution to the Bethe equations, hence yielding the desired span, from which we conclude the argument. \(\qed\)

With the action given by the exponential in the prefactor to the basis element \(F_{1}\),
\[\frac{\exp\big(-\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)}{\exp\big(-\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)-\exp\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)}\ ,\]
write,
\[\bigg(\frac{\exp\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)}{\exp\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)}\bigg)\frac{\exp\big(-\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)}{\exp\big(-\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)-\exp\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)}\ ,\]
so that rearranging terms gives,
\[\frac{1}{1-\exp\big(2\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)}\ .\]
Hence, the set of linear combinations under the functions \(F_{1}\) and \(F_{2}\) reads,
\[\bigg[\frac{1}{1-\exp\big(2\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)}\bigg]F_{1}\big(\tfrac{1}{2},\alpha_{j},-v_{k}\big)+\bigg[\frac{1}{-1+\exp\big(2\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)}\bigg]F_{2}\big(\tfrac{1}{2},-\alpha_{j},v_{k}\big)\ ,\]
with a similar set of relations holding for the basis spanned by \(F_{3}\) and \(F_{4}\).
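The elementary identities underlying **Lemmas** _BE 1_-_BE 3_ admit quick numerical spot checks. In the sketch below, all parameter values are illustrative assumptions; the checks confirm the \(\coth\) form of the multipliers \(\mathscr{U}_{1}^{j}\) and \(\mathscr{U}_{3}^{j}\) (up to overall constant factors produced by the chain rule) and the simplification of the exponential prefactor to \(1/\big(1-\exp\big(2\big(\frac{\eta}{2}+i\alpha_{j}-v_{k}\big)\big)\big)\).

```python
# Numerical spot check with illustrative (assumed) values of eta, alpha_j,
# alpha_m, v_k: (i) the logarithmic-derivative identities of Lemma BE 1,
# (ii) the rearrangement of the exponential prefactor used above.
import numpy as np

eta, alpha_j, alpha_m, v_k, h = 0.3, 0.7, -0.4, 0.2, 1e-6
z = eta / 2 + 1j * alpha_j - v_k
w = 1j * (alpha_j - alpha_m) + eta

F1 = lambda v: np.sinh(eta / 2 + 1j * alpha_j - v)   # F_1(1/2, alpha_j, -v_k)
F3 = lambda a: np.sinh(1j * (alpha_j - a) + eta)     # F_3(1, alpha_j-alpha_m, 0)

# central differences: d/dv_k F1 = -coth(z) F1, d/dalpha_m F3 = -i coth(w) F3
dF1 = (F1(v_k + h) - F1(v_k - h)) / (2 * h)
dF3 = (F3(alpha_m + h) - F3(alpha_m - h)) / (2 * h)
print(np.isclose(dF1, -F1(v_k) / np.tanh(z)))
print(np.isclose(dF3, -1j * F3(alpha_m) / np.tanh(w)))

# the prefactor exp(-z)/(exp(-z) - exp(z)) rearranges to 1/(1 - exp(2z))
pref = np.exp(-z) / (np.exp(-z) - np.exp(z))
print(np.isclose(pref, 1.0 / (1.0 - np.exp(2 * z))))
```

The first pair of checks also makes the overall constants multiplying the \(\coth\)'s explicit, which is useful when assembling the products over \(k\) and over \(m\neq j\) in the Bethe equations.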
### Sixteen relations from the monodromy matrix

Furthermore, from the Poisson bracket of the tensor product of \(T_{a}\big(u,\{v_{k}\}\big)\) with \(T_{a}\big(u^{\prime},\{v_{k}^{\prime}\}\big)\),
\[\Big(r_{a,+}\big(v_{k}-v_{k}^{\prime}\big)\,T_{a}\big(u,\{v_{k}\}\big)\Big)\otimes T_{a}\big(u^{\prime},\{v_{k}^{\prime}\}\big)-T_{a}\big(u,\{v_{k}\}\big)\otimes\Big(T_{a}\big(u^{\prime},\{v_{k}^{\prime}\}\big)\,r_{a,-}\big(v_{k}-v_{k}^{\prime}\big)\Big)\ ,\]
one can form a set of sixteen relations for the reduced monodromy matrices of the inhomogeneous six-vertex model, akin to the sixteen relations satisfied by the reduced monodromy matrices of the nonlinear Schrodinger's equation, [5],
\[\begin{array}{llll}
(1)\colon\big\{A(u),A(u^{\prime})\big\}\ , & (2)\colon\big\{A(u),B(u^{\prime})\big\}\ , & (3)\colon\big\{A(u),C(u^{\prime})\big\}\ , & (4)\colon\big\{A(u),D(u^{\prime})\big\}\ ,\\
(5)\colon\big\{B(u),A(u^{\prime})\big\}\ , & (6)\colon\big\{B(u),B(u^{\prime})\big\}\ , & (7)\colon\big\{B(u),C(u^{\prime})\big\}\ , & (8)\colon\big\{B(u),D(u^{\prime})\big\}\ ,\\
(9)\colon\big\{C(u),A(u^{\prime})\big\}\ , & (10)\colon\big\{C(u),B(u^{\prime})\big\}\ , & (11)\colon\big\{C(u),C(u^{\prime})\big\}\ , & (12)\colon\big\{C(u),D(u^{\prime})\big\}\ ,\\
(13)\colon\big\{D(u),A(u^{\prime})\big\}\ , & (14)\colon\big\{D(u),B(u^{\prime})\big\}\ , & (15)\colon\big\{D(u),C(u^{\prime})\big\}\ , & (16)\colon\big\{D(u),D(u^{\prime})\big\}\ ,
\end{array}\]
from the tensor product, which in the coordinates \(u,u^{\prime}\) is equivalent to,
\[\Big\{T_{a}\big(u,\{v_{k}\}\big)\otimes T_{a}\big(u^{\prime},\{v^{\prime}_{k}\}\big)\Big\}=\left\{\begin{bmatrix}A\big(u\big)&B\big(u\big)\\ C\big(u\big)&D\big(u\big)\end{bmatrix}\otimes\begin{bmatrix}A\big(u^{\prime}\big)&B\big(u^{\prime}\big)\\ C\big(u^{\prime}\big)&D\big(u^{\prime}\big)\end{bmatrix}\right\}\ ,\]
which, by the definition of the Poisson bracket of the tensor product of the two reduced monodromy matrices, equals,
\[\bigg(r_{a,+}\big(v_{k}-v^{\prime}_{k}\big)\begin{bmatrix}A\big(u\big)&B\big(u\big)\\ C\big(u\big)&D\big(u\big)\end{bmatrix}\bigg)\otimes\begin{bmatrix}A\big(u^{\prime}\big)&B\big(u^{\prime}\big)\\ C\big(u^{\prime}\big)&D\big(u^{\prime}\big)\end{bmatrix}-\begin{bmatrix}A\big(u\big)&B\big(u\big)\\ C\big(u\big)&D\big(u\big)\end{bmatrix}\otimes\bigg(\begin{bmatrix}A\big(u^{\prime}\big)&B\big(u^{\prime}\big)\\ C\big(u^{\prime}\big)&D\big(u^{\prime}\big)\end{bmatrix}r_{a,-}\big(v_{k}-v^{\prime}_{k}\big)\bigg)\ .\]
From the expressions previously obtained for each entry of the monodromy matrix built from \(N\) L operators, each of the sixteen relations can be expanded term by term. From the set of all possible relations, those which vanish with respect to the Poisson bracket
constitute the action-angle variables. The next result determines which relations, from the sixteen listed above, vanish.

**Theorem** (_action-angle variables of the Hamiltonian flow for the inhomogeneous six-vertex model_). The relations, among the sixteen listed above, whose Poisson brackets vanish identically constitute the action-angle variables of the Hamiltonian flow for the inhomogeneous six-vertex model.

_Proof of Theorem._ To demonstrate that the statement above holds, it suffices to compute the Poisson bracket for each of the sixteen relations. Beginning with the first relation, write, by bilinearity of the Poisson bracket, \[(1)\equiv\bigg{\{}\bigg{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\bigg{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\bigg{)}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}+\mathscr{A}_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\bigg{)}\bigg{\}}+\cdots\] \[\bigg{\{}\bigg{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\bigg{)}\mathscr{A}_{2},\bigg{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\bigg{)}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}+\mathscr{A}_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\bigg{)}\bigg{\}}+\cdots\] \[\bigg{\{}\bigg{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\bigg{)}\mathscr{A}_{3},\bigg{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\bigg{)}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}+\mathscr{A}_{2}^{\prime}+\mathscr{A}_{3}^{\prime}\bigg{)}\bigg{\}}\ \,\] which can be further rearranged, after a second application of bilinearity of the Poisson bracket, as, \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{2}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{3}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{2},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{2},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{2}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{2},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{3}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{3},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{3},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{2}^{\prime}\right\}+\cdots\] \[\left\{\big{(}A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\big{)}\mathscr{A}_{3},\big{(}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{)}\mathscr{A}_{3}^{\prime}\right\}\ \.\] With two more applications of bilinearity of the Poisson bracket to each term in the superposition above, splitting \(A_{3}\big{(}u\big{)}+B_{3}\big{(}u\big{)}\), and likewise \(A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\), into their two summands, each of the nine brackets above produces four terms; for the first bracket, \[\left\{A_{3}\big{(}u\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},A_{3}\big{(}u^{\prime}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\left\{A_{3}\big{(}u\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},B_{3}\big{(}u^{\prime}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\cdots\] \[\left\{B_{3}\big{(}u\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},A_{3}\big{(}u^{\prime}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}+\left\{B_{3}\big{(}u\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},B_{3}\big{(}u^{\prime}\big{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\right\}\ \,\] and similarly for the remaining eight brackets, for a total of thirty six terms,
which can be expressed with nine summations, after isolating Poisson brackets together in groups of four. To lighten the notation, denote, \[\mathscr{P}\equiv\left\{\begin{array}{l}(\mathscr{P}_{1},\mathscr{P}_{2})\in(A_{3}(u),A_{3}(u^{\prime}))\ \,\\ (\mathscr{P}_{1},\mathscr{P}_{2})\in(A_{3}(u),B_{3}(u^{\prime}))\ \,\\ (\mathscr{P}_{1},\mathscr{P}_{2})\in(B_{3}(u),A_{3}(u^{\prime}))\ \,\\ (\mathscr{P}_{1},\mathscr{P}_{2})\in(B_{3}(u),B_{3}(u^{\prime}))\ \,\end{array}\right.\] so that the superposition above becomes, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}\ \.\] For the Poisson brackets below, fix some \(\big{(}\mathscr{P}^{\prime}\big{(}u\big{)},\mathscr{P}^{\prime\prime}\big{(}u\big{)}\big{)}\equiv\big{(}\mathscr{P}^{\prime},\mathscr{P}^{\prime\prime}\big{)}\in\mathscr{P}\). The rearrangement above will be used for the other fifteen relations to determine which ones vanish with respect to the Poisson bracket.
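As a bookkeeping aid, the sixteen relations and the four pairings above can be enumerated mechanically; a small illustrative script (labels only, no operator algebra; all names are our own stand-ins) reads:

```python
# Enumerate the sixteen relations {X(u), Y(u')} for the monodromy entries
# A, B, C, D, together with the four pairings collected in the set P above.
from itertools import product

entries = ["A", "B", "C", "D"]
relations = [(f"{X}(u)", f"{Y}(u')") for X, Y in product(entries, repeat=2)]
assert len(relations) == 16

# The four pairings (P1, P2) that each of the nine summations runs over:
pairings = [("A3(u)", "A3(u')"), ("A3(u)", "B3(u')"),
            ("B3(u)", "A3(u')"), ("B3(u)", "B3(u')")]

for k, (X, Y) in enumerate(relations, start=1):
    print(f"({k}): {{{X}, {Y}}}")
```

Grouping these four pairings under each of the nine bracket combinations reproduces the thirty six terms counted above.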
We make use of the formula, by bilinearity of the Poisson bracket, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}^{\prime}\big{(}u\big{)}+\mathscr{P}^{\prime\prime}\big{(}u\big{)},\mathscr{P}^{\prime}\big{(}u^{\prime}\big{)}+\mathscr{P}^{\prime\prime}\big{(}u^{\prime}\big{)}\bigg{\}}=\sum_{\mathscr{P}}\bigg{[}\bigg{\{}\mathscr{P}^{\prime}\big{(}u\big{)},\mathscr{P}^{\prime}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\mathscr{P}^{\prime}\big{(}u\big{)},\mathscr{P}^{\prime\prime}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\mathscr{P}^{\prime\prime}\big{(}u\big{)},\mathscr{P}^{\prime}\big{(}u^{\prime}\big{)}\bigg{\}}+\cdots\] \[\bigg{\{}\mathscr{P}^{\prime\prime}\big{(}u\big{)},\mathscr{P}^{\prime\prime}\big{(}u^{\prime}\big{)}\bigg{\}}\bigg{]}\ \,\] for the other fifteen relations, in which, by order, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{B}_{1}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{B}_{3}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{B}_{1}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\mathscr{B}_{3}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{B}_{1}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{B}_{3}^{\prime}\bigg{\}}\ \,\] for (2), \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{C}_{3}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{C}_{3}^{\prime}\bigg{\}}\ \,\] for (3), \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{D}_{1}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\bigg{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\bigg{\}}+\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{D}_{3}^{\prime}\bigg{\}}\ \,\] for
(4), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin(2\eta) \bigr{)}^{n-3}\mathscr{B}_{1},\mathscr{P}_{2}\bigl{(}\sin(2\eta)\bigr{)}^{n-3 }\mathscr{A}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1 }\bigl{(}\sin(2\eta)\bigr{)}^{n-3}\mathscr{B}_{1},\mathscr{P}_{2}\mathscr{A} _{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\big{(}\sin(2\eta) \bigr{)}^{n-3}\mathscr{B}_{1},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{2},\mathscr{P}_{2} \bigl{(}\sin(2\eta)\bigr{)}^{n-3}\mathscr{A}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{B}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} \enspace,\] for (5), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin(2\eta) \bigr{)}^{n-3}\mathscr{B}_{1},\mathscr{P}_{2}\bigl{(}\sin(2\eta)\bigr{)}^{n- 3}\mathscr{B}_{1}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1} \bigl{(}\sin(2\eta)\bigr{)}^{n-3}\mathscr{B}_{1},\mathscr{P}_{2}\mathscr{B}_{2 }\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{2}, \mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{B}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{3},\mathscr{P}_{2} \bigl{(}\sin(2\eta)\bigr{)}^{n-3}\mathscr{B}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{3}, \mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{B}_{3},\mathscr{P}_{2}\mathscr{B}_{3}^{\prime}\biggr{\}} \biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{B}_{3},\mathscr{P}_{ 2}\mathscr{B}_{3}^{\prime}\biggr{\}}\enspace,\] for (6), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta \bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta \bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P}} \biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{C}_{1},\mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}} +\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} \,\] for (10), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P}} \biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{C}_{1},\mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\mathscr{C}_{3}^{ \prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2}, 
\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{ \prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}} +\cdots\] for (8). For the remaining relations, instead of the set \(\mathscr{P}\) defined earlier, introduce, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(} 2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{A}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P }}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{C}_{1},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{A}_{1}^{\prime}\biggr{\}} +\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \mathscr{A}_{3}^{\prime}\biggr{\}}\,\] for (9), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}} +\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\biggr{\}} \,\] for (10), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P}} \biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1 },\mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\mathscr{C}_{3}^{\prime} \biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}} +\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2}, \mathscr{P}_{2}\mathscr{C}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} 
\mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{C}_{3}^{\prime}\biggr{\}}+ \sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2}\bigl{(} \sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \mathscr{E}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1} \mathscr{C}_{3},\mathscr{P}_{2}\mathscr{E}_{3}^{\prime}\biggr{\}}\ \,\] for (11), \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{C}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{E}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P} }\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{E}_{1},\mathscr{P}_{2}\mathscr{E}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{2}, \mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{D}_{3}^{\prime}\biggr{\}}+ \sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{E}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{2},\mathscr{P}_{2}\mathscr{D}_{3}^{\prime}\biggr{\}} +\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{E}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{C}_{3}, \mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{C}_{3},\mathscr{P}_{2}\mathscr{D}_{3}^{\prime}\biggr{\}} \,\] for (12), for (13), for (14), for (15), and, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{D}_{1},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{D}_{1}^{\prime}\biggr{\}}+\sum_{\mathscr{P}} \biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{D}_{1},\mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2 \eta\bigr{)}\bigr{)}^{n-3}\mathscr{D}_{1},\mathscr{P}_{2}\mathscr{D}_{3}^{ \prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{D}_{2}, \mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\mathscr{D}_{1}^ {\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{D}_{2}, \mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{D}_{3},\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta\bigr{)} \bigr{)}^{n-3}\mathscr{D}_{1}^{\prime}\biggr{\}}+\cdots\] \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{D}_{3}, \mathscr{P}_{2}\mathscr{D}_{2}^{\prime}\biggr{\}}+\sum_{\mathscr{P}}\biggl{\{} \mathscr{P}_{1}\mathscr{D}_{3},\mathscr{P}_{2}\mathscr{D}_{3}^{\prime}\biggr{\}} \enspace,\] for (16). From each of the sixteen relations provided above as a summation over Poisson brackets, it is possible to determine which one of the relations vanishes by computing each one of the Poisson brackets individually. To this end, for the first relation, (1), rearrange terms from each one of the nine Poisson bracket terms. 
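Since every step in these evaluations relies only on anticommutativity, bilinearity, and Leibniz' rule of the Poisson bracket, a minimal sympy sketch, with an illustrative canonical bracket in the variables \(u,u^{\prime}\) and arbitrary test functions of our own choosing (none of this is part of the model's algebra), can be used to sanity-check the three identities:

```python
# Check anticommutativity, bilinearity, and Leibniz' rule for a canonical
# Poisson bracket, treating u as coordinate and u' as conjugate momentum.
import sympy as sp

u, up = sp.symbols("u u'")

def pb(f, g):
    # Canonical Poisson bracket {f, g} = f_u g_{u'} - f_{u'} g_u.
    return sp.diff(f, u) * sp.diff(g, up) - sp.diff(f, up) * sp.diff(g, u)

f = sp.sin(u) * sp.exp(up)
g = sp.cos(u + up)
h = u * up

assert sp.simplify(pb(f, g) + pb(g, f)) == 0                        # {f,g} = -{g,f}
assert sp.simplify(pb(f + g, h) - pb(f, h) - pb(g, h)) == 0         # bilinearity
assert sp.simplify(pb(f * g, h) - f * pb(g, h) - pb(f, h) * g) == 0 # Leibniz' rule
print("bracket identities verified")
```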
Before differentiating each expression which appears in the formula for the Poisson bracket, observe, \[\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{A}_{1}\equiv\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)} ^{n-3}\biggl{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\enspace,\] \[\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{A}_{1}^{\prime}\equiv\mathscr{P}_{2}\bigl{(}\sin\bigl{(}2\eta\bigr{)} \bigr{)}^{n-3}\biggl{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\enspace,\] \[\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{A}_{1}\equiv\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)} ^{n-3}\biggl{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\equiv\mathscr{P}_{2} \biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i} ^{z}\bigr{)}\biggr{)}\enspace,\] \[\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3} \mathscr{A}_{1}\equiv\mathscr{P}_{1}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)} ^{n-3}\biggl{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\equiv\mathscr{P}_{2} \biggl{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\end{subarray}}\biggl{[}\biggl{(}\prod_{1\leq i\leq m} \sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)} \enspace(\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\biggl{(}\prod_{1\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{]}\biggr{)}\enspace,\] \[\mathscr{P}_{1}\mathscr{A}_{2}\equiv\mathscr{P}_{1}\biggl{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)} \biggr{)}\enspace,\] \[\mathscr{P}_{1}\mathscr{A}_{2}\equiv\mathscr{P}_{1}\biggl{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)} \biggr{)}\enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\equiv\mathscr{P}_{2} \biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i} ^{z}\bigr{)}\biggr{)}\enspace,\] \[\mathscr{P}_{1}\mathscr{A}_{2}\equiv\mathscr{P}_{1}\biggl{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)} \biggr{)}\enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\equiv\mathscr{P}_{2} \biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i} ^{z}\bigr{)}\biggr{)}\enspace,\] \[\mathscr{P}_{1}\mathscr{A}_{2}\equiv\mathscr{P}_{1}\biggl{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)} \enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\equiv\mathscr{P}_{2} \biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i} ^{z}\bigr{)}\biggr{)}\enspace,\] \[\mathscr{P}_{2}\mathscr{A}_{ \[\mathscr{P}_{2}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{ \prime}\equiv\mathscr{P}_{2}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(} \prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\,\] \[\mathscr{P}_{1}\mathscr{A}_{3}\equiv\mathscr{P}_{1}\bigg{(}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m} \sin\!\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\! 
\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \,\] \[\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\equiv\mathscr{P}_{2} \bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i }^{z}\big{)}\bigg{)}\,\] \[\mathscr{P}_{1}\mathscr{A}_{3}\equiv\mathscr{P}_{1}\bigg{(}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m} \sin\!\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\! \big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \,\] \[\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\equiv\mathscr{P}_{2} \bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m} \sin\!\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\! \big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ \.\] Below, we list several results for evaluating each of the nine Poisson brackets appearing in the first relation. #### 2.4.1 First Poisson bracket, \(\mathcal{P}_{1}\) **Lemma 6** (_evaluating the first Poisson bracket in the first relation_).: The first term, \(\mathcal{P}_{1}\), approximately equals, \[\big{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\big{]}^{2 }\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime }}-\frac{A_{3}\big{(}u^{\prime}\big{)}B_{3}\big{(}u\big{)}}{u^{\prime}-u}\bigg{]}\ \.\] _Proof of Lemma 6_. The first term, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\!\big{(}2\eta\big{)} \big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\sin\!\big{(}2\eta\big{)} \big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}\ \,\] is equivalent to, \[\bigg{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n -3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\sum_{\mathscr{P}}\bigg{\{}\mathscr{P }_{1},\mathscr{P}_{2}\bigg{\}}\.\] Observe, \[\bigg{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n -3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\bigg{(}\big{\{}A_{3}\big{(}u\big{)},B_ {3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{ \prime}\big{)}\big{\}}\bigg{)}\ \,\] and by anticommutativity of the Poisson bracket, that, \[\bigg{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n -3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\bigg{(}\big{\{}A_{3}\big{(}u\big{)},B_ {3}\big{(}u^{\prime}\big{)}\big{\}}-\big{\{}A_{3}\big{(}u^{\prime}\big{)},B_{3} \big{(}u\big{)}\big{\}}\bigg{)}\ \,\] which can be further rearranged, from the observation that the summation of two Poisson brackets above are equivalent to, \[\bigg{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n -3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\big{\{}A_{3}\big{(}u\big{)},B_{3} \big{(}u^{\prime}\big{)}\big{\}}-\bigg{[}\big{(}\sin\!\big{(}2\eta\big{)}\big{)} ^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2} \big{\{}A_{3}\big{(}u^{\prime}\big{)},B_{3}\big{(}u\big{)}\big{\}}\ \,\] which is, approximately, equivalent to, \[\bigg{[}\big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i \leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\bigg{[}\frac{A_{3}\big{(}u\big{)} B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\bigg{[}\big{(}\text{sin} 
\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+ }\bigg{)}\bigg{]}^{2}\bigg{[}\frac{A_{3}\big{(}u^{\prime}\big{)}B_{3}\big{(}u \big{)}}{u^{\prime}-u}\bigg{]}\enspace,\] from the fact that the two Poisson brackets can be computed with, \[\big{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}=\frac{A_{3} \big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\enspace,\] corresponding to the first bracket, and, \[\big{\{}A_{3}\big{(}u^{\prime}\big{)},B_{3}\big{(}u\big{)}\big{\}}=\frac{A_{3 }\big{(}u^{\prime}\big{)}B_{3}\big{(}u\big{)}}{u^{\prime}-u}\enspace,\] corresponding to the second bracket. Hence, the desired expression is, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\text{sin}\big{(} 2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\big{(}\text{sin} \big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}\approx\bigg{[} \big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n -3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{ 3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\\ \bigg{[}\big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(} \prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}^{2}\bigg{[}\frac{A_{ 3}\big{(}u^{\prime}\big{)}B_{3}\big{(}u\big{)}}{u^{\prime}-u}\bigg{]}\enspace,\] from which we conclude the argument. #### 2.4.2 Second Poisson bracket, \(\mathcal{P}_{2}\) **Lemma 7** (_evaluating the second Poisson bracket in the first relation_).: The second term, \(\mathcal{P}_{2}\), approximately equals, \[-\big{[}\big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1} \big{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{ \prime}}-\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u^{\prime}-u }\bigg{]}\enspace.\] Proof of Lemma 7.: The second term, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\text{sin}\big{(}2\eta\big{)} \big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}\enspace,\] is equivalent to, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\text{sin}\big{(}2\eta\big{)} \big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)},\mathscr{ P}_{2}\bigg{(}\prod_{1\leq i\leq n-3}\text{sin}\big{(}u^{\prime}-v_{n-i}+\eta \sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\enspace,\] which equals, by Leibniz' rule of the Poisson bracket, \[\sum_{\mathscr{P}}\bigg{[}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2 }\bigg{(}\prod_{1\leq i\leq n-3}\text{sin}\big{(}u^{\prime}-v_{n-i}+\eta\sigma _{n-i}^{z}\bigg{)}\bigg{)}\big{(}\text{sin}\big{(}2\eta\big{)}\big{)}^{n-3} \bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\cdots\] \[\qquad\qquad\mathscr{P}_{1}\bigg{\{}\big{(}\text{sin}\big{(}2\eta \big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}, \mathscr{P}_{2}\bigg{\}}\bigg{(}\prod_{1\leq i\leq n-3}\text{sin}\big{(}u^{ \prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\enspace.\] The first Poisson bracket in the summation over \(\mathscr{P}\) above is equivalent to, \[-\Big{\{}\mathscr{P}_{2}\Big{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{ \prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\Big{)},\mathscr{P}_{1}\Big{\}}\big{(} \sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma^{-,+}_{n-i}\Big{)}\ \,\] by anticommutativity, and also to, \[-\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i \leq n-3}\sigma^{-,+}_{n-i}\bigg{)}\bigg{[}\bigg{\{}\mathscr{P}_{2},\mathscr{P }_{1}\Big{\}}\bigg{(}\prod_{1\leq 
i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\big{)}\bigg{)}+\cdots\] \[\mathscr{P}_{2}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\! \big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},\mathscr{P}_{1} \bigg{\}}\bigg{]}\ \,\] by Leibniz' rule. Proceeding, the first term in the expression above is equivalent to, \[-\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\bigg{)}\bigg{[}\bigg{(}\prod_{1\leq i\leq n -3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{\{} A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\cdots\] \[\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{ \prime}\big{)}\big{\}}\bigg{]}\ \,\] while the second term in the expression above is equivalent to, \[-\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\bigg{)}\bigg{[}\mathscr{P}_{2}\bigg{(}\big{ }\big{\{}1,\mathscr{P}_{2}\big{\}}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(} u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}+\cdots\] \[\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_ {n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},\mathscr{P}_{2}\bigg{\}}\bigg{]}\ \.\] From the second bracket, \[-\big{(}\sin\!\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\bigg{)}\bigg{[}\bigg{(}\prod_{1\leq i\leq n-3 }\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{\{} A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\cdots\] \[\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{ \prime}\big{)}\big{\}}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\! 
\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},\mathscr{P}_{2 }\bigg{\}}\bigg{]}\ \,\] we evaluate each remaining Poisson bracket appearing in the summation over \(\mathscr{P}\), for each possible \(\mathscr{P}_{2}\), by writing, \[\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta \sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{} \bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n- i}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\cdots\] \[\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta \sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{} \bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n- i}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\ \.\] As a result, each Poisson bracket from the superposition above can be expressed as, \[\Big{\{}\Big{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+ \eta\sigma_{n-i}^{z}\bigr{)}\Big{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\Big{\}} =\Big{[}\frac{\partial}{\partial u^{\prime}}\Big{(}\prod_{1\leq i \leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\Big{)} \Big{]}\frac{\partial B_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime}}-\ldots\] \[\frac{\partial B_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{ \prime}}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i\leq n -3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{]} \equiv 0\ \,\] \[\Big{\{}\Big{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i }+\eta\sigma_{n-i}^{z}\bigr{)}\Big{)},A_{3}\bigl{(}u^{\prime}\bigr{)}\Big{\}} =\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i \leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)} \bigg{]}\frac{\partial A_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime}}-\ldots\] \[\frac{\partial A_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime }}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i\leq n-3} \sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{]} \equiv 0\ \,\] \[\Big{\{}\Big{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n- i}+\eta\sigma_{n-i}^{z}\bigr{)}\Big{)},B_{3}\bigl{(}u^{\prime}\Big{)}\Big{\}} =\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i \leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)} \bigg{]}\frac{\partial B_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime}}-\ldots\] \[\frac{\partial B_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime }}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i\leq n-3} \sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{]} \equiv 0\ \,\] \[\frac{\partial A_{3}\bigl{(}u^{\prime}\bigr{)}}{\partial u^{\prime }}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i\leq n-3} \sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{]} \equiv 0\ \,\] while the remaining Poisson brackets, \[-\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma_{n-i}^{-+}\bigg{)}\bigg{[}\,\bigg{(}\prod_{1\leq i\leq n -3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{]}\big{\{} A_{3}\bigl{(}u\bigr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\big{\}}+\cdots\] \[\bigg{[}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\bigg{]}\big{\{}B_{3}\bigl{(}u\bigr{)},A_{3}\bigl{(}u^{ \prime}\bigr{)}\big{\}}\ \bigg{)}\ \.\] are 
equivalent to, \[-\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma_{n-i}^{-+}\bigg{)}\bigg{[}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{]}\bigg{[}\frac{A_ {3}\bigl{(}u\bigr{)}B_{3}\bigl{(}u^{\prime}\bigr{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[\big{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma_{n-i}^{-+}\bigg{)}\bigg{[}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\bigg{]}\bigg{[}\frac{B_ {3}\bigl{(}u\bigr{)}A_{3}\bigl{(}u^{\prime}\bigr{)}}{u^{\prime}-u}\bigg{]}\ \,\] from the observation that, \[\big{\{}A_{3}\bigl{(}u\bigr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\big{\}}=\bigl{(} \frac{\partial}{\partial u}A_{3}\bigl{(}u\bigr{)}\bigr{)}\bigl{(}\frac{\partial}{ \partial u^{\prime}}B_{3}\bigl{(}u^{\prime}\bigr{)}\bigr{)}-\bigl{(}\frac{ \partial}{\partial u^{\prime}}A_{3}\bigl{(}u\bigr{)}\bigr{)}\bigl{(}\frac{ \partial}{\partial u}B_{3}\bigl{(}u^{\prime}\bigr{)}\bigr{)}\ \,\] is approximately equivalent to, \[\frac{A_{3}\bigl{(}u\bigr{)}B_{3}\bigl{(}u^{\prime}\bigr{)}}{u-u^{\prime}}\ \,\] and also that, \[\big{\{}B_{3}\bigl{(}u\bigr{)},A_{3}\bigl{(}u^{\prime}\bigr{)}\big{\}}=-\big{\{} A_{3}\bigl{(}u^{\prime}\bigr{)},B_{3}\bigl{(}u\bigr{)}\big{\}}=-\bigg{[}\bigl{(}\frac{ \partial}{\partial u}B_{3}\bigl{(}u\bigr{)}\bigr{)}\bigl{(}\frac{\partial}{ \partial u^{\prime}}A_{3}\bigl{(}u^{\prime}\bigr{)}\bigr{)}+\bigl{(}\frac{\partial}{ \partial u^{\prime}}B_{3}\bigl{(}u\bigr{)}\bigr{)}\bigl{(}\frac{\partial}{\partial u }A_{3}\bigl{(}u^{\prime}\bigr{)}\bigr{)}\bigg{]}\ \,\] is approximately equivalent to, \[\frac{B_{3}\big{(}u\big{)}A_{3}(u^{\prime})}{u^{\prime}-u}\enspace.\] Hence, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\big{(}\mathrm{sin}\big{(}2\eta\big{)} \big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\biggr{\}} \approx-\biggl{[}\big{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\biggl{(} \prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\biggr{]}\biggl{[}\frac{A_{3} \big{(}u\big{)}B_{3}(u^{\prime})}{u-u^{\prime}}-\frac{B_{3}\big{(}u\big{)}A_{3} \big{(}u^{\prime}\big{)}}{u^{\prime}-u}\biggr{]}\enspace,\] from which we conclude the argument. #### 2.4.3 Third Poisson bracket, \(\mathcal{P}_{3}\) **Lemma 8**: (_evaluating the third Poisson bracket in the first relation_). 
The third term approximately equals, \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\bigl{(}\mathrm{sin}\big{(}2\eta \big{)}\big{)}^{n^{\prime}-1}\biggl{[}\sum_{1\leq i\leq m}\frac{\partial}{ \partial u}\biggl{(}\prod_{1\leq i\leq m}\left(\mathscr{C}_{2}\right)_{i} \biggr{)}\biggr{]}\biggl{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{ \prime}}+\frac{\partial A_{3}\big{(}u\big{)}}{\partial u^{\prime}}\biggr{]}+\cdots\] \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\bigl{(}\mathrm{sin}\big{(}2\eta \big{)}\big{)}^{n^{\prime}-1}\biggl{[}\sum_{1\leq i\leq m}\frac{\partial}{ \partial u^{\prime}}\biggl{(}\prod_{1\leq i\leq m}\left(\mathscr{C}_{2}\right) _{i}\biggr{)}\biggr{]}\biggl{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u }+\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}\biggr{]}-\cdots\] \[\qquad\qquad\bigl{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3} \biggl{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\left(\mathscr{C}_{3}\right)_{i,j} \biggr{)}\biggr{]}\biggl{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime} \big{)}}{u-u^{\prime}}+\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)} }{u-u^{\prime}}\biggr{]}\enspace.\] _Proof of Lemma 8._ The third term, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\big{(}\mathrm{sin}\big{(}2\eta \big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime} \biggr{\}}\enspace,\] is equivalent to, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\big{(}\mathrm{sin}\big{(}2\eta \big{)}\big{)}^{n-3}\biggl{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)},\mathscr{P}_{2}\biggl{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\prod_{1\leq i\leq m}\mathrm{sin} \big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\,\left(\mathrm{sin} \big{(}2\eta\big{)}\right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma _{n-j}^{-,+}\biggr{)}\biggr{)}\biggr{\}}\enspace.\] Applying Leibniz' rule to the Poisson bracket over \(\mathscr{P}\) yields the expression, \[\biggl{\{}\mathscr{P}_{1},\mathscr{P}_{2}\biggl{(}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\prod_{1\leq i\leq m}\mathrm{sin} \big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\,\left(\mathrm{sin} \big{(}2\eta\big{)}\right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma _{n-j}^{-,+}\biggr{)}\biggr{)}\biggr{\}}\bigl{(}\mathrm{sin}\big{(}2\eta\big{)} \bigr{)}^{n-3}+\cdots\] \[\mathscr{P}_{1}\biggl{\{}\bigl{(}\mathrm{sin}\big{(}2\eta\big{)} \bigr{)}^{n-3},\mathscr{P}_{2}\biggl{(}\sum_{\begin{subarray}{c}1\leq i\leq m \\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\prod_{1\leq i\leq m}\mathrm{sin} \big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\,\left(\mathrm{sin} \big{(}2\eta\big{)}\right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma _{n-j}^{-,+}\biggr{)}\biggr{)}\biggr{\}}\enspace,\] which, by anticommutativity of the Poisson bracket, equals, \[-\biggl{\{}\mathscr{P}_{2}\biggl{(}\sum_{\begin{subarray}{c}1\leq i \leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\prod_{1\leq i\leq m}\mathrm{sin} \big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\,\left(\mathrm{sin} \big{(}2\eta\big{)}\right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n- j}^{-,+}\biggr{)}\biggr{)},\mathscr{P}_{1}\biggr{\}}\bigl{(}\mathrm{sin} \big{(}2\eta\big{)}\bigr{)}^{n-3}-\cdots\] \[\mathscr{P}_{1}\biggl{\{}\mathscr{P}_{2}\biggl{(}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 
1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\biggl{(}\prod_{1\leq i\leq m}\mathrm{sin}\big{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\,\left(\mathrm{sin}\big{(}2\eta \big{)}\right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+} \biggr{)}\biggr{)},\bigl{(}\mathrm{sin}\big{(}2\eta\big{)}\bigr{)}^{n-3}\biggr{\}}\enspace.\] Applying Leibniz' rule to the final Poisson bracket in the superposition above yields, \[-\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{[}\{\mathscr{P}_{2}, \mathscr{P}_{1}\}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \right)+\cdots\] \[\mathscr{P}_{2}\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i \leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)},\mathscr{P}_{1}\bigg{\}}\bigg{]}\ \,\] corresponding to the first term, and, \[-\mathscr{P}_{1}\bigg{[}\Big{\{}\mathscr{P}_{2},\big{(}\sin\bigl{(}2\eta \big{)}\big{)}^{n-3}\Big{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)}+\cdots\] \[\mathscr{P}_{2}\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i \leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)},\big{(}\sin\bigl{(}2\eta\big{)}\big{)}^{n-3}\bigg{\}}\bigg{]}\ \,\] corresponding to the second term. 
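The repeated Leibniz-rule manipulations in this proof eventually reduce to differentiating a product of sines term by term; a hedged sympy check (small illustrative size and stand-in symbols of our own, not the model's operators) of that product-rule expansion reads:

```python
# Check: d/du' of prod_i sin(u' - v_i + eta*s_i) equals the sum over i of
# the product with the i-th sine replaced by a cosine.
import sympy as sp

m = 3  # small illustrative size
up, eta = sp.symbols("u' eta")
v = sp.symbols(f"v1:{m + 1}")  # v1, v2, v3
s = sp.symbols(f"s1:{m + 1}")  # stand-ins for the sigma^z eigenvalues

factors = [sp.sin(up - v[i] + eta * s[i]) for i in range(m)]
product = sp.Mul(*factors)

term_by_term = sum(
    sp.cos(up - v[i] + eta * s[i]) * sp.Mul(*(factors[:i] + factors[i + 1:]))
    for i in range(m)
)
assert sp.simplify(sp.diff(product, up) - term_by_term) == 0
print("product rule expansion verified")
```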
In the first term to which we applied the Poisson bracket, the fact that, \[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \ \,\] is equivalent to, \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\bigl{(}\sin\bigl{(}2\eta\big{)}\bigr{)}^{n^ {\prime}-1}\bigg{(}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\Big{)}\bigg{(}\sum_{1\leq j \leq n^{\prime}}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)} \ \,\] implies that another application of Leibniz' rule yields, \[-\big{(}\sin\bigl{(}2\eta\big{)}\big{)}^{n-3}\bigg{[}\{\mathscr{P}_{2}, \mathscr{P}_{1}\}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)}+\cdots\] \[\mathscr{P}_{2}\bigg{[}\bigg{\{}\sum_{n^{\prime}:m+n^{\prime}=n-3} \bigl{(}\sin\bigl{(}2\eta\big{)}\bigr{)}^{n^{\prime}-1},\mathscr{P}_{1}\bigg{\}} \bigg{(}\bigg{(}\sum_{1\leq i\leq m}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{(}\sum_{1\leq j \leq n^{\prime}}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)}+\cdots\] \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\bigl{(}\sin\bigl{(}2\eta\big{)} \bigr{)}^{n^{\prime}-1}\bigg{\{}\bigg{(}\bigg{(}\sum_{1\leq i\leq m}\prod_{1 \leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)} \bigg{)}\bigg{(}\sum_{1\leq j\leq n^{\prime}}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\bigg{)},\mathscr{P}_{1}\bigg{\}}\ \bigg{]}\ \bigg{]}\ \.\] Applying Leibniz' rule to the final Poisson bracket in the superposition above yields, \[-\big{(}\sin\bigl{(}2\eta\big{)}\big{)}^{n-3}\bigg{[}\{\mathscr{P}_{2}, \mathscr{P}_{1}\}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\,\left(\sin\bigl{(}2\eta\big{)} \right)^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{)}+\cdots\] \[\mathscr{P}_{2}\bigg{[}\bigg{\{}\sum_{n^{\prime}:m+n^{\prime}=n-3} \bigl{(}\sin\bigl{(}2\eta\big{)}\bigr{)}^{n^{\prime}-1},\mathscr{P}_{1}\bigg{\}} \bigg{(}\Big{(}\sum_{1\leq i\leq m}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v _{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\bigg{)}\bigg{(}\sum_{1\leq j\leq n^{\prime}} \prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}+\cdots\] \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\bigl{(}\sin\bigl{(}2\eta\big{)} \bigr{)}^{n^{\prime}-1}\bigg{[}\bigg{\{}\bigg{(}\sum_{1\leq i\leq m}\prod_{1 \leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)} \bigg{)},\mathscr{P}_{1}\bigg{\}}\bigg{(}\sum_{1\leq j\leq n^{\prime}}\prod_{1 \leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}+\cdots\] \[\bigg{\{}\bigg{(}\sum_{1\leq j\leq n^{\prime}}\prod_{1\leq j\leq n ^{\prime}}\sigma_{n-j}^{-,+}\bigg{)},\mathscr{P}_{1}\bigg{\}}\bigg{(}\sum_{1 \leq i\leq m}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta 
\sigma_{n-i}^{z}\bigr{)}\bigg{)}\ \bigg{]}\ \bigg{]}\ \bigg{]}\ \.\] From the expression above, writing each Poisson bracket individually yields, as summations over \(\mathscr{P}\), \[\sum_{\mathscr{P}}\{\mathscr{P}_{2},\mathscr{P}_{1}\}=-\sum_{\mathscr{P}}\{\mathscr{P}_{1},\mathscr{P}_{2}\}=-\bigg{(}\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}\bigg{)}\ \,\] \[\sum_{\mathscr{P}}\bigg{\{}\sum_{n^{\prime}:m+n^{\prime}=n-3}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1},\mathscr{P}_{1}\bigg{\}}\equiv 0\ \,\] \[\sum_{\mathscr{P}}\bigg{\{}\Big{(}\sum_{1\leq i\leq m}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\Big{)},\mathscr{P}_{1}\bigg{\}}\ \,\] \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\sum_{1\leq j\leq n^{\prime}}\prod_{1\leq j\leq n^{\prime}}\sigma^{-,+}_{n-j}\bigg{)},\mathscr{P}_{1}\bigg{\}}\equiv 0\ \.\] Evaluating the third Poisson bracket is related to the computation of the derivative, \[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\ \,\] which can be written out as a sum of products, from the product indexed by \(i=1\) through the product indexed by \(i=m\), each of which can be differentiated term by term, with respect to \(u^{\prime}\), to obtain, \[\sum_{1\leq i\leq m}\cos\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{(}\prod_{\begin{subarray}{c}1\leq j\leq m\\ j\neq i\end{subarray}}\sin\big{(}u^{\prime}-v_{n-j}\pm\eta\sigma^{z}_{n-j}\big{)}\bigg{)}\ \,\] so that the desired expression for the derivative holds. Altogether, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}\approx\sum_{n^{\prime}:m+n^{\prime}=n-3}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{1\leq i\leq m}\frac{\partial}{\partial u}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}+\frac{\partial A_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}+\cdots\] \[\sum_{n^{\prime}:m+n^{\prime}=n-3}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{1\leq i\leq m}\frac{\partial}{\partial u^{\prime}}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}+\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}\bigg{]}-\cdots\] \[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}+\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}\ \,\] from which we conclude the argument. #### 2.4.4 Fourth Poisson bracket, \(\mathcal{P}_{4}\) **Lemma 9** (_evaluating the fourth Poisson bracket in the first relation_). The fourth term, \(\mathcal{P}_{4}\), approximately equals, \[-\big{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\big{(}\mathscr{C}_{1}\big{)}_{i}\big{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}+\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\] \[\sum_{1\leq i\leq n-3}\mathscr{A}_{2}\bigg{[}\ \bigg{[}A_{3}\big{(}u\big{)}\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\frac{\partial\big{(}\mathscr{C}_{2}\big{)}_{i}}{\partial u}+B_{3}\big{(}u\big{)}\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u}\frac{\partial\big{(}\mathscr{C}_{2}\big{)}_{i}}{\partial u^{\prime}}\bigg{]}+\cdots\] \[\bigg{[}A_{3}\big{(}u\big{)}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\frac{\partial\big{(}\mathscr{C}_{2}\big{)}_{i}}{\partial u}+B_{3}\big{(}u\big{)}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u}\frac{\partial\big{(}\mathscr{C}_{2}\big{)}_{i}}{\partial u^{\prime}}\bigg{]}\ \bigg{]}\ \.\] _Proof of Lemma 9_.
The fourth term, \[\sum_{\mathscr{P}}\Big\{\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\ ,\] is equivalent to, \[\sum_{\mathscr{P}}\Big\{\mathscr{P}_{1}\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big),\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\ .\] One application of Leibniz' rule to the Poisson bracket above gives, \[\sum_{\mathscr{P}}\Big[\Big\{\mathscr{P}_{1},\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)+\cdots\] \[\Big\{\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big),\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\mathscr{P}_{1}\Big]\ ,\] while a second application of Leibniz' rule to the first Poisson bracket above gives, \[-\big\{\mathscr{P}_{2},\mathscr{P}_{1}\big\}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}-\Big\{\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime},\mathscr{P}_{1}\Big\}\mathscr{P}_{2}\ ,\qquad(*)\] after anticommuting terms in the first Poisson bracket, \[\Big\{\mathscr{P}_{1},\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\ ,\] with, \[\Big\{\mathscr{P}_{1},\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}=-\Big\{\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime},\mathscr{P}_{1}\Big\}\ .\] Applying Leibniz' rule to the second Poisson bracket in (*) gives, \[\mathscr{P}_{2}\Big(\big\{\mathscr{A}_{1}^{\prime},\mathscr{P}_{1}\big\}\big(\sin\big(2\eta\big)\big)^{n-3}+\big\{\big(\sin\big(2\eta\big)\big)^{n-3},\mathscr{P}_{1}\big\}\mathscr{A}_{1}^{\prime}\Big)\ .\] For the second Poisson bracket, \[\Big\{\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big),\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\mathscr{P}_{1}\ ,\] anticommuting terms gives, \[-\Big\{\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime},\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\Big\}\mathscr{P}_{1}\ ,\] to which we apply Leibniz' rule, \[-\mathscr{P}_{1}\Big(\Big\{\mathscr{P}_{2},\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\Big\}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}+\cdots\] \[\Big\{\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime},\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\Big\}\mathscr{P}_{2}\Big)\ .\] Observe that the second Poisson bracket above is equal to, \[\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\Big\{\mathscr{A}_{1}^{\prime},\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\Big\}\ .\] Similarly, from (*), for the second Poisson bracket,
\[\bigg{\{}\big{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{ \prime},\mathscr{P}_{1}\bigg{\}}\mathscr{P}_{2}\ \,\] to which we apply Leibniz' rule implies, \[\mathscr{P}_{2}\bigg{(}\big{\{}\mathscr{A}_{1}^{\prime},\mathscr{P}_{1}\big{\}} \big{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3}+\big{\{}\big{(}\mathrm{sin} \big{(}2\eta\big{)}\big{)}^{n-3},\mathscr{P}_{1}\big{\}}\mathscr{A}_{1}^{ \prime}\bigg{)}\ \.\] Hence, the Poisson bracket that was rearranged corresponding to the fourth term is equivalent to, \[\sum_{\mathscr{P}}\bigg{[}-\big{\{}\mathscr{P}_{2},\mathscr{P}_{1} \big{\}}\big{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{ \prime}+\mathscr{P}_{2}\bigg{(}\big{\{}\mathscr{A}_{1}^{\prime},\mathscr{P}_{1 }\big{\}}\big{(}\mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3}+\big{\{}\big{(} \mathrm{sin}\big{(}2\eta\big{)}\big{)}^{n-3},\mathscr{P}_{1}\big{\}}\mathscr{A} _{1}^{\prime}\bigg{)}-\cdots\] \[\bigg{\{}\mathscr{P}_{2},\bigg{(}\prod_{1\leq i\leq n-3}\mathrm{ sin}\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\big{(}\mathrm{sin} \big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\mathscr{P}_{1}-\cdots\] \[\mathscr{P}_{1}\mathscr{P}_{2}\big{(}\mathrm{sin}\big{(}2\eta\big{)} \big{)}^{n-3}\bigg{\{}\mathscr{A}_{1}^{\prime},\bigg{(}\prod_{1\leq i\leq n-3} \mathrm{sin}\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\ \bigg{]}\ \.\] As a summation over \(\mathscr{P}\), writing out each Poisson bracket from the superposition above gives, \[-\!\sum_{\mathscr{P}}\!\big{\{}\mathscr{P}_{2},\mathscr{P}_{1}\big{\}} \big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3}\mathscr{A}^{\prime}_{1}=-\bigg{(} \big{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}B_{3 }\big{(}u\big{)},A_{3}\big{(}u^{\prime}\big{)}\big{\}}\bigg{)}\big{(}\!\sin\! \left(2\eta\right)\!\big{)}^{n-3}\mathscr{A}^{\prime}_{1}\enspace,\] \[\sum_{\mathscr{P}}\!\mathscr{P}_{2}\bigg{(}\Big{\{}\mathscr{A}^{ \prime}_{1},\mathscr{P}_{1}\big{\}}\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{ n-3}+\big{\{}\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3},\mathscr{P}_{1} \big{\}}\mathscr{A}^{\prime}_{1}\bigg{)}\equiv 0\enspace,\] \[-\!\sum_{\mathscr{P}}\!\bigg{\{}\mathscr{P}_{2},\bigg{(}\!\prod_ {1\leq i\leq n-3}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\bigg{)} \bigg{\}}\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3}\mathscr{A}^{\prime}_ {1}\mathscr{P}_{1}\enspace,\] \[-\!\sum_{\mathscr{P}}\!\mathscr{P}_{1}\mathscr{P}_{2}\big{(}\! 
\sin\!\left(2\eta\right)\!\big{)}^{n-3}\bigg{\{}\mathscr{A}^{\prime}_{1}, \bigg{(}\!\prod_{1\leq i\leq n-3}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i} \right)\!\bigg{)}\bigg{\}}\enspace,\] where in the second Poisson bracket, we made use of the fact that, \[\big{\{}\mathscr{A}^{\prime}_{1},B_{3}\big{(}u\big{)}\big{\}}=0\enspace,\] \[\big{\{}\mathscr{A}^{\prime}_{1},A_{3}\big{(}u\big{)}\big{\}}=0 \enspace,\] \[\big{\{}\mathscr{A}^{\prime}_{1},A_{3}\big{(}u^{\prime}\big{)}\big{\}}=0 \enspace,\] \[\big{\{}\mathscr{A}^{\prime}_{1},B_{3}\big{(}u^{\prime}\big{)}\big{\}}=0 \enspace,\] For the third Poisson bracket above, each term for all possible \(\mathscr{P}_{2}\) is, \[\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\big{)}\big{(}\!B_{3}\big{(}u\big{)}\!\bigg{)} \bigg{\{}A_{3}\big{(}u^{\prime}\big{)},\bigg{(}\!\prod_{1\leq i\leq n-3}\!\sin \!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{\}}\enspace,\] \[\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\big{)}\big{(}\!A_{3}\big{(}u\big{)}\!\bigg{)} \bigg{\{}A_{3}\big{(}u^{\prime}\big{)},\bigg{(}\!\prod_{1\leq i\leq n-3}\! \sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{\}}\enspace,\] \[\big{(}\!\sin\!\left(2\eta\right)\!\big{)}^{n-3}\bigg{(}\prod_{1 \leq i\leq n-3}\sigma^{-,+}_{n-i}\big{)}\big{(}\!A_{3}\big{(}u\big{)}\!\bigg{)} \bigg{\{}A_{3}\big{(}u^{\prime}\big{)},\bigg{(}\!\prod_{1\leq i\leq n-3}\! \sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{\}}\enspace,\] and, for the fourth Poisson bracket, \[\bigg{\{}\!\prod_{1\leq i\leq n-3}\!\sigma^{-,+}_{n-i},\bigg{(}\!\prod_{1\leq i \leq n-3}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{\}} =0\enspace.\] Evaluating each bracket from the five listed above implies, \[B_{3}\big{(}u\big{)}\bigg{[}\frac{\partial B_{3}\big{(}u^{\prime} \big{)}}{\partial u^{\prime}}\bigg{[}\!\sum_{1\leq i\leq n-3}\!\bigg{[}\! \frac{\partial}{\partial u}\bigg{(}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n- i}\right)\!\bigg{)}\bigg{]}\,\left(\!\prod_{1\leq j\neq i\leq n-3}\!\sin\! \left(u-v_{n-j}+\eta\sigma^{z}_{n-j}\right)\!\right)\bigg{]}+\cdots\] \[\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u}\bigg{[} \!\sum_{1\leq i\leq n-3}\!\bigg{[}\!\frac{\partial}{\partial u^{\prime}}\! \bigg{(}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{]} \,\left(\!\prod_{1\leq j\neq i\leq n-3}\!\sin\!\left(u-v_{n-j}+\eta\sigma^{z}_{n- j}\right)\!\right)\bigg{]}\,\bigg{]}\enspace,\] corresponding to the Poisson bracket between \(B_{3}\big{(}u^{\prime}\big{)}\) and the product of sine functions, \[B_{3}\big{(}u\big{)}\bigg{[}\frac{\partial A_{3}\big{(}u^{\prime} \big{)}}{\partial u^{\prime}}\bigg{[}\!\sum_{1\leq i\leq n-3}\!\bigg{[}\! 
\bigg{(}\!\frac{\partial}{\partial u}\bigg{(}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n- i}\right)\!\bigg{)}\bigg{]}\,\left(\!\prod_{1\leq j\neq i\leq n-3}\!\sin\!\left(u-v_{n-j}+ \eta\sigma^{z}_{n-j}\right)\!\right)\bigg{]}+\cdots\] \[\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u}\bigg{[} \!\sum_{1\leq i\leq n-3}\!\bigg{[}\!\bigg{(}\!\frac{\partial}{\partial u^{\prime}} \bigg{(}\!\sin\!\left(u-v_{n-i}+\eta\sigma^{z}_{n-i}\right)\!\bigg{)}\bigg{]}\, \left(\!\prod_{1\leq j\neq i\leq n-3}\!\sin\!\left(u-v_{n-j}+\eta\sigma^{z}_{n- j}\right)\!\right)\bigg{]}\,\bigg{]}\enspace,\] corresponding to the Poisson bracket between \(A_{3}\big{(}u^{\prime}\big{)}\) and the product of sine functions, \[A_{3}\big{(}u\big{)}\bigg{[} \bigg{(}\frac{\partial}{\partial u^{\prime}}B_{3}\big{(}u^{\prime} \big{)}\bigg{)}\bigg{[}\sum_{1\leq i\leq n-3}\bigg{[}\bigg{(}\frac{\partial}{ \partial u}\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)} \bigg{)}\bigg{)}\ \bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{ z}\big{)}\bigg{)}\bigg{]}\ \bigg{]}+\cdots\] \[\bigg{(}\frac{\partial}{\partial u}B_{3}\big{(}u^{\prime}\big{)} \bigg{)}\bigg{[}\sum_{1\leq i\leq n-3}\bigg{[}\bigg{(}\frac{\partial}{\partial u ^{\prime}}\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)} \bigg{)}\bigg{)}\ \bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{ z}\big{)}\bigg{)}\bigg{]}\ \bigg{]}\ \bigg{]}\ \bigg{]}\,\] corresponding to the Poisson bracket between \(B_{3}\big{(}u^{\prime}\big{)}\) and the product of sine functions, and, \[A_{3}\big{(}u\big{)}\bigg{[} \bigg{(}\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{ \prime}\big{)}\bigg{)}\bigg{[}\sum_{1\leq i\leq n-3}\bigg{[}\bigg{(}\frac{ \partial}{\partial u}\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)} \bigg{)}\bigg{)}\bigg{)}\ \bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{ z}\big{)}\bigg{)}\bigg{]}\ \bigg{]}+\cdots\] \[\bigg{(}\frac{\partial}{\partial u}A_{3}\big{(}u^{\prime}\big{)} \bigg{)}\bigg{[}\sum_{1\leq i\leq n-3}\bigg{[}\bigg{(}\frac{\partial}{\partial u ^{\prime}}\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)} \bigg{)}\bigg{)}\ \bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{ z}\big{)}\bigg{)}\bigg{]}\ \bigg{]}\ \bigg{]}\ \bigg{]}\ \,\] corresponding to the Poisson bracket between \(A_{3}\big{(}u^{\prime}\big{)}\) and the product of sine functions. 
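Schematically, each of the four evaluations above has the same form: writing \(X\in\{A_{3},B_{3}\}\) for the prefactor and \(Y\in\{A_{3},B_{3}\}\) for the operator inside the bracket, each contributes, \[X\big(u\big)\Big[\frac{\partial Y\big(u^{\prime}\big)}{\partial u^{\prime}}\,\frac{\partial}{\partial u}\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\frac{\partial Y\big(u^{\prime}\big)}{\partial u}\,\frac{\partial}{\partial u^{\prime}}\Big(\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\ ,\] so only the pair \(\big(X,Y\big)\) varies from one bracket to the next.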
From each of the four brackets above, for differentiation with respect to \(u^{\prime}\), and then with respect to \(u\), one can write, \[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\sum_{1\leq i\leq n-3}\Big[\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\ \Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma_{n-j}^{z}\big)\Big)\Big]\Big)\ ,\] as, \[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\Big[\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big]\Big)\equiv\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\cos\big(u-v_{n-1}+\eta\sigma_{n-1}^{z}\big)\prod_{2\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)+\cdots+\cos\big(u-v_{n-(n-3)}+\eta\sigma_{n-(n-3)}^{z}\big)\prod_{1\leq i\leq n-4}\sin\big(u-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\Big)\ ,\] so that, together with the analogous differentiation with respect to \(u^{\prime}\), the first bracket can be expressed with, \[\sum_{1\leq i\leq n-3}\Big[\ \Big[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ .\] The observation that the derivative of the product of sine functions, \[\frac{\partial}{\partial u}\Big[\prod_{1\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big]\ ,\] equals, \[\cos\big(u-v_{n-1}+\eta\sigma^{z}_{n-1}\big)\prod_{2\leq i\leq n-3}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)+\cdots+\Big(\prod_{1\leq i\leq n-4}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\cos\big(u-v_{n-(n-3)}+\eta\sigma^{z}_{n-(n-3)}\big)\ ,\] comes from the fact that the sum of products above can be expressed as, \[\sum_{1\leq i\leq n-3}\Big[\ \Big[\frac{\partial}{\partial u}\Big(\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\ \Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ .\] For the three remaining Poisson brackets, along similar lines, one obtains, \[\sum_{1\leq i\leq n-3}\Big[\ \Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ ,\] for the first of them, \[A_{3}\big(u\big)\Big[\ \Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\sum_{1\leq i\leq n-3}\Big[\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\ \Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\Big)+\cdots\] \[\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\sum_{1\leq i\leq n-3}\Big[\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\ \Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\Big)\ \Big]\ ,\] for the second, and, \[\sum_{1\leq i\leq n-3}\Big[\ \Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ ,\] for the third, in which the derivatives of \(A_{3}\), or of \(B_{3}\), are combined with the summation of the \(i\)th derivative of the sine functions and the remaining product of sine functions.
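As a minimal illustration of this combination, in the case \(n-3=2\) the first of the remaining brackets reduces to two explicit terms, \[\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-1}+\eta\sigma^{z}_{n-1}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-1}+\eta\sigma^{z}_{n-1}\big)\Big)\Big]\sin\big(u-v_{n-2}+\eta\sigma^{z}_{n-2}\big)+\cdots\] \[\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-2}+\eta\sigma^{z}_{n-2}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-2}+\eta\sigma^{z}_{n-2}\big)\Big)\Big]\sin\big(u-v_{n-1}+\eta\sigma^{z}_{n-1}\big)\ .\]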
Hence, the terms from each of the four Poisson brackets are equal to, \[B_{3}\big(u\big)\sum_{1\leq i\leq n-3}\Big[\Big[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]+\cdots\] \[B_{3}\big(u\big)\sum_{1\leq i\leq n-3}\Big[\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]+\cdots\] \[A_{3}\big(u\big)\sum_{1\leq i\leq n-3}\Big[\Big[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]+\cdots\] \[A_{3}\big(u\big)\sum_{1\leq i\leq n-3}\Big[\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u-v_{n-i}+\eta\sigma^{z}_{n-i}\big)\Big)\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\sin\big(u-v_{n-j}+\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ ,\] with prefactor, \[\big(\sin\big(2\eta\big)\big)^{n-3}\Big(\prod_{1\leq i\leq n-3}\sigma^{-,+}_{n-i}\Big)\ .\] Excluding the prefactor above, the four Poisson brackets can be combined under a single summation which, from the expressions introduced in **Lemma 4**, can be expressed as, \[\sum_{1\leq i\leq n-3}\mathscr{A}_{2}\Big[B_{3}\big(u\big)\Big[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+B_{3}\big(u\big)\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+\cdots\] \[A_{3}\big(u\big)\Big[\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+A_{3}\big(u\big)\Big[\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]\ \Big]\ .\] Grouping together like terms from the above summation implies, \[\sum_{1\leq i\leq n-3}\mathscr{A}_{2}\Big[\ \Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+\cdots\] \[\Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]\ \Big]\ .\] Altogether, the terms from all nonzero Poisson brackets read, \[-\Big(\big\{A_{3}\big(u\big),B_{3}\big(u^{\prime}\big)\big\}+\big\{B_{3}\big(u\big),A_{3}\big(u^{\prime}\big)\big\}\Big)\big(\sin\big(2\eta\big)\big)^{n-3}\Big(\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\Big)+\cdots\] \[\sum_{1\leq i\leq n-3}\mathscr{A}_{2}\Big[\ \Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+\Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]\ \Big]\ .\] Furthermore, applying expressions for the Poisson brackets between \(A_{3}\big(u\big)\), \(B_{3}\big(u^{\prime}\big)\), and for the Poisson bracket between \(B_{3}\big(u\big)\), \(A_{3}\big(u^{\prime}\big)\), from previous terms in the first relation, approximately yields, \[\sum_{\mathscr{P}}\Big\{\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{A}_{1}^{\prime}\Big\}\approx-\Big[\big(\sin\big(2\eta\big)\big)^{n-3}\Big(\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\Big)\Big]\Big[\frac{A_{3}\big(u\big)B_{3}\big(u^{\prime}\big)}{u-u^{\prime}}+\frac{B_{3}\big(u\big)A_{3}\big(u^{\prime}\big)}{u-u^{\prime}}\Big]+\cdots\] \[\sum_{1\leq i\leq n-3}\mathscr{A}_{2}\Big[\ \Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}B_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]+\Big[A_{3}\big(u\big)\Big(\frac{\partial}{\partial u^{\prime}}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u}\big(\mathscr{C}_{2}\big)_{i}\Big)+B_{3}\big(u\big)\Big(\frac{\partial}{\partial u}A_{3}\big(u^{\prime}\big)\Big)\Big(\frac{\partial}{\partial u^{\prime}}\big(\mathscr{C}_{2}\big)_{i}\Big)\Big]\ \Big]\ ,\] from which we conclude the argument. \(\qed\)

#### 2.4.5 Fifth Poisson bracket, \(\mathcal{P}_{5}\)

**Lemma 10** (_evaluating the fifth Poisson bracket in the first relation_). The fifth term, \(\mathcal{P}_{5}\), approximately equals, \[-\mathscr{C}_{2}\Big[\frac{B_{3}\big(u^{\prime}\big)A_{3}\big(u\big)}{u^{\prime}-u}+\frac{B_{3}\big(u\big)A_{3}\big(u^{\prime}\big)}{u-u^{\prime}}\Big]-\big(B_{3}\big(u\big)+A_{3}\big(u\big)\big)\Big[\sum_{1\leq i\leq n-3}\Big[\frac{\partial B_{3}\big(u^{\prime}\big)}{\partial u^{\prime}}\frac{\partial\big(\mathscr{C}_{2}\big)_{i}}{\partial u}-\frac{\partial B_{3}\big(u^{\prime}\big)}{\partial u}\frac{\partial\big(\mathscr{C}_{2}\big)_{i}}{\partial u^{\prime}}\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\big(\mathscr{C}_{2}\big)_{j}\Big)\Big]-\cdots\] \[\big(B_{3}\big(u\big)+A_{3}\big(u\big)\big)\Big[\sum_{1\leq i\leq n-3}\Big[\frac{\partial A_{3}\big(u^{\prime}\big)}{\partial u^{\prime}}\frac{\partial\big(\mathscr{C}_{2}\big)_{i}}{\partial u}-\frac{\partial A_{3}\big(u^{\prime}\big)}{\partial u}\frac{\partial\big(\mathscr{C}_{2}\big)_{i}}{\partial u^{\prime}}\Big]\Big(\prod_{1\leq j\neq i\leq n-3}\big(\mathscr{C}_{2}\big)_{j}\Big)\Big]\ .\]
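For ease of reading, we recall the shorthand from **Lemma 4** (as employed at the end of the previous proof), under which, \[\big(\mathscr{C}_{1}\big)_{j}\equiv\sigma^{-,+}_{n-j}\ ,\qquad\big(\mathscr{C}_{2}\big)_{i}\equiv\sin\big(u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\ ,\] with the spectral parameter, \(u\) or \(u^{\prime}\), in the sine factor determined by context, so that products of these symbols abbreviate the corresponding products of spin and sine factors appearing in the proof below.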
argument, \[\sum_{\mathscr{P}}\biggl{[}\ \biggl{(}\prod_{1\leq i\leq n-3}\sin \bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\biggl{[}-\bigl{\{} \mathscr{P}_{2},\mathscr{P}_{1}\bigr{\}}\biggl{(}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}-\cdots\] \[\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v _{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},\mathscr{P}_{1}\biggr{\}} \mathscr{P}_{2}\biggr{]}-\cdots\] \[\mathscr{P}_{1}\biggl{[}\biggl{\{}\mathscr{P}_{2},\biggl{(}\prod _{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)} \biggr{\}}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\biggr{)}+\cdots\] \[\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v _{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},\biggl{(}\prod_{1\leq i\leq n-3} \sin\bigl{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\biggr{\}} \mathscr{P}_{2}\biggr{]}\ \biggr{]}\ \,\] after anticommuting the first Poisson bracket, \[\biggl{\{}\mathscr{P}_{1},\mathscr{P}_{2}\biggl{(}\prod_{1\leq i \leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)} \biggr{\}}\ \,\] and the second Poisson bracket, \[\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma_{n -i}^{z}\bigr{)}\biggr{)},\mathscr{P}_{2}\biggl{(}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\biggr{\}}\ \.\] Writing out each Poisson bracket gives, \[-\sum_{\mathscr{P}}\bigl{\{}\mathscr{P}_{2},\mathscr{P}_{1} \biggl{\}}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta \sigma_{n-i}^{z}\bigr{)}\biggr{)}=-\biggl{(}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\biggl{(}\bigl{ }\bigl{.}\bigl{.}\bigl{\{}B_{3}\bigl{(}u^{\prime}\bigr{)},A_{3}\bigl{(}u\bigr{)} \bigr{\}}+\cdots\] \[\bigl{.}\bigl{\{}B_{3}\bigl{(}u\bigr{)},A_{3}\bigl{(}u^{\prime} \bigr{)}\bigr{\}}\biggr{)}\ \,\] \[-\sum_{\mathscr{P}}\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin \bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},\mathscr{P}_{ 1}\biggr{\}}\mathscr{P}_{2}=-\mathscr{P}_{2}\biggl{(}\biggl{\{}\biggl{(}\prod_{1 \leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\bigr{)} \biggr{)},B_{3}\bigl{(}u\bigr{)}\biggr{\}}+\cdots\] \[\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v _{n-i}+\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},A_{3}\bigl{(}u\bigr{)}\biggr{\}} \biggr{)}\ \,\] for the first two terms, while the third term, \[-\mathscr{P}_{1}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}{\sum_{\mathscr{P}}}\biggl{\{} \mathscr{P}_{2},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta \sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}}\] is equivalent to, \[\mathscr{P}_{1}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{ \prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggl{[}\biggl{\{}B_{3} \bigl{(}u^{\prime}\bigr{)},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i} +\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}}+\cdots\] \[\biggl{\{}A_{3}\bigl{(}u^{\prime}\bigr{)},\biggl{(}\prod_{1\leq i \leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}} \biggr{]}\ \.\] The fourth term is, \[\mathscr{P}_{2}\biggl{\{}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{ \prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)},\biggl{(}\prod_{1\leq i \leq 
n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}} \ \.\] For the first Poisson bracket, one approximately has, \[-\biggl{[}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma^{z} _{n-i}\bigr{)}\biggr{]}\biggl{[}\frac{B_{3}\bigl{(}u^{\prime}\bigr{)}A_{3} \bigl{(}u\bigr{)}}{u^{\prime}-u}+\frac{B_{3}\bigl{(}u\bigr{)}A_{3}\bigl{(}u^{ \prime}\bigr{)}}{u-u^{\prime}}\biggr{]}\ \.\] After taking the summation of each bracket over \(\mathscr{P}\), observe, from further rearrangement of the third Poisson bracket, \[-\mathscr{P}_{1}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i} +\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}{\sum_{\mathscr{P}}}\biggl{\{} \mathscr{P}_{2},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta \sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}}\ \,\] is equivalent to the terms, \[-B_{3}\bigl{(}u\bigr{)}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}- v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggl{\{}B_{3}\bigl{(}u^{\prime} \bigr{)},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta\sigma^{z} _{n-i}\bigr{)}\biggr{)}\biggr{\}}\ \,\] \[-A_{3}\bigl{(}u\bigr{)}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u ^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggl{\{}A_{3}\bigl{(} u^{\prime}\bigr{)},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta \sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}}\ \,\] \[-B_{3}\bigl{(}u\bigr{)}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(} u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggl{\{}A_{3}\bigl{(} u^{\prime}\bigr{)},\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v_{n-i}+\eta \sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{\}}\ \.\] Term by term, evaluating the bracket for, \[\biggl{(}\frac{\partial}{\partial u^{\prime}}B_{3}\bigl{(}u^{\prime}\bigr{)} \biggr{)}\biggl{[}\frac{\partial}{\partial u}\biggl{(}\prod_{1\leq i\leq n-3} \sin\bigl{(}u-v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{]}-\biggl{(} \frac{\partial}{\partial u}B_{3}\bigl{(}u\bigr{)}\biggr{)}\biggl{[}\frac{ \partial}{\partial u^{\prime}}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u-v _{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\biggr{]}\ \,\] with prefactor, \[-B_{3}\bigl{(}u\bigr{)}\biggl{(}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}- v_{n-i}+\eta\sigma^{z}_{n-i}\bigr{)}\biggr{)}\ \,\] corresponding to the first bracket, can be expressed with, \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u} \bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1\leq j \neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)}\right)\bigg{]} -\cdots\] \[\left(\frac{\partial}{\partial u}B_{3}\big{(}u^{\prime}\big{)} \right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u^{\prime} }\bigg{(}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)} \prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-j}+\eta\sigma_{n-j}^ {z}\big{)}\bigg{)}\right]\,\] from the fact that the derivative of the product of sine functions equals, \[\frac{\partial}{\partial u}\bigg{(}\prod_{1\leq i\leq n-3}\sin \!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}=\frac{\partial}{ \partial u}\bigg{(}\sin\!\big{(}u-v_{n-1}+\eta\sigma_{n-1}^{z}\big{)}\times \cdots\times\sin\!\big{(}u-v_{n-(n-3)}+\eta\sigma_{n-(n-3)}^{z}\big{)}\bigg{)}\] \[=\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u}\bigg{(} \sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1\leq j\neq i \leq 
n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)}\right)\.\] Evaluating the bracket for, \[\left\{A_{3}\big{(}u^{\prime}\big{)},\bigg{(}\prod_{1\leq i\leq n-3}\sin\! \big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\right\}\,\] with prefactor, \[-A_{3}\big{(}u\big{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v _{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \,\] corresponding to the second bracket, can similarly be expressed with, \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1 \leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)}\right) \bigg{]}-\cdots\] \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1\leq j \neq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)} \bigg{)}\bigg{]}\ \,\] with prefactor, \[-A_{3}\big{(}u\big{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v _{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \,\] corresponding to the third bracket. Along similar lines, the third and fourth brackets can be respectively expressed with, \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1 \leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)}\bigg{)} \bigg{]}-\cdots\] \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1 \leq j\neq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-j}+\eta\sigma_{n-j}^{z} \big{)}\bigg{)}\bigg{]}\ \,\] and with, \[\left(\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\prod_{1 \leq j\neq i\leq n-3}\sin\!\big{(}u-v_{n-j}+\eta\sigma_{n-j}^{z}\big{)}\bigg{)} \bigg{]}-\cdots\] \[\left(\frac{\partial}{\partial u}A_{3}\big{(}u^{\prime} \big{)}\right)\bigg{[}\sum_{1\leq i\leq n-3}\left(\frac{\partial}{\partial u }\bigg{(}\sin\!\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)} \prod_{1\leq j\neq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-j}+\eta\sigma_{n-j}^ {z}\big{)}\bigg{)}\bigg{]}\ \,\] with prefactors, \[-A_{3}\big{(}u\big{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+ \eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \,\] and, \[-B_{3}\big{(}u\big{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i} +\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \.\] Grouping together like terms, from the each bracket, from the four above, can be expressed as, either, \[\bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-j} +\eta\sigma_{n-j}^{z}\big{)}\bigg{)}\ \,\] or as, \[\bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-j}+ \eta\sigma_{n-j}^{z}\big{)}\bigg{)}\ \,\] from which approximating, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2} \mathscr{A}_{2}^{\prime}\bigg{\}}\ \,\] yields the desired expression, \[-\bigg{[}\prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+ 
\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u^{\prime}\big{)} A_{3}\big{(}u\big{)}}{u^{\prime}-u}+\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{ \prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\big{)}\bigg{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z} \big{)}\bigg{)}\times\cdots\] \[\bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n- j}+\eta\sigma_{n-j}^{z}\big{)}\bigg{)}-\cdots\] \[\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\bigg{)}\bigg{(} \prod_{1\leq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z} \big{)}\bigg{)}\times\cdots\] \[\bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n- j}+\eta\sigma_{n-j}^{z}\big{)}\bigg{)}\times\cdots\] \[\bigg{(}\prod_{1\leq j\neq i\leq n-3}\sin\bigl{(}u^{\prime}-v_{n- j}+\eta\sigma_{n-j}^{z}\big{)}\bigg{)}\ \,\] from which we conclude the argument. #### 2.4.6 Sixth Poisson bracket, \(\mathcal{P}_{6}\) **Lemma 11**: (_evaluating the sixth Poisson bracket in the first relation_). The sixth term, \(\mathcal{P}_{6}\), approximately equals, \[\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\big{(} \mathscr{C}_{2}\big{)}_{i}\bigg{)}\ \big{(}\sin\!\left(2\eta\right)\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j \leq n^{\prime}}\big{(}\mathscr{C}_{1}\big{)}_{j}\bigg{)}\bigg{]}\ \bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3} \big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\] \[\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\big{(} \mathscr{C}_{2}\big{)}_{i}\bigg{)}\ \big{(}\sin\!\left(2\eta\right)\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j \leq n^{\prime}}\big{(}\mathscr{C}_{1}\big{)}_{j}\bigg{)}\bigg{]}\ \bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3} \big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[2\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\bigg{)}\bigg{[} \sum_{1\leq j\leq n^{\prime}}\big{(}\!\sin\!\left(2\eta\right)\big{)}^{n^{ \prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\big{(}\mathscr{C}_{1}\big{)} _{j}\bigg{)}\bigg{]}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\frac{\partial\big{(}\mathscr{C }_{2}\big{)}_{i}}{\partial u^{\prime}}\bigg{)}\times\cdots\] \[\bigg{(}\prod_{1\leq j\neq i\leq m}\big{(}\mathscr{C}_{2}\big{)}_{j }\bigg{)}\bigg{(}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}+\frac{ \partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{)}+\bigg{(}\frac{\partial \big{(}\mathscr{C}_{2}\big{)}_{i}}{\partial u}\bigg{)}\bigg{(}\prod_{1\leq j \neq i\leq m}\big{(}\mathscr{C}_{2}\big{)}_{j}\bigg{)}\bigg{(}\frac{\partial A _{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}+\frac{\partial B_{3} \big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\bigg{)}\bigg{]}\ \bigg{]}\ \bigg{]}\ \.\] _Proof of Lemma 11._ The sixth term, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2} \mathscr{A}_{3}^{\prime}\bigg{\}}\ \,\] is equivalent to, \[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2}\bigg{[} \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\!\left( u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\bigg{)}\ \big{(}\sin\!\left(2\eta\right)\big{)}^{n^{\prime}-1}\times\cdots\] \[\bigg{(}\prod_{1\leq j\leq 
n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{]}\ \bigg{]}\bigg{\}}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\left(u-v_{n-i}+\eta \sigma_{n-i}^{z}\right)\bigg{)}+\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\! \left(u-v_{n-i}+\eta\sigma_{n-i}^{z}\right)\bigg{)},\mathscr{P}_{2}\bigg{[} \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\!\left( u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\bigg{)}\ \left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\times\cdots\] \[\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)} \bigg{]}\ \bigg{]}\bigg{\}}\mathscr{P}_{1}\ \,\] from an application of Leibniz' rule. Applying Leibniz' rule a second time to each bracket in the expression above gives, \[-\sum_{\mathscr{P}}\Big{[}\ \{\mathscr{P}_{2},\mathscr{P}_{1}\}\ \sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \left[\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{\prime}-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\right)\,\left(\sin\!\left(2\eta\right)\right)^{n^ {\prime}-1}\times\cdots\] \[\Big{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\Big{)}\Big{]} \Big{)}\] \[\Big{\{}\sum_{1\leq i\leq m}\atop{1\leq j\leq n^{\prime}\atop m+n^{\prime}=n-3}} \ \left[\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n- i}^{z}\right)\Big{)}\,\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\! \bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\Big{)}\right], \mathscr{P}_{1}\Big{\}}\mathscr{P}_{2}\Big{]}\ -\cdots\] \[\sum_{\mathscr{P}}\Big{[}\ \Big{\{}\mathscr{P}_{2},\Big{(}\prod_{1 \leq i\leq n-3}\sin\!\left(u-v_{n-i}+\eta\sigma_{n-i}^{z}\right)\Big{)}\Big{\}} \Big{[}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \left[\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{\prime}-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\Big{)}\,\left(\sin\!\left(2\eta\right)\right)^{n^{ \prime}-1}\times\cdots\right.\] \[\Big{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\Big{)} \Big{]}\ \Big{]}\ \Big{]}\ \Big{]}\,\] after anticommuting terms in, \[\Big{\{}\mathscr{P}_{1},\mathscr{P}_{2}\Big{[}\ \sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \left[\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{\prime}-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\Big{)}\,\left(\sin\!\left(2\eta\right)\right)^{n^ {\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\Big{)} \right]\Big{]}\Big{\}}\,\] and in, \[\Big{\{}\Big{(}\prod_{1\leq i\leq n-3}\sin\!\left(u-v_{n-i}+\eta \sigma_{n-i}^{z}\right)\Big{)},\mathscr{P}_{2}\Big{[}\ \sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \left[\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{\prime}-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\Big{)}\,\left(\sin\!\left(2\eta\right)\right)^{n^{ \prime}-1}\times\cdots\right.\] \[\left.\left.\left(\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+} \right)\right]\Big{]}\Big{\}}\.\] Writing out each bracket individually gives, \[\sum_{\mathscr{P}}\big{\{}\mathscr{P}_{1},\mathscr{P}_{1}\big{\}}=\big{\{}A_{3 }\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}B_{3}\big{(}u \big{)},A_{3}\big{(}u^{\prime}\big{)}\big{\}}\ \,\] for the first term, which with the corresponding prefactor, equals, \[\Big{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \left[\Big{(}\prod_{1\leq i\leq 
m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\Big{)}\,\left(\sin\!\left(2\eta \right)\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n- j}^{-,+}\Big{)}\right]\,\Big{)}\Big{(}\big{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{ \prime}\big{)}\big{\}}+\cdots\] \[\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{\prime}\big{)}\big{\}} \Big{)}\ \,\] while for the remaining brackets, \[-\mathscr{P}_{2}\underset{\mathscr{P}}{\sum}\bigg{\{}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{[}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u ^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\Big{)}\left(\sin\!\left(2\eta \right)\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n -j}^{-,+}\bigg{)}\bigg{]},\mathscr{P}_{1}\bigg{\}}\] \[=-\mathscr{P}_{2}\underset{\mathscr{P}}{\sum}\bigg{\{}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\Big{)},\mathscr{P}_{1}\bigg{\}} \underset{1\leq j\leq n^{\prime}}{\sum}\bigg{(}\!\left(\sin\!\left(2\eta \right)\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n -j}^{-,+}\bigg{)}\bigg{)}-\cdots\] \[\sum_{\mathscr{P}}\bigg{\{}\Big{(}\sin\!\left(2\eta\right)\! \Big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+ }\bigg{)}\bigg{)},\mathscr{P}_{1}\bigg{\}}\times\cdots\] \[\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\!\Big{)}\bigg{)}\] \[=-\mathscr{P}_{2}\underset{\mathscr{P}}{\sum}\bigg{\{}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\!\Big{)},\mathscr{P}_{1}\bigg{\}} \underset{1\leq j\leq n^{\prime}}{\sum}\bigg{(}\!\left(\sin\!\left(2\eta\right) \right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{ -,+}\bigg{)}\bigg{)}\enspace.\] Altogether, one has the remaining brackets, \[\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\Big{(}\prod_{1\leq i\leq m}\sin\!\left( u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\Big{)}\left(\sin\!\left(2\eta \right)\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n -j}^{-,+}\bigg{)}\bigg{]}\ \bigg{)}\bigg{)}\Big{(}\big{\{}A_{3}\!\left(u\right),B_{3}\!\left(u^{ \prime}\right)\}+\cdots\] \[\big{\{}B_{3}\!\left(u\right),A_{3}\!\left(u^{\prime}\right)\} \bigg{)}-\cdots\] \[\mathscr{P}_{2}\underset{\mathscr{P}}{\sum}\bigg{\{}\sum_{ \begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\!\Big{)},\mathscr{P}_{1}\bigg{\}} \times\cdots\] \[\sum_{1\leq j\leq n^{\prime}}\!\bigg{(}\!\left(\sin\!\left(2\eta \right)\right)^{n^{\prime}-1}\!\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n -j}^{-,+}\bigg{)}\bigg{)}\enspace.\] The second Poisson bracket is equivalent to, \[\bigg{\{}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{(}\prod_{1\leq i\leq m}\sin\!\left(u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\!\Big{)},B_{3}\!\left(u\right) \bigg{\}}+\bigg{\{}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq 
n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\bigg{)},A_{3}\big(u\big)\bigg{\}}\ ,\] with prefactor, \[-\mathscr{P}_{2}\sum_{1\leq j\leq n^{\prime}}\Big(\big(\sin\big(2\eta\big)\big)^{n^{\prime}-1}\Big(\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\Big)\Big)\ .\] The two brackets, from the summation over \(\mathscr{P}_{2}\), can each be analyzed from the following six derivatives, \[\frac{\partial}{\partial u^{\prime}}\Big[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big(\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big]\ ,\] \[\frac{\partial}{\partial u}\Big[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big(\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big]\ ,\] \[\frac{\partial}{\partial u}\big[B_{3}\big(u\big)\big]\ ,\qquad\frac{\partial}{\partial u}\big[A_{3}\big(u\big)\big]\ ,\qquad\frac{\partial}{\partial u^{\prime}}\big[A_{3}\big(u^{\prime}\big)\big]\ ,\qquad\frac{\partial}{\partial u^{\prime}}\big[B_{3}\big(u^{\prime}\big)\big]\ ,\] corresponding to the first term. Observe, for the first two derivatives, with respect to \(u^{\prime}\) and to \(u\) above, that, \[\frac{\partial}{\partial u^{\prime}}\Big[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big(\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big]=\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big[\frac{\partial}{\partial u^{\prime}}\Big(\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big]=\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big(\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\prod_{1\leq j\neq i\leq m}\sin\big(u^{\prime}-v_{n-j}\pm\eta\sigma^{z}_{n-j}\big)\Big)\ ,\] from the fact that differentiating the product with respect to \(u^{\prime}\), \[\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)=\sin\big(u^{\prime}-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big)\times\cdots\times\sin\big(u^{\prime}-v_{n-m}\pm\eta\sigma^{z}_{n-m}\big)\ ,\] gives, \[\frac{\partial}{\partial u^{\prime}}\Big[\sin\big(u^{\prime}-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big)\times\cdots\times\sin\big(u^{\prime}-v_{n-m}\pm\eta\sigma^{z}_{n-m}\big)\Big]=\cos\big(u^{\prime}-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big)\prod_{2\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)+\cdots+\Big(\prod_{1\leq i\leq m-1}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\cos\big(u^{\prime}-v_{n-m}\pm\eta\sigma^{z}_{n-m}\big)\ ,\] which can be expressed with the summation, \[\sum_{1\leq i\leq m}\Big[\Big(\frac{\partial}{\partial u^{\prime}}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big(\prod_{1\leq j\neq i\leq m}\sin\big(u^{\prime}-v_{n-j}\pm\eta\sigma^{z}_{n-j}\big)\Big)\Big]\ .\] For the other term, \[\frac{\partial}{\partial u}\Big[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big(\prod_{1\leq i\leq m}\sin\big(u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big)\Big)\Big]\ ,\] one similarly obtains, \[\sum_{1\leq i\leq
m}\left[\left(\frac{\partial}{\partial u}\text{sin}\big{(}u^{ \prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\right)\left(\prod_{1\leq j\neq i \leq m}\text{sin}\big{(}u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\big{)}\right) \right]\,\] corresponding to the second term. Altogether, the Poisson bracket for each term takes the form, \[-\mathscr{P}_{2}\sum_{1\leq j\leq n^{\prime}}\left(\left(\text{ sin}\big{(}2\eta\big{)}\right)^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\bigg{)}\right)\bigg{[}\ \bigg{[}\ \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\left(\left(\frac{\partial}{\partial u^{\prime }}\text{sin}\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\right) \times\cdots\right.\] \[\ \[-\mathscr{P}_{2}\!\!\!\sum_{\begin{subarray}{c}1\leq j\leq n^{\prime} \end{subarray}}\left(\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\! \left(\,\prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}}\sigma_{ n-j}^{-,+}\right)\right)\!\left[\,\,\,\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left(\left(\frac{\partial}{\partial u^{ \prime}}\!\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\right) \times\cdots\right.\right.\] \[\left.\left.\left(\prod_{\begin{subarray}{c}1\leq j\neq i\leq m \end{subarray}}\sin\!\left(u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\right) \right)\!\frac{\partial A_{3}\!\left(u\right)}{\partial u}+\cdots\right.\] \[\left.\left(\frac{\partial}{\partial u}\!\sin\!\left(u-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\right)\times\cdots\right.\] \[\left.\left(\prod_{\begin{subarray}{c}1\leq j\neq i\leq m \end{subarray}}\sin\!\left(u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\right) \right)\!\frac{\partial A_{3}\!\left(u^{\prime}\right)}{\partial u^{\prime}} \right)\,\,\right]\,\,\,.\] Grouping together like terms from the two brackets above yields, \[-\mathscr{P}_{2}\!\!\!\sum_{\begin{subarray}{c}1\leq j\leq n^{ \prime}\end{subarray}}\!\!\left(\left(\sin\!\left(2\eta\right)\right)^{n^{ \prime}-1}\!\left(\,\prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}} \sigma_{n-j}^{-,+}\right)\right)\!\left[\,\,\,\sum_{\begin{subarray}{c}1\leq i \leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left(\left(\frac{\partial}{\partial u^{ \prime}}\!\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\right) \times\cdots\right.\] \[\left.\left(\prod_{\begin{subarray}{c}1\leq j\neq i\leq m \end{subarray}}\sin\!\left(u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\right) \right)\!\left(\frac{\partial A_{3}\!\left(u\right)}{\partial u}+\frac{ \partial B_{3}\!\left(u\right)}{\partial u}\right)+\cdots\right.\] \[\left.\left(\frac{\partial}{\partial u}\!\sin\!\left(u-v_{n-i}\pm \eta\sigma_{n-i}^{z}\right)\right)\times\cdots\right.\] \[\left.\left(\prod_{\begin{subarray}{c}1\leq j\neq i\leq m \end{subarray}}\sin\!\left(u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\right) \right)\!\left(\frac{\partial A_{3}\!\left(u^{\prime}\right)}{\partial u^{ \prime}}+\frac{\partial B_{3}\!\left(u^{\prime}\right)}{\partial u^{\prime}} \right)\right)\,\right]\,\,.\] For the two remaining Poisson brackets appearing before the second bracket, namely, \[\left(\,\,\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left[\left(\,\prod_{\begin{subarray}{c}1\leq i \leq m\end{subarray}}\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z} 
\right)\right)\,\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\!\left(\, \prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}}\sigma_{n-j}^ {-,+}\right)\right]\,\right)\!\left\{A_{3}\!\left(u\right),B_{3}\!\left(u^{ \prime}\right)\right\}\,\,\,,\] and, \[\left(\,\,\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left[\left(\,\prod_{\begin{subarray}{c}1\leq i \leq m\end{subarray}}\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z} \right)\right)\,\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\!\left(\, \prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}}\sigma_{n-j}^ {-,+}\right)\right]\,\right)\!\left\{B_{3}\!\left(u\right),A_{3}\!\left(u^{ \prime}\right)\right\}\,\,\,,\] one has, approximately, \[\left[\,\,\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left[\left(\,\prod_{\begin{subarray}{c}1\leq i \leq m\end{subarray}}\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z} \right)\right)\,\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\!\left(\, \prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}}\sigma_{n-j}^ {-,+}\right)\right]\,\right]\left[\frac{A_{3}\!\left(u\right)B_{3}\!\left(u^{ \prime}\right)}{u-u^{\prime}}\right]\,\,\,,\] corresponding to the first term, and, approximately, \[\left[\,\,\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\,\left[\left(\,\prod_{\begin{subarray}{c}1\leq i \leq m\end{subarray}}\sin\!\left(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z} \right)\right)\,\left(\sin\!\left(2\eta\right)\right)^{n^{\prime}-1}\!\left(\, \prod_{\begin{subarray}{c}1\leq j\leq n^{\prime}\end{subarray}}\sigma_{n-j}^ {-,+}\right)\right]\,\right]\left[\frac{B_{3}\!\left(u\right)A_{3}\!\left(u^{ \prime}\right)}{u-u^{\prime}}\right]\,\,\,,\] corresponding to the second term. Hence, \[\sum_{\mathscr{P}}\biggl{\{}\mathscr{P}_{1}\mathscr{A}_{2},\mathscr{P}_{2} \mathscr{A}_{q}^{\prime}\biggr{\}}\approx\biggl{[}\sum_{\begin{array}{c}1\leq i \leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{array}}\biggl{[}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u ^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\,\left(\sin\bigl{(}2 \eta\bigr{)}\right)^{n^{\prime}-1}\biggr{(}\prod_{1\leq j\leq n^{\prime}} \sigma_{n-j}^{-,+}\biggr{)}\biggr{]}\,\biggr{]}\,\biggr{]}\times\cdots\] \[\biggl{[}\frac{A_{3}\bigl{(}u\bigr{)}B_{3}\bigl{(}u^{\prime}\bigr{)}}{u-u^{ \prime}}\biggr{]}+\cdots\] \[\biggl{[}\frac{B_{3}\bigl{(}u\bigr{)}A_{3}\bigl{(}u^{\prime}\bigr{)}}{u-u^{ \prime}}\biggr{]}-\cdots\] \[2\bigl{(}B_{3}\bigl{(}u\bigr{)}+A_{3}\bigl{(}u\bigr{)}\bigr{)}\sum_{1\leq j \leq n^{\prime}}\biggl{(}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime }-1}\biggr{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)} \biggl{[}\sum_{\begin{array}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{array}}\biggl{(}\biggl{(}\frac{\partial}{\partial u^{ \prime}}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)} \times\cdots\] \[\biggl{(}\prod_{1\leq j\neq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-j}\pm\eta \sigma_{n-j}^{z}\bigr{)}\biggr{)}\biggl{(}\frac{\partial A_{3}\bigl{(}u^{ \prime}\bigr{)}}{\partial u^{\prime}}+\frac{\partial B_{3}\bigl{(}u^{\prime} \bigr{)}}{\partial u^{\prime}}\biggr{)}\biggr{)}\,\biggr{]}\,\] from which we conclude the argument. 
\(\qed\)

#### 2.4.7 Seventh Poisson bracket, \(\mathcal{P}_{7}\)

**Lemma 12** (_evaluating the seventh Poisson bracket in the first relation_).: The seventh term, \(\mathcal{P}_{7}\), approximately equals,

\[\bigg{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\bigg{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\]

\[\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\big{)}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{)}\bigg{[}\bigg{[}\frac{\partial}{\partial u}\prod_{1\leq i\leq m}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}-\cdots\]

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\prod_{1\leq i\leq m}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\ \bigg{]}\ .\]

Proof of Lemma 12.: The seventh term,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}\ ,\]

is equivalent to,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\Big{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{[}\Big{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\Big{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\Big{]}\Big{)},\cdots\]

\[\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{\}}\ ,\]

which can be expressed as,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\times\cdots\]

\[\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}+\cdots\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\times\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{\}}\mathscr{P}_{1}\ ,\]

from one application of Leibniz' rule, and, from another application of Leibniz' rule,

\[-\bigg{(}\sum_{\mathscr{P}}\big{\{}\mathscr{P}_{2},\mathscr{P}_{1}\big{\}}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}+\sum_{\mathscr{P}}\bigg{\{}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)},\mathscr{P}_{1}\bigg{\}}\mathscr{P}_{2}\bigg{)}\times\cdots\]

\[\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}+\cdots\]

\[\mathscr{P}_{1}\bigg{(}\sum_{\mathscr{P}}\bigg{(}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},\mathscr{P}_{2}\bigg{\}}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}+\cdots\]

\[\bigg{\{}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)},\mathscr{P}_{2}\bigg{\}}\ \bigg{)}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ .\]

From the superposition above, writing out the first two Poisson brackets yields,

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\sum_{\mathscr{P}}\big{\{}\mathscr{P}_{1},\mathscr{P}_{2}\big{\}}=\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{(}\big{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{\prime}\big{)}\big{\}}\bigg{)}\ ,\]

\[\sum_{\mathscr{P}}\bigg{\{}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)},\mathscr{P}_{1}\bigg{\}}\mathscr{P}_{2}\equiv 0\ .\]

The third Poisson bracket,

\[\mathscr{P}_{1}\bigg{(}\sum_{\mathscr{P}}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},\mathscr{P}_{2}\bigg{\}}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}\bigg{)}\ ,\]

can be expressed as,

\[B_{3}\big{(}u\big{)}\bigg{(}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\bigg{)}\times\cdots\]

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}\ ,\]

corresponding to the first two terms, and,

\[A_{3}\big{(}u\big{)}\bigg{(}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\bigg{)}\times\cdots\]

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}\ ,\]

corresponding to the next two terms. The fourth Poisson bracket,

\[\bigg{\{}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)},\mathscr{P}_{2}\bigg{\}}\ ,\]

vanishes, for all \(\mathscr{P}_{2}\).
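The manipulations above lean entirely on the Leibniz rule, antisymmetry, and the vanishing of brackets taken against the constant spin prefactor. These algebraic properties can be spot-checked symbolically; below is a minimal sketch, assuming a canonical Poisson bracket in a single conjugate pair \((q,p)\) (the variables \(u\), \(u^{\prime}\) of the text would be functions of such pairs). The function name `poisson`, the test functions, and the constant `c`, which stands in for a factor such as \(\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{C}_{1}\), are ours, purely for illustration.

```python
# Minimal symbolic check of the bracket identities used above (sketch).
import sympy as sp

q, p = sp.symbols("q p")

def poisson(f, g):
    """Canonical Poisson bracket {f, g} = f_q g_p - f_p g_q."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = sp.sin(q) * p, sp.cos(p) + q, q**2 * p

# A factor with no dependence on the bracket variables drops out:
c = sp.Symbol("c")  # plays the role of the constant spin prefactor
assert sp.simplify(poisson(c, h)) == 0

# Leibniz rule {f*g, h} = f*{g, h} + {f, h}*g, applied repeatedly in the text:
assert sp.simplify(poisson(f * g, h) - (f * poisson(g, h) + poisson(f, h) * g)) == 0

# Antisymmetry {f, g} = -{g, f}, used when anticommuting brackets:
assert sp.simplify(poisson(f, g) + poisson(g, f)) == 0
print("Leibniz rule, antisymmetry, and constancy checks pass")
```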
The remaining nonzero terms, \[\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\biggl{(}\prod_{1 \leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\biggl{(}\bigl{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\bigr{\}}+\bigl{\{}B_{3}\big{(}u\big{)},A_{3} \big{(}u^{\prime}\big{)}\bigr{\}}\biggr{)}+\mathscr{P}_{1}\times\cdots\] \[\biggl{(}\sum_{\mathscr{P}}\biggl{\{}\sum_{1\leq i\leq m}\biggl{(} \prod_{1\leq i\leq m}\sin\bigl{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},\mathscr{P}_{2}\biggr{\}}\sum_{1\leq j\leq n^{\prime}}\biggl{(}\bigl{(}\sin \bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{ \prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}\biggr{)}\ \,\] are each approximately equivalent to, \[\biggl{[}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\biggl{(} \prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\biggr{]}\biggl{[}\frac{A_{3} \big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\biggr{]}\ \,\] \[\biggl{[}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n-3}\biggl{(} \prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\biggr{)}\biggr{]}\biggl{[}\frac{B_{3} \big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\biggr{]}\ \,\] corresponding to the first two Poisson brackets, \[\bigl{\{}A_{3}\big{(}u\big{)},B_{3}\big{(}u^{\prime}\big{)}\bigr{\}}\ \,\] \[\bigl{\{}B_{3}\big{(}u\big{)},A_{3}\big{(}u^{\prime}\big{)}\bigr{\}} \ \.\] For the remaining brackets, \[\bigg{\{}\sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u-v_{n-i }\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}} \ \,\] and, \[\bigg{\{}\sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u-v_{n-i }\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}} \ \,\] corresponding to the third Poisson bracket above, \[\sum_{\mathscr{P}}\biggl{\{}\sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m} \sin\bigl{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},\mathscr{P}_{2} \bigg{\}}\ \,\] from each possible \(\mathscr{P}_{2}\). 
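The prefactors multiplying each pole term above are sums over compositions \(m+n^{\prime}=n-3\) of sine products against powers of \(\sin\big{(}2\eta\big{)}\). A small enumeration sketch of such a prefactor follows, assuming scalar placeholders for the spin factors \(\sigma^{-,+}\) (which are operators in the text); the helper name `prefactor` and the numeric choice of `n` are ours.

```python
# Sketch: assembling a prefactor summed over compositions m + n' = n - 3.
import sympy as sp
from functools import reduce
from operator import mul

n = 7                                  # so m + n' = n - 3 = 4
u, eta = sp.symbols("u eta")
v = sp.symbols("v1:%d" % (n - 2))      # inhomogeneities v_1 .. v_{n-3}

def prefactor(m, n_prime, sigma=1):
    """(prod_{i<=m} sin(u - v_i + eta)) * (sin(2 eta))^(n'-1) * sigma-part."""
    sines = reduce(mul, [sp.sin(u - v[i] + eta) for i in range(m)], sp.Integer(1))
    return sines * sp.sin(2 * eta) ** (n_prime - 1) * sigma

# Sum over all compositions with m, n' >= 1:
total = sum(prefactor(m, (n - 3) - m) for m in range(1, n - 3))
print(sp.simplify(total))
```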
From each of the four possible brackets,

\[B_{3}\big{(}u\big{)}\bigg{(}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\bigg{)}\times\cdots\]

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma^{-,+}_{n-j}\bigg{)}\bigg{)}\ ,\]

\[A_{3}\big{(}u\big{)}\bigg{(}\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\bigg{)}\times\cdots\]

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma^{-,+}_{n-j}\bigg{)}\bigg{)}\ ,\]

one deduces that the desired expression for each of the four brackets above takes the form,

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma^{-,+}_{n-j}\bigg{)}\bigg{)}\bigg{[}B_{3}\big{(}u\big{)}\bigg{(}\frac{\partial}{\partial u}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u^{\prime}}B_{3}\big{(}u^{\prime}\big{)}-\cdots\]

\[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u}B_{3}\big{(}u^{\prime}\big{)}+\frac{\partial}{\partial u}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime}\big{)}-\cdots\]

\[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u}A_{3}\big{(}u^{\prime}\big{)}\bigg{)}+A_{3}\big{(}u\big{)}\bigg{(}\frac{\partial}{\partial u}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u^{\prime}}A_{3}\big{(}u^{\prime}\big{)}-\cdots\]

\[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u}A_{3}\big{(}u^{\prime}\big{)}+\frac{\partial}{\partial u}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u^{\prime}}B_{3}\big{(}u^{\prime}\big{)}-\cdots\]

\[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}\frac{\partial}{\partial u}B_{3}\big{(}u^{\prime}\big{)}\bigg{)}\bigg{]}\ .\]

For the first bracket taken with respect to \(B_{3}\big{(}u^{\prime}\big{)}\), write,

\[\frac{\partial}{\partial u}\bigg{[}\sum_{1\leq i\leq m}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{]}=\frac{\partial}{\partial u}\bigg{[}\sin\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}+\cdots+\bigg{(}\sin\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}\times\cdots\times\sin\big{(}u-v_{n-(n-3)}\pm\eta\sigma^{z}_{n-(n-3)}\big{)}\bigg{)}\bigg{]}\ ,\]

which is further rearranged as,

\[\frac{\partial}{\partial u}\sin\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}+\cdots+\frac{\partial}{\partial u}\bigg{(}\sin\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}\times\cdots\times\sin\big{(}u-v_{n-(n-3)}\pm\eta\sigma^{z}_{n-(n-3)}\big{)}\bigg{)}\]

\[=\cos\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}+\cdots+\cos\big{(}u-v_{n-1}\pm\eta\sigma^{z}_{n-1}\big{)}\prod_{2\leq i\leq n-3}\sin\big{(}u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}+\cdots\]

\[=\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{]}\ .\]

This implies that the four-bracket expression above equals,

\[\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}\bigg{[}B_{3}\big{(}u\big{)}\bigg{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}B_{3}\big{(}u^{\prime}\big{)}+A_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}-\cdots\]

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\ \bigg{]}+\cdots\]

\[A_{3}\big{(}u\big{)}\bigg{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}-\bigg{[}\frac{\partial}{\partial u^{\prime}}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\ \bigg{]}\ \bigg{]}\ .\]

Altogether,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\mathscr{A}_{1}^{\prime}\bigg{\}}\approx\bigg{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\]

\[\bigg{[}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n-3}\bigg{(}\prod_{1\leq i\leq n-3}\sigma_{n-i}^{-,+}\bigg{)}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}+\cdots\]

\[\big{(}B_{3}\big{(}u\big{)}+A_{3}\big{(}u\big{)}\big{)}\sum_{1\leq j\leq n^{\prime}}\bigg{(}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{)}\bigg{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}-\cdots\]

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\ \bigg{]}\ ,\]

from which we conclude the argument. \(\qed\)

#### 2.4.8 Eighth Poisson bracket, \(\mathcal{P}_{8}\)

**Lemma 13** (_evaluating the eighth Poisson bracket in the first relation_).: The eighth term, \(\mathcal{P}_{8}\), approximately equals,

\[-\bigg{[}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{]}\bigg{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{)}\bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}+\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\]

\[2\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\mathscr{C}_{1}\bigg{]}\bigg{)}\times\cdots\]

\[\bigg{[}B_{3}\big{(}u^{\prime}\big{)}\bigg{[}\sum_{1\leq j\leq n-3}\bigg{[}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}-\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}\bigg{]}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}\mathscr{C}_{2}\big{)}_{j}\bigg{]}+\cdots\]

\[A_{3}\big{(}u^{\prime}\big{)}\bigg{[}\sum_{1\leq j\leq n-3}\bigg{[}\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}-\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\sin\big{(}\mathscr{C}_{2}\big{)}_{i}\bigg{]}\bigg{]}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}\mathscr{C}_{2}\big{)}_{j}\bigg{]}\bigg{]}+\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\bigg{(}\bigg{[}\frac{\partial}{\partial u}\big{[}B_{3}\big{(}u^{\prime}\big{)}+A_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}-\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\frac{\partial}{\partial u}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{)}\bigg{]}\ .\]

Proof of Lemma 13.: The eighth term,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}\ ,\]

is equivalent to,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{2}\times\cdots\]

\[\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\ ,\]

which can be rearranged as,

\[\sum_{\mathscr{P}}\bigg{(}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\times\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}+\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\times\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\mathscr{P}_{1}\bigg{)}\ ,\]

from an application of Leibniz' rule.
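The Leibniz-rule bookkeeping above repeatedly uses the product-rule expansion of a product of shifted sines. A quick symbolic confirmation with sympy, for three factors and with the \(\eta\)-shifts absorbed into the inhomogeneities, is sketched below; the variable names are ours, purely for illustration.

```python
# Sketch: d/du prod_i sin(u - v_i) = sum_i cos(u - v_i) prod_{j != i} sin(u - v_j).
import sympy as sp
from functools import reduce
from operator import mul

u = sp.Symbol("u")
v = sp.symbols("v1:4")  # v1, v2, v3 play the role of the shifted v_{n-i}

product = reduce(mul, [sp.sin(u - vi) for vi in v])
lhs = sp.diff(product, u)
rhs = sum(
    sp.cos(u - v[i]) * reduce(mul, [sp.sin(u - v[j]) for j in range(3) if j != i])
    for i in range(3)
)
assert sp.simplify(lhs - rhs) == 0
print("product-rule expansion verified for three sine factors")
```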
Further rearranging each summation of Poisson brackets over \(\mathscr{P}\) implies,

\[-\sum_{\mathscr{P}}\big{\{}\mathscr{P}_{2},\mathscr{P}_{1}\big{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\times\cdots\]

\[\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}-\cdots\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)},\mathscr{P}_{1}\bigg{\}}\mathscr{P}_{2}\times\cdots\]

\[\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ ,\]

after applying Leibniz' rule to,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\ ,\]

and similarly,

\[-\mathscr{P}_{1}\bigg{[}\ \sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{2},\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\times\cdots\]

\[\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}-\cdots\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)},\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\times\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\mathscr{P}_{2}\ \bigg{]}\ ,\]

after applying Leibniz' rule to,

\[\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{2}\times\cdots\]

\[\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{\}}\mathscr{P}_{1}\ .\]

Following each application of Leibniz' rule, we apply Leibniz' rule multiple times to rearrange the second Poisson bracket, in which,

\[\mathscr{P}_{1}\bigg{[}\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)},\mathscr{P}_{2}\bigg{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\bigg{)}+\cdots\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{\}}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\ ,\]

after anticommuting the first bracket,

\[-\mathscr{P}_{1}\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{2},\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\ .\]

The expression,
\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\Big{(}\prod_{1\leq i\leq m}\sin\!\left( u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\right)\!\Big{)}\!\left(\sin\!\left(2\eta\right) \right)^{n^{\prime}-1}\!\bigg{]}\bigg{)}+\cdots\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\left(\sin\!\left(2\eta\right) \right)^{n^{\prime}-1}\!\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{\}}\bigg{(}\prod_{1 \leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\,\] can be rearranged to obtain the following Poisson brackets, \[\sum_{\mathscr{P}}\{\mathscr{P}_{1},\mathscr{P}_{2}\}=\big{\{}A_{3}\big{(}u \big{)},B_{3}\big{(}u^{\prime}\big{)}\big{\}}+\big{\{}B_{3}\big{(}u\big{)},A_{ 3}\big{(}u^{\prime}\big{)}\}\ \,\] corresponding to the first term, and, \[\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\left(u^{\prime}-v_{ n-i}+\eta\sigma_{n-i}^{z}\right)\bigg{)},A_{3}\big{(}u\big{)}\bigg{\}}A_{3}\big{(}u^{ \prime}\big{)}+\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin \!\left(u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\right)\bigg{)},B_{3}\big{(}u \big{)}\bigg{\}}\times\cdots\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad B_{3}\big{(}u ^{\prime}\big{)}+\cdots\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\bigg{\{}\bigg{(}\prod_{1 \leq i\leq n-3}\sin\!\left(u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\right)\bigg{)},B_{3}\big{(}u\big{)}\bigg{\}}A_{3}\big{(}u^{\prime}\big{)}\ \,\] corresponding to the second term. The first Poisson bracket approximately equals, \[-\Big{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\Big{[}\bigg{(}\prod_{1\leq i\leq m}\sin\!\big{(}u-v_ {n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{(}\!\sin\!\big{(}2\eta\big{)} \big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma^{-,+}_{n-j} \bigg{)}\bigg{]}\Big{)}\times\cdots\] \[\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3} \big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\] \[\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\!\big{(} u-v_{n-i}\pm\eta\sigma^{z}_{n-i}\big{)}\bigg{)}\big{(}\!\sin\!\big{(}2\eta \big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma^{-, +}_{n-j}\bigg{)}\bigg{]}\bigg{)}\times\cdots\] \[\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{\prime}-v_{n-i}+ \eta\sigma^{z}_{n-i}\big{)}\bigg{)}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3} \big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}\ \.\] For the second Poisson bracket, taking the summation over all \(\mathscr{P}\) implies that the terms, \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\!\big{(}u^{ \prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},\mathscr{P}_{1}\bigg{\}} \mathscr{P}_{2}\ \,\] can be evaluated by observing that the single Poisson bracket is equivalent to the following four brackets, \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\! \big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u \big{)}\bigg{\}}B_{3}\big{(}u^{\prime}\big{)}\ \,\] \[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\! 
\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},A_{3}\big{(}u\big{)}\bigg{\}}A_{3}\big{(}u^{\prime}\big{)}\ ,\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},B_{3}\big{(}u\big{)}\bigg{\}}A_{3}\big{(}u^{\prime}\big{)}\ ,\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{)},A_{3}\big{(}u\big{)}\bigg{\}}B_{3}\big{(}u^{\prime}\big{)}\ ,\]

which can each be individually evaluated below. For the first bracket, evaluating terms from the bracket yields,

\[B_{3}\big{(}u^{\prime}\big{)}\bigg{[}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{[}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{]}\bigg{]}-\bigg{[}\frac{\partial}{\partial u}\bigg{[}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{]}\bigg{]}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\ .\]

The derivative of the product of sine functions appearing in the first term of the Poisson bracket above,

\[\frac{\partial}{\partial u^{\prime}}\bigg{[}\sin\big{(}u^{\prime}-v_{n-1}+\eta\sigma^{z}_{n-1}\big{)}\times\cdots\times\sin\big{(}u^{\prime}-v_{n-(n-3)}+\eta\sigma^{z}_{n-(n-3)}\big{)}\bigg{]}\ ,\]

equals,

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-1}+\eta\sigma^{z}_{n-1}\big{)}\bigg{]}\prod_{2\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}+\cdots+\prod_{1\leq i\leq n-4}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\times\cdots\]

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-(n-3)}+\eta\sigma^{z}_{n-(n-3)}\big{)}\bigg{]}\]

\[=\bigg{[}\sum_{1\leq j\leq n-3}\frac{\partial}{\partial u^{\prime}}\bigg{[}\sin\big{(}u^{\prime}-v_{n-j}+\eta\sigma^{z}_{n-j}\big{)}\bigg{]}\bigg{]}\bigg{[}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma^{z}_{n-i}\big{)}\bigg{]}\ .\]

Hence, the first bracket is equivalent to,

\[B_{3}\big{(}u^{\prime}\big{)}\bigg{[}\ \sum_{1\leq j\leq n-3}\!
\bigg{[}\ \bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}-\bigg{[}\frac{\partial}{\partial u}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\ \bigg{]}\ \times\cdots\]

\[\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ .\]

For the second bracket, evaluating terms from the bracket similarly yields,

\[A_{3}\big{(}u^{\prime}\big{)}\bigg{[}\ \sum_{1\leq j\leq n-3}\bigg{[}\ \bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}-\bigg{[}\frac{\partial}{\partial u}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\ \bigg{]}\ \times\cdots\]

\[\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ .\]

For the third bracket, evaluating terms from the bracket similarly yields,

\[A_{3}\big{(}u^{\prime}\big{)}\bigg{[}\ \sum_{1\leq j\leq n-3}\bigg{[}\ \bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}-\bigg{[}\frac{\partial}{\partial u}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\ \bigg{]}\ \times\cdots\]

\[\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ .\]

For the third Poisson bracket, the terms,

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\ \prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)},\mathscr{P}_{2}\bigg{\}}=\bigg{\{}\bigg{(}\ \prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)},B_{3}\big{(}u^{\prime}\big{)}\bigg{\}}+\bigg{\{}\bigg{(}\ \prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)},A_{3}\big{(}u^{\prime}\big{)}\bigg{\}}\equiv 0\ ,\]

vanish, while for the fourth Poisson bracket,

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\ \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\ \bigg{(}\ \prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{\}}\ ,\]

computing the differentiation,

\[\frac{\partial}{\partial u}\bigg{[}\ \sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\ \bigg{(}\ \prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\ \bigg{]}\ ,\]

which can be arranged as,

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\ \bigg{(}\sin\big{(}u-v_{n-1}\pm\eta\sigma_{n-1}^{z}\big{)}\bigg{)}+\cdots+\bigg{(}\sin\big{(}u-v_{n-1}\pm\eta\sigma_{n-1}^{z}\big{)}+\cdots+\cdots\]

\[\bigg{(}\sin\big{(}u-v_{n-1}\pm\eta\sigma_{n-1}^{z}\big{)}\times\cdots\times\sin\big{(}u-v_{n-m}\pm\eta\sigma_{n-m}^{z}\big{)}\bigg{)}\bigg{)}\ \bigg{]}\ ,\]

with derivative,

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\,\cos\big{(}u-v_{n-1}\pm\eta\sigma_{n-1}^{z}\big{)}+\cdots+\bigg{(}\cos\big{(}u-v_{n-1}\pm\eta\sigma_{n-1}^{z}\big{)}+\cdots\]

\[\qquad+\bigg{(}\cos\big{(}u-v_{n-m}\pm\eta\sigma_{n-m}^{z}\big{)}\prod_{1\leq i\leq m-1}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{)}\bigg{]}\ .\]

Therefore,

\[\bigg{[}\frac{\partial\mathscr{P}_{2}}{\partial u}\bigg{]}\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\bigg{]}-\bigg{[}\frac{\partial\mathscr{P}_{2}}{\partial u^{\prime}}\bigg{]}\frac{\partial}{\partial u}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{]}\bigg{]}\ ,\]

which, for \(\mathscr{P}_{2}=B_{3}\big{(}u^{\prime}\big{)}\), equals,
\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\bigg{[}\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u}\bigg{]}\frac{\partial}{\partial u^{\prime}}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{]}-\bigg{[}\frac{\partial B_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\bigg{]}\frac{\partial}{\partial u}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{]}\bigg{]}\ .\]

Grouping together like terms in the summations over \(i\) and \(j\), and performing the identical substitution for \(\mathscr{P}_{2}=A_{3}\big{(}u^{\prime}\big{)}\), implies,

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\bigg{(}\bigg{[}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u}\bigg{]}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}-\bigg{[}\frac{\partial A_{3}\big{(}u^{\prime}\big{)}}{\partial u^{\prime}}\bigg{]}\frac{\partial}{\partial u}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{)}\bigg{]}\ .\]

Altogether,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{2}^{\prime}\bigg{\}}\approx-\bigg{[}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\times\cdots\]

\[\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{[}\frac{A_{3}\big{(}u\big{)}B_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\]

\[\bigg{[}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{(}\prod_{1\leq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{]}\bigg{[}\frac{B_{3}\big{(}u\big{)}A_{3}\big{(}u^{\prime}\big{)}}{u-u^{\prime}}\bigg{]}-\cdots\]

\[2\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\,\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\times\cdots\]

\[B_{3}\big{(}u^{\prime}\big{)}\bigg{[}\sum_{1\leq j\leq n-3}\bigg{[}\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}-\bigg{[}\frac{\partial B_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ \bigg{]}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}+\cdots\]

\[A_{3}\big{(}u^{\prime}\big{)}\bigg{[}\sum_{1\leq j\leq n-3}\bigg{[}\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u}\bigg{]}\bigg{[}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}-\bigg{[}\frac{\partial A_{3}\big{(}u\big{)}}{\partial u^{\prime}}\bigg{]}\bigg{[}\frac{\partial}{\partial u}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ \bigg{]}\prod_{1\leq j\neq i\leq n-3}\sin\big{(}u^{\prime}-v_{n-i}+\eta\sigma_{n-i}^{z}\big{)}\bigg{]}+\cdots\]

\[\big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{[}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg{(}\prod_{1\leq i\leq m}\bigg{(}\bigg{[}\frac{\partial}{\partial u}\big{[}B_{3}\big{(}u^{\prime}\big{)}+A_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\frac{\partial}{\partial u^{\prime}}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}-\cdots\]

\[\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\frac{\partial}{\partial u}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\bigg{)}\bigg{]}\ ,\]

from which we conclude the argument. \(\qed\)

#### 2.4.9 Ninth Poisson bracket, \(\mathcal{P}_{9}\)

**Lemma 14** (_evaluating the ninth Poisson bracket in the first relation_).: The ninth term approximately equals,

\[2\bigg{[}\bigg{[}\frac{\partial}{\partial u^{\prime}}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\bigg{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u}\bigg{[}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ \bigg{]}\bigg{]}-\bigg{[}\frac{\partial}{\partial u}\big{[}A_{3}\big{(}u^{\prime}\big{)}+B_{3}\big{(}u^{\prime}\big{)}\big{]}\bigg{]}\times\cdots\]

\[\bigg{[}\sum_{1\leq i\leq m}\bigg{[}\frac{\partial}{\partial u^{\prime}}\bigg{[}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{]}\ \bigg{]}\bigg{]}\ \bigg{]}\ .\]

Proof of Lemma 14.: The ninth term,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\bigg{\}}\ ,\]

is equivalent to,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{2}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \times\cdots\]

\[\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\ ,\]

which can be rearranged with Leibniz' rule, as,

\[\sum_{\mathscr{P}}\bigg{\{}\mathscr{P}_{1},\mathscr{P}_{2}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\times\cdots\]

Applying Leibniz' rule for a second time to each bracket yields,

\[-\sum_{\mathscr{P}}\big{\{}\mathscr{P}_{2},\mathscr{P}_{1}\big{\}}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}-\cdots\]

\[\sum_{\mathscr{P}}\bigg{\{}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)},\mathscr{P}_{1}\bigg{\}}\mathscr{P}_{2}+\cdots\]

\[\mathscr{P}_{1}\sum_{\mathscr{P}}\bigg{\{}\sum_{\begin{subarray}{c}1\leq i\leq m\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)},\mathscr{P}_{2}\bigg{(}\sum_{\begin{subarray}{c}1\leq i\leq m\\ 1\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\ \bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\big{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big{)}\bigg{)}\ \big{(}\sin\big{(}2\eta\big{)}\big{)}^{n^{\prime}-1}\bigg{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg{)}\bigg{]}\bigg{)}\bigg{\}}\ .\]

Applying Leibniz' rule for a third time to the second, third, and fourth brackets yields an expression in which the first bracket reproduces the pole terms, and the second bracket vanishes,
while the nonzero entries for the third Poisson bracket are, \[-\mathscr{P}_{2}\bigg{[}\sum_{n^{\prime}:m+n^{\prime}=n-3}\biggl{(} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\biggl{(}\prod_{1\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}^{2}\bigg{]}\,\biggl{\{} \sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i} \pm\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} +\cdots\] \[\biggl{\{}\sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin \bigl{(}u^{\prime}-v_{n-i}\pm\sigma_{n-i}^{z}\bigr{)}\biggr{)},A_{3}\bigl{(}u^{ \prime}\bigr{)}\biggr{\}}\biggr{)}\ \.\] The nonzero entries for the fourth Poisson bracket are, \[\mathscr{P}_{1}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u ^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\,\biggl{(}\bigl{(} \sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\biggl{(}\prod_{1\leq j\leq n^ {\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}^{2}\,\bigg{]}\biggl{(}\Bigl{\{} \sum_{1\leq i\leq m}\atop m+n^{\prime}=n-3}\times\cdots\] \[\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm \eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} +\cdots\] \[\biggl{\{}\sum_{1\leq i\leq m}\limits\sin\bigl{(}u^{\prime}-v_{n-i }\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} +\cdots\] \[\biggl{\{}\sum_{1\leq i\leq m}\limits\sin\bigl{(}u^{\prime}-v_{n-i }\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\,\bigl{(}\sin\bigl{(}u^{\prime}-v_{n- i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},A_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} \biggr{)}\ \.\] Altogether, the remaining entries, \[\bigg{(}\sum_{1\leq i\leq m\atop 1\leq j\leq n^{\prime}}\bigg{[}\bigg{(} \prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z} \bigr{)}\biggr{)}\,\left(\sin\bigl{(}2\eta\bigr{)}\right)^{n^{\prime}-1}\biggl{(} \prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{]}\bigg{\}} \bigl{\{}A_{3}\bigl{(}u\bigr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\bigr{\}} +\cdots\] \[\mathscr{P}_{2}\bigg{[}\sum_{n^{\prime}:m+n^{\prime}=n-3}\biggl{(} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\biggl{(}\prod_{1\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}^{2}\bigg{]}\,\biggl{\{} \sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i }\pm\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} -\cdots\] \[\mathscr{P}_{2}\bigg{[}\sum_{n^{\prime}:m+n^{\prime}=n-3}\biggl{(} \bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1}\biggl{(}\prod_{1\leq j \leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}^{2}\bigg{]}\biggl{\{} \sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin\bigl{(}u^{\prime}-v_{n-i }\pm\sigma_{n-i}^{z}\bigr{)}\biggr{)},A_{3}\bigl{(}u^{\prime}\bigr{)}\biggr{\}} \,\] from the first, and third, Poisson brackets, as well as, \[-\mathscr{P}_{1}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u ^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\,\times\cdots\] \[\biggl{(}\bigl{(}\sin\bigl{(}2\eta\bigr{)}\bigr{)}^{n^{\prime}-1} \biggl{(}\prod_{1\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\biggr{)}\biggr{)}^{2 }\,\bigg{]}\biggl{\{}\sum_{1\leq i\leq m}\biggl{(}\prod_{1\leq i\leq m}\sin \bigl{(}u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)},B_{3} \bigl{(}u^{\prime}\bigr{)}\biggr{\}}\ \,\] and, \[\mathscr{P}_{1}\bigg{[}\bigg{(}\prod_{1\leq i\leq m}\sin\bigl{(}u ^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\bigr{)}\biggr{)}\,\times\cdots\] 
The observations that \[\big\{A_{3}\big(u\big),B_{3}\big(u^{\prime}\big)\big\}=-\big\{B_{3}\big(u^{\prime}\big),A_{3}\big(u\big)\big\}\approx-\frac{B_{3}\big(u^{\prime}\big)A_{3}\big(u\big)}{u^{\prime}-u}\ ,\] and that \[\big\{B_{3}\big(u\big),A_{3}\big(u^{\prime}\big)\big\}=-\big\{A_{3}\big(u^{\prime}\big),B_{3}\big(u\big)\big\}\approx-\frac{A_{3}\big(u^{\prime}\big)B_{3}\big(u\big)}{u^{\prime}-u}\ ,\] take care of the two entries from the first Poisson bracket, while \[\Big\{\sum_{m}S_{m}\big(u^{\prime}\big),B_{3}\big(u^{\prime}\big)\Big\}\qquad\text{and}\qquad\Big\{\sum_{m}S_{m}\big(u^{\prime}\big),A_{3}\big(u^{\prime}\big)\Big\}\] are the two entries from the second Poisson bracket. Along the lines of similar computations for previous Poisson brackets in the first relation, we evaluate each term by observing that the derivative of the sum of products of sine functions, \[\frac{\partial}{\partial u^{\prime}}\bigg[\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]=\frac{\partial}{\partial u^{\prime}}\bigg[\sum_{m}\prod_{1\leq i\leq m}\sin\big(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\bigg]\ ,\] can be expressed factor by factor through the product rule, corresponding to the first term.
Explicitly, for each fixed \(m\), \[\frac{\partial}{\partial u^{\prime}}S_{m}\big(u^{\prime}\big)=\sum_{1\leq i\leq m}\bigg[\cos\big(u^{\prime}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\prod_{\begin{subarray}{c}1\leq j\leq m\\ j\neq i\end{subarray}}\sin\big(u^{\prime}-v_{n-j}\pm\eta\sigma_{n-j}^{z}\big)\bigg]\ .\] Hence the first bracket takes the form, under a single summation from \(i=1\) to \(i=m\), \[\Big\{\sum_{m}S_{m}\big(u^{\prime}\big),B_{3}\big(u^{\prime}\big)\Big\}=\bigg[\frac{\partial}{\partial u}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\bigg[\frac{\partial B_{3}\big(u^{\prime}\big)}{\partial u^{\prime}}\bigg]-\bigg[\frac{\partial}{\partial u^{\prime}}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\bigg[\frac{\partial B_{3}\big(u^{\prime}\big)}{\partial u}\bigg]\ .\] For the second term, one has \[\Big\{\sum_{m}S_{m}\big(u^{\prime}\big),A_{3}\big(u^{\prime}\big)\Big\}=\bigg[\frac{\partial}{\partial u}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\bigg[\frac{\partial A_{3}\big(u^{\prime}\big)}{\partial u^{\prime}}\bigg]-\bigg[\frac{\partial}{\partial u^{\prime}}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\bigg[\frac{\partial A_{3}\big(u^{\prime}\big)}{\partial u}\bigg]\ ,\] from which combining the two Poisson brackets above yields \[\bigg[\frac{\partial}{\partial u^{\prime}}\Big(A_{3}\big(u^{\prime}\big)+B_{3}\big(u^{\prime}\big)\Big)\bigg]\bigg[\frac{\partial}{\partial u}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]-\bigg[\frac{\partial}{\partial u}\Big(A_{3}\big(u^{\prime}\big)+B_{3}\big(u^{\prime}\big)\Big)\bigg]\bigg[\frac{\partial}{\partial u^{\prime}}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\ .\] For the two remaining possibilities for \(\mathscr{P}_{2}\) in the first and second Poisson brackets evaluated above, combining terms yields the same identity. The two computations above together imply that the desired expression for the ninth term takes the form \[\sum_{\mathscr{P}}\big\{\mathscr{P}_{1}\mathscr{A}_{3},\mathscr{P}_{2}\mathscr{A}_{3}^{\prime}\big\}\approx-\bigg[\sum_{m+n^{\prime}=n-3}S_{m}\big(u^{\prime}\big)\,T_{n^{\prime}}\bigg]\bigg[\frac{B_{3}\big(u^{\prime}\big)A_{3}\big(u\big)+A_{3}\big(u^{\prime}\big)B_{3}\big(u\big)}{u^{\prime}-u}\bigg]+\bigg[\frac{\partial}{\partial u^{\prime}}\Big(A_{3}\big(u^{\prime}\big)+B_{3}\big(u^{\prime}\big)\Big)\bigg]\bigg[\frac{\partial}{\partial u}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]-\bigg[\frac{\partial}{\partial u}\Big(A_{3}\big(u^{\prime}\big)+B_{3}\big(u^{\prime}\big)\Big)\bigg]\bigg[\frac{\partial}{\partial u^{\prime}}\sum_{m}S_{m}\big(u^{\prime}\big)\bigg]\ ,\] from which we conclude the argument. _Proof of Theorem 1._ The result immediately follows from the computations for each of the nine Poisson brackets obtained in 2.4.1-2.4.9. _Proof of Theorem 2._ The result follows from direct computation of each of the two Poisson brackets in canonical coordinates. 4.10 Overview of extending the computations with the Poisson bracket to the remaining fifteen relations To exhibit how the computations performed for evaluating the nine Poisson brackets from the first relation can be extended to the remaining fifteen relations, we provide an outline of the terms involved in the second relation below. In particular, these terms are parametrized in different entries of the monodromy matrix, namely \(\mathscr{B}_{1}^{\prime}\), \(\mathscr{B}_{2}^{\prime}\), and \(\mathscr{B}_{3}^{\prime}\) instead of \(\mathscr{A}_{1}^{\prime}\), \(\mathscr{A}_{2}^{\prime}\), and \(\mathscr{A}_{3}^{\prime}\), and can be expressed in terms of a superposition of nine terms, obtained from a superposition of thirty-six Poisson brackets as provided for the first relation.
The terms, excluding those which are dependent on \(\mathscr{A}_{1}\), \(\mathscr{A}_{2}\), \(\mathscr{A}_{3}\), \(\mathscr{B}_{1}^{\prime}\), \(\mathscr{B}_{2}^{\prime}\), or \(\mathscr{B}_{3}^{\prime}\), appearing in the second relation are products of the prefactors \(\mathscr{P}_{1}\) and \(\mathscr{P}_{2}\) with the entries \[\mathscr{B}_{1}^{\prime}\equiv\prod_{2\leq i\leq n-2}\sigma_{n-i}^{-,+}\ ,\qquad\mathscr{B}_{2}^{\prime}\equiv\prod_{2\leq i\leq n-2}\sin\big(\lambda_{\alpha}-v_{n-i}+\eta\sigma_{n-i}^{z}\big)\ ,\] and \[\mathscr{B}_{3}^{\prime}\equiv\sum_{\begin{subarray}{c}2\leq i\leq m\\ 2\leq j\leq n^{\prime}\\ m+n^{\prime}=n-3\end{subarray}}\bigg[\bigg(\prod_{2\leq i\leq m}\sin\big(\lambda_{\alpha}-v_{n-i}\pm\eta\sigma_{n-i}^{z}\big)\bigg)\big(\sin\big(2\eta\big)\big)^{n^{\prime}-1}\bigg(\prod_{2\leq j\leq n^{\prime}}\sigma_{n-j}^{-,+}\bigg)\bigg]\ ,\] namely \(\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{B}_{1}^{\prime}\), \(\mathscr{P}_{1}\mathscr{B}_{2}^{\prime}\), \(\mathscr{P}_{2}\big(\sin\big(2\eta\big)\big)^{n-3}\mathscr{B}_{3}^{\prime}\), \(\mathscr{P}_{2}\mathscr{B}_{2}^{\prime}\), and \(\mathscr{P}_{2}\mathscr{B}_{3}^{\prime}\). To compute each Poisson bracket appearing in the second relation, one can directly apply the previous arguments used for computing the Poisson brackets of the first relation. For the sake of not repeating computations similar to those provided earlier in _2.4.1-2.4.9_, the details of each computation are omitted.
2306.08150
SPYGLASS. IV. New Stellar Survey of Recent Star Formation within 1 kpc
Young stellar populations provide a powerful record that traces millions of years of star formation history in the solar neighborhood. Using a revised form of the SPYGLASS young star identification methodology, we produce an expanded census of nearby young stars (Age $<50$ Myr). We then use the HDBSCAN clustering algorithm to produce a new SPYGLASS Catalog of Young Associations (SCYA), which reveals 116 young associations within 1 kpc. More than 25\% of these groups are largely new discoveries, as 20 are substantively different from any previous definition, and 10 have no equivalent in the literature. The new associations reveal a yet undiscovered demographic of small associations with little connection to larger structures. Some of the groups we identify are especially unique for their high transverse velocities, which can differ from the solar velocity by 30-50 km s$^{-1}$, and for their positions, which can reach up to 300 pc above the galactic plane. These features may suggest a unique origin, matching existing evidence of infalling gas parcels interacting with the disk ISM. Our clustering also suggests links between often-separated populations, hinting to direct structural connections between Orion Complex and Perseus OB2, and between the subregions of Vela. The $\sim$30 Myr old Cepheus-Hercules association is another emerging large-scale structure, with a size and population comparable to Sco-Cen. Cep-Her and other similarly-aged structures are also found clustered along extended structures perpendicular to known spiral arm structure, suggesting that arm-aligned star formation patterns have only recently become dominant in the solar neighborhood.
Ronan Kerr, Adam Kraus, Aaron Rizzuto
2023-06-13T21:48:41Z
http://arxiv.org/abs/2306.08150v2
# SPYGLASS. IV. New Stellar Survey of Recent Star Formation within 1 kpc ###### Abstract Young stellar populations provide a powerful record that traces millions of years of star formation history in the solar neighborhood. Using a revised form of the SPYGLASS young star identification methodology, we produce an expanded census of nearby young stars (Age \(<50\) Myr). We then use the HDBSCAN clustering algorithm to produce a new SPYGLASS Catalog of Young Associations (SCYA), which reveals 116 young associations within 1 kpc. More than 25% of these groups are largely new discoveries, as 20 are substantively different from any previous definition, and 10 have no equivalent in the literature. The new associations reveal a yet undiscovered demographic of small associations with little connection to larger structures. Some of the groups we identify are especially unique for their high transverse velocities, which can differ from the solar velocity by 30-50 km s\({}^{-1}\), and for their positions, which can reach up to 300 pc above the galactic plane. These features may suggest a unique origin, matching existing evidence of infalling gas parcels interacting with the disk ISM. Our clustering also suggests links between often-separated populations, hinting to direct structural connections between Orion Complex and Perseus OB2, and between the subregions of Vela. The \(\sim\)30 Myr old Cepheus-Hercules association is another emerging large-scale structure, with a size and population comparable to Sco-Cen. Cep-Her and other similarly-aged structures are also found clustered along extended structures perpendicular to known spiral arm structure, suggesting that arm-aligned star formation patterns have only recently become dominant in the solar neighborhood. Stellar associations (1582); Stellar ages (1581); Star formation (1569); Young star clusters (1833); Young stellar objects (1834); Pre-main sequence stars (1290); OB associations (1140) ## 1 Introduction Most young stars are located in clusters and associations, stellar populations left behind after the dispersal of their natal cloud (Lada & Lada, 2003; Krumholz et al., 2019). By preserving elements of the dynamics and history of their star-forming environments, these populations provide a powerful record of recent star formation which can be detected for tens of millions of years after their formation (Mamajek, 2016; Krause et al., 2020). Detailed studies of young populations therefore have the unique ability to reconstruct entire star formation histories for recent events, using dynamics to trace the locations of stars back in time, and ages to determine the location of stars and their subsequent environments at the time of formation (e.g., Galli et al., 2021; Miret-Roig et al., 2022; Kerr et al., 2022, 2022). The results can reveal star formation patterns at a range of scales, from molecular clouds to spiral arms (e.g., Pecaut & Mamajek, 2016; Pang et al., 2021; Pantaleoni Gonzalez et al., 2021; Zucker et al., 2022). On the scale of individual associations, young populations can be used to trace star formation patterns that either last too long to understand in full from observations of active star-forming regions, or are too rare or brief for comparable active sites to be readily available.
Association-level studies have recently been used to provide evidence for processes such as triggered star formation, where star formation is initiated by an external force such as a supernova, or sequential star formation, where star formation propagates across a molecular cloud, with each star-forming event producing the feedback which initiates the formation of the next generation (e.g., Elmegreen and Lada, 1977; Kerr et al., 2021; Nony et al., 2021; Pang et al., 2021). On larger scales, young populations can be used to trace galactic spiral arm structure (Zucker et al., 2022). The recent discoveries of the Radcliffe Wave and the Split reveal structures largely aligned with the pitch of our spiral arms, as do structures marked by O and B stars (Lallement et al., 2019; Alves et al., 2020; Pantaleoni Gonzalez et al., 2021). However, much less is known about the distribution of older, gas-poor structures with ages \(\gtrsim 20\) Myr outside of the nearest 100 pc, especially less populated structures that lack O and B stars (de Zeeuw et al., 1999; Mamajek, 2016; Zucker et al., 2022). Broad scale is therefore an asset for studies of young populations, enabling the emergence of large-scale patterns from disparate nearby associations. With the emergence of increasingly comprehensive simulations of star formation, studies of young associations will become increasingly important for testing star formation models (e.g., Grudic et al., 2020; Guszejnov et al., 2022). Until recently, our view of nearby stellar populations has been limited to only the nearest and most substantial populations (de Zeeuw et al., 1999; Mamajek, 2016). Within 50-100 pc, the detection of associations has generally relied on stars with strong youth indicators such as TW Hydrae and \(\beta\) Pic, around which associations can be built by searching for nearby stars with common motion (Kastner et al., 1997; Barrado y Navascues et al., 1999; Zuckerman et al., 2001; Gagne et al., 2018). For more distant populations, the short-lived and therefore necessarily young O and B stars assume the role of signposts, and can be grouped together into comoving OB Associations. While this approach can reliably detect large associations such as Perseus OB3, the Orion Nebula Complex, and Sco-Cen (e.g., de Zeeuw et al., 1999), small associations often lack O and B stars, severely limiting the range of structures over which this approach is useful. This has resulted in more distant low-mass structures being largely unexplored until very recently (e.g., see Kounkel and Covey, 2019; Kerr et al., 2021; Prisinzano et al., 2022). The Gaia survey has revolutionized the detection of young stellar populations by providing nearly 2 billion stars with accurate space-velocity coordinates and photometric measurements (Gaia Collaboration et al., 2018, 2021). This not only allows for the detection of comoving structures, but also the suppression of the field background by detecting and isolating stars with photometry consistent with youth. Recent surveys have already identified thousands of stellar populations and clusters across a wide range of ages (e.g., Sim et al., 2019; Kounkel and Covey, 2019; Cantat-Gaudin et al., 2020; Hunt and Reffert, 2023). While the populations discovered by these studies were numerous, their broad focus in age reduced the visibility of young structures. Zari et al. (2019) used a photometrically-limited sample to identify patterns in the distribution of young stellar populations.
This work, however, did not cluster the distribution of young stars into groups, meaning that the extents of potential populations were left undefined. Surveys of individual populations occasionally include age-limited samples of young stars and the clustering of substructures (e.g., Zari et al., 2018; Cantat-Gaudin et al., 2019), however, until very recently, there were no all-sky surveys specifically targeting young stellar populations in Gaia. The SPYGLASS program (Stars with Photometrically Young Gaia Luminosities Around the Solar System), which this work is a part of, was designed to complement the existing research on young stellar populations by creating a spatially unbiased survey of young stellar populations which both robustly assesses the youth of potential members and provides well-defined extents and membership lists for the populations that emerge from that sample. The first paper in this series, Kerr et al. (2021) (hereafter SPYGLASS-I), outlined our Bayesian framework for the detection of young stars and performed a relatively conservative clustering analysis on a Gaia DR2-based sample. The search focused on populations under 50 Myr old within 333 pc of the sun, revealing 27 top-level associations with numerous subclusters, many of which were either little-known or completely absent from the literature. While SPYGLASS-I considerably expanded our record of nearby associations, its clustering was relatively conservative, using quality cuts to avoid including populations in the background of dense molecular clouds, where underestimated reddening occasionally allowed subgiants to be erroneously identified as young. The recent release of Gaia Data Release 3 (DR3) provides a new opportunity to deepen our survey of these nearby populations. The sample represents a significant improvement over DR2, improving the precision of parallaxes by 30%, proper motions by a factor of 2, and greatly reducing the systematic errors for both of those measures (Gaia Collaboration et al., 2021). The EDR3 sample, which is largely identical to the DR3 sample with the exception of radial velocities (Gaia Collaboration et al., 2022), has already been used by Prisinzano et al. (2022) for an expansive survey of young stellar populations, revealing 354 associations under 10 Myr old within a radius of 1.5 kpc. This result demonstrates the power of this updated Gaia sample to reveal stellar structures far beyond the 333 pc radius in SPYGLASS-I. However, that work's focus on populations younger than 10 Myr and lack of spatially-dependent reddening and extinction corrections motivate new work that covers a wider range of ages and includes reddening as a core component of stellar youth assessment. Through a Gaia DR3 update to our SPYGLASS young star and association detection framework, we can take advantage of the quality improvements of DR3 while better optimizing our young star detection algorithm, and adapting vetting methods to exclude false groups while including tenuous stellar populations. In this paper, we outline our expanded survey of young stellar populations in the solar neighborhood, both improving our sensitivity to young stellar populations relative to SPYGLASS-I and widening the survey to 1 kpc. In Section 2 we outline the new Gaia DR3 dataset that we make use of.
We describe updates to our SPYGLASS methodology in Section 3, which refines our identification of young stars, and provide revised clustering results and cluster vetting techniques in Section 4, which improve our sensitivity to young associations. We then outline a new technique for computing cluster membership probabilities in Section 5, before providing basic information on the groups we detect in Section 6. We then provide an overview of some broad features of the groups we detect in Section 7, before concluding in Section 8. ## 2 Data The recent publication of Gaia Data Release 3 (DR3) has significantly improved the astrometric and photometric quality of measurements from the Gaia spacecraft compared to DR2, which was used in SPYGLASS-I (Gaia Collaboration et al., 2016, 2018, 2022). We therefore use that updated dataset for this paper. Our initial data download from DR3 was much less restrictive compared to the SPYGLASS-I sample to reflect our desire to produce maximally complete populations for any associations we find. We required only that each star has a 5-parameter Gaia astrometric solution and a valid \(G\) magnitude, and that the star is within 1 kpc according to the Bailer-Jones et al. (2021) geometric distances (see Section 2.1). The resulting sample contains approximately 94 million stars. This dataset provides a maximally complete sample capable of producing deep coverage of nearby populations. While this unrestricted sample is necessary for complete demographic studies, not all objects included are well-suited for youth assessment and the group detection that it enables. We therefore produce a separate restricted sample of stars with quality diagnostics that are well-suited for youth assessment. To enable the quality restriction of the sample, we included a series of quality parameters in our data download. This allowed us to generate a series of quality flags, which are based on the following inequalities outlined in SPYGLASS-I: \[u<1.2\times\max[1,\exp(-0.2(G-19.5))] \tag{1}\] \[1.0+0.015(G_{BP}-G_{RP})^{2}<E<1.3+0.037(G_{BP}-G_{RP})^{2} \tag{2}\] \[\pi/\sigma_{\pi}>5 \tag{3}\] where \(u\) is the unit weight error, defined as \(u=\sqrt{\chi^{2}/\nu}\), with \(\chi^{2}\) being the goodness of fit of the single-star astrometric solution1, and \(\nu\) being the number of observations used in that solution2. \(G\), \(G_{RP}\), and \(G_{BP}\) refer to the Gaia magnitudes, and \(E\) is the BP/RP Flux Excess Factor3, which is an indicator of flux anomalies between the Gaia \(G\) band and the \(G_{RP}\) and \(G_{BP}\) bands. The inequalities 1 and 2 can be converted to astrometric and photometric goodness flags, respectively, where stars that pass the inequalities have values set to 1, and stars that fail them have values of 0. Stars that fail these cuts are typically in crowded fields or occasionally close binaries where it is more difficult to disentangle the astrometry and photometry of separate sources. Footnote 1: astrometric_chi2_al in the Gaia archive Footnote 2: astrometric_n_good_obs_al in the Gaia archive Footnote 3: phot_bp_rp_excess_factor in the Gaia archive Alternative cuts using the Renormalized Unit Weight Error (RUWE) have recently become popular for vetting astrometric solutions in place of the cut for \(u\), particularly the requirement that RUWE \(<\) 1.4 (Lindegren, 2018).
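As a concrete illustration, the three flags could be evaluated on an archive query as in the minimal sketch below; the column names follow the Gaia archive, while the flag names and the assumption that the query result sits in a pandas DataFrame are ours:

```python
import numpy as np
import pandas as pd

def quality_flags(gaia: pd.DataFrame) -> pd.DataFrame:
    """Evaluate inequalities 1-3 on a Gaia archive query result.

    Assumed columns (Gaia archive names): phot_g_mean_mag, bp_rp,
    phot_bp_rp_excess_factor, astrometric_chi2_al,
    astrometric_n_good_obs_al, parallax, parallax_error.
    """
    G = gaia["phot_g_mean_mag"]
    color = gaia["bp_rp"]

    # Unit weight error u = sqrt(chi^2 / nu) of the single-star solution.
    u = np.sqrt(gaia["astrometric_chi2_al"] / gaia["astrometric_n_good_obs_al"])

    flags = pd.DataFrame(index=gaia.index)
    # Inequality 1: astrometric goodness flag.
    flags["astrometric_good"] = u < 1.2 * np.maximum(1.0, np.exp(-0.2 * (G - 19.5)))
    # Inequality 2: BP/RP flux excess factor within the accepted band.
    E = gaia["phot_bp_rp_excess_factor"]
    flags["photometric_good"] = (E > 1.0 + 0.015 * color**2) & (E < 1.3 + 0.037 * color**2)
    # Inequality 3: parallax signal-to-noise of at least 5.
    flags["parallax_good"] = gaia["parallax"] / gaia["parallax_error"] > 5
    return flags
```

RUWE does not enter these flags; the alternative RUWE \(<\) 1.4 criterion raised above is considered separately.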
This cut is most often used specifically to identify probable binaries (Bryson et al., 2020), and while binaries do provide challenges to youth assessment, they are also a core component of all young associations, and are included in our model of the solar neighborhood introduced in Section 3. The requirement that RUWE \(<\) 1.4 is also more than twice as restrictive as the cut on \(u\), removing over 3.3 million stars out of the 54 million that exist before the astrometric cut. Furthermore, recent work by Fitton et al. (2022) has shown that RUWE is also increased by the presence of protoplanetary disks, which are a common feature of young associations. We therefore conclude that the possible quality improvements that this RUWE cut provides do not justify the detrimental effects on the completeness of young populations, particularly at the youth probability calculation stage. We therefore do not apply this restriction for the initial selection of stars. Applying this cut is however useful for improving our detection of young associations in velocity space, a choice that we discuss in Section 4.1. Inequality 3 provides an additional quality check on parallax specifically, with a more permissive limit of 5 compared to the value of 10 chosen in SPYGLASS-I. This looser restriction is common in papers using Gaia data (e.g. Arenou et al., 2018; Katz et al., 2019), and its use reflects improvements made in Section 3.2, which greatly improve our handling of parallax uncertainties. While additional restrictions related to distance uncertainty will be necessary for identifying young groups, further parallax restrictions are no longer required for accurate assessments of youth. Finally, we dropped the requirement that visibility_periods_used\(>8\) used in SPYGLASS-I, as no stars fail that restriction in Gaia DR3. Stars that passed these quality restrictions were admitted to the main quality-restricted dataset we used for analysis, which contains nearly 53 million stars within 1 kpc. ### Distances While inverting Gaia parallaxes provides an accurate distance measurement in the near field, when expanding our search to a distance of 1 kpc the uncertainties in these measurements become much more significant and asymmetrical, reducing the accuracy of this distance calculation method. To improve these results, Bailer-Jones et al. (2021) uses a series of priors to refine the distance measurements relative to results from raw Gaia parallaxes. This work provides both geometric distances, which use a direction-dependent prior in addition to Gaia parallaxes, and photogeometric distances, which include an additional color-magnitude prior which favors distances which produce positions in color-magnitude space consistent with expectations. Both of these measurement methods have been shown to produce a robust improvement over inverted parallaxes, especially for sources with larger fractional uncertainties (Lutz and Kelker, 1973; Bailer-Jones et al., 2021). We experimented with both distance calculation methods, running our full young star identification pipeline and HDBSCAN clustering routine on both datasets. We found limited visually identifiable differences between the two, with locally larger radial scatter in the photogeometric distances for some associations. While these differences were very subtle, we selected the geometric distances for the purposes of this project, despite the higher accuracy of the photogeometric distances reported in Bailer-Jones et al. (2021). 
Since our detection of young stars is based on a Bayesian framework that uses distances and magnitudes, the use of distances with their own priors on magnitude may introduce undesired artifacts. This suspicion appears to be reflected in the subtly wider scatter in some associations during our testing, which may be the result of reddening anomalies manipulating and distorting the priors for the Bailer-Jones et al. (2021) photogeometric distances. Since both reddening and photometric youth are likely to skew generalized photometric priors, it would not be surprising to see less accurate photogeometric distances in these environments despite their higher accuracy relative to geometric distances across the rest of the sky. ## 3 Young Star Survey Our methods for identifying young stars and associations, as well as computing their basic properties, closely follow the methods of SPYGLASS-I. However, we do provide some minor updates that improve performance. These changes include updates which both reflect the new Gaia DR3 photometric system and refine our young star identification methods. ### Model Generation Like in SPYGLASS-I, our methods necessitate a model of stellar populations to compare with the Gaia sample we gathered in Section 2. This requires model stars that are representative of the solar neighborhood, and capture the diversity in age, mass, metallicity, and binarity that exists. All of these factors modify luminosity and thereby affect the probability of youth for possible young stars. Most components of our model generation directly follow SPYGLASS-I, which can be referenced for further detail on our methods. We assumed constant star formation over the age of the solar neighborhood from 1 Myr to 11.2 Gyr (Binney et al., 2000). We then sampled this distribution uniformly in log-space to ensure strong model coverage on the pre-main sequence, and accounted for the subsequent overselection of young stars in the model using a prior in our youth probability calculation (see Equation 4). Metallicities are drawn from the probability distribution provided by the GALAH survey (Buder et al., 2018; Hayden et al., 2019), in the smoothed form shown in SPYGLASS-I. While metallicity does have some dependence on the galactic Z coordinate over a 1 kpc scale, associations are rare beyond about 100 pc from the galactic plane (e.g., Bobylev and Bajkova, 2016), so relatively few associations are likely to be affected by significant metallicity variations. The masses of single or primary system components are then drawn from the Chabrier (2005) system IMF, with possible companions added according to the binary and triple system rate curves from SPYGLASS-I, which are based on the multiplicity rates and higher-order multiplicity behaviors from Duchene and Kraus (2013). We also used the same mass ratio distribution for companions that was previously used in SPYGLASS-I, which were based on the power law distributions from Kraus et al. (2011) and Rizzuto et al. (2013). System separations were also generated to assess whether the binaries are resolved. Unresolved binaries are visible as a single source, requiring that the photometry of the components be merged in the model, while resolved systems remain separate. The separations we use follow the distribution from Raghavan et al. (2010), which was also used in SPYGLASS-I. We then generated model photometry by interpolating photometric observables from isochrones according to the randomly-generated stellar properties described above. 
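A compressed sketch of this sampling scheme is given below; the metallicity, IMF, and multiplicity prescriptions shown are simplified stand-ins for the GALAH, Chabrier (2005), and SPYGLASS-I distributions described above, not the exact forms used:

```python
import numpy as np

rng = np.random.default_rng(0)
n_model = 1_000_000  # reduced from the 10 million model stars used per bin

# Ages: log-uniform between 1 Myr and 11.2 Gyr, densely sampling the
# pre-main sequence; the resulting overselection of young stars is later
# corrected by the age prior p(y) in the youth probability calculation.
age = 10 ** rng.uniform(np.log10(1e6), np.log10(1.12e10), n_model)

# Metallicities: a Gaussian stand-in for the smoothed GALAH distribution.
feh = rng.normal(0.0, 0.2, n_model)

# Primary masses: a log-normal stand-in for the Chabrier (2005) system IMF.
mass = rng.lognormal(mean=np.log(0.25), sigma=0.6, size=n_model)

# Companions: a flat multiplicity rate and mass-ratio law in place of the
# SPYGLASS-I multiplicity and mass-ratio prescriptions.
is_multiple = rng.random(n_model) < 0.4
mass_ratio = np.where(is_multiple, rng.uniform(0.1, 1.0, n_model), 0.0)
companion_mass = mass_ratio * mass

# Photometry is then interpolated from isochrone grids at (age, mass, feh),
# and unresolved pairs have their fluxes merged at the bin distance.
```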
Like in SPYGLASS-I, we used PARSEC isochrones (Chen et al., 2015) to generate this model photometry, adopting the revised DR3 photometric system (Riello et al., 2021) and updating our isochrone grids to reflect that. Following SPYGLASS-I, we based our grid density on the rate of stellar evolution, requiring that each slice in age and mass must contain at least two points on the horizontal branch, except for the most massive of stars, which have extremely rapid evolution. We slightly increased the number of isochrone ages used in the grid to 616 from 498 in SPYGLASS-I, but left the mass and metallicity grids unchanged. We interpolated values for \(G\), \(G_{BP}\), and \(G_{RP}\) off the resulting grid according to the values of age, mass, and metallicity for each model star, producing photometry for each one. Finally, we merged the photometry of unresolved binaries. In SPYGLASS-I, all model stars were taken to be at the same distance due to the survey's relatively limited 333 pc distance horizon, which reduced the differences in the unresolved rate across the sample (see the discussion in SPYGLASS-I). However, with our expansion of this search to 1 kpc, a more versatile approach is required to properly capture the contribution from unresolved binaries at these larger distances. To do this, we split the selection of Gaia stars into 8000 bins sorted by distance, all but one being equally populated with 6621 stars. For the distance to the stars in each bin, rounded to the nearest 1 pc, we generated a model with 10 million sample stars. At that distance, we computed angular separations for each generated linear separation, merging the photometry of all stars with separations below 1 arcsecond, following SPYGLASS-I. We did not generate multiple models for bins with the same rounded distance, so in practice we generated models at 1 pc increments for bins with distances above \(\sim\)90 pc. The spacing between bins was increasingly wide closer to the sun, with the two closest bins to the sun having average distances of 19 and 30 pc. Coarser distance sensitivity within 90 pc is however of limited concern for youth assessment, as SPYGLASS-I showed that our sensitivity to young groups in that region is relatively weak due to dominant geometric projection effects. The total number of models generated was 939. This step completed our set of intrinsic models, which were later modulated with reddening and distance to produce model apparent magnitudes for individual stars. A sample model at 500 pc is provided in Figure 1. While the updates to the Gaia photometric system and binary management were important for the generation of statistics, they were also subtle to the eye, making this model and the others we generated difficult to distinguish from the SPYGLASS-I models. There was still some slight under-sampling that produced artefacts on the subgiant branch and OB sequence like in SPYGLASS-I, however our finer grid sampling in these models reduced these effects compared to SPYGLASS-I. The effects of these artefacts nonetheless have little effect on our statistics, especially given that less massive stars dominate our young sample. Figure 1: An HR diagram demonstrating our sample model using the new DR3 photometric system. This example uses a distance of 500 pc for merging binaries, although we also generate 938 other models for comparison with stars at different distances. Like in SPYGLASS-I, there are some streak-like density anomalies on the giant branch and far upper main sequence. 
These are caused by the speed of stellar evolution in these regions, however youth is typically not contentious there, so sampling density issues are unlikely to be important. ### Generating Star Statistics In SPYGLASS-I, distances to Gaia stars were directly combined with apparent magnitudes to generate absolute magnitudes, with uncertainties derived from the photometric and parallax uncertainties. While this produces an accurate representation of the absolute magnitude uncertainty for an individual magnitude, the Gaia \(G\), \(G_{BP}\), and \(G_{RP}\) absolute magnitudes are all covariant in distance, and as a result, treating all three as independent provides an imperfect statistical representation of these uncertainties. To resolve this, we instead introduced the distance corrections directly into the model. We randomly generated a set of distances for each star from a gaussian centered on the mean Bailer-Jones et al. (2021) geometric distance with high and low 1-sigma intervals equal to the upper and lower limits of the corresponding distance measurement. These became model distances, which, when combined with the model magnitudes, generated model absolute magnitudes, which can be directly compared to our Gaia observables. This is the same approach used to introduce reddening, both in SPYGLASS-I and in this paper. We interpolated reddening values from the Lallement et al. (2019) reddening maps, along with the upper and lower uncertainty intervals. We then generated a set of model reddenings drawn from the reddening probability distribution for each star in the model. These reddenings were applied to the model stars like for distance, changing the model accordingly. The direct addition of model reddening and distance values to the intrinsic stellar models therefore produced a model specific to each star, which fully accounts for the covariance of the apparent magnitudes in reddening and distance. While this approach makes our uncertainty handling more accurate, photometric uncertainties from Gaia are often much smaller than those induced through distance measurements, and the choice to build the distance uncertainty into the model does not improve the density of model stars. As a result, statistical measurements which only consider photometric uncertainties result in many stars with tightly-constrained Gaia photometry having such tight photometric distributions that few, if any, model stars lie within a few sigma in magnitude-space. We retrospectively identified a similar issue on rare occasions in SPYGLASS-I, in which a small number of very low-uncertainty stars were excluded from our sample through the requirement that at least 50 model stars must reside within 1-sigma of each Gaia star in magnitude space. This choice was intended to remove white dwarfs and other stars not covered within our model, however it inadvertently also excluded a few high-quality candidates. Modifications to our methods are therefore necessary to maximize the number of good comparisons to model stars for each Gaia star. We therefore introduced factors which account for internal uncertainties in the models themselves. These model uncertainties were separated into two components: the uncertainty within the model itself, and the coarseness of our model population in sparsely-populated sections of the CMD.
Two new uncertainty factors were therefore introduced into the formula for the probability of a sample star being consistent with a Gaia observation, updating the equation presented in SPYGLASS-I to the following: \[p(y|g)\propto p(y)p(g|y)=\prod_{i}\exp\left(-\frac{(g_{i}-x_{i})^{2}}{2(\sigma_{g,i}^{2}+\sigma_{x,i}^{2}+\sigma_{s,i}^{2})}\right) \tag{4}\] In this framework the index \(i\) runs over each observable in \(x\), with \(g_{i}\) being the observables of the Gaia star, \(x_{i}\) being the observables of the model star, and \(\sigma_{g,i}\) being the uncertainty in Gaia observables. The only prior \(p(y)\) we have accounts for the oversampling of young stars in age space, and the corrective factor is equal to the age of the star. The last two \(\sigma\) values in the denominator of the exponential are therefore new to this publication and not included in SPYGLASS-I. The first new uncertainty, which we call the model uncertainty, \(\sigma_{x,i}\), is set equal to 0.02 mag. This value is chosen to represent the approximate minimum magnitude difference induced by a 0.05 dex change to metallicity, with that definition meant to represent the magnitude change induced by a minimally resolvable change in the model (Chen et al., 2015; Garcia Perez et al., 2016). This 0.02 mag value is also considerably smaller than the width of typical main sequences of star clusters, ensuring that it does not introduce any additional uncertainty not already seen in populations with presumed identical properties. This additional uncertainty factor has little effect for stars with uncertain Gaia photometry, however it ensures that stars with extremely well-constrained Gaia photometry are still able to take advantage of our model sampling density. Most anomalies produced by small Gaia uncertainties were resolved by the addition of this model uncertainty parameter. However, regions in the CMD with particularly low model densities still occasionally produced anomalous youth results. These issues are most evident in the region where the subgiant branch intersects the pre-main sequence, where our logarithmic age sampling populates this region well with young stars, but the sampling of subgiants remains sparse. As such, stars identified as young in this region typically had few or even no model subgiants within 1\(\sigma\), despite the presence of subgiants in nearby sections of the CMD. There was also a region below the main sequence where a small group of stars sits below the range of our model populations, most likely due to the presence of relatively rare stars with metallicities outside the range our model considers. Both of these regions occasionally see stars identified as young, despite the fact that their position casts significant doubt on their youth. The need to ensure that these populations have sufficient model coverage for a meaningful youth assessment prompts us to add the second new uncertainty factor to the denominator of the exponent in Equation 4, \(\sigma_{s,i}\). We set this value equal to 0.1 magnitudes multiplied by a new integer \(j\). We found that stars with questionable youth assessments generally had \(N<200\) model stars within \(1\sigma\) in magnitude space, so we use the uncertainty term \(0.1j\) to account for coarseness of the model grid, where \(j\) is initialized to 0 and increased by 1 until \(N>200\).
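A schematic version of this calculation for a single star is sketched below; the within-1\(\sigma\) count is approximated here by a \(\chi^{2}\) threshold, and the function is a simplified stand-in rather than the exact SPYGLASS implementation:

```python
import numpy as np

def equation4_weights(g, sigma_g, model_mags, model_ages,
                      sigma_x=0.02, n_min=200):
    """Schematic implementation of Equation 4 for one Gaia star.

    g, sigma_g : observed absolute magnitudes (G, BP, RP) and their
                 uncertainties, arrays of shape (3,)
    model_mags : model absolute magnitudes, shape (n_model, 3)
    model_ages : model star ages in yr; the prior p(y), proportional to
                 age, corrects the log-uniform oversampling of young ages
    The model must contain more than n_min stars for the loop to terminate.
    """
    j = 0
    while True:
        var = sigma_g**2 + sigma_x**2 + (0.1 * j)**2
        chi2 = np.sum((g - model_mags)**2 / var, axis=1)
        # N = model stars within ~1 sigma in magnitude space; widen the
        # kernel by increasing j until the comparison sample is large enough.
        if np.sum(chi2 < g.size) > n_min:
            break
        j += 1
    weights = model_ages * np.exp(-0.5 * chi2)  # prior p(y) times likelihood
    return weights / weights.sum()

# P(Age < 50 Myr) is then the total weight of model stars younger than 50 Myr:
# p_young = weights[model_ages < 50e6].sum()
```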
While this does have the effect of somewhat blurring our model beyond its resolution for stars with \(j>0\), the base uncertainty addition of 0.1 magnitudes is still capable of resolving age differences as small as 5 Myr on the pre-main sequence, a level of resolution exceeding what is required for a broader youth determination. For white dwarfs, the uncertainty increases required to get \(N>200\) can be unphysically large due to the absence of white dwarfs from our models, although given that these stars are not of interest to our work, poor handling of them is not a concern, and no white dwarfs end up in our young star sample. ### Selecting Young Stars Our search for young stars was performed on the quality-restricted sample of 53 million stars described in Section 2. However, due to the changes to our stellar probability analysis in Section 3.2, we found that the \(P_{Age<50Myr}>0.1\) cut marked stars as young that are up to 0.5 magnitudes lower on the pre-main sequence compared to SPYGLASS-I. Most of the changes to \(P_{Age<50Myr}\) appear to result from our new Bayesian management of distance and Gaia DR3 improvements to distance uncertainty, which together significantly reduce uncertainties for the absolute magnitudes in each Gaia color band. Since young stars are rare relative to older stars, poorly-constrained measurements tend to default to a low \(P_{Age<50Myr}\). The result of improving our uncertainties therefore widens the range of stars with \(P_{Age<50Myr}>0.1\), which in the clustering stage merges populations that perhaps should not be merged. We therefore changed the selection limit to \(P_{Age<50Myr}>0.2\), which brings the parameter space identified as young roughly in line with SPYGLASS-I while improving the contrast between structures and the background. Given our calculated probabilities, that restriction is expected to produce a sample where approximately two-thirds of the total stars have Age\(<50\) Myr. This choice therefore accepts slightly elevated contamination in exchange for the inclusion of populations near the upper end of our age limit, where the field binary sequence and pre-main sequence overlap the most. This choice to prioritize these older populations over contamination synergizes well with the clustering method we use to identify young populations, which is built to identify overdensities in the presence of a significant background. Most of the non-young model stars that contribute to \(P_{Age<50Myr}\) for values near 0.2 are also binaries, so they can be suppressed using RUWE cuts, which we impose in Section 4.1. The final population of photometrically young stars after updating the \(P_{Age<50Myr}\) threshold is 418611. ## 4 Young Populations ### Clustering Our clustering in this publication uses HDBSCAN (McInnes et al., 2017), and almost entirely follows the choices made in SPYGLASS-I. We cluster in five-dimensional \((X,Y,Z,c*v_{T,l},c*v_{T,b})\) space, using the constant \(c\) to equalize the typical scales of the space and velocity components, which is set to \(c=6\) pc km\({}^{-1}\) s. We set the HDBSCAN parameters min_samples and min_cluster_size to 10, and set \(\epsilon\) to 25, with the latter being the parameter used in SPYGLASS-I to allow for the merging of groups with similar enough distributions to hint at mutual connections. We only used excess of mass (EOM) clustering, which identifies groups by their persistence across clustering scales.
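A sketch of this clustering call, with placeholder inputs standing in for the actual young star sample, might look as follows; we assume the `hdbscan` Python package, in which \(\epsilon\) corresponds to the `cluster_selection_epsilon` argument:

```python
import numpy as np
import hdbscan  # the McInnes et al. (2017) implementation

# Placeholder inputs: galactic positions (pc) and transverse velocities (km/s).
rng = np.random.default_rng(0)
X, Y, Z = rng.normal(0.0, 300.0, size=(3, 5000))
vt_l, vt_b = rng.normal(0.0, 10.0, size=(2, 5000))

c = 6.0  # pc km^-1 s; equalizes the spatial and velocity scales
coords = np.column_stack([X, Y, Z, c * vt_l, c * vt_b])

clusterer = hdbscan.HDBSCAN(
    min_samples=10,
    min_cluster_size=10,
    cluster_selection_epsilon=25.0,
    cluster_selection_method="eom",  # excess-of-mass clustering
)
labels = clusterer.fit_predict(coords)  # label -1 marks unclustered field stars
```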
Leaf clustering, which identifies the smallest scales of overdensities useful in analyzing substructure, is beyond the scope of this publication. Starting with the sample of candidate young stars drawn from our quality-restricted sample, we required that \(d/\sigma_{d}>25\) to ensure manageable distance spreads in galactic coordinates, following SPYGLASS-I. This reduces the sample size to 199434, making this restriction more impactful in this publication than it was in SPYGLASS-I. This cut is nonetheless worthwhile to improve clustering for the closer and more accessible populations that are our focus. Finally, we added a new cut requiring RUWE \(<1.4\). This is the loosest of the typically-accepted RUWE cuts for removing binaries, and we find that this cut is useful to reduce the background of misidentified older binaries (e.g., Bryson et al., 2020; Stassun and Torres, 2021). The final sample of high-quality young stars contains 181524 stars after these restrictions. Once these cuts were made, we applied HDBSCAN to identify clusters, resulting in 228 populations being identified within 1 kpc, containing about 39000 photometrically young stars. One additional and particularly important change was made to the clustering methodology for this paper: the removal of the cut on the cluster persistence factor. In SPYGLASS-I, this factor was used to remove clusters associated with reddening anomalies, however in doing so it likely removed many real features. By removing this cut, we reintroduced many new and potentially interesting features, however we also greatly increased the potential influence of anomalous features. Our larger search distance exacerbates this issue by introducing significant new reddening anomalies through both the addition of new dense clouds into the search area and through the addition of more area within the backgrounds of these clouds, which can result in reddening anomalies spanning hundreds of parsecs. As a result, the remainder of this section primarily concerns cluster validation, including both the removal of likely spurious features, and the revision of the cluster member lists. ### Reintroduction of Photometrically Ambiguous Populations Our clustering methods identified many young populations, both known and unknown. However, since we only clustered on a subset containing high-quality and photometrically young stars, there are certain to be other members that are not included in these base samples. The generation of complete group populations therefore requires the reintroduction of stars that are co-spatial with identified members in space-velocity coordinates, but are not photometrically identifiable as young. To do this, we followed the same method employed in SPYGLASS-I, defining a distance metric equal to the distance to the 10th nearest young member (\(d_{10}\)), which is similar to the metric used in HDBSCAN clustering. Candidate members were subsequently identified as having a \(d_{10}\) smaller than the largest value for an identified young member. To preserve information on the credibility of each candidate member, we computed clustering proximity (\(D\), formerly "strength" in SPYGLASS-I), which is a measure of a star's centrality within the space-velocity distribution of a group. It is defined such that the star in the cluster with the largest \(d_{10}\) has a value of zero, and the star with the smallest \(d_{10}\) (i.e., the most central member) has a value of 1, with a linear scale in between.
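A minimal sketch of the \(d_{10}\) and proximity computation follows, using scikit-learn nearest neighbors; the handling of self-matches and the linear rescaling reflect our reading of the definition above:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def clustering_proximity(members, candidates):
    """d10 distance metric and clustering proximity D in the scaled 5-D space.

    members    : (n, 5) coordinates of the identified young members
    candidates : (m, 5) coordinates of stars to evaluate
    """
    nn = NearestNeighbors().fit(members)
    # Distance to the 10th-nearest young member; for the members themselves
    # we request 11 neighbors and discard the zero-distance self-match.
    d10_mem = nn.kneighbors(members, n_neighbors=11)[0][:, -1]
    d10_cand = nn.kneighbors(candidates, n_neighbors=10)[0][:, -1]

    d_hi, d_lo = d10_mem.max(), d10_mem.min()
    keep = d10_cand <= d_hi                 # candidate-member criterion
    D = (d_hi - d10_cand) / (d_hi - d_lo)   # 0 at the edge, 1 at the center
    return keep, np.clip(D, 0.0, 1.0)
```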
We later use \(D\) as the basis for computing cluster membership probabilities, \(P_{mem}\), in Section 5.2. Our search for extended candidate populations was applied to the nearly unrestricted set of 94 million stars discussed in Section 2. This full stellar sample contains all stars for which membership and the feasibility of follow-up observations can be assessed, and provides maximally complete populations for the groups we identified. We used the subset of this sample for which we generated youth statistics in validation work, as the properties of a group's extended population of candidates strongly inform its coherence (see Section 4.3). In total, approximately 3 million stars were identified as candidate members of a young association. The nearly 100-fold increase in the membership of these expanded populations relative to the 39000 young stars used to identify them suggests significant field contamination in many of these populations, which we address in Section 5. ### Cluster Vetting To clean the sample of false clusters produced by our inclusive clustering result, we must identify indicators that are associated with the false groups we often see. Many metrics for cluster vetting exist, however they can also be difficult to quantify and apply uniformly due to the highly varied positions, shapes, and environments of the young populations identified. As a result, we found that visual input from a human was often required for proper assessment. In this section we define the indicators used to identify false positives, and vet these populations by hand in accordance with those metrics. Our first major indicator of false positives arises from poorly-corrected reddening, which results from the spatially-coherent errors of the Lallement et al. (2019) reddening maps. This is by far the most common cause of false cluster identification, especially in the most heavily reddened environments. These reddening anomalies often result in large swaths of old stars behind a cloud being identified as young. However, since the reddening vector moves stars nearly parallel to the main sequence, field stars that are falsely identified as young due to uncorrected reddening are generally limited to those that were originally on the subgiant branch, but were reddened towards the lower-right on the HR diagram, placing them on the pre-main-sequence. Moderate reddening of subgiants yields colors of \(0.5<G_{BP}-G_{RP}<1.5\), defining a region on the HR diagram which usually contributes minimally to young stellar populations (with approximate masses \(0.75M_{\odot}<M<1.4M_{\odot}\)), but can become the dominant region occupied by photometrically young objects if reddening is improperly corrected. Most well-known young groups, such as Sco-Cen, Orion, and Vela, have populations of photometrically young stars with between 1% and 5% of their membership in this reddening error-contaminated region, and this low fraction is expected for a typical IMF (e.g., Chabrier, 2005), as the less massive K and M stars are much more common compared to the earlier-type stars in this color range. Sco-Cen, being largely gas-free throughout most of its extent, has a fraction of 1.2%, while Orion, which has more gas present, has a higher value, at 2.7%, indicating the introduction of minor reddening anomalies. More consistently reddened environments and those at larger distances frequently had much higher values, however we found that for all but the most distant associations, spurious populations were reliably identified by fractions exceeding 50%. The most distant regions are however exceptions, as with the quality restrictions we employ prior to clustering, populations with \(d\gtrsim 800\) pc often have limited representation beyond this often-contaminated region below the subgiant branch. Assessments of these distant populations must therefore include the extended populations, which are not restricted on distance uncertainty and therefore extend to dimmer magnitudes than the population used for clustering. A real cluster among these distant populations can generally be identified by the presence of a sequence extending parallel to an isochrone and directly overlapping with the stars just below the subgiant branch used to define the group. Figure 2: Example groups which demonstrate our vetting choices. For each group, we show the photometrically young sample used to identify the group (large yellow dots), the full quality-restricted sample of candidate members which includes photometrically older candidates (small black dots), and a \(D\)-restricted subset of these candidates limited to stars near the cluster center (red dots). The value of \(D_{min}\) used for restriction is annotated, along with the candidate group ID and location. For accepted groups, the assigned SCYA IDs are provided in brackets. 1 Gyr, 80 Myr, and 20 Myr isochrones are provided for reference, which represent the field population, maximum age of SPYGLASS populations, and typical young groups, respectively. The top-left panel shows the Sco-Cen association, where restriction in \(D\) shows a strong sequence which overlaps with the photometrically young sample. The top-middle panel shows the Circinus complex, which is real but very distant, resulting in most young stars being found just below the subgiant branch. While this typically indicates a spurious group, OB stars are present and the \(D\)-restricted subset reveals a strong sequence, so we accept this population. The top-right population is also identified by the region below the subgiant branch, however there are no OB stars, the \(D\)-restricted main sequence is mostly focused on the field sequence, and that \(D\)-restricted sequence appears consistent with a reddened field sequence (the reddening vector is shown for reference). We therefore reject it. The bottom-left panel shows a new and very tenuous population. Despite the small population of young stars in the \(D\)-restricted sequence, much of the young sequence originally identified is still there, while the field population is comparatively nearly gone. We therefore accept this group. The bottom-middle panel shows a likely false group in southern Scorpius. While this sample has OB stars, it is otherwise identified by the region beneath the subgiant branch, and the \(D\)-restricted sample does not skew towards a young sequence, instead looking like a depopulated version of the full sequence. We therefore reject it. The final (bottom-right) panel shows the Pleiades, an older population that straddles the 80 Myr isochrone. This isochrone marks our limit for defining groups as young, so this group and any older than it are rejected from the young sample, and handled in Appendix A.
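In practice, the color-fraction indicator described above reduces to a single number per candidate group; a minimal sketch follows (the function name is ours, and the input is the group's photometrically young members):

```python
import numpy as np

def reddening_contamination_fraction(bp_rp):
    """Fraction of a group's photometrically young stars falling in the
    reddening-vulnerable color range 0.5 < G_BP - G_RP < 1.5."""
    bp_rp = np.asarray(bp_rp)
    return float(((bp_rp > 0.5) & (bp_rp < 1.5)).mean())

# Values of ~0.01-0.05 are typical of genuine groups such as Sco-Cen and
# Orion; fractions above 0.5 flag a likely reddening artifact for all but
# the most distant candidate groups.
```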
The other major identifier we use to detect false positives in clustering is based on a lack of internal coherence. In real young populations, the space and velocity coordinates should both show a concentration of genuine members, in which credibly young stars are more common closer to the group center, and field stars become increasingly dominant further out. False groupings tend to have either no central concentration of young stars, or concentrations guided by reddening patterns, in which more central stars, specifically in spatial coordinates, may lie on a different sequence produced purely by the locally heavy reddening of a field sequence. We assessed the central concentration of the young sequences using the quality-restricted extended populations of candidates, which are drawn from the age-unrestricted population of 53 million stars for which we have measurements of youth probability. We restricted these populations in \(D\), observing whether a young sequence emerges more clearly with restricted \(D\), or whether the sequence could be explained as just a random assortment of field stars.

The metrics we have outlined, which focus on central concentration and subgiant branch reddening contamination, are used to assess whether each association in our catalog is real or a spurious detection. We removed clusters that failed either of these metrics, making the final judgement for each association by hand. In Figure 2, we provide some illustrative vetting examples, which cover the full range of marginal cases that arise in our set of candidate populations. The top-left panel shows the population of Sco-Cen as a prototypical example of a real group (Preibisch and Mamajek, 2008; Pecaut and Mamajek, 2016). The young stars used to identify it cover much of the pre-main sequence, and low-mass stars are much more numerous among them, as the Initial Mass Function would predict (e.g., Salpeter, 1955; Chabrier, 2005). There is also no clear concentration of stars below the subgiant branch (1.2% with \(0.5<G_{BP}-G_{RP}<1.5\)), suggesting that reddening has a minimal effect on young star identification here. Restricting the extended population in \(D\) results in an increasingly clean stellar sample with increasing \(D\), suggesting that while there is field contamination near the edges, the center is very pure. Most of the stars on the \(D\)-restricted sequence reside above the 20 Myr PARSEC isochrone (Chen et al., 2015), meaning that it is clearly within our target age range. The remaining five panels are all much less obvious in their vetting decisions due to deficiencies in one or more of the observables that make Sco-Cen's sequence so convincing. Those examples therefore cover all major choices involved in group vetting.

The top-center and top-right panels of Figure 2 both show populations mainly identified through stars in the reddening-vulnerable region below the subgiant branch. The top-center panel shows the Circinus complex, which is a known young population (e.g., Reipurth et al., 2008), while the top-right panel shows a reddening anomaly behind the Serpens complex, which we label Candidate Group (CG) 192. There are a few differences between them. The first is the presence of OB stars, which exist in Circinus but are entirely absent in CG-192. While smaller populations will not always have OB stars, their presence is generally a strong indication that a real population is present.
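The central-concentration test described above can be sketched as follows; the inputs (each candidate's \(D\) value and a precomputed flag marking stars above a 20 Myr isochrone) are hypothetical stand-ins for the quantities discussed in this section.

```python
import numpy as np

def young_fraction_vs_dmin(D, above_20myr, d_grid=None):
    """For each minimum-D restriction, return the fraction of the
    restricted extended population sitting above a 20 Myr isochrone.
    A genuine group shows this fraction rising with D_min, while a
    random assortment of field stars stays near the field level."""
    D = np.asarray(D, dtype=float)
    above = np.asarray(above_20myr, dtype=bool)
    if d_grid is None:
        d_grid = np.linspace(0.0, 0.9, 10)
    fracs = np.full(len(d_grid), np.nan)
    for i, d_min in enumerate(d_grid):
        sel = D > d_min
        if sel.any():
            fracs[i] = above[sel].mean()
    return d_grid, fracs
```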
However, the much more telling vetting indicator in this case is the sequence of the extended population. In Circinus, restricting the population in \(D\) reveals a strong pre-main sequence well above the 20 Myr isochrone, following a standard isochrone track that passes through the photometrically young founding sample. The young founding sample for CG-192 is also elevated above the 20 Myr sequence, which would predict the presence of even more stars along this sequence at lower masses, as in Circinus. Instead, the sequence curves down across the isochrones, settling along the right edge of the field main sequence. Rather than being a young sequence, this appears consistent with a reddened copy of the field sequence, pushed to the lower-right of the CMD by approximately 2 magnitudes of reddening. With its location near the center of the Serpens Complex and adjoining dense clouds, an anomalous population here is not unexpected.

The bottom-left panel of Figure 2 shows CG-110, a lightly-populated and tenuous association with a clear central concentration. Its extended population is dominated by the field; however, with \(D>0.7\), stars on the young sequence approach the field in abundance, with about one-third of stars being found near the 20 Myr isochrone rather than the field sequence. Furthermore, none of the young stars used to define CG-110 are near the subgiant branch, so there is no indication of reddening interference. The result is a population that, while very small, satisfies every requirement of a genuine population, resulting in it being accepted. The bottom-center group (CG-26), however, is less convincing. It has a few OB stars, which would ordinarily be predictive of a substantial young population given their rarity; however, restricting in \(D\) reveals a sequence that looks like a depopulated version of the field sequence, without any young population emerging. While it is tempting to define a population based only on the OB stars, known groups span a range of masses, so we must require that credible evidence of a young sequence appears over the entire region where it should be visible. In this particular case, it is unclear why the OB stars are present; however, this is a crowded region of the central Milky Way in southern Scorpius, so it is possible that the group was identified mainly by reddening anomalies. Groups from reddening anomalies tend to have broader velocity spreads, so the area connected to it may have included some genuinely young stars on its periphery, making its youth appear more credible. Nonetheless, this is not a group that produces internally-consistent results, so we remove it.

Finally, in the bottom-right panel of Figure 2, we show the Pleiades, an example of a group which is real but not especially young. These older groups were occasionally identified as young populations, typically through some combination of reddening and a strong binary sequence. We used the 80 Myr PARSEC isochrone to set the limit between young and old groups, which is slightly younger than the typically-accepted ages for the Pleiades (100-160 Myr; e.g., Stauffer et al., 1998; Gossage et al., 2018). Anything found with a pre-main sequence below the 80 Myr isochrone is excluded from further analysis, but we do report these populations in Appendix A. We show all groups in the sample in Figure 3, colored by whether they are accepted, rejected, or real but old. Groups that are accepted by our vetting process are treated as genuine young associations for the remainder of this publication.
We give all such groups new indices within a new "SPYGLASS Catalog of Young Associations" (SCYA), which are used to refer to them later in this paper. A total of 116 groups pass vetting and are given a SCYA ID as a result. The clusters removed are often grouped in sky coordinates, occasionally having essentially identical spatial distributions (see Figure 3). This tends to provide reliable verification for a group's removal, as the radial stacking of groups is almost always produced by reddening anomalies that result in the systematic misidentification of stars as young in their backgrounds. The Serpens and Cepheus clouds produce some of the most frequent false detections, but most notable molecular clouds in the solar neighborhood produce at least one anomalous group behind them. However, these tend to be easily removed using the already-imposed vetting methods. While we did consult reddening maps when making our final vetting choices to confirm reddening suspicions, no groups were removed solely due to the presence of a reddening anomaly, as it is common for genuine populations to be embedded in heavy reddening.

## 5 Membership Probabilities

In previous SPYGLASS papers, the clustering proximity parameter \(D\) (which includes both spatial and kinematic distance) was used as a proxy for a star's likelihood of membership, with stars more central in their parent distribution being deemed more probable members than stars near the boundary with the field. However, since that value only measures the distance to young neighbors, it ignores the local density of the field population. As a result, any given value of \(D\) could have very different implications depending on the density profiles of both the group and the field. A true membership probability (\(P_{mem}\)) would compare the relative sizes of the field and the young population in the vicinity of the star, similar to the statistical approach used by Sanders (1971) and numerous others. We therefore created an algorithm to produce conversion maps between \(D\) and \(P_{mem}\) for each population in our sample. Results are provided for each group that passes vetting.

### 5.1 Corrective Factors for Stellar Populations

We must first establish a way of reliably estimating the populations of the young group and the field separately so that they can be compared. Using the quality-restricted extended candidate samples, we generated near-certain subsets of the group and field populations using their youth probabilities. We defined stars in the quality-restricted extended populations with \(P_{Age<50Myr}>0.2\) as young (following Section 3.3), and stars with \(P_{Age<50Myr}<0.001\) as old. The locations of these identified sequences are shown in Figure 4, using Sco-Cen as an example. Identified young stars were, as expected, pre-main sequence and OB stars, while reliably old stars were below the pre-main sequence, on the giant branch, and on the white dwarf cooling sequence. With these confident populations established, the total populations of the young and old samples could then be estimated by calculating the fraction of stars missed by the young and old selections.

Figure 3: Clusters identified by HDBSCAN, shown in XY galactic Cartesian coordinates and l/b galactic sky coordinates, marked by their vetting status. We introduce small differences in hue to distinguish overlapping groups, but we broadly use orange-ish shades marked by up-arrows to represent real groups, while blueish shades marked by down-arrows represent false groups. Real but older groups are shown as black circles. The extensive rays of false groups towards the right side of the XY plot are mainly produced by reddening anomalies in the foregrounds of Serpens, Aquila, and the Pipe Nebula.
Missing fractions are relatively straightforward to compute for the field stars, as the field tends to have similar photometric properties regardless of direction, especially within the galactic plane, where most associations reside and metallicity gradients are minor. Our sensitivity to stars is distance-dependent, so we computed the correction for field stars as a function of distance. To do this, we took the stars in our Gaia sample that are not likely young (\(P_{Age<50Myr}<0.2\)) and computed the fraction of those which have \(P_{Age<50Myr}<0.001\) across 20 bins from 0 to 1 kpc. After smoothing the results with a Gaussian kernel, we had a curve for the field abundance conversion as a function of distance. We provide this curve in Figure 5. For each star with \(P_{Age<50Myr}<0.001\) at a given distance, a corresponding corrective factor can be read off, providing an expected number of missing stars for each identified field star.
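A minimal sketch of this distance-binned field correction is given below, assuming arrays of distances and youth probabilities for the sample; the smoothing kernel width is an illustrative assumption, and the sketch assumes every distance bin is populated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def field_correction_curve(dist_pc, p_young, n_bins=20, sigma=1.0):
    """Field corrective factor vs. distance: the inverse of the fraction
    of not-likely-young stars (P_age<50Myr < 0.2) that are securely old
    (P_age<50Myr < 0.001), in 20 bins from 0 to 1 kpc, smoothed with a
    Gaussian kernel."""
    dist_pc = np.asarray(dist_pc, dtype=float)
    p_young = np.asarray(p_young, dtype=float)
    not_young = p_young < 0.2
    secure_old = p_young < 0.001
    edges = np.linspace(0.0, 1000.0, n_bins + 1)
    frac = np.full(n_bins, np.nan)
    for i in range(n_bins):
        sel = not_young & (dist_pc >= edges[i]) & (dist_pc < edges[i + 1])
        if sel.sum() > 0:
            frac[i] = secure_old[sel].sum() / sel.sum()
    smooth = gaussian_filter1d(frac, sigma)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, 1.0 / smooth  # corrective factor = inverse fraction
```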
We then estimated the missing fraction for the members of the young population. As shown through the completeness discussion in SPYGLASS-I, SPYGLASS recovery rates are age-dependent. However, as for the field population, the much larger search horizon employed in this work prevents us from reaching the bottom of the main sequence in some cases, resulting in distance also affecting completeness. Unlike for the nearby groups used to calculate completeness in SPYGLASS-I, there is not nearly enough coverage in distant groups to reliably factor distance into that sort of analysis. We therefore developed a conversion based on the populations available to us in each group. We plan on addressing the issue of completeness in more detail in an upcoming paper, which will provide a statistical view of the populations identified in this paper (Kerr in prep).

Our missing fractions for the field allowed us to estimate the total field population in each group. The population of the young group itself should then be the difference between the population of all candidates and the field population. The corrective factor which provides the total group population from the young sample is then computed as the size of the young group population divided by the number of stars with \(P_{Age<50Myr}>0.2\). While field stars often dominate candidate populations, occasionally overwhelming the contribution from the young association, our vetting choices ensure that for a sufficiently restricted range of \(D\), the population of the young sequence approaches or exceeds that of the field, providing robustly measurable young stellar populations which enable this calculation.

Figure 4: The regions of the HR diagram that we identified as reliably old (\(P_{Age<50Myr}<0.001\)) or young (\(P_{Age<50Myr}>0.2\)), using Sco-Cen as an example. Young stars are identified on the pre-main sequence and at the tip of the OB sequence, and old stars are confidently identified on the lower main sequence, giant branch, and white dwarf cooling sequence. The faintness of most stars in those reliably young or old sets makes the fraction of the respective populations that they represent a strong function of distance once faint stars begin to become invisible to Gaia.

Figure 5: The curve for the field population corrective factor, which is the inverse of the fraction of total field stars identified as old using \(P_{Age<50Myr}<0.001\) (see Fig. 4). The increase with distance reflects the less reliable youth assessment for stars that are higher on the main sequence.

For large and kinematically distinctive groups, this method produced consistent conversion factors for our young populations across a range of choices for \(D\) restriction. However, for many smaller groups, the sample of young stars was small enough relative to the field that this calculation was dominated by Poisson uncertainty in the population of field stars, skewing it. We nonetheless found that by restricting the minimum value of \(D\), the corrective factor changes as the group becomes increasingly dominant over the field, until it plateaus at the presumed corrective factor for the group. The \(D\) values we used to restrict the selection are the same as those later used in Section 5.2 to calculate \(P_{mem}\) as a function of \(D\). We only ran these calculations for selections above a given \(D\) when at least 5 objects were likely old or young stars, defined as having \(P_{Age<50Myr}<0.001\) or \(P_{Age<50Myr}>0.2\), respectively. This ensures that there is a population of stars firmly in either the field or the group population to help set these fractions. Upon calculating the corrective factor for a range of \(D\) restrictions, we produced a KDE of the resulting corrective factors and computed the final correction value as the peak of that KDE. Some examples of the \(D_{min}\) vs. \(f_{corr,group}\) curve are shown in Figure 6, alongside the fit from the KDE. In most cases the KDE peak returns the plateau in the corrective factor achieved once the group population dominates over the uncertainty in the field calculation. These plateaus are occasionally accompanied by some variation caused by internal age differences as a function of \(D\), which the KDE averages over. However, there are also a small number of groups with too few likely old or young stars to generate calculations over a wide range of \(D\), and 3 additional extremely tenuous groups which are dominated by the field for most \(D\) restrictions, making their plateaus undetectable to the KDE. In these cases, we computed the corrective factor as the average of the two solutions with the most restricted \(D\), and in all cases this produced a corrective factor within the range expected by visual inspection.
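The KDE-based selection of the final corrective factor might be sketched as follows, assuming the \(f_{corr}\) measurements are ordered by increasing \(D_{min}\); this illustrates the logic described above rather than the exact implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def final_corrective_factor(f_corr_values):
    """Pick the final group corrective factor as the peak of a KDE built
    from the f_corr measurements over all D_min restrictions; fall back
    to the mean of the two most D-restricted values when too few exist
    (assumes the input is ordered by increasing D_min)."""
    f = np.asarray([v for v in f_corr_values if np.isfinite(v)])
    if f.size < 4:            # too few points for a meaningful KDE
        return f[-2:].mean()  # the values most dominated by group members
    kde = gaussian_kde(f)
    grid = np.linspace(f.min(), f.max(), 512)
    return grid[np.argmax(kde(grid))]
```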
### 5.2 Producing a \(D\)-\(P_{mem}\) Curve

The reliably young and old populations, combined with the corrective factors that account for the missing fractions, allow us to compute the fractional abundance of members as a function of \(D\). For binning, we chose 10 bins covering \(0<D<1\) as a base selection, and for more populated groups we doubled the number of bins until fewer than 200 stars fell in the bin just above \(D=0.4\). More populated groups often dominate the center of their parameter space and only lose dominance near the edges, and more bins allow that dropoff to be resolved. \(D=0.4\) is an intermediate value that avoids the occasional field domination at low \(D\) and the inconsistent density at high \(D\), which makes it a good place to assess the need for adjusting bin density. We also required that there be at least 5 likely old or young stars for each fractional abundance calculation, as defined in Section 5.1.

This produced a rough curve of \(P_{mem}\) as a function of \(D\); however, it is not smooth due to small sample sizes in many bins, and it also does not increase consistently with \(D\), a result that would imply an unphysical pattern of \(P_{mem}\) decreasing for stars closer to the cluster center. To smooth the result, we fit it with a Gaussian CDF. The wide range in cluster density profiles makes any profile fitting choice difficult to apply universally; however, Gaussian CDFs have asymptotes at 0 and 1, which should be the behavior of a membership probability curve. We also find close agreement between the Gaussian CDF profile and our most populated \(P_{mem}\) curves. Some sample fits are shown in Figure 7.

Using this fitting method, only 6 of 116 groups require additional attention to produce reasonable results. Four of these are group-dominated populations without enough field stars to produce a meaningful background contribution. For these, we simply calculated a mean \(P_{mem}\) and attributed it to all members; all but one of these values is between 85 and 91%. The other is just below 60%, and this is caused mainly by its age, which only marginally passes our 80 Myr isochrone age cut. This results in some more massive young stars that have already reached the main sequence being flagged as likely older than 50 Myr by our youth probability estimate. The remaining two groups had significant ranges in \(D\) with barely enough stars to permit \(P_{mem}\) calculations, which resulted in difficult-to-fit noise spikes. In these cases, we restricted fitting to \(D<0.7\), and both produced very tight fits. The occasional assignment of stars in the older young associations to the likely old sample (see Section 6.2) suggests that the \(P_{mem}\) results of groups older than 50 Myr should be treated with caution, such as for Perseus OB3, which has a gradual fit to its \(D\)-\(P_{mem}\) conversion despite the visual dominance of the \(\alpha\) Persei Cluster at high \(D\).

Figure 6: Curves of the corrective factors for the populations of young groups, plotted in yellow, as a function of \(D_{min}\), the minimum value of \(D\) to which the population is restricted for a given point. The final corrective factor is shown as the dark horizontal line. Sco-Cen and Vela are both prototypical examples where the final value of \(f_{corr,group}\) is selected as the peak of the corresponding KDE. SCYA-35 (CG-110 pre-vetting) plateaus only for very restrictive choices of \(D_{min}\), and SCYA-85 has only three corrective factor measurements, so in these cases we calculate the final \(f_{corr,group}\) as the mean of the last two values, which tend to be most dominated by group members.

These curves produce maps between \(D\) and \(P_{mem}\) for each individual group. For each star in a given group with a given \(D\), we interpolated a value of \(P_{mem}\) off the \(P_{mem}\) vs. \(D\) curve for that group. Since \(D\) is available for all stars in the minimally restricted stellar population, we were able to provide \(P_{mem}\) values for all candidates in that extended sample.
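A minimal sketch of the Gaussian CDF fit and the resulting \(D\)-to-\(P_{mem}\) interpolation is shown below; the initial-guess parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def gaussian_cdf(D, mu, sigma):
    """Gaussian CDF profile: asymptotes at 0 and 1, as a membership
    probability curve should."""
    return norm.cdf(D, loc=mu, scale=sigma)

def fit_pmem_curve(D_bins, p_mem_binned):
    """Fit the rough binned P_mem(D) curve with a Gaussian CDF and
    return a smooth function usable for any star's D value."""
    popt, _ = curve_fit(gaussian_cdf, D_bins, p_mem_binned,
                        p0=[0.5, 0.2], maxfev=10000)
    return lambda D: gaussian_cdf(np.asarray(D, dtype=float), *popt)

# Example usage (hypothetical inputs):
# pmem_of = fit_pmem_curve(bin_centers, rough_pmem)
# P_mem = pmem_of(star_D_values)
```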
To ensure that all candidates have a credible chance of membership within their parent associations, we used \(P_{mem}\) to restrict the sample for each association, requiring that \(P_{mem}>0.05\). Not all groups contain stars that fail this \(P_{mem}\) cut, so this change is only impactful in groups which have an especially weak separation from the field. This restriction is nonetheless important in those limited cases to ensure that the extents of groups do not incorporate too much of the field, which contains a background level of ejected association members and field binaries that become increasingly abundant at low values of \(P_{mem}\) (e.g., Sullivan and Kraus, 2021; Kerr et al., 2022). Photometrically young stars used to define the group are kept regardless of whether they fail the \(P_{mem}\) cut.

The resulting selection of candidate members includes 1.2 million stars across the 116 approved groups. The majority of these stars are expected not to be young, with the values of \(P_{mem}\) providing an approximate likelihood of membership for each of these candidate members. In most cases, the total population of members identifiable with Gaia can be approximated as the sum of all \(P_{mem}\) values for stars in the association, which would imply a total population of real members in that set of approximately \(2.8\times 10^{5}\). A constant star formation rate over the 11.2 Gyr history (Binney et al., 2000) of the solar neighborhood would imply \(2.5\times 10^{5}\) stars formed in the last 30 Myr out of our total sample of 94 million, so the size of this population is consistent with expectations.

The relationship between the number of identified stars and the corresponding population is set by the age and distance, which control recovery rates. While the sizes of the extended populations are in proportion to the size of the founding population in most cases, there are exceptions where inferred population sizes from \(P_{mem}\) may differ significantly from the sizes of the populations accessible to Gaia. The extended population can be disproportionately large in situations where our detection of young stars happens in close proximity to older populations, such as in the case of SCYA-95, which produces extended populations far larger than the \(P_{mem}\) curve should allow. The opposite effect is seen in older populations, where real members enter the set of likely old stars used to set \(P_{mem}\). This can result in a systematic underestimation of \(P_{mem}\). While these effects warrant caution, particularly when using our lists for statistical assertions, effects skewing \(P_{mem}\) appear to be quite rare in our sample, especially in nearer and more substantial populations.
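Numerically, the star-formation-rate consistency check quoted above reduces to simple arithmetic, sketched here under the stated assumptions:

```python
# Back-of-envelope version of the consistency check described above:
# a constant star formation rate over the solar neighborhood's history
# predicts the number of sample stars formed in the last 30 Myr.
n_sample = 94e6      # full Gaia sample (Section 2)
history_gyr = 11.2   # solar neighborhood history (Binney et al. 2000)
window_myr = 30.0
expected_young = n_sample * (window_myr / (history_gyr * 1e3))
print(f"{expected_young:.2e}")  # ~2.5e5, vs. ~2.8e5 from summed P_mem
```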
We list credible candidate members of each association in Table 1, consisting of both the population of young stars used to define the group in the clustering step, and the extended populations with \(P_{mem}>0.05\).

Figure 7: Estimated \(P_{mem}\) values for a range of \(D\), plotted as dots, alongside the fit we use as a map between \(D\) and \(P_{mem}\). The top row shows typical fits, with the top-left panel showing the young star-dominated Sco-Cen and the top-right panel showing the field-dominated SCYA-35. The bottom row shows examples which include rare complications. For SCYA-47 in the bottom-left, its low stellar density over a wide range in \(D\) resulted in some outliers that skew the fit above \(D=0.7\). The result shown restricts the fit to \(D<0.7\), providing a strong agreement between the points and the curve. The bottom-right panel shows SCYA-76, which is a group with so little field contribution that it does not produce a sensible curve. The fit provided is therefore just the average \(P_{mem}\).

We include stars that do not pass the astrometric and photometric quality restrictions, and include relevant flags necessary for re-imposing these restrictions, including the astrometric and photometric quality flags, \(\pi/\sigma_{\pi}\), \(P_{mem}\), and whether the star was in the young set used to define the group. We also provide basic information for each prospective member, including the Gaia ID, position, and magnitude. In Table 2 we provide a summary of all notable stellar samples referenced throughout this paper, which serves as an overview of the operations that led to the final sample shown in Table 1.

The distributions of groups in Galactic Cartesian coordinates, Galactic sky coordinates, and transverse velocities are shown in Figures 8, 9, and 10, respectively. To improve the view and highlight the most notable associations, we plot groups with 8 or more OB stars in the founding population separately, and provide a second panel for groups with fewer than 8 O and B stars. This includes many well-known groups as well as a few with limited coverage in the literature, which are discussed in Section 6.1. The Circinus complex is a peculiar case in transverse velocity, so we discuss it separately in Appendix B.

Figure 8: Distribution of groups we identify in galactic coordinates, shown here in XY space, with all groups labelled. Stars shown are limited to young founding members. This panel shows the 15 groups with 8 or more O and B stars included in the founding population, which we use to indicate the most heavily populated groups.

We provide interactive versions of all three plots in the online-only version of this paper, allowing for subsets of the data to be selected as desired, alongside age measurements, which are introduced in Section 6.2.

## 6 Survey and Cluster Overview

With 116 associations detected, our updated census provides a significant expansion over SPYGLASS-I, which found only 27 groups at the top level, or 26 excluding the older Pleiades cluster. Much of this expansion comes from the widening of our survey to a 1 kpc search horizon, although we also see significant deepening of our survey within the 333 pc search horizon of SPYGLASS-I. Of the 116 groups in our sample, 41 have at least 10 stars within 333 pc, making them detectable using only the stars within that radius. We therefore nearly double the number of groups available relative to SPYGLASS-I, even when that original search radius is considered. The increase in survey depth is even more notable when we consider group mergers. The higher sensitivity in this work often results in low-density populations connecting adjoining associations (see Section 7.1), such that the 26 young populations in SPYGLASS-I merge to form only 16 populations here. As a result, this survey expands the number of populations represented within 333 pc by a factor of 2.6, and expands the total list by more than a factor of seven.

Figure 8: (continued) The remaining 101 groups we identify, shown in XY galactic space. The labelled groups in the first panel are shown in the background in grey so that locations relative to these larger groups can be assessed without the larger groups obscuring the smaller ones. An interactive version in XYZ 3D galactic coordinates is available in the online-only version of this figure, which includes age data (see Section 6.2), as well as buttons to restrict the sample to interesting subsets of groups.
Since much of the 30 Myr old sequence has already descended close to the main sequence, nearly all of the identifiably young stars at that age are M dwarfs. As such, unless the association is large enough to be identified purely on the basis of O and B stars, our methodology loses the ability to detect new structures with age \(\tau\gtrsim 25\) Myr at about 400-500 pc, where Gaia begins to lose sensitivity and astrometric quality at the bottom of the pre-main sequence. Therefore, despite the rich set of stellar populations identified here, there is likely significant structure in the outer reaches of our search radius waiting to be discovered.

In this section we provide basic information on the populations identified in this work. This includes both a broad overview of the position that the populations we identify hold within the literature, and their intrinsic properties, allowing each association to be contextualized within our current knowledge of associations.

### 6.1 Our Groups in Literature

\begin{table}
\begin{tabular}{c c c c c c c c c c c c c}
\hline \hline
SCYA & Gaia DR3 ID & RA & Dec & \(d\) & \(m_{G}\) & \(G_{BP}-G_{RP}\) & A\({}^{a}\) & P\({}^{b}\) & \(\pi/\sigma_{\pi}\) & \(P_{Age<50Myr}\) & \(P_{mem}\) & F\({}^{c}\) \\
 & & (deg) & (deg) & (pc) & & & & & & & & \\
\hline
1 & 6086677607216395264 & 200.1376 & -47.2830 & 585.1 & 14.47 & 1.40 & 1 & 1 & 73.2 & 0.350 & 0.000 & 1 \\
1 & 6088255921798929280 & 199.2986 & -44.2373 & 778.3 & 15.53 & 1.53 & 1 & 1 & 35.6 & 0.708 & 0.000 & 1 \\
1 & 6097169628200000640 & 210.9157 & -44.2703 & 669.1 & 20.26 & 1.00 & 1 & 1 & 3.1 & & 0.080 & 0 \\
1 & 6097286313872329600 & 211.9991 & -43.8656 & 658.0 & 20.27 & 1.96 & 1 & 0 & 2.7 & & 0.092 & 0 \\
1 & 60976529664263186 & 212.8071 & -43.3230 & 666.6 & 17.21 & 1.95 & 1 & 1 & 16.9 & 0.002 & 0.084 & 0 \\
1 & 6097678221050944512 & 212.5070 & -43.3811 & 644.2 & 18.81 & 2.67 & 1 & 1 & 6.3 & 0.006 & 0.052 & 0 \\
1 & 6097710347401574784 & 212.9618 & -43.0180 & 673.8 & 20.04 & 1.67 & 1 & 0 & 3.8 & & 0.075 & 0 \\
1 & 6097742817358919680 & 213.6807 & -43.1084 & 649.2 & 16.05 & 1.45 & 1 & 1 & 28.2 & 0.000 & 0.051 & 0 \\
1 & 6097784220841457536 & 214.1132 & -42.5274 & 660.8 & 20.79 & 1.12 & 1 & 0 & 2.3 & & 0.052 & 0 \\
1 & 610785990058069440 & 208.0558 & -44.4360 & 697.7 & 20.44 & 1.00 & 1 & 0 & 2.5 & & 0.073 & 0 \\
1 & 6107886293162139776 & 208.5639 & -44.46361 & 695.0 & 16.77 & 1.68 & 1 & 1 & 17.9 & 0.001 & 0.066 & 0 \\
1 & 6107915597718417664 & 209.2874 & -44.1924 & 647.8 & 19.71 & 2.54 & 1 & 1 & 3.9 & & 0.052 & 0 \\
\hline
\end{tabular}
\({}^{a}\) The boolean solution to the astrometric quality cut, which is based on the unit weight error. 1 passes, 0 fails.
\({}^{b}\) The boolean solution to the photometric quality cut, which is based on the BP/RP flux excess factor. 1 passes, 0 fails.
\({}^{c}\) Indicates whether the star was part of the robustly young stellar population used to identify the group in the clustering stage.
\end{table}
Table 1: Candidate members of young associations, sorted by their SCYA group ID. We include both members used to identify the population, and members of the extended population with less certain youth, provided that they have \(P_{mem}>0.05\). We include the Gaia ID, numerous basic properties, and flags used to assess the Gaia observation quality and membership likelihood of each member.
\begin{table}
\begin{tabular}{l r l c}
\hline \hline
Sample & Number of Stars & Description & Section \\
\hline
Unrestricted (UR) & 94,238,210 & Full Gaia sample & 2 \\
Quality-Restricted (QR) & 52,965,232 & UR with poor measurements removed & 2 \\
Photometrically Young (PY) & 418,611 & QR with \(\mathrm{P}_{Age<50Myr}>0.2\) & 3.3 \\
High-Quality Young (HQY) & 181,524 & PY with restrictions to improve clustering & 4.1 \\
Young Clustered (YC) & 38,899 & HQY stars located in a cluster & 4.1 \\
Extended Clustered (XC) & 3,053,874 & YC + phase-space neighbors from UR & 4.2 \\
Vetted Young Clustered (VYC) & 36,182 & YC with false groups vetted & 4.3 \\
Extended Vetted Clusters (XVC) & 1,317,609 & XC with false groups vetted & 4.3 \\
Final Cluster Candidate Sample & 1,222,152 & XVC with low-\(P_{mem}\) stars outside VYC removed & 5.2 \\
\hline
\end{tabular}
\end{table}
Table 2: Summary of populations discussed in this paper, including their sizes, an explanation of their origin, and the section of this paper they originate from. The sample in Table 1 is given by the "Final Cluster Candidate Sample".

Most of the associations we identify in this work are known in the literature. We therefore cross-matched our lists with known populations to locate existing identifiers that better contextualize these groups. For better-established groups we simply provide common names, while some of the smaller groups required direct cross-matching with additional lists. We compared our catalogs to the Theia and UPK lists (Kounkel and Covey, 2019; Sim et al., 2019), as well as the catalogs from Cantat-Gaudin et al. (2020) and Prisinzano et al. (2022), which represent four of the deepest existing surveys covering young stellar populations. We initially flagged groups with any stars cross-listed with any of these catalog associations, and then investigated each possible match individually to determine whether the match has both clear overlap with the core of our group and a similar extent. Those with reasonable matches have their catalog IDs provided as their names in Table 3. Of the 116 groups identified, 74 had clear equivalents, with the remaining groups either overlapping with but clearly different from a known group, or completely unknown.

Figure 9: The distribution of groups in l/b galactic sky coordinates. Our plotting choices are the same as for Figure 8, with the top panel showing the large groups with 8 or more OB stars, and the bottom panel showing all other groups, with the larger groups in grey. An interactive form of this plot is available in the online-only version of this figure, which colors groups by their distance.
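The first-pass cross-match flagging described above amounts to testing for shared Gaia source IDs between group member lists; a minimal sketch, with a hypothetical data layout, is given below. Flagged pairs were then inspected by hand for core overlap and similar extent.

```python
def flag_catalog_matches(scya_members, catalog_members):
    """First-pass cross-match: flag every (SCYA group, catalog group)
    pair that shares at least one Gaia source ID. Inputs are assumed to
    be dicts mapping group IDs to sets of Gaia source IDs."""
    flagged = []
    for scya_id, scya_ids in scya_members.items():
        for cat_id, cat_ids in catalog_members.items():
            if scya_ids & cat_ids:  # non-empty set intersection
                flagged.append((scya_id, cat_id))
    return flagged
```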
Cases of overlapping but non-equivalent populations were most common in comparisons with the Kounkel and Covey (2019) catalog, as our populations were often grouped together under large-scale "string" structures rather than separate populations. We have our own independent merging condition through the use of \(\epsilon\) in our HDBSCAN implementation, which merges structures by identifying young stellar populations in between that bridge disparate overdensities into continuous structures. While many groups remained independent from each other in our clustering results, this merging condition did cause some mergers at scales comparable to or exceeding those of Kounkel and Covey (2019), including the merging of the widely-separated populations of Perseus OB2 and the Orion Nebula Complex (see Section 7.1).

Despite frequent group mergers in our sample, some of the "strings" in the Theia catalog contain many SCYA groups. While our sensitivity to late K and M stars drops at \(\sim 400-500\) pc, resulting in the potential loss of some connecting structure, many of the Theia strings containing the most SCYA groups are within that radius, suggesting that these groups' lack of a merger is not a sensitivity issue. As a result, we see little reason for further agglomeration of groups, and therefore treat independent populations within Theia groups as new populations, rather than components of a known agglomeration. However, we do note groups with overlap in Table 3. This choice is supported by follow-up studies which have shown a lack of consistent velocity coherence in these strings, further questioning whether the sub-components of these structures have genuine connections to one another (Zucker et al., 2022).

Out of the 116 groups we find, we categorize 27 as overlapping with known populations, but having a different enough extent that they cannot be directly connected to any known population. One of these groups, SCYA-47, is substantial enough that it is included in the set of large groups with 8 or more OB stars shown in Figures 8, 9, and 10. It includes components of Theia 86 and 87 from the Kounkel and Covey (2019) catalog, as well as parts of Prisinzano et al. (2022) group 633, although none of these groups resemble the extent we show. Due to its status as a new and substantial population, we refer to this group as the Canis Major South (CaMaS) Association for future reference. SCYA-7, which contains the open clusters NGC 6250 and NGC 6178, was not counted as unknown in its presented extent, since it is dominated by known structures; however, the clusters it contains have not been previously connected. Due to the scale of the structure, we name it the Norma-Ara-Scorpius (NAS) Complex for future reference, after its location near the tripoint between those constellations.

In addition to the 27 groups with weak connections to known populations, 15 groups had no recognizable equivalents across these catalogs. While this paper was under review, a new catalog which uses HDBSCAN clustering on an age-unrestricted DR3 dataset was published by Hunt and Reffert (2023), providing one of the most sensitive general cluster surveys to date. This paper independently discovered 5 of the 15 previously-undiscovered groups.

Figure 10: The same as Figures 8 and 9, but in transverse velocity. The left panel shows groups with 8 or more OB stars, and the right panel shows all other groups. An interactive version of this plot is available in the online-only version of this figure. There we also have selector buttons that limit the sample to either newly-discovered groups with higher average velocities (see Section 7.3), or the Circinus complex, which is shown separately in Figure 14.

Of the groups with weak connections to known populations, 7 of 27 have close matches with the Hunt and Reffert (2023) catalog, with most of the remaining populations having at least some overlap. However, despite this survey's sensitivity, we found that 14 of our populations had no equivalent in the Hunt and Reffert (2023) catalog, demonstrating that an age-restricted survey may be necessary to detect the most tenuous young groups. After the inclusion of the DR3-based Hunt and Reffert (2023) cluster survey, 10 of our groups are entirely unknown, and an additional 20 have no direct equivalent in the literature.
This publication therefore provides a significant deepening of our knowledge of young associations over not just SPYGLASS-I, but also the existing literature. The properties of our newly-discovered groups in spatial and velocity coordinates are often unique, and we discuss the properties of these groups in further detail in Section 7.3.

### 6.2 Ages

While stellar ages can be calculated directly from the SPYGLASS infrastructure (as outlined in SPYGLASS-I), ages for individual stars can be impacted drastically by local inaccuracies in the reddening corrections, along with typical scatter in magnitude and distance. Computing ages on an association level can smooth out many of these systematic variations, resulting in a more reliable solution. While many groups contain significant internal age variations, such as those shown in SPYGLASS-II, association-level ages nonetheless provide a broad age categorization for the group, which can later be broken into sub-populations for studies on the association level. We therefore only provide association-level ages in this publication.

Our age calculation roughly follows the method employed in SPYGLASS-I, using a restricted form of the extended population for isochrone fitting. The quality restrictions we employ are selected to minimize contamination from non-members and binaries. First, we required that all stars in our fits have \(P_{mem}>0.9\) and RUWE \(<1.1\), removing most field contamination and binaries, respectively. This cut on RUWE is quite harsh, and was used in SPYGLASS-II to produce a sample nearly free of unresolved binaries for fitting, at the expense of completeness (Bryson et al., 2020). We then restricted our sample to the pre-main sequence by requiring that \(1.2<G_{BP}-G_{RP}<4\) and \(G>3\), which provides a region where age varies with magnitude in a predictable and consistent manner. An additional vertical cut to limit field contamination was also included, requiring that all stars be above an 80 Myr solar-metallicity PARSEC isochrone. This requirement was omitted for any older populations with part of their sequences below this isochrone. The lower sensitivity of our star detection for older populations means that groups in this category tend to dominate their parameter space, typically making broader selections appropriate for capturing the group without a considerable increase in contamination.

The restrictions presented so far represent a harsh set of limits, especially the requirement that \(P_{mem}>0.9\), a condition which members of some associations never pass. We therefore provide loosened restrictions that allow age calculation for smaller and more tenuous groups. For groups with fewer than 8 stars that meet our initial requirements, we first relax the binarity cut to RUWE \(<1.2\), a cut used in SPYGLASS-I and SPYGLASS-III in cases where completeness is desired over purity. For any sample that still has fewer than 8 members under that loosened restriction, we simply select the 8 members with the highest \(P_{mem}\) out of a sample that satisfies our color and magnitude cuts and has RUWE \(<1.2\). While samples with less restrictive cuts on \(P_{mem}\) are much more prone to contamination, we find that our cut on the 80 Myr isochrone does enough to limit field contamination that reasonable age solutions are still attainable.

Figure 11: A sample age fit, showing SCYA-3 (H-R 3321). The stars included in the fit are shown as black diamonds, and the best-fit pre-main sequence isochrone is shown as a thick red line. The stars included follow the restrictions described in Section 6.2, which limit the contamination from binaries and field stars. Isochrones of 10 Myr, 20 Myr, 40 Myr, 80 Myr, and 1 Gyr, from top to bottom, are shown for reference. The full set of fits for all 116 SCYA associations is provided in the online version of this paper.
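A sketch of this tiered sample selection is given below; the DataFrame column names, including a precomputed `above_80myr` isochrone flag, are hypothetical stand-ins for the cuts described above.

```python
import pandas as pd

def select_age_fit_sample(stars: pd.DataFrame) -> pd.DataFrame:
    """Tiered selection of the isochrone-fitting sample: P_mem > 0.9 and
    RUWE < 1.1 on the pre-main sequence (1.2 < BP-RP < 4, G > 3, above
    the 80 Myr isochrone), with progressively loosened cuts for small
    groups."""
    # Pre-main sequence region where age varies predictably with magnitude.
    pms = stars[(stars.bp_rp > 1.2) & (stars.bp_rp < 4) &
                (stars.g_mag > 3) & stars.above_80myr]
    strict = pms[(pms.p_mem > 0.9) & (pms.ruwe < 1.1)]
    if len(strict) >= 8:
        return strict
    # Loosened binarity cut for small groups.
    loose = pms[(pms.p_mem > 0.9) & (pms.ruwe < 1.2)]
    if len(loose) >= 8:
        return loose
    # Fall back to the 8 highest-P_mem stars passing the color/mag and
    # RUWE cuts.
    return pms[pms.ruwe < 1.2].nlargest(8, "p_mem")
```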
Once the samples for each group were selected, we used a least-squares fitting algorithm to fit the stars in color-magnitude space against a grid of PARSEC isochrones from 1 to 80 Myr. The resulting age fits are shown in Figure 11, which displays the high-confidence stars used for fitting alongside the best-fit isochrone. We do not include any uncertainties in our results due to the likely presence of substructure within these populations. The most obvious example is the Vela complex, which shows a very clear double sequence in its CMD; however, there are many other examples, such as Sco-Cen and Perseus-Orion, which also show evidence for non-coeval substructure. However, due to the scale of this survey, the complete sub-clustering and age-dating of all substructures is beyond the scope of this publication. More substantial populations with extensive substructure will therefore require regional studies to provide a comprehensive view of their history. However, for smaller associations without significant internal structure, these ages should be much more widely applicable.

### 6.3 Other Bulk Group Properties

Aside from age, we compile numerous other cluster properties in Table 3, following the properties included in the tables in SPYGLASS-I. These include median sky positions in galactic and celestial coordinates, as well as median distance, proper motion, and transverse velocity. For distance we also include a standard deviation, which can be indicative of the radial extent of associations at short distances, but tends to be increasingly dominated by parallax uncertainty towards the edge of our search radius. We also include approximate on-sky extents and velocity extents. The values reported are major and minor axes fit from multivariate Gaussians, and come from the galactic l/b coordinates and \(v_{T,l}\)/\(v_{T,b}\) transverse velocity coordinates, respectively.

\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c}
\hline \hline
SCYA & Name\({}^{ab}\) & N\({}^{c}\) & RA & Dec & l & b & \(D_{sky}\)\({}^{d}\) & d & \(\mu_{RA}\) & \(\mu_{Dec}\) & \(V_{T,l}\) & \(V_{T,b}\) & \(\sigma_{V_{T}}\)\({}^{e}\) & Age \\
 & & & (deg) & (deg) & (deg) & (deg) & (deg) & (pc) & (mas/yr) & (mas/yr) & (km/s) & (km/s) & (km/s) & (Myr) \\
\hline
\end{tabular}
\end{table}
Table 3: Bulk properties of the SCYA associations, including positions, distances, proper motions, transverse velocities, extents, and ages.

## 7 Notable Features

The results presented in Section 6 cover a range of associations with widely varied positions, velocities, and sizes.
A substantial share of these populations are either little-studied or completely unknown, so these populations provide important opportunities for future research and discoveries related to young stars, their formation mechanisms, and the properties of the planetary systems they contain. In this section, we highlight some of the most notable features to emerge from our analysis, and discuss their potential implications. The results we discuss are drawn directly from the data summaries provided in Table 1, the group overviews in Table 3, and the visual representations provided in Figures 8, 9, and 10. This provides a showcase of potential future research avenues which the study of our associations would facilitate.

### 7.1 Large-Scale Mergers

Since our clustering was designed to identify regions of interest for studies of common star formation, we merged some regions which are not typically merged. The most notable examples of this are the Vela complex, which includes a few dynamically different subregions described in Cantat-Gaudin et al. (2019), and the Orion-Perseus complex, which contains the Orion Molecular Cloud complex, Perseus OB2, and the recently-discovered but still notable Monoceros Southwest region described in SPYGLASS-I, in addition to weakly-defined connecting structure. Sco-Cen also grows in this survey, merging its extent from SPYGLASS-I with the main Chamaeleon Complex (i.e., Cha I and II). The last substantial new merger in this publication is SCYA-96, which contains the populations of Lyra, Cerberus, and Cepheus-Cygnus from SPYGLASS-I, in addition to the previously-known and more distant \(\delta\) Lyrae and RSG-5 clusters (Stephenson, 1959; Roser et al., 2016). The combined population contains over 1000 stars spanning nearly 300 pc. We name this population Cepheus-Hercules, or Cep-Her (pronounced to rhyme with "Zephyr"), after the endpoints of the constellations it spans, and we discuss the potential implications of this population and other similar structures in Section 7.2.

While components of all other major merged populations have at least some history of being studied together (see SPYGLASS-I for Sco-Cen, Cantat-Gaudin et al., 2019 for Vela, and Kounkel and Covey, 2019 for Cep-Her), the merged Orion-Perseus association (SCYA-115) is unique in the range of populations it includes, with the component Orion Nebula Complex, Perseus OB2, and even Monoceros Southwest complex all having been consistently viewed as separate to date, even by works like Kounkel and Covey (2019) which routinely merge smaller-scale structures. These populations are merged despite the exclusion of the \(\lambda\) Orionis Cluster, which is occasionally discussed alongside the Orion complex (e.g., Zari et al., 2019). The reason for their merger in this work is the presence of tenuous young stellar populations filling the space between the populations, largely consisting of stars identified as part of Taurus-Orion IV in SPYGLASS-I. Despite the disparate positions of the core populations being merged, the components of this combined population actually have very similar ages to one another in SPYGLASS-I, with both Perseus and Orion having regions of active star formation alongside older generations with maximum ages around 20 Myr. Monoceros Southwest is older at \(\sim\) 25 Myr, but not far from the range seen for the other populations.
This suggests that the entire complex could be explained by a single, initially co-spatial event, in which star formation begins in a large-scale molecular cloud which breaks up, likely under the influence of feedback from O and B stars, eventually resulting in the emergence of more significant star formation events in Perseus, Orion, and Monoceros Southwest, while leftover material in between produces smaller star-forming events that link the structure. More thorough traceback and age-dating will, however, be necessary to assess these connecting structures and establish whether they are dynamically consistent with being bridge structures between these larger populations. This work will be necessary to confirm these connections, as the low velocity between the complex and the field makes Orion particularly vulnerable to field contamination, which may expand group boundaries and falsely merge structures.

### 7.2 Cep-Her and Evidence for Large-Scale Patterns

The Cep-Her complex (SCYA-96) is the fifth-most populated association we identify in this survey, and it is also one of the largest, with an end-to-end spatial extent comparable to Sco-Cen. This is not the first time a large population in this region has been proposed. The structure was first broadly identified on stellar density maps by Zari et al. (2018) and later presented by Kounkel and Covey (2019) as the 3000-member Theia 73, which contains many of the populations included by HDBSCAN in this work. However, Theia 73 is not identical to our definition of Cep-Her, as it skips many of the sub-populations closer to the galactic equator, while including populations like CFN which our HDBSCAN implementation sees as completely separate. The region containing Cep-Her is also noted by Prisinzano et al. (2022), where it is labelled as connected to the \(\delta\) Lyr cluster.

In this work, Cep-Her contains 1164 founding young members. Given the lower sensitivity of our young star identification for ages near Cep-Her's bulk age of 29 Myr, we expect a low recovery rate for members in the range of 20-25% (see the completeness discussion in SPYGLASS-I), implying a total population of around 5000. This puts Cep-Her on a similar scale to many of the other great young associations in the solar neighborhood, and its age makes it a potential older analog to populations like Sco-Cen. Cep-Her also contains some of the youngest stars in the Kepler field, and planets have already been found around multiple stars within, making Cep-Her of particular interest for studies of young planets (Bouma et al., 2022). However, the large scale of Cep-Her may make it particularly important as a probe of the large-scale star-forming structures present at the time of its formation.

Figure 12: The distribution of ages for stellar populations, presented in the XY plane, and limited to the inner 500 pc, where patterns are visible for stars near 30 Myr. Annotations show the locations of the Radcliffe Wave, the Split, and our proposed older structures. The age distribution can also be interacted with in the online-only version of Figure 8, which allows the isolation of structures around 30 Myr, or the young structures (age \(<20\) Myr), which are plotted alongside the outlines of the Radcliffe Wave and the Split.

The study of large-scale star-forming structures has advanced considerably in recent years with the discovery of the Radcliffe Wave (Alves et al., 2020) and the Split (Lallement et al., 2019), which were identified as kpc-scale dust overdensities.
Both of these structures include stellar populations alongside this gas and dust, which comprise most major young associations in our sample (Zucker et al., 2022). We show the spatial distribution of stars in our survey within the nearest 500 pc in Figure 12, coloring groups by the ages computed in Section 6.2. Components of the Radcliffe Wave are traced by a chain of substantial associations, which all have ages younger than 10 Myr. Our view of the Split contains a wider range in ages due to the older populations in Sco-Cen and Vela, which exist alongside regions of active formation, but Sco-Cen, Vela, and Serpens still provide a strong outline for the local extent of this structure.

Figure 13: Component populations of proposed 25-35 Myr structures, which are extended along an axis perpendicular to known spiral structure. We display these structures in XY and XZ galactic coordinates, showing their spatial coherence, as well as \(l\)-\(v_{T,l}\) space, in which continuous sinusoidal arcs typically indicate common velocities which are modulated by projection effects in \(l\). Shades of orange are associated with Cep-Her, while shades of purple and blue are associated with Carina-Musca. We also include the near edge of the Vela complex in black, which has a similar age and location to Carina-Musca. Both proposed structures follow consistent arcs in both XY and XZ coordinates, with Cep-Her's proposed structure being inclined to the galactic plane, while Carina-Musca's structure is relatively flat. Cep-Her and its companions show a tight velocity trend in \(l\), consistent with projections of a single common velocity. Carina-Musca has multiple components in \(l\)-\(v_{T,l}\) space, indicating that it is less dynamically coherent, but it still shows a few continuous arcs, with the largest consisting of Carina-Musca, SCYA-111, P51, Theia 246, and part of Vela, which is similar to Cep-Her in its spatial extent. We show the projected median 3D velocity vectors of the \(\delta\) Lyrae cluster in Cep-Her and of Carina-Musca as red and blue curves, respectively, demonstrating the relatively consistent velocities of these groups. These structures are prime targets for RV followup, which would enable the reconstruction of patterns in their formation sites via traceback.

The young ages of the populations associated with the Split and the Radcliffe Wave reflect their discovery through dust overdensities, which disperse soon after star formation. O and B stars have been used to identify similar structures without the presence of gas, such as the Cepheus Spur (Pantaleoni Gonzalez et al., 2021); however, the potential for discoveries using this method is limited by the rarity of these luminous stars. The discovery of Cep-Her therefore provides an older complement to the younger structures which comprise the Radcliffe Wave and the Split. The new-found accessibility of these older populations allows us to begin tracing the evolution of star formation not just within populations, but as a continuous system of interconnected processes that facilitate the initiation, progression, and termination of star formation throughout the nearby spiral arms. Cep-Her's age provides ample time for the warping and dispersal of its population, so conclusions based on the current configuration of the association should be treated with caution, pending dynamical traceback studies.
Nonetheless, the current forms of Cep-Her and some other nearby groups show patterns which are unique compared to the younger structures, most notably their spatial orientations. Unlike most large nearby structures, which either follow the nearby spiral arms or diverge from them at a very acute angle (Zucker et al., 2022), Cep-Her has a near-perpendicular orientation, spanning the 400 pc gap between the Radcliffe Wave and the Split while having a width of only about 100 pc. Similarly-aged but separate populations are also found on either side of Cep-Her, accentuating this pattern and bringing the total length of this possible structure to over 500 pc. We show the spatial distribution of the potential component populations in Figure 13, alongside the variation in \(v_{T,l}\) transverse velocity with galactic \(l\) coordinate, which we use to indicate whether velocity spreads are consistent with geometric projection effects. There we show that the proposed component populations lie within a common plane in 3D space which is inclined to the galactic plane. Most velocity variations within these populations occur along the \(v_{T,l}\) axis, and this variation has a very consistent trend in \(l\), indicating that geometric projection effects dominate the velocity differences that we see. To verify this, we calculated projected velocities across Cep-Her using the known radial velocities and proper motions of the \(\delta\) Lyrae cluster, a centrally-located component of Cep-Her (Cantat-Gaudin and Anders, 2020; Tarricq et al., 2021). The result shows that mean velocities in Cep-Her, IC 4665, and Theia 98 all stay within 5 km/s of this velocity vector throughout their extents in \(l\), limiting the potential for structural change in the time since its formation.

This perpendicular orientation for associations around 30 Myr is not unique to Cep-Her and its neighbors, as another chain of \(\sim\)30 Myr old groups can be found opposite the Sun from that complex, stretching from Canis Major North (SCYA-65) to SCYA-111 and containing Carina-Musca. The subregions of Vela and Sco-Cen that overlap with this chain of associations, namely the CG4 subgroup in Vela (Cantat-Gaudin et al., 2019) and the IC 2602 branch in Sco-Cen, are only slightly older than these other structures, with SPYGLASS-I ages between 30 and 45 Myr, indicating a possible extension to this age pattern. As in Cep-Her, all potential component groups lie along the same plane in 3-D galactic coordinates; however, they are less clearly connected in velocity coordinates, with a few distinct arcs emerging in \(l\)-\(v_{T,l}\) space (see Figure 13). While this does not conclusively rule out a connection between these groups, it does emphasize the importance of a full 3-D dynamical analysis, which can establish whether these dynamically-distinct components trace back to any coherent past structure. However, even if individual arcs in \(l\)-\(v_{T,l}\) space are assumed to be separate, we still see a 400 pc-long structure in the region with velocity coherence, consisting primarily of Carina-Musca, SCYA-111, and part of Vela. We show a projected velocity vector for Carina-Musca in Figure 13, which is calculated using Gaia RVs, proper motions, and positions for stars in our sample with \(P_{mem}>0.8\), and as in Cep-Her, the differences between this velocity vector and the mean transverse velocities in these populations remain less than 5 km/s throughout its extent.
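The projection-effect test uses the standard decomposition of a single galactic-Cartesian space velocity into transverse components on the sky; a minimal sketch is given below, with an arbitrary example velocity rather than the measured \(\delta\) Lyrae value.

```python
import numpy as np

def projected_vT(U, V, W, l_deg, b_deg=0.0):
    """Transverse velocity components for a population sharing a single
    space velocity (U toward l=0, V toward l=90 deg, W toward the north
    galactic pole). The l-direction unit vector is (-sin l, cos l, 0),
    so v_T,l traces a sinusoid in l: the projection-effect signature
    tested in Figure 13."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    v_tl = -U * np.sin(l) + V * np.cos(l)
    v_tb = -(U * np.cos(l) + V * np.sin(l)) * np.sin(b) + W * np.cos(b)
    return v_tl, v_tb

# A common velocity produces a smooth arc across a structure's range
# in l, which can be compared to the observed mean v_T,l of each group.
l_range = np.linspace(30, 110, 9)
arc, _ = projected_vT(-10.0, -20.0, -5.0, l_range)  # hypothetical UVW
```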
Regardless of its exact parameters, the presence of another structure in addition to Cep-Her may nonetheless indicate a pattern of star formation along a perpendicular mode which dominated local star formation between 25 and 35 Myr ago. This motivates follow-up dynamical studies to confirm that these groups were both coherent and maintained orientations perpendicular to the spiral arms at the time of formation. If the current pattern of arm-perpendicular structures holds when traced back to formation, it would indicate that star formation in the solar neighborhood recently followed a pattern dominated by spurs, which are defined as chains of OB associations that deviate from the main arm at large angles (La Vigne et al., 2006). These structures are often seen cutting between spiral arms in other galaxies, and extend outwards from the primary dust lanes and young stars that trace the spiral arms (e.g., La Vigne et al., 2006). Spurs are also often found to have relatively regular, disk density-dependent spacings of 100-700 pc in both observations and models, which is quite comparable to the 400-500 pc between the structures we propose here (e.g., Wada and Koda, 2004; Dobbs and Bonnell, 2006; La Vigne et al., 2006). These structures therefore appear consistent with established patterns in galaxies, and could represent a strong local equivalent. However, we also know that the patterns of local star formation in the last 10-20 Myr appear to be dominated by large-scale structure with pitch angles consistent with spiral arms (Zucker et al., 2022). This suggests that the Sun has recently seen a transition in nearby large-scale star formation patterns, moving from a mode dominated by structures perpendicular to the arm to a mode dominated by kpc-scale arm-aligned structures within the last 30 Myr. This further motivates more large-scale mapping, traceback, and age dating within the solar neighborhood to better understand this change, and to establish whether it is due to the evolution of local structure, or just the movement of gas-decoupled associations through different sections of the spiral arm. Young associations have the unique capability to trace star formation in not just space, but also time, enabling a view of the evolution of these structures which could provide critical comparisons to models of spiral arm development and evolution (e.g., Shetty and Ostriker, 2006; Kim et al., 2020).

### Small, High-Velocity, and High-Latitude Groups

Of the populations we detect here, 15 were unknown prior to Gaia DR3 (e.g., Kounkel and Covey, 2019; Sim et al., 2019; Prisinzano et al., 2022). All of these groups have internal velocity scatters consistent with other groups in our sample (see Table 3, Figure 10), which are not consistent with the typical scatter of field populations (see Appendix B). Some have pre-main sequences that clearly separate from the field, while others are much more tenuous, making spectroscopic follow-up necessary to confirm their existence (see the full figure set for Figure 2). The most tenuous populations can have fewer than 10 pre-main sequence stars that visibly separate from the main sequence for some range of \(P_{mem}\), with values of \(P_{mem}\) that never exceed 0.3. These groups are therefore near the limit of detectability using a purely photometric and astrometric survey. The inclusion of additional radial velocities at scale would help to limit the field contamination during clustering, allowing more tenuous structures to be detected.
However, with Gaia RVs generally limited to magnitudes \(G<14\) and often containing large uncertainties (Gaia Collaboration et al., 2022), such an update would require generational improvements to our RV measurement infrastructure. Nonetheless, if spectroscopically confirmed, these groups would represent a substantial new reservoir of young associations, significantly expanding our knowledge of recent star formation. Many of these populations are noteworthy for their unique properties relative to some of the better-known large associations. The mean transverse velocities of these groups in particular are often quite anomalous. SCYA-26 and SCYA-28 both have combined transverse velocities in excess of 50 km s\({}^{-1}\), and nearly half of these small new associations have combined transverse velocities over 25 km s\({}^{-1}\). While there are counterexamples to this pattern, such as the low-velocity groups SCYA-104 and SCYA-72, mean velocities exceeding this speed exist in only \(\sim\)20% of other associations, indicating that these high velocities are overrepresented in this set of tenuous populations. There are also three groups in this sample which reside more than 100 pc from the galactic plane, most notably SCYA-2 and SCYA-3, a pair of small associations with 200 pc \(<Z<300\) pc. Their locations high above the galactic plane are likely a side-effect of the high velocities seen in other groups, in which substantial vertical velocities near the galactic plane translate into positions high above the galactic plane at the peaks of their orbits' vertical oscillations. None of the groups with \(|Z|>100\) pc have transverse velocities above 25 km s\({}^{-1}\), so they are likely currently near the peaks of their oscillations about the galactic midplane. As a result, the high velocities and high \(Z\) values may be different manifestations of the same pattern affecting a majority of these populations, a pattern which may indicate a unique origin for at least a subset of these associations. High group velocities likely imply high parent cloud velocities, which are common for material in the galactic halo (e.g., Wakker and van Woerden, 1997; Richter, 2017). These high-velocity clouds are thought to arise either through the direct infall of material from outside the galactic plane, or through the ejection of material from the galactic plane by a supernova, which can then fall back to the galactic plane like a fountain (Heitsch et al., 2022). While the contribution of these clouds to current star formation is not well-established (e.g., Stark et al., 2015), recent surveys suggest that their presence extends close to the galactic disk, where interactions with dense gas in the galactic midplane may both compress the cloud and scoop up disk material, resulting in triggered star formation (e.g., Lehner et al., 2022; Fukui et al., 2021). If these groups do originate in these high-velocity clouds, their chemistry may provide a strong indication of whether the clouds themselves originate in metal-rich disk material or metal-poor intergalactic gas, which has been an active topic of discussion in their study (e.g., Marasco et al., 2022). These high-velocity associations could also have formed during or soon after ejection from a star-forming region, an origin that should be identifiable with traceback studies and that would be indicated by formation co-spatial with a large star-forming region.
Nonetheless, a detailed census of the ages, orbits, chemistry, and populations of these associations will be necessary to assess their origins.

## 8 Conclusion

Using the improved astrometry and photometry from Gaia DR3, we have identified more than \(4\times 10^{4}\) young stars, which trace numerous recent star-forming events in the solar neighborhood. Many of the features detected have never been seen before and could provide insight into new patterns of star formation. Our key findings are as follows:

1. We produced a new SPYGLASS Catalog of Young Associations (SCYA), which contains 116 young associations within 1 kpc. Of these associations, 10 are completely new discoveries, and a further 20 are known at least in part but undergo significant redefinition of their extent.
2. Many populations in this new survey were found to have connecting structure linking them to other populations. This was most notably the case for the Orion Complex and Perseus OB2, which are connected by lower-density bridge populations including Taurus-Orion IV, which was identified in SPYGLASS-I. This could suggest a direct structural link early in their formation, which continues to manifest through these tenuous stellar populations.
3. We defined a substantial new star formation complex in Cep-Her, which has an age of approximately 30 Myr. Its spatial extent and population of thousands of stars make it a potential analog to younger complexes like Sco-Cen at a more advanced stage of dynamical evolution. Along with a parallel structure, it also has an orientation directly perpendicular to the Radcliffe Wave, suggesting that the current star formation patterns which closely follow the spiral arms have only recently been active.
4. Many of the newly discovered associations have unique dynamical properties, including a pair of associations (SCYA-2 and SCYA-3) located more than 200 pc from the galactic plane, and more associations with very high transverse velocities, including two which exceed 50 km s\({}^{-1}\). This could indicate a new and unique demographic of young associations with orbits inconsistent with most of the molecular gas in the galactic plane, potentially indicating an entirely different formation mechanism.

Our results show a wealth of unique structures that both hint at largely unknown processes and suggest connections between structures that could reveal star formation patterns on a much larger scale. However, the large populations and extents of these structures necessitate collaboration to carefully assemble these small-scale associations into larger patterns. We hope that the extensive and accessible membership lists that we present here will help to facilitate these studies.

RMPK is funded by the Heising-Simons Foundation. RMPK thanks the Texas Advanced Computing Center (TACC) at the University of Texas at Austin for access to their extensive computational resources, which were instrumental in facilitating our search for young populations in Gaia. RMPK also thanks Aaron Rizzuto, whose guidance and mentorship were essential to the development of the SPYGLASS program. The authors also thank Luke Bouma, whose helpful comments improved the clarity and content of this paper.

Facilities: Gaia. Software: Astropy (Astropy Collaboration et al., 2013), Matplotlib (Hunter, 2007), pandas (pandas development team, 2020), Numpy (Harris et al., 2020)
2304.11278
Power to the Data Defenders: Human-Centered Disclosure Risk Calibration of Open Data
The open data ecosystem is susceptible to vulnerabilities due to disclosure risks. Though the datasets are anonymized during release, the prevalence of the release-and-forget model makes the data defenders blind to privacy issues arising after the dataset release. One such issue can be the disclosure risks in the presence of newly released datasets which may compromise the privacy of the data subjects of the anonymous open datasets. In this paper, we first examine some of these pitfalls through the examples we observed during a red teaming exercise and then envision other possible vulnerabilities in this context. We also discuss proactive risk monitoring, including developing a collection of highly susceptible open datasets and a visual analytic workflow that empowers data defenders towards undertaking dynamic risk calibration strategies.
Kaustav Bhattacharjee, Aritra Dasgupta
2023-04-21T23:53:08Z
http://arxiv.org/abs/2304.11278v1
# Power to the Data Defenders: Human-Centered Disclosure Risk Calibration of Open Data

###### Abstract

The open data ecosystem is susceptible to vulnerabilities due to disclosure risks. Though the datasets are anonymized during release, the prevalence of the release-and-forget model makes the data defenders blind to privacy issues arising after the dataset release. One such issue can be the disclosure risks in the presence of newly released datasets which may compromise the privacy of the data subjects of the anonymous open datasets. In this paper, we first examine some of these pitfalls through the examples we observed during a red teaming exercise and then envision other possible vulnerabilities in this context. We also discuss proactive risk monitoring, including developing a collection of highly susceptible open datasets and a visual analytic workflow that empowers data defenders towards undertaking dynamic risk calibration strategies.

## I Introduction

Open data portals democratize access to hitherto proprietary data. _Data custodians_, like government agencies, can use open data to ensure transparency about their functioning, and _data subjects_, like citizens, can use them to gain insight into education, healthcare, economic, and demographic disparities. However, unrestricted and unchecked access to citizens' data can lead to adverse effects when misused by people with malicious intent. Though these datasets are generally anonymized before release, there are multiple examples where data subjects could be re-identified when these anonymized datasets were linked with other publicly available datasets. Researchers showed that 99% of Americans can be re-identified from heavily anonymized and incomplete datasets using a combination of demographic attributes [31]. In 2016, the Australian Department of Health released _de-identified_ medical records for \(2.9\) million patients (\(10\%\) of the population). However, researchers were able to re-identify the patients and their doctors using other open demographic information within a few months [9]. In another example, passengers' private information was disclosed through the public transportation open data released by the city municipality of Riga, Latvia [18]. These privacy breaches can affect citizens' trust and confidence in the government. People are likely to provide false responses to census questionnaires if they think the confidentiality of these responses may be breached [3]. This calls for a comprehensive study of the possible vulnerabilities present in the open data ecosystem. Multiple studies have discussed disclosures while joining open datasets with private or enterprise ones [35, 24, 38, 23]. In this paper, the scope of our work is confined to the datasets available in the public domain, since the open accessibility of these datasets poses a higher risk. As our first contribution, we discuss the curation of high-risk open datasets related to human subjects, along with methods that can detect such vulnerabilities (Section II). Next, we report the vulnerabilities observed during our ethical hacking exercises into the open data ecosystem (Section III). Finding signals of disclosures in a forest of open datasets can be challenging for the defenders of this ecosystem (data owners and data custodians, henceforth referred to as _data defenders_). Sawyer et al.
observed that the performance of human observers deteriorates over time in a low-signal vigilance scenario, which is a likely scenario for data defenders [32], who are faced with the arduous task of finding needles, i.e., privacy vulnerabilities, in the unsuspecting haystack of linkable open data. As our third contribution, we discuss how vulnerabilities can be detected and triaged using visual analytic interventions [2] that can serve as a cognitive aid for data defenders in the continuous monitoring of privacy risks. We focus on the vulnerabilities discovered and their possible remediation through visual analytic solutions (Section IV). We also discuss future work and the challenges that must be addressed to protect open data from disclosure vulnerabilities arising out of highly plausible attack scenarios.

## II Methods

In this section, we first provide a brief overview of disclosures in the open data ecosystem. This is followed by the methods we used to discover the vulnerabilities and develop a set of datasets that are highly susceptible to disclosures.

### _Background on open data disclosures_

Open datasets can be freely used, re-used, and redistributed by anyone [15]. The motivation behind creating open datasets is to promote transparency and accountability in public information, especially government data. It helps to democratize information instead of confining it within the data owners and a select few who can pay for it [16]. Governments worldwide share these datasets through various open data portals like NYC Open Data [26], Chicago Open Data [5], Australian Capital Territory Open Data [28], etc., and are generally guided by the FAIR data principle [40]. This principle provides guidelines to improve the findability, accessibility, interoperability, and reusability of digital data. All these factors make the open data ecosystem a prime choice for research. However, due to their easy accessibility and findability, these open datasets are generally anonymized before release. Information-theoretic guarantees like k-anonymity [36], l-diversity [20], and t-closeness [19] are generally applied to these datasets to reduce the possible disclosure risks, i.e., the risk of sensitive information about the individuals mentioned in a dataset being disclosed. Still, joining two anonymized datasets using protected attributes can lead to the disclosure of sensitive information. Researchers were able to re-identify \(91\%\) of all the taxis running in NYC using the NYC taxi open data and a taxi medallion dataset [13]. The sensitivity of the information contained in this dataset makes it prudent to protect it against all possible disclosures. But finding disclosures becomes quite challenging for data defenders, since these disclosures can be a function of time: datasets released at a later point in time may affect previously released datasets. Moreover, data defenders follow the practice of "release-and-forget", where, after a dataset's release, almost no checks are done to ensure the protection of these datasets against newly released datasets [31]. Thus, to protect the open data ecosystem from disclosure risks, a collaboration between multiple stakeholders is the need of the hour. Hence, we plan to empower data defenders to inspect the privacy risks that arise while joining open datasets.

### _Red team exercise_

In order to explore the vulnerabilities related to the open data ecosystem, we conducted a _red-team exercise_ with the help of researchers in data privacy and urban informatics.
A red-team exercise can be generally defined as a structured process to better understand the capabilities and vulnerabilities of a system by viewing the problems through the lens of an adversary [41]. In this subsection, we discuss the different stages of this exercise.

**Quasi-identifiers and disclosures:** Red-team exercises generally follow the cyber kill chain, starting with the initial reconnaissance step, where attackers try to find vulnerable entry points into the target system. Moreover, attackers have used _quasi-identifiers_ [22] like age, race, gender, and location to breach privacy by linking multiple datasets [37]. Inspired by this, we bootstrapped our red-teaming activity by searching for datasets with these known quasi-identifiers. During our initial exploration, analysis of these datasets led to interesting observations: some of the datasets have a highly skewed distribution of records across different categories of the quasi-identifiers. Since these datasets have a meager number of records for a particular combination of age, race, gender, location, etc., joining them with other datasets can potentially expose sensitive information about these individuals.

**Disclosures using pairwise joins:** These highly skewed datasets established that vulnerabilities exist in individual record-level datasets. However, this leads to the essential question of whether these datasets can actually be joined with other open datasets to expose sensitive information. Join is a fundamental operation that connects two or more datasets, and joinability is the measure that determines if two datasets are linkable by any number of join keys [12, 4]. When these _join keys coincide with protected attributes_ like age, race, location, etc., the outcome of the join can reveal sensitive information about an individual or even disclose the individual's identity. As the next step in the red-teaming exercise, we randomly selected vulnerable pairs of datasets from multiple open data portals [26, 29, 6] and analyzed them for _joinability risks_ regarding what kind of sensitive information may be leaked.

**Disclosures using transitive joins:** Inspired by the disclosure examples observed while joining two datasets and by the concept of transitive dependency in databases [8], we explored the idea that two datasets which have no shared attributes between them can still be joined if they have shared attributes with a third dataset. Consider that a state's criminal and health records datasets have no common attribute; joining them with a particular county records dataset that has shared attributes with both of them can nevertheless lead to the disclosure of sensitive information. We experimented with different permutations of dataset joins to find an example of transitive disclosure. Though we did not find any examples of transitive disclosure at this stage, this can be an interesting field of research that can further strengthen the inspection of disclosure risk in open datasets. Hence, in another current work, we focus on assessing the risk of disclosure through transition (or _transitive disclosure risk_) in open datasets to prevent the disclosure of sensitive information about an individual or a group of individuals.

### _Data curation exercise_

Open data portals contain a multitude of datasets on varying topics like economics, health, and others. However, they may not all be relevant to information disclosure about human activity.
On top of that, the examples observed during the red teaming exercise press for an urgent need for a smaller subset of open datasets focused on disclosure risks. Hence, we curated a seed set of datasets that contains a subset of the open datasets which may be more susceptible to vulnerabilities related to disclosure. In this subsection, we discuss the development of this dataset and the learning outcomes.

**Data collection:** Many open data portals are developed using frameworks/APIs like the Socrata API [34], CKAN API [7], DKAN API [11], etc. We selected the Socrata API as our source for the open datasets. Though other APIs could have served a similar purpose, we planned to start with Socrata and develop a generalizable approach that can help integrate the other publicly available APIs. First, we queried the list of all available data portals through the Socrata Discovery API. From each of these data portals, we queried the metadata for all the data items available within them. Data items include datasets, maps, data dictionaries, etc. We filtered these results and created a list of \(39,507\) datasets. Manually analyzing all these datasets would be a difficult task for any analyst. However, during our red teaming exercise, we understood that the presence of quasi-identifiers could be an indicator of possible disclosures. Hence, we developed a semi-automated process that filters datasets if they have some combination of the known quasi-identifiers like _age_, _sex_, _race_, and _age group_, to name a few. After evaluating the attribute space of the selected datasets, we subsequently updated this list to include more such quasi-identifiers. This helped us to select a broader set of datasets that may be susceptible to disclosure risk through these quasi-identifiers. Multiple iterations of this process led to the development of a set of \(5,404\) datasets with some combination of the quasi-identifiers.

**Data curation:** After reducing the set of candidate datasets, the next step was determining whether these datasets relate to human subjects and activity. Hence, we started manually curating the metadata file to understand what each dataset pertains to. For each of the datasets, we opened them in their respective data portals and analyzed them to understand whether or not they were related to human data subjects. We observed many such datasets with location attributes (like zip code, address, etc.) that do not necessarily relate to human beings, like datasets for _street lamps_, _building details_, etc. We dropped those datasets since they are irrelevant in this context. Removing these datasets related to non-human objects, we curated a seed set of \(426\) datasets of varying granularity: \(151\) of these datasets were individual record-level datasets (e.g., records of people committing crimes), while the remaining \(275\) were aggregated record-level datasets (e.g., college records) (Figure 1). We understand that a dataset collection like this should be continuously updated; data defenders need to be provided with the infrastructure and techniques to set up data augmentation methods that can fetch and update this collection continuously.

Fig. 1: **Privacy-relevant data curation:** The dataset development process starts with over \(216,000\) data resources from \(496\) data portals. After a few filtering steps, it consists of \(426\) highly susceptible datasets with different levels of granularity and distributions of quasi-identifiers.
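The semi-automated quasi-identifier filter described above reduces, in essence, to intersecting each dataset's column names with a list of known quasi-identifiers. A minimal Python sketch follows; the catalog records, field names, and threshold are illustrative stand-ins of ours, not the actual Socrata metadata schema.

```python
# Known quasi-identifiers to scan for in dataset metadata.
QUASI_IDS = {"age", "sex", "race", "gender", "age group", "zip code"}
MIN_HITS = 2  # require at least this many quasi-identifiers to flag a dataset

def column_names(meta: dict) -> set:
    """Lower-cased column names from one dataset's metadata record."""
    return {c.lower() for c in meta.get("columns", [])}

def is_candidate(meta: dict) -> bool:
    """Flag datasets whose columns contain enough quasi-identifiers."""
    return len(column_names(meta) & QUASI_IDS) >= MIN_HITS

# Toy catalog standing in for the metadata fetched from a portal API.
catalog = [
    {"name": "Adult Arrests", "columns": ["Age", "Race", "Sex", "Case ID"]},
    {"name": "Street Lamps", "columns": ["Lamp ID", "Zip Code"]},
]
candidates = [m for m in catalog if is_candidate(m)]  # keeps "Adult Arrests"
```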
## III Finding Vulnerabilities in Open Data

The red teaming exercise and the set of highly susceptible datasets led to the development of a few attack scenarios that data defenders can emulate to discover vulnerabilities in the open data ecosystem. In this section, we discuss these attack scenarios along with some of the disclosure examples observed. The values reported in this section have been perturbed to a certain extent to protect the data subjects' privacy.

**Attack exploiting vulnerable entry points:** Datasets with a highly skewed distribution of records for different categories of a quasi-identifier can serve as vulnerable entry points into the open data ecosystem. For example, the dataset _Whole Person Care Demographics 2_ [39] from the _County of San Mateo Datahub_ portal [33] had only one record for a 28-year-old female of the Hawaiian race. This can lead to identity disclosure and a leak of sensitive information when joined with other datasets. Another dataset, _Demographics for Public Health, Policy, and Planning_ [10], from the same data portal, had only seven records for the age of \(18\); out of these seven people, only one was female. This individual can be identified since other identifying attributes like race, language, and city were also present. This may also lead to attribute disclosure if other similar datasets are exploited. Thus, datasets with vulnerable entry points can be exploited to reveal sensitive information about human data subjects. The presence of such datasets in the open data ecosystem is a warning sign that calls for developing a method that acts as a trusted informer for data custodians and proactively informs them of potential disclosures.

**Attack using suitable join keys:** The previous attack scenario established that vulnerabilities exist in individual record-level datasets. These vulnerabilities can be further exploited by joining them with other datasets using suitable join keys. Several iterations of the selection of joinable pairs and join keys led to the discovery of a disclosure between the datasets _Juvenile Arrests_ and _Adult Arrests_ from the _Fort Lauderdale Police Open Data Portal_ [14]. We observed that two individuals, aged \(16\) and \(20\), mentioned separately in these datasets, were involved in the same incident of larceny on \(10^{\text{th}}\) March \(2018\) at the Coral Ridge Country Club Estate, Fort Lauderdale. This is an example of identity disclosure obtained by joining two open datasets using a particular join key. Further investigation revealed other examples where two individuals, aged \(18\) and \(23\), mentioned separately in these datasets, were involved in the same incident of motor vehicle theft on \(18^{\text{th}}\) July \(2018\). The presence of linking attributes like _case id_ between the datasets _Adult Arrests_ and _Citations_ helped to reveal an incident where a 26-year-old black male, who was arrested for larceny on \(27^{\text{th}}\) September \(2021\) at NW \(10^{\text{th}}\) Ave, Fort Lauderdale, was also cited for disobeying a stop/yield sign and driving while his license was suspended, at NW \(8^{\text{th}}\) Street, just around \(3\) miles away from the arrest location. A similar incident was also observed while joining the datasets _Citations_ and _Juvenile Arrests_ on the linking attribute _case id_. In this incident, a 16-year-old white male was first charged with disobeying a red light and was later arrested for possession of cannabis over \(20\) grams on \(6^{\text{th}}\) August \(2015\), both at N Federal Hwy, Fort Lauderdale.

**Attack exploiting quasi-identifiers:** We also observed such examples across other open data portals where datasets can be joined using different combinations of quasi-identifiers.
Datasets _APD Arrests Dataset by Neighborhood_ and _APD Field Interview Cards Dataset by Neighborhood_ from the _Albany Police Department_ [27] were joined on the attributes _age_, _race_, _sex_, and _neighborhood_. We observed that a \(24\)-year-old white male was interviewed by the police in the Washington Park neighborhood at \(08{:}08\) hrs on \(2^{\text{nd}}\) December \(2020\) and was later arrested for trespassing on enclosed property at \(11{:}42\) hrs. This leads to an attribute disclosure for the individual arrested, as his arrest details are revealed. Joining other datasets like _APD Arrests Dataset by Patrol Zone_ and _APD Field Interview Cards Dataset by Neighborhood_ from the same data portal revealed a similar incident where a \(27\)-year-old black female was interviewed at \(10{:}22\) hrs on \(13^{\text{th}}\) December \(2020\) and was later arrested at \(20{:}27\) hrs for "assault with intent to cause physical injury". In another example, joining the datasets _APD Field Interview Cards Dataset by Neighborhood_ and _APD Traffic Citations by Neighborhood_ on a broader set of quasi-identifiers like _age_, _sex_, _neighborhood_, and _date_ led to another interesting observation related to a police incident. We observed that a \(22\)-year-old male was stopped for a field interview on \(3^{\text{rd}}\) January \(2021\) at \(1{:}45\) am. Since field interviews are routine stop-and-search activities by the police, this may seem a regular incident. However, the other dataset showed that an individual of the same age and gender received a citation on the same date and at the exact same location at \(1{:}48\) am, just \(3\) minutes after the incident from the first dataset. Since both these records seem to belong to the same person, this is a possible identity disclosure, and it was discovered using a combination of date and quasi-identifiers like location coordinates, age, and gender.

**Attack leveraging background knowledge:** Next, we repeated this exercise with added background knowledge about the sensitive attributes used in police datasets and found examples where dataset joins ultimately led to disclosures. For example, two datasets, namely _Electronic Police Report 2016_ and _Electronic Police Report 2015_ from the _New Orleans Open Data_ portal [25], were joined on quasi-identifiers generally used in police datasets, like _victim age_, _offender age_, _victim race_, _victim gender_, _location_, and _offender gender_. On inspection of the joined records, we observed that a 23-year-old black male was charged with attempted robbery with a gun against a 29-year-old white male at 6XX Tobinquitous St on \(12^{\text{th}}\) July \(2015\) at \(01{:}00\) hrs and again on \(29^{\text{th}}\) April \(2016\) at \(03{:}00\) hrs with attempted simple robbery. This is an example of identity disclosure even when _masking techniques_ are used on the address. Another observation from these joined records revealed an incident where a runaway female juvenile of age \(16\) was reported at 85XX Dinkins St on \(24^{\text{th}}\) February \(2015\), and the same incident was closed through a supplemental report one and a half years later, on \(5^{\text{th}}\) December \(2016\). Incidents like these may be rare; hence, identifying the individuals from these records may not be difficult.
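The scenarios above share one mechanism: records that are unique under some quasi-identifier combination, linked across datasets by using those attributes as a join key. A minimal pandas sketch of both steps is given below; the records and column names are toy stand-ins of ours, not the actual portal schemas.

```python
import pandas as pd

interviews = pd.DataFrame({
    "age": [24, 27], "race": ["W", "B"], "sex": ["M", "F"],
    "neighborhood": ["Washington Park", "Arbor Hill"],
    "interview_time": ["08:08", "10:22"],
})
arrests = pd.DataFrame({
    "age": [24], "race": ["W"], "sex": ["M"],
    "neighborhood": ["Washington Park"], "charge": ["trespassing"],
})

QI = ["age", "race", "sex", "neighborhood"]

# Vulnerable entry points: quasi-identifier combinations held by a
# single record (k-anonymity with k = 1).
unique_rows = interviews.groupby(QI).filter(lambda g: len(g) == 1)

# Linking step: an inner join on the quasi-identifiers attaches the
# arrest details to the interview record, an attribute disclosure.
linked = interviews.merge(arrests, on=QI)
print(linked)
```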
## IV Empowering Disclosure Evaluation through Visual Analytic Techniques

The attack scenarios developed using the seed set of datasets highly susceptible to disclosure risks motivated us to explore the visual analytic solution space to understand whether the risk can be inspected and communicated to data defenders, leveraging their knowledge through a human-in-the-loop approach. This led to the development of the PRIVEE workflow and interface, which can guide the data defender toward identifying disclosures using a combination of these attack scenarios. In this section, we discuss how these visual analytic interventions can help the evaluation of disclosures.

Fig. 2: **Empowering disclosure evaluation through visual analytic techniques:** Using PRIVEE [2], a visual analytic tool for proactive disclosure risk monitoring, data defenders can (a) observe a cluster of joinable datasets formed leveraging their background knowledge, (b) triage the risky dataset pairs based on various combinations of the quasi-identifiers as the join key, and (c) evaluate common records for a particular join key to (d) finally identify disclosures (Section IV).

During the red teaming exercise, we randomly selected datasets from various open data portals. However, the candidate datasets can be of the order of hundreds, thus increasing the number of possible combinations. Our collection of highly susceptible datasets has 426 datasets, thus leading to \({}^{426}C_{2}\) or \(90,525\) possible pairwise combinations. Analyzing all these combinations for disclosure can be a challenging task. Hence, we developed the PRIVEE workflow, leveraging the attack scenarios, which can help the data defender find joinable groups of datasets, triage them based on their risk score, and ultimately identify disclosures. We now discuss these steps using the _New Orleans Open Data_ [25] portal.

**Finding joinable datasets leveraging background knowledge:** The joinability of datasets depends on the presence of shared attributes between the datasets. Hence, developing clusters of datasets based on their attribute space and then understanding the cluster signatures can help find a specific group of highly joinable datasets. Suppose data defenders use their background knowledge in criminal history and select some quasi-identifiers popularly observed in police datasets, like _victim age_, _victim gender_, _victim race_, _offender age_, etc. In that case, PRIVEE can automatically group the candidate datasets based on their shared attributes (Figure 2(a)). These groups are ranked based on the presence of the selected quasi-identifiers; hence, the first group of datasets would be the most relevant based on the user's inputs. PRIVEE also offers insight into the cluster signatures, thus explaining the reason behind the formation of the clusters.

**Triaging dataset pairs using quasi-identifiers:** The joinable clusters reduce the number of combinations to be analyzed to a great extent. Selecting the first cluster of \(8\) datasets leads to \(28\) different pairwise combinations. These datasets can be joined based on a _join key_ consisting of some or all of the shared attributes, including the quasi-identifiers. However, during the red-teaming exercise, we realized that analyzing all these dataset pairs based on various join keys can also take considerable time and effort.
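One natural way to score candidate join keys, sketched below, is the Shannon entropy of each shared attribute, since high-entropy attributes narrow a join toward unique individuals. The data are illustrative, and this is our reading of the triaging idea rather than PRIVEE's actual implementation.

```python
import numpy as np
import pandas as pd

def shannon_entropy(col: pd.Series) -> float:
    """Empirical Shannon entropy (in bits) of one attribute's values."""
    p = col.value_counts(normalize=True)
    return float(-(p * np.log2(p)).sum())

def rank_join_key(a: pd.DataFrame, b: pd.DataFrame) -> pd.Series:
    """Entropy of each shared attribute, highest (most identifying) first."""
    shared = a.columns.intersection(b.columns)
    scores = {c: shannon_entropy(pd.concat([a[c], b[c]])) for c in shared}
    return pd.Series(scores).sort_values(ascending=False)

# Toy datasets standing in for two members of a joinable cluster.
a = pd.DataFrame({"age": [16, 20, 26], "sex": ["M", "M", "M"], "ward": [1, 2, 3]})
b = pd.DataFrame({"age": [16, 23], "sex": ["M", "F"], "ward": [1, 2]})
print(rank_join_key(a, b))  # e.g., prefer age/ward over near-constant sex
```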
PRIVEE attempts to help the data defenders by visualizing all the possible pairwise combinations of the datasets present in a cluster, using a bar chart representing the entropy of the shared attributes. PRIVEE automatically selects some of the shared attributes as the initial join key, giving more preference to the known quasi-identifiers, but the visual cues, like the heights of the bars and the colored bars representing the privacy-related attributes, help the data defender make an informed choice (Figure 2(b)). Moreover, these pairs are ranked based on their joinability risk, thus helping the data defenders focus on highly joinable pairs. PRIVEE also helps to explore the datasets with a highly skewed distribution of records and to triage all possible pairwise combinations of these datasets with other individual record-level datasets.

**Identifying disclosures through suitable join keys:** The disclosure evaluation process requires the high-risk pairs to be joined using a suitable join key. Multiple iterations of selecting the join key based on the join results can lead to the identification of a disclosure. However, these join results can be hard to interpret in terms of the privacy-related attributes and other related attributes. PRIVEE presents these results using a modified version of Parallel Sets, a visualization method for the interactive exploration of categorical data that shows the data frequencies instead of the individual data points [17]. This helps to understand the relationship between the different attribute categories and identify a specific record with a unique set of attribute values, which can lead to disclosures (Figure 2(c)). PRIVEE also offers feature suggestions that can help iterate through the combinations of the shared attributes as the join key. In this case, after examining the feature suggestions, selecting the _disposition_ (whether a police incident is open or closed) attribute shows that only one record was open in \(2015\) but was closed in \(2016\). Further investigation of this record reveals that this is the incident of a runaway female juvenile of age \(16\) that was reported at 85XX Dinkins St on \(24^{\text{th}}\) February \(2015\) and closed through a supplemental report one and a half years later, on \(5^{\text{th}}\) December \(2016\) (Figure 2(d)). Thus, incidents of identity disclosure like this, which were reported during the red teaming exercise, can be identified through the PRIVEE workflow.

## V Discussion

Identifying disclosures using the traditional search options in open data portals is challenging. Moreover, data custodians might need more information than is shown in the search results to find disclosures. Thus, this context demands a visual analytic system specifically targeted toward disclosure evaluation and other privacy pitfalls, and PRIVEE can be considered an initial attempt toward this purpose. The visual analytic design space explored in PRIVEE helps establish a streamlined workflow that is responsive to the data custodian's inputs yet distills the results effectively. However, this system can have users other than data custodians. During the development of the PRIVEE workflow, we realized that a data subject could also be interested in discovering whether their data can be compromised by exploiting these privacy pitfalls. Our PRIVEE workflow can address the data subjects' perspective too; however, an approach leveraging an individual user's attribute values may be more efficient in this context.
Hence, we envision that future design solutions in this space will be more geared toward the data subjects' perspective. This can be incredibly beneficial in encouraging data activism by citizens [30, 1, 21]. Another attack scenario we envisaged during the red teaming exercise is the disclosure of sensitive information through the transitive join of open datasets. We are leading a separate effort toward quantifying this transitive disclosure risk. The primary challenge in this effort is the scarcity of known examples combined with the very large number of possible combinations to explore. This may serve as an important field of research, since disclosures like this are difficult for data defenders to detect, yet they can have a massive impact on the privacy of the data subjects. We hope researchers will look into different visual analytic solutions to address this attack scenario.

## VI Conclusion

Open datasets are essential in improving government transparency and empowering citizens with access to hitherto proprietary data. We discuss some of the privacy pitfalls of open datasets with real-world examples we observed during an ethical hacking exercise. These examples highlight the importance of addressing these pitfalls on an urgent basis. Towards that end, we develop a collection of highly susceptible datasets and a visual analytic workflow that effectively emulates the strategies developed during the exercise and identifies disclosures. We also envision exploring possible disclosure risks beyond joinable pairs and improving the web-based interface's data processing capabilities in collaboration with big data experts. Since PRIVEE addresses these privacy pitfalls efficiently, this workflow can be used to develop more effective solutions and help data defenders safeguard the interests of the open data ecosystem.
2307.10436
A Matrix Ensemble Kalman Filter-based Multi-arm Neural Network to Adequately Approximate Deep Neural Networks
Deep Learners (DLs) are the state-of-art predictive mechanism with applications in many fields requiring complex high dimensional data processing. Although conventional DLs get trained via gradient descent with back-propagation, Kalman Filter (KF)-based techniques that do not need gradient computation have been developed to approximate DLs. We propose a multi-arm extension of a KF-based DL approximator that can mimic DL when the sample size is too small to train a multi-arm DL. The proposed Matrix Ensemble Kalman Filter-based multi-arm ANN (MEnKF-ANN) also performs explicit model stacking that becomes relevant when the training sample has an unequal-size feature set. Our proposed technique can approximate Long Short-term Memory (LSTM) Networks and attach uncertainty to the predictions obtained from these LSTMs with desirable coverage. We demonstrate how MEnKF-ANN can "adequately" approximate an LSTM network trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample whose genomic sequences consist of polysaccharide utilization loci (PULs) and their encoded genes.
Ved Piyush, Yuchen Yan, Yuzhen Zhou, Yanbin Yin, Souparno Ghosh
2023-07-19T20:00:00Z
http://arxiv.org/abs/2307.10436v1
A Matrix Ensemble Kalman Filter-based Multi-arm Neural Network to Adequately Approximate Deep Neural Networks ###### Abstract Deep Learners (DLs) are the state-of-art predictive mechanism with applications in many fields requiring complex high dimensional data processing. Although conventional DLs get trained via gradient descent with back-propagation, Kalman Filter (KF)-based techniques that do not need gradient computation have been developed to approximate DLs. We propose a multi-arm extension of a KF-based DL approximator that can mimic DL when the sample size is too small to train a multi-arm DL. The proposed Matrix Ensemble Kalman Filter-based multi-arm ANN (MEnKF-ANN) also performs explicit model stacking that becomes relevant when the training sample has an unequal-size feature set. Our proposed technique can approximate Long Short-term Memory (LSTM) Networks and attach uncertainty to the predictions obtained from these LSTMs with desirable coverage. We demonstrate how MEnKF-ANN can "adequately" approximate an LSTM network trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample whose genomic sequences consist of polysaccharide utilization loci (PULs) and their encoded genes. The scripts to reproduce the results in this paper are available at [https://github.com/Ved-Piyush/MEnKF-ANN-PUL](https://github.com/Ved-Piyush/MEnKF-ANN-PUL). ## 1 Introduction Deep Learners (DLs) have achieved state-of-art status in empirical predictive modeling in a wide array of fields. The ability of DLs to synthesize vast amounts of complex data, ranging from high dimensional vectors to functions and images, to produce accurate predictions has made them the go-to models in several areas where predictive accuracy is of paramount interest. Bioinformatics has also seen a steep increase in articles developing or deploying DL techniques in recent years [Min et al., 2017]. However, conventional DLs trained via gradient descent with back-propagation require tuning of a large number of hyperparameters. Additionally, given the vast number of weights that DLs estimate, they are prone to overfitting when the training sample size is relatively small. Since the gradient descent algorithms compute the weights deterministically, DLs in their vanilla form do not yield any uncertainty estimate. Several techniques have been proposed to alleviate the foregoing issues in DLs. For instance, Bayesian Neural Network (BNN) [Kononenko, 1989, Neal, 2012] was explicitly devised to incorporate epistemic and aleatoric uncertainty in the parameter estimation process by assigning suitable priors on the weights. The Bayesian mechanism can process these priors and generate uncertainty associated with DL predictions. Additionally, with judicious choice of priors, BNNs can be made less prone to overfitting [Fortuin et al., 2021, Srivastava et al., 2014]. Variational inference is another popular technique for uncertainty quantification in DLs. In particular, Hinton and Van Camp [1993] showed that a posterior distribution for the model weights could be obtained by minimizing the Kullback-Leibler distance between a variational approximation of the posterior and the true posterior of the weights. The Bayes by Backprop [Blundell et al., 2015] is another technique that uses variational formulation to extract uncertainty associated with the weights in DLs. 
However, the Monte Carlo dropout technique [Srivastava et al., 2014], wherein each neuron (and all of its connections) is randomly dropped with some probability during the model training process, has arguably turned out to be the most popular method to regularize DLs and extract predictive uncertainty. In addition to the conceptual simplicity of the dropout technique, it was shown that models trained using dropout are an approximation to Gaussian processes and are theoretically equivalent to variational inference [Gal and Ghahramani, 2016]. Regardless of its conceptual simplicity and theoretical underpinning, the dropout method requires gradient computation. Hence, it can be quite computationally intensive in DLs with millions of parameters. Another suite of methods for approximating DLs uses Kalman Filters (KF), or their variants, to obtain approximate estimates of the DL parameters [Yegenoglu et al., 2020, Rivals and Personnaz, 1998, Wan and Van Der Merwe, 2000, Julier and Uhlmann, 2004, Chen et al., 2019]. In particular, the Ensemble Kalman Filter (EnKF) technique offers a computationally fast approximation technique for DLs. For instance, Chen et al. [2019] train a single hidden layer neural network using the EnKF updating equations outlined in Iglesias et al. [2013] and show how, using an augmented state variable, one can estimate the measurement error variance. In the DL setting, Chen et al. [2018] demonstrate the utility of EnKF in approximating a Long Short Term Memory (LSTM) model. Yegenoglu et al. [2020] use the EnKF to train a Convolutional Neural Network directly using the Kalman filtering equations derived in Iglesias et al. [2013]. All these methods approximate single-arm DLs and, therefore, cannot be used in situations where input features can be represented in multiple ways. Our target is to develop a multi-arm approximator to the DL. In principle, we could train a different DL approximator for each representation and perform post-training model averaging. However, that would increase the computational cost substantially. We argue that since multiple different feature representations do not necessarily offer complementary information, a multi-arm approximator that performs model averaging while training would perform "adequately". To that end, we develop a Matrix Ensemble Kalman Filter (MEnKF)-based multi-arm ANN that approximates a deep learner and simultaneously performs model averaging. We apply our method to approximate an LSTM model trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample characterized by genomic sequences consisting of polysaccharide utilization loci (PULs) [Bjursell et al., 2006] and their encoded genes. We use two different representations of the genomic sequences consisting of the PULs in the two arms of our MEnKF-ANN approximator and demonstrate that our approximator closely follows the predicted probabilities obtained from the trained LSTM. We also generate prediction intervals around the LSTM-predicted probabilities. Our results show that the average width of the prediction interval obtained from the MEnKF-ANN approximator is lower than that obtained from the original LSTM trained with MC dropout. We also perform extensive simulations, mimicking the focal dataset, to demonstrate that our method has desirable coverage for test samples compared to the MC dropout technique.
Finally, we emphasize that even though the original problem is binary classification, our MEnKF-ANN approximator is designed to emulate the probabilities obtained from the original LSTM model and quantify the uncertainties in the LSTM-predicted probabilities. The remainder of the article is organized as follows: In section 2, we describe the aspects of an experimentally obtained microbiome dataset that motivated us to design this approximator. In section 3, for the sake of completeness, we offer a brief review of KF, EnKF, and how these techniques have been used to train DLs. Section 4 details the construction of our MEnKF-ANN method. In section 5, we offer extensive simulation results under different scenarios and follow them up with the application to real data in section 6. Finally, section 7 offers concluding remarks and future research directions.

## 2 Motivating Problem

The human gut, especially the colon, is a carbohydrate-rich environment (Kaoutari et al., 2013). However, most of the non-starch polysaccharides (for example, xylan, pectin, resistant glycans) reach the colon undegraded (Pudlo et al., 2022) because the human digestive system does not produce the enzymes required to degrade these polysaccharides (Flint et al., 2012). Instead, humans have developed a complex symbiotic relationship with the gut microbiota, with the latter providing a large set of enzymes for degrading the aforementioned non-digestible dietary components (Valdes et al., 2018). Consequently, an essential task in studying the human gut microbiome is to predict what carbohydrate substrates a microbiome sample can digest from the genetic characterization of the said microbiome (Koropatkin et al., 2012). In order to generate a focused genetic characterization of the microbes that relates to their carbohydrate utilization property, one often investigates the genes encoding the Carbohydrate Active Enzymes (CAZymes) and other proteins that target glycosidic linkages and act to degrade, synthesize, or modify carbohydrates (Lombard et al., 2014; Zhang et al., 2018). This set of genes tends to form physically linked gene clusters in the genome known as polysaccharide utilization loci (PULs) (Bjursell et al., 2006). Consequently, the gene sequences associated with the PULs of microbes can be used as a predictor to ascertain the carbohydrate substrate the microbe can efficiently degrade. However, these gene sequences are string-valued quantities (Huang et al., 2018; Stewart et al., 2018), and hence their naive quantitative representations (for instance, one-hot encoding or count vectorization) often do not produce classifiers with acceptable accuracy (Badjatiya et al., 2017). Instead, we can use an LSTM to process the entire sequence of string-valued features and then implement a classifier with a categorical loss function. The trained LSTM, by default, produces an embedding of the gene sequences in a vector space. Alternatively, we can also use a Doc2Vec embedding of the entire sequence associated with the PUL, or an arithmetic average of the Word2Vec embeddings of each gene in the sequence, and train a shallow learner (an ANN, for example). Since various representations of the predictors are available, we can train a multi-arm DL that takes different representations of the features in different arms and performs concatenation/late integration of the embeddings before the prediction layer (Liu et al., 2020; Sharifi-Noghabi et al., 2019).
However, such multi-arm DLs require a relatively large number of training samples - typically tens of thousands. Since the experimental characterization of new PULs for carbohydrate utilization is an expensive process [Ausland et al., 2021], we do not have large enough labeled samples to train complex multi-arm DLs. This predicament motivates us to develop a multi-arm approximator to a DL with the following capabilities: (a) it must "adequately" approximate the focal single-arm DL, (b) it should be able to ingest different feature representations in different arms and perform model averaging, (c) it should be able to detect if the set of representations supplied to it is substantially different from the representations used to train the original DL, i.e., sensitive to major misspecification. Since the original DL is trained on a single representation of features and the approximator is ingesting multiple representations, the latter is misspecified in a strict sense. However, this misspecification is deliberately introduced to circumvent training a multi-arm DL and assess, via model averaging, whether there is any benefit in using multiple representations of the same feature set. We extract the dataset from the dbCAN-PUL database [Ausland et al., 2021] that contains experimentally verified PULs and the corresponding GenBank sequences of these PULs along with known target carbohydrate substrates. Figure 1 shows an example of a gene sequence associated with a PUL for the substrate Pectin. We have a total of approximately 411 data points. Figure 2 shows the dataset's frequency distribution of various target substrates. We do not have sufficient samples to train a complex DL to classify all the available substrates. Hence we propose to classify the two most frequently occurring target substrates - Xylan and Pectin - and train an LSTM binary classifier. Seventy-four samples belong to these two classes of substrates in a reasonably balanced way. One way to attach uncertainty to the probabilities estimated by the LSTM architecture is to activate the dropout layers during the prediction phase. This will generate multiple copies of the prediction depending on the architecture of the dropout layers. However, we need to decide on how many dropout layers to include and where to place them. Often the prediction intervals are pretty sensitive to the number and placements of dropout layer(s). For instance, the top left and bottom left panels of Figure 3 show the prediction intervals associated with eight held-out test samples obtained when two dropout layers were included - one inside the LSTM and one just before the final prediction layer. In contrast, the top right and bottom right panels of Figure 3 show the prediction intervals associated with the same test samples obtained when the second dropout layer was removed from the foregoing LSTM architecture. Observe how the number and placement of dropout layers influence the variability in the width of these intervals. If we wish to control the width variability, the placement of the dropout layer becomes a tuning parameter and further increases the hyperparameter search space. We empirically show that our MEnKF-ANN approximator, trained on the logit transformed LSTM-estimated probabilities as the response and the embedding of the sequences obtained from LSTM and Doc2Vec operations as two types of features produce more stable prediction intervals regardless of the location of the dropout layer in the original LSTM. 
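To make the two dropout placements concrete, here is a minimal Keras sketch; the layer sizes, dropout rates, and input encoding are placeholders of ours, not the hyperparameters used for the results above. Calling the model with training=True keeps dropout active at prediction time, which is how MC dropout replicates, like those in Figure 3, would be generated.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 1000, 50  # illustrative values only

def build_lstm(second_dropout: bool) -> tf.keras.Model:
    """Binary classifier with dropout inside the LSTM and, optionally,
    a second dropout layer just before the prediction layer."""
    x_in = layers.Input(shape=(seq_len,), dtype="int32")
    h = layers.Embedding(vocab_size, 32)(x_in)
    h = layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2)(h)
    if second_dropout:
        h = layers.Dropout(0.2)(h)
    out = layers.Dense(1, activation="sigmoid")(h)
    return tf.keras.Model(x_in, out)

# MC dropout: keep dropout active at prediction time to obtain a
# distribution of predicted probabilities for each test sequence.
model = build_lstm(second_dropout=True)
x_test = np.random.randint(0, vocab_size, size=(8, seq_len))
mc_probs = np.stack([model(x_test, training=True).numpy() for _ in range(100)])
```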
Figure 1: Pectin PUL

Figure 2: Frequency distribution for the various substrates

Figure 3: Boxplots showing the predictions superimposed with the ground truth values from the heavy- and low-dropout LSTMs

## 3 Background

This section offers a brief overview of Kalman Filters and Ensemble Kalman Filters and discusses how these methods have been used to train NNs and approximate DLs. For an extensive discussion on the KF and EnKF techniques, we direct the audience to Katzfuss et al. (2016) and Evensen (2003), respectively.

### Linear State Space Model & Kalman Filter

Consider a linear Gaussian state-space model given by

\[y_{t}=H_{t}x_{t}+\epsilon_{t},\ \ \epsilon_{t}\sim\mathcal{N}_{m_{t}}(0,R_{t}) \tag{1}\]

\[x_{t}=M_{t}x_{t-1}+\eta_{t},\ \ \eta_{t}\sim\mathcal{N}_{n}(0,Q_{t}) \tag{2}\]

where \(y_{t}\) is the \(m_{t}\)-dimensional observation vector at time step \(t\), \(x_{t}\) is the \(n\)-dimensional state variable at that time, and \(H_{t}\) and \(M_{t}\) denote the observation and the state transition matrices. Assume that the filtering distribution of the state vector at \(t-1\) is given by

\[x_{t-1}|y_{1:t-1}\sim\mathcal{N}(\hat{\mu}_{t-1},\hat{\Sigma}_{t-1}). \tag{3}\]

KF computes the forecast distribution at \(t\) using (2) as

\[x_{t}|y_{1:t-1}\sim\mathcal{N}(\tilde{\mu}_{t},\tilde{\Sigma}_{t}),\qquad\tilde{\mu}_{t}:=M_{t}\hat{\mu}_{t-1},\qquad\tilde{\Sigma}_{t}:=M_{t}\hat{\Sigma}_{t-1}M_{t}^{\prime}+Q_{t}. \tag{4}\]

Once the measurement at time step \(t\) becomes available, the joint distribution of \((x_{t},y_{t})\) is given by

\[\begin{pmatrix}x_{t}\\ y_{t}\end{pmatrix}\bigg{|}y_{1:t-1}\sim\mathcal{N}\left(\begin{pmatrix}\tilde{\mu}_{t}\\ H_{t}\tilde{\mu}_{t}\end{pmatrix},\begin{pmatrix}\tilde{\Sigma}_{t}&\tilde{\Sigma}_{t}H_{t}^{\prime}\\ H_{t}\tilde{\Sigma}_{t}&H_{t}\tilde{\Sigma}_{t}H_{t}^{\prime}+R_{t}\end{pmatrix}\right). \tag{5}\]

Then the updated filtering distribution is \(x_{t}|y_{1:t}\sim\mathcal{N}(\hat{\mu}_{t},\hat{\Sigma}_{t})\), where \(\hat{\mu}_{t}\) and \(\hat{\Sigma}_{t}\) are given by

\[\hat{\mu}_{t}:=\tilde{\mu}_{t}+K_{t}(y_{t}-H_{t}\tilde{\mu}_{t}),\qquad\hat{\Sigma}_{t}:=(I_{n}-K_{t}H_{t})\tilde{\Sigma}_{t}, \tag{6}\]

with \(K_{t}:=\tilde{\Sigma}_{t}H_{t}^{\prime}(H_{t}\tilde{\Sigma}_{t}H_{t}^{\prime}+R_{t})^{-1}\) being the Kalman Gain matrix. For large \(n\) and \(m_{t}\), computing the matrices in (6) is computationally expensive and often leads to numerical instability.
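A single forecast-update cycle of (4) and (6) takes only a few lines of numpy; in this sketch, the state-space matrices are arbitrary illustrative choices.

```python
import numpy as np

def kf_step(mu, Sigma, y, M, Q, H, R):
    """One Kalman filter cycle: forecast via (4), then update via (6)."""
    # Forecast distribution N(mu_tilde, Sigma_tilde)
    mu_f = M @ mu
    Sigma_f = M @ Sigma @ M.T + Q
    # Kalman gain and filtering update
    K = Sigma_f @ H.T @ np.linalg.inv(H @ Sigma_f @ H.T + R)
    mu_u = mu_f + K @ (y - H @ mu_f)
    Sigma_u = (np.eye(len(mu)) - K @ H) @ Sigma_f
    return mu_u, Sigma_u

# Illustrative 2-d state observed through its first coordinate.
mu, Sigma = np.zeros(2), np.eye(2)
M, Q = np.eye(2), 0.1 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.5]])
mu, Sigma = kf_step(mu, Sigma, np.array([0.8]), M, Q, H, R)
```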
### Ensemble Kalman Filter

The idea of EnKF is to draw an ensemble of size \(N\) from the filtering distribution at \(t-1\), denoted \(\hat{x}_{t-1}^{(1)},\hat{x}_{t-1}^{(2)},\ldots,\hat{x}_{t-1}^{(N)}\sim\mathcal{N}_{n}\left(\hat{\mu}_{t-1},\hat{\Sigma}_{t-1}\right)\). In the forecast step of EnKF, (2) is applied to the ensemble members to evolve them from \(t-1\) to \(t\). That is,

\[\tilde{x}_{t}^{(i)}=M_{t}\hat{x}_{t-1}^{(i)}+\eta_{t}^{(i)},\ \ \eta_{t}^{(i)}\sim\mathcal{N}(0,Q_{t}),\ \ \ \ i=1,\ldots,N \tag{7}\]

It can also be shown that \(\tilde{x}_{t}^{(i)}\sim\mathcal{N}(\tilde{\mu}_{t},\tilde{\Sigma}_{t})\). As in the update step of the Kalman Filter, all the members of this ensemble must be updated when the measurement at time step \(t\) becomes available. To update the ensemble members, a sample of measurement errors \(\epsilon_{t}^{(1)},\epsilon_{t}^{(2)},\ldots,\epsilon_{t}^{(N)}\sim\mathcal{N}_{m_{t}}(0,R_{t})\) is first drawn. Using these simulated measurement errors, \(N\) perturbed observations \(\tilde{y}_{t}^{(1)},\tilde{y}_{t}^{(2)},\ldots,\tilde{y}_{t}^{(N)}\) are obtained via \(\tilde{y}_{t}^{(i)}=H_{t}\tilde{x}_{t}^{(i)}+\epsilon_{t}^{(i)}\). Since the joint distribution of \((\tilde{x}_{t}^{(i)},\tilde{y}_{t}^{(i)})\) is the same as in (5), the updating equations are obtained by shifting the forecasted ensemble in (7) as follows:

\[\hat{x}_{t}^{(i)}=\tilde{x}_{t}^{(i)}+K_{t}(y_{t}-\tilde{y}_{t}^{(i)}),\ \ \ \ i=1,\ldots,N \tag{8}\]

It can easily be shown that \(\hat{x}_{t}^{(i)}\sim\mathcal{N}_{n}(\hat{\mu}_{t},\hat{\Sigma}_{t})\). The computational benefit comes from the fact that, instead of computing the Kalman gain matrix in (8) explicitly, the sample covariance matrix of the forecasted ensemble (\(\tilde{S}_{t}\), say) is used to estimate the Kalman gain matrix as \(\hat{K}_{t}:=\tilde{S}_{t}H_{t}^{{}^{\prime}}(H_{t}\tilde{S}_{t}H_{t}^{{}^{\prime}}+R_{t})^{-1}\).
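The corresponding ensemble step, with perturbed observations and the sample-covariance estimate of the gain, can be sketched as follows (again with our own variable names; `rng` is a NumPy `Generator`, e.g. `np.random.default_rng()`):

```python
import numpy as np

def enkf_step(X_hat, y, H, M, Q, R, rng):
    # X_hat: n x N matrix whose columns are the ensemble members at t-1.
    n, N = X_hat.shape
    # Forecast each member, eq. (7)
    X_f = M @ X_hat + rng.multivariate_normal(np.zeros(n), Q, size=N).T
    # Sample covariance of the forecasted ensemble, S_tilde
    S = np.cov(X_f)
    K_hat = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)
    # Perturbed observations, y_tilde^(i) = H x_tilde^(i) + eps^(i)
    Y_pert = H @ X_f + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    # Shift the forecasted ensemble, eq. (8)
    return X_f + K_hat @ (y[:, None] - Y_pert)
```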
### KF and EnKF for Deep Learners

Although the conventional KF is only suitable for estimating parameters in linear state-space models, several extensions have been proposed to generalize KF to nonlinear settings. For instance, Rivals and Personnaz (1998) used the extended KF to train feed-forward NNs. Wan and Van Der Merwe (2000) introduced the unscented KF, which better approximates nonlinear systems while remaining amenable to the KF framework. Anderson (2001) used the concept of state augmentation, which offers a generic method to handle nonlinearity in state-space models via the KF framework. Iglesias et al. (2013) utilized this state augmentation technique to develop a generic method to train ANNs. They derived the state-augmented KF's forecast and update equations for ANNs, thereby providing the algebraic framework to train DLs using the Ensemble Kalman Filter approach. These equations were subsequently used by Yegenoglu et al. (2020) to train a Convolutional Neural Network using EnKF. Furthermore, Chen et al. (2019) also used the updating equations in Iglesias et al. (2013) to train a single-hidden-layer ANN and demonstrated how, using state augmentation, one can estimate the measurement error variance. The state-augmented EnKF formulation has also been used to estimate parameters in LSTMs (Chen et al., 2018). All the foregoing models offer techniques to estimate parameters of complex nonlinear DLs using the EnKF framework. However, they are unsuitable when we have multiple feature representations. We therefore want to approximate a DL with a multi-arm ANN trained via EnKF, as discussed in section 2.

## 4 Methodology

First, we offer a generic construction of the proposed MEnKF-ANN procedure and describe how this method could be deployed to solve the problem in section 2. We will use the following notation. \(Y\in\mathcal{R}\) is our target response. We have a total of \(m=\sum_{t=1}^{T}m_{t}\) training instances, with \(m_{t}\) being the number of training data points in the \(t^{th}\) batch. \(v_{t}^{f}\in\mathcal{R}^{p}\) and \(v_{t}^{g}\in\mathcal{R}^{q}\) denote two different representations of the features (possibly of different dimensions) for the \(t^{th}\) batch of data. Consider two ANNs, denoted by \(f\) and \(g\), with \(n_{f}\) and \(n_{g}\) learnable parameters, respectively. For illustrative purposes, we will assume \(n_{f}=n_{g}\); if \(n_{f}\neq n_{g}\), we can use suitable padding when updating the weights. In the \(t^{th}\) batch of data, we assign the feature sets \(v^{f}_{t}\) and \(v^{g}_{t}\) to the networks \(f\) and \(g\), respectively, and we denote by \(w^{f}_{t}\) and \(w^{g}_{t}\) the updated weights of \(f\) and \(g\).

### Matrix Kalman Filter based Multi-arm ANN

Consider the state matrix, \(X_{t}\), associated with the \(t^{th}\) batch of data, given by

\[X_{t}^{(m_{t}+n_{g}+1)\times 2}=\begin{bmatrix}f(v^{f}_{t},w^{f}_{t}),&g(v^{g}_{t},w^{g}_{t})\\ w^{f}_{t},&w^{g}_{t}\\ 0,&a_{t}\end{bmatrix} \tag{9}\]

where \(a_{t}\) is a real-valued scalar parameter (a second scalar parameter, \(b_{t}\), is introduced in (18) below). Define \(H_{t}^{m_{t}\times(m_{t}+n_{g}+1)}=[I_{m_{t}},0_{m_{t}\times(n_{g}+1)}]\) and \(G_{t}^{2\times 1}=[1-\sigma(a_{t}),\ \ \sigma(a_{t})]^{T}\), where \(\sigma(.):\mathcal{R}\rightarrow[0,1]\), with the sigmoid function being a popular choice of \(\sigma(.)\). Additionally, define \(\Theta_{t-1}=I_{m_{t}+n_{g}+1}\) and \(\psi_{t-1}=I_{2}\). We are now in a position to define the Matrix Kalman Filter. The measurement equation is given by

\[Y_{t}=H_{t}X_{t}G_{t}+\epsilon_{t} \tag{10}\]

with the state evolution equation being

\[X_{t}=\Theta_{t-1}X_{t-1}\psi_{t-1}+\eta_{t} \tag{11}\]

Writing (11) in \(vec\) format gives

\[x_{t}=vec(X_{t})=(\psi^{T}_{t-1}\otimes\Theta_{t-1})vec(X_{t-1})+vec(\eta_{t}) \tag{12}\]

Now, letting \(\phi_{t-1}=\psi^{T}_{t-1}\otimes\Theta_{t-1}\) and \(\tilde{\eta}_{t}=vec(\eta_{t})\), we get from (12)

\[x_{t}=\phi_{t-1}x_{t-1}+\tilde{\eta}_{t} \tag{13}\]

(10) can similarly be compactified as

\[y_{t}=\mathcal{H}_{t}x_{t}+\epsilon_{t} \tag{14}\]

where \(\mathcal{H}_{t}=G_{t}^{T}\otimes H_{t}\). Observe that (14) and (13) have the same form as the standard representation of the linear state-space model described in (1) and (2). Therefore, we can obtain the matrix state-space model's solution by converting it to the vector state-space model and then using EnKF to approximate the updating equations. We direct the audience to [Choukroun et al., 2006] for more details on Matrix Kalman Filters.

### Interpreting MEnKF-ANN and a Reparametrization

The above construction of \(X_{t}\), \(H_{t}\), and \(G_{t}\) performs automatic model averaging while training. First, consider the matrix product \(H_{t}X_{t}\) in (10). This is an \(m_{t}\times 2\) matrix whose first column is the prediction, for the \(t^{th}\) batch, from the neural network \(f\) and whose second column is the prediction from the neural network \(g\). Post-multiplication by \(G_{t}\) takes a weighted average of each row of \(H_{t}X_{t}\), with the weights defined inside the \(G_{t}\) matrix:

\[H_{t}X_{t}G_{t}=\left[f(v_{t}^{f},w_{t}^{f}),\ \ g(v_{t}^{g},w_{t}^{g})\right]\begin{bmatrix}1-\sigma(a_{t})\\ \sigma(a_{t})\end{bmatrix}=\left[(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})+\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\right] \tag{15}\]

(15) clearly demonstrates how our construction explicitly performs model averaging across the batches, with \(1-\sigma(a_{t})\) and \(\sigma(a_{t})\) being the convex weights allocated to the ANNs \(f\) and \(g\), respectively. Although the foregoing construction connects the Matrix KF formulation with a multi-arm ANN and performs explicit model averaging, it suffers from a computational bottleneck.
Using (13) and (14), the estimated Kalman gain matrix would be \(K_{t}=\widetilde{S}_{t}\mathcal{H}_{t}^{T}(\mathcal{H}_{t}\widetilde{S}_{t}\mathcal{H}_{t}^{T}+\sigma_{y}^{2}I_{m_{t}})^{-1}\). However, in the above parametrization we have \(G_{t}=[1-\sigma(a_{t}),\ \ \sigma(a_{t})]^{T}\) and \(\mathcal{H}_{t}=G_{t}^{T}\otimes H_{t}\). This would require computing the estimated Kalman gain matrix for each member of the ensemble, since, at any given iteration of our MEnKF-ANN, we have an \(a_{t}\) for each member. Thus, the computational complexity associated with the Kalman gain computation increases linearly with the size of the ensemble under the above parametrization of the MEnKF-ANN. To alleviate this computational bottleneck, consider the following parametrization:

\[X_{t}=\begin{bmatrix}(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f}),\ \sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\\ w_{t}^{f},\ \ w_{t}^{g}\\ 0,\ \ a_{t}\end{bmatrix} \tag{16}\]

and \(G_{t}=[1,\ \ 1]^{T}\). We still have explicit model averaging in the measurement equation, i.e.,

\[H_{t}X_{t}G_{t}=\left[(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})+\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\right] \tag{17}\]

but \(\mathcal{H}_{t}\) no longer depends on \(a_{t}\). Therefore, the matrix products in the Kalman gain computation can now be computed once per batch. Turning to the variance parameter in the measurement equation (14), assume \(\epsilon_{t}\sim\mathcal{N}_{m_{t}}(0,\nu_{y}^{2}I_{m_{t}})\). To estimate \(\nu_{y}^{2}\), we augment the state vector as follows:

\[X_{t}^{(m_{t}+n_{g}+2)\times 2}=\begin{bmatrix}(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f}),\ \sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\\ w_{t}^{f},\ \ w_{t}^{g}\\ 0,\ \ a_{t}\\ 0,\ \ b_{t}\end{bmatrix} \tag{18}\]

where \(\nu_{y}^{2}=\log(1+e^{b_{t}})\), and \(H_{t}\) in (10) now becomes \([I_{m_{t}},0_{m_{t}\times(n_{g}+2)}]\). We parametrize \(\nu_{y}^{2}\) through a softplus transformation of \(b_{t}\), instead of the usual log transformation, for computational stability.

### Connecting MEnKF-ANN with DL

Recall that our dataset consists of string-valued gene sequences associated with experimentally determined PULs, with the response being the carbohydrate substrate utilized by the said microbe. Since we consider only two categories of PULs, we have a binary classification problem. An LSTM trained with a binary cross-entropy loss is the approximand DL in our case. If \(p\) is the probability of observing a sample of a particular category, the trained LSTM produces an estimate \(\hat{p}\) for each training instance, along with an embedding of the associated gene sequences. Our MEnKF-ANN approximator uses \(logit(\hat{p})\) as the target response. The LSTM embedding of the gene sequences is fed into one arm of the approximator while the other arm ingests the Doc2Vec encoding of the gene sequences. Thus, our MEnKF-ANN approximates the probabilities estimated by an LSTM. The convex weights \(\sigma(a)\) ascertain which embedding has more predictive power. Clearly, MEnKF-ANN operates as a model stacker, and the predictive uncertainty interval that it produces, by default, around its target approximand quantifies how well simpler ANNs, fitted without backpropagation, can approximate a deep learner.
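To make the reparametrized construction concrete, the measurement map (17) for a single ensemble member reduces to a convex combination of the two arms' outputs; a minimal sketch (with our own function names) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(b):
    # Measurement-error variance nu_y^2 = log(1 + e^b), as in (18)
    return np.log1p(np.exp(b))

def averaged_prediction(f_pred, g_pred, a):
    # f_pred, g_pred: m_t-dimensional predictions of the arms f and g;
    # a: the scalar state component controlling the convex weights.
    # With G_t = [1, 1]^T the weights sigma(a) live inside X_t, so the
    # measurement H_t X_t G_t of eq. (17) is simply:
    return (1.0 - sigmoid(a)) * f_pred + sigmoid(a) * g_pred
```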
To initialize the ensemble in the approximator, we draw the members of the state vector (18) from \(\mathcal{N}_{2(m_{t}+n_{g}+2)}(\mathbf{0},\nu_{x}^{2}I)\), where \(\nu_{x}^{2}\) is a tuning parameter that plays a key role in controlling the spread of the ensemble members, and the dimension of \(I\) matches the dimension of the normal distribution. Following Chen et al. (2018, 2019), we assume the state transition is deterministic, i.e., \(x_{t}=\phi_{t-1}x_{t-1}\), and hence we do not have a variance parameter corresponding to \(\tilde{\eta}\) in the augmented state vector. When we reach the \(t^{th}\) batch of data, for the \(i^{th}\) member in the ensemble (\(i=1,2,...,N\)), we update each element of the augmented state vector, \(w_{t}^{f,(i)}\), \(w_{t}^{g,(i)}\), \(a_{t}^{(i)}\), \(b_{t}^{(i)}\), using the updating equation (8), suitably modified to handle the deterministic state transition.

## 5 Simulations

We conducted extensive simulations to assess how well our MEnKF-ANN can approximate an LSTM binary classifier. This simulation exercise aims to demonstrate that our MEnKF-ANN is not only "adequate" in approximating the probabilities produced by the LSTM but can also capture the "true" probabilities that generate the binary labels. We compute the coverage and width of the prediction intervals of the target probabilities in the test set to assess the "adequacy" of the approximator. We then compare this coverage and width with those computed directly via an LSTM trained with MC dropout. Admittedly, the prediction intervals obtained from the latter are different from those computed from MEnKF-ANN. However, if the ground-truth probabilities are known, an adequate approximator should be able to achieve near-nominal coverage when the approximand is not misspecified. Our simulation strategy mimics the focal dataset and uses the gene sequences associated with the original PULs to generate labels. As mentioned above, we extracted \(\hat{p}\) from the LSTM trained on the original dbCAN-PUL data. We call this LSTM the _true LSTM_ and consider \(\hat{p}\) the true probabilities for synthetic data generation. We then use noisy copies of \(\hat{p}\) to generate synthetic labels in the following way: generate \(logit(\tilde{p}_{i}^{(j)})=logit(\hat{p}_{i})+\epsilon_{i}^{*(j)},\ i=1,2,...,m,\ j=1,2,...,J\), where \(J\) is the number of simulated datasets, \(m\) is the number of data points in each simulated set, and the perturbations \(\epsilon_{i}^{*(j)}\) are iid Normal(0, 0.01\({}^{2}\)). We generate synthetic labels \(\tilde{Y}\) by thresholding \(\tilde{p}_{i}^{(j)}\) at 0.5, i.e., \(\tilde{Y}_{i}^{(j)}=I(\tilde{p}_{i}^{(j)}>0.5)\). The \(j^{th}\) simulated dataset is then \(D^{(j)}=\{\mathbf{F},\tilde{Y}^{(j)}\},\ j=1,2,...,J\), where \(\mathbf{F}\) is the set of original gene sequences from dbCAN-PUL. Now, on each \(D^{(j)}\), we train a second LSTM (with two dropout layers) and extract \(\tilde{\tilde{p}}_{i}^{(j)},\ i=1,2,...,m\), along with the embeddings of the gene sequences. We call these LSTMs, trained on \(D^{(j)}\), the _fitted LSTMs_. Note that the embeddings from the _fitted LSTMs_ could potentially be different from those obtained from the _true LSTM_. We denote the embeddings from the _fitted LSTMs_ by \(v_{i}^{(j),f},\ j=1,2,...,J\). Our MEnKF-ANN is constructed to approximate the _fitted LSTMs_. To that end, the approximator uses \(logit(\tilde{\tilde{p}}_{i}^{(j)})\) as the target response.
The \(v_{i}^{(j),f}\) are supplied as features to one arm of the ANN, while the other arm ingests \(v_{i}^{(j),g}\) - the Doc2Vec embedding of \(\mathbf{F}\). Once the MEnKF-ANN is trained, we use a hold-out set in each simulated dataset to generate predictive probabilities from the forecast distribution for each member of the KF ensemble and compute the empirical 95% predictive interval on the \(logit^{-1}\) scale. To measure the adequacy of MEnKF-ANN, we compute the proportion of times the foregoing predictive interval contains \(\hat{p}\) in the held-out test data. We expect this coverage to be close to the nominal 95%, and the average width of these intervals should not be greater than 0.5. Additionally, observe that the data-generating model uses the LSTM embedding of \(\mathbf{F}\); hence, using the Doc2Vec embedding as input is a misspecification. Consequently, we expect the average model weight associated with \(v^{f}\) to be larger than that associated with \(v^{g}\). Table 1 shows the performance of MEnKF-ANN in terms of coverage, the average width of the prediction intervals, and the average LSTM weight under two specifications of the ensemble size (\(N\)) and the initial ensemble variance (\(\nu_{x}^{2}\)). For comparison, Table 2 reports the coverage and average width of the prediction intervals when both dropout layers are activated in the _fitted LSTM_ during the prediction phase. Observe how MEnKF-ANN recovered the _true probabilities_ even better than the correctly specified LSTM with dropout. The average interval widths obtained from MEnKF-ANN are also lower than those from the _fitted LSTM_. These results demonstrate the adequacy of MEnKF-ANN in approximating the target DL. Additionally, we observe that the average LSTM model weight is \(\approx 1\), indicating the ability of our approximator to identify the correctly specified data-generating model. Figure 4 shows the histogram of the predictive samples obtained from the ensemble members for eight test samples in a randomly chosen replicate. The red vertical lines denote the true logits, and the green vertical lines show the fences of the 95% prediction intervals. Now, to demonstrate a situation where MEnKF-ANN is "inadequate", we supply the approximator with a completely different feature-set representation. Instead of using the LSTM embedding \(v^{f}\), we use the word2vec embedding of each gene in the predictor string and take the arithmetic average of these word2vec embeddings to represent the entire sequence. We denote this feature set by \(\tilde{v}^{f}\) and then train the MEnKF-ANN using \(\tilde{v}^{f}\) and \(v^{g}\) as the features and \(logit(\tilde{\tilde{p}}^{(j)})\) as the target response. Evidently, this MEnKF-ANN is highly misspecified. Table 3 reports the coverage and average width of the prediction intervals obtained from this model. The huge width of these intervals essentially invalidates the point predictions and indicates that MEnKF-ANN may not be approximating the target DL. We therefore caution against using the coverage and width metrics to assess the "adequacy" of the _fitted LSTM_ itself.
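The synthetic-label mechanism described above reduces to a few lines of code; the sketch below (with our own names) perturbs the true logits and thresholds at 0.5:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_labels(p_hat, J, sd=0.01, rng=np.random.default_rng(0)):
    # p_hat: true probabilities from the "true LSTM" (length m).
    eps = rng.normal(0.0, sd, size=(J, len(p_hat)))   # epsilon*_i^(j)
    p_tilde = inv_logit(logit(p_hat)[None, :] + eps)  # noisy copies of p_hat
    return (p_tilde > 0.5).astype(int)                # Y~_i^(j) = I(p~ > 0.5)
```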
\begin{table}
\begin{tabular}{l l l l l}
\hline
\(N\) & \(\nu_{x}^{2}\) & Coverage & Width & LSTM weight \\
\hline
216 & 16 & 90.25\% & 0.33 & 0.9997 \\
216 & 32 & 89.25\% & 0.32 & 0.9999 \\
\hline
\end{tabular}
\end{table}
Table 1: Performance of MEnKF-ANN using LSTM embedding and Doc2Vec

\begin{table}
\begin{tabular}{l l l l}
\hline
Rate & Reps & Coverage & Width \\
\hline
0.5 & 50 & 81.25\% & 0.53 \\
0.5 & 200 & 84.50\% & 0.56 \\
\hline
\end{tabular}
\end{table}
Table 2: Coverage and width of prediction intervals obtained from the _fitted LSTM_ with two dropout layers

\begin{table}
\begin{tabular}{l l l l l}
\hline
\(N\) & \(\nu_{x}^{2}\) & Coverage & Width & Word2Vec weight \\
\hline
216 & 16 & 96.25\% & 0.83 & 0.9155 \\
216 & 32 & 94.25\% & 0.84 & 0.9787 \\
\hline
\end{tabular}
\end{table}
Table 3: Performance of MEnKF-ANN using Word2Vec and Doc2Vec

Figure 4: True logits superimposed on predicted logits from MEnKF-ANN using LSTM and Doc2Vec embeddings

## 6 Application

Recall that our focal dataset consists of \(n=74\) samples belonging to Xylan and Pectin. However, training an LSTM on such a small sample would require aggressive regularization, even with this reduced label space. We therefore draw on an extensive collection of unlabelled data containing gene sequences associated with CAZyme gene clusters (CGCs) computationally predicted from genomic data [20, 17]. Although these unlabelled data contain approximately 250K CGC gene sequences, unlike experimentally characterized PULs, these sequences do not have known carbohydrate substrate information and hence cannot be directly used for classification purposes. We therefore use this unlabelled dataset to learn word2vec embeddings of each gene appearing in it; these embeddings are then used to initialize the embedding layer of the target LSTM classifier. Turning to the labeled dataset, instead of performing full cross-validation, we resort to a subsampling procedure [13]. We take a subsample of sixty-six instances for training and hold out eight instances for testing. The subsample size (\(b\)) is chosen such that \(b(n)/n\approx 8\sqrt{n}/n\to 0\) as \(n\rightarrow\infty\). Although the subsampling theory requires generating \(\binom{n}{b}\) replicates, the computational cost of generating \(\approx 10^{11}\) replicates is, in our case, prohibitive. Instead, we generate 50 independently subsampled replicates comprising training and testing sets of sizes 66 and 8, respectively. In each replicate, an LSTM with the same architecture is trained on the foregoing 66 training instances. Under this scheme, the probability that the \(i^{th}\) instance in our dataset appears at least once in a test set is \(\approx 99.6\%\). The LSTM-estimated probabilities of observing a _Pectin_ substrate are extracted from each replicate. These probabilities are logit transformed and used as the target response for our MEnKF-ANN approximator. We feed the LSTM embedding and the Doc2Vec embedding of the gene sequences into the two arms of the approximator along with the foregoing logit-transformed estimated probabilities. We then generate predictions on the held-out test data points in each replicate. Finally, we compare the LSTM-predicted probabilities with those generated by MEnKF-ANN.
The average MAE and the proportion of times a 95% prediction interval contains the LSTM-generated predictions in the held-out data, under two different MEnKF-ANN hyperparameter choices, are shown in Table 4, indicating that our approximator can adequately be used to generate the predictions. We do not report the LSTM weights estimated by MEnKF-ANN because, as we observed in the simulations (Table 1), the approximator overwhelmingly prefers the LSTM embeddings. Figure 5 shows the scatter plot of MEnKF-ANN-predicted versus LSTM-predicted probabilities for the held-out data across the 50 replicates. Figure 6 shows the boxplots associated with the MEnKF-ANN predictions for the same set of test samples for which the LSTM-generated prediction boxplots were shown in the left column of Figure 3. Evidently, MEnKF-ANN can adequately approximate the target LSTM. Turning to the stability of the prediction intervals, Table 5 shows the average width of the 95% prediction intervals obtained under two configurations of the LSTM and their respective MEnKF-ANN approximators. LSTM\({}_{1}\) has two dropout layers (one in the LSTM layer and one before the final prediction layer) with a 50% dropout rate and 200 replicates. LSTM\({}_{2}\) has one dropout layer (in the LSTM layer) with a 50% dropout rate and 200 replicates. MEnKF-ANN\({}_{11}\) approximates LSTM\({}_{1}\) with 216 ensemble members and \(\nu_{x}^{2}=16\); MEnKF-ANN\({}_{12}\) also approximates LSTM\({}_{1}\), but with 216 ensemble members and \(\nu_{x}^{2}=32\). Similarly, MEnKF-ANN\({}_{21}\) and MEnKF-ANN\({}_{22}\) approximate LSTM\({}_{2}\) with 216 ensemble members and \(\nu_{x}^{2}=16\) and \(\nu_{x}^{2}=32\), respectively. Observe that the variation in the average width between LSTM\({}_{1}\) and LSTM\({}_{2}\) is considerably higher than the variation between MEnKF-ANN\({}_{11}\) and MEnKF-ANN\({}_{21}\) or between MEnKF-ANN\({}_{12}\) and MEnKF-ANN\({}_{22}\). This indicates that the approximator produces more stable prediction intervals than obtaining predictions by activating the dropout layers during prediction. Finally, we demonstrate how MEnKF-ANN can naturally handle two predictive models with potentially different feature sets. This situation is relevant because, owing to the small sample size, we can also train a shallow learner (an ANN with backpropagation, for instance) that takes the Doc2Vec representation of the gene sequences as predictors to estimate the probabilities of observing the _Pectin_ substrate. We can then average the probabilities estimated by the LSTM (\(\hat{p}_{LSTM}\), say) and the ANN (\(\hat{p}_{ANN}\), say) to produce a model-averaged estimated probability of observing _Pectin_ (\(\hat{\bar{p}}\), say). However, how would we attach uncertainty to \(\hat{\bar{p}}\)? The multi-arm construction of MEnKF-ANN provides a natural solution. We supply, as described in the foregoing sections, the LSTM embeddings and Doc2Vec embeddings to the two arms of MEnKF-ANN but use \(logit(\hat{\bar{p}})\) as the target response. Thus, MEnKF-ANN now approximates the average output of two primary models that are trained on the same response variable but use two different representations of the features. Table 6 shows the performance of MEnKF-ANN in this situation for some combinations of \(N\) and \(\nu_{x}^{2}\). The coverage is measured with respect to \(\hat{\bar{p}}\) on the test sets. Although the average width and MAE are larger than those reported in Table 4, we observe that the LSTM weights are \(\approx 0.5\), which is what we would expect because MEnKF-ANN is _seeing_ equally weighted outputs from the LSTM and the ANN.
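The coverage and width metrics used throughout (Tables 1 and 3 above, Tables 4-6 below) can be computed from the ensemble draws with a short routine; a sketch with our own names follows:

```python
import numpy as np

def interval_metrics(draws, truth, level=0.95):
    # draws: N x n_test matrix of predicted probabilities (one row per
    # ensemble member); truth: length-n_test vector of target values.
    alpha = (1 - level) / 2
    lo = np.quantile(draws, alpha, axis=0)
    hi = np.quantile(draws, 1 - alpha, axis=0)
    coverage = np.mean((truth >= lo) & (truth <= hi))  # empirical coverage
    width = np.mean(hi - lo)                           # average width
    return coverage, width
```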
\begin{table}
\begin{tabular}{l l l l l l}
\hline
\(N\) & \(\nu_{x}^{2}\) & Coverage & Width & MAE & CPU Time \\
\hline
216 & 16 & 90.50\% & 0.1024 & 0.0200 & 2.39 mins \\
216 & 32 & 85.50\% & 0.0850 & 0.0161 & 3.67 mins \\
\hline
\end{tabular}
\end{table}
Table 4: Performance of MEnKF-ANN using LSTM embedding and Doc2Vec for dbCAN-PUL data

\begin{table}
\begin{tabular}{l l l l}
\hline
Target model & Average Width & Approximator & Average Width \\
\hline
LSTM\({}_{1}\) & 0.492 & MEnKF-ANN\({}_{11}\) & 0.102 \\
 & & MEnKF-ANN\({}_{12}\) & 0.085 \\
LSTM\({}_{2}\) & 0.371 & MEnKF-ANN\({}_{21}\) & 0.119 \\
 & & MEnKF-ANN\({}_{22}\) & 0.108 \\
\hline
\end{tabular}
\end{table}
Table 5: Comparison of the average width of prediction intervals from LSTM + MC dropout and from the MEnKF-ANN approximator for each LSTM

Table 6: Performance of MEnKF-ANN trained on the averaged probability of LSTM and shallow ANN

Figure 5: Scatterplot of first-LSTM-predicted probabilities vs. EnKF-predicted probabilities

## 7 Discussion

State-augmented Kalman Filters and their variants provide a gradient-free method that can be extended to approximate popular neural-network-based deep learners for regression and classification tasks. In this article, we have developed a Matrix Ensemble Kalman Filter-based multi-arm neural network to approximate an LSTM. We have demonstrated that this technique adequately approximates the target DL in terms of coverage and the average width of the prediction intervals. We have also demonstrated how the built-in model averaging capability can be leveraged to attach uncertainty to the averaged predictions generated by two different models. Our simulations suggest that, by using an explicit model averaging construction, our approximator can also identify its target approximand. We have also observed that the prediction intervals generated by the approximator are less sensitive to the location of the dropout layers and hence are more stable than those obtained by activating the dropout layers within the DL itself. Admittedly, our procedure requires an additional round of training, but its fast computation time (see Table 4), along with its ability to emulate the approximand, adequately compensates for that. We have also deployed our approximator on a highly accessed database, dbCAN-PUL, to attach uncertainty to the predicted probabilities produced by (a) the primary LSTM model and (b) an ensemble of LSTM and ANN models. The primary LSTM and ANN models were trained to classify two carbohydrate substrates using the gene sequences characterizing the PULs of the gut microbiome. We anticipate this technique will be helpful to domain experts in assessing the reliability of predictions generated by deep learners or an ensemble of learners. In the future, we propose to expand our model to handle more than two classes.
This would enable us to utilize the information in the dbCAN-PUL database better. Another possible direction is to develop an analog of MEnKF-ANN that can directly handle binary data. Although the KF technique crucially requires a Gaussianity assumption, Fasano et al. (2021) recently developed an extension of the KF method that can handle binary responses. We are actively investigating how this technique can be adapted to our MEnKF-ANN framework.

Figure 6: Boxplots showing the MEnKF-ANN predictions superimposed with the ground truth values for heavy and low dropout

## 8 Competing interests

No competing interest is declared.

## 9 Author contributions statement

V.P., Y.Z., and S.G. conceived the models and experiment(s), and V.P. conducted the experiment(s). V.P. and S.G. analyzed the results. Y. Yan and Y. Yin contributed to the training data. V.P. drafted the manuscript. All authors reviewed the manuscript. Y. Yin secured the funding.

## 10 Acknowledgments

The authors thank the anonymous reviewers for their valuable suggestions. This work is supported in part by funds from the National Institutes of Health (NIH: R01GM140370, R21AI171952) and the National Science Foundation (NSF: CCF2007418, DBI-1933521). In addition, we thank the lab members for their helpful discussions. This work was partially completed utilizing the Holland Computing Center of the University of Nebraska-Lincoln, which receives support from the Nebraska Research Initiative.
2307.00625
Coherent Optical Coupling to Surface Acoustic Wave Devices
Surface acoustic waves (SAW) and associated SAW devices are ideal for sensing, metrology, and connecting and controlling hybrid quantum devices. While the advances demonstrated to date are largely based on electromechanical coupling, a robust and customizable coherent optical coupling would unlock mature and powerful cavity optomechanical control techniques and an efficient optical pathway for long-distance quantum links. Here we demonstrate direct and robust coherent optical coupling to surface acoustic wave cavities through a Brillouin-like optomechanical interaction. In high-frequency SAW cavities designed with curved metallic acoustic reflectors deposited on crystalline substrates, SAW modes are efficiently optically accessed in piezo-active directions that can be accessed through traditionally electromechanical techniques as well as non-piezo-active directions that cannot. The non-contact nature of the optical technique enables controlled analysis of dissipation mechanisms and access to pristine mechanical resonators with record-level quality factors (>100,000 measured here). The exceptional control of the optical probe beams also enables detailed transverse spatial mode spectroscopy, for the first time. These advantages combined with simple fabrication, small size, large power handling, and strong coupling to quantum systems make SAW optomechanical platforms particularly attractive for sensing, material science, and hybrid quantum systems.
Arjun Iyer, Yadav P. Kandel, Wendao Xu, John M. Nichol, William H. Renninger
2023-07-02T17:42:57Z
http://arxiv.org/abs/2307.00625v2
## Coherent Optical Coupling to Surface Acoustic Wave Devices

## Abstract

Surface acoustic waves (SAW) and associated SAW devices are ideal for sensing, metrology, and connecting and controlling hybrid quantum devices. While the advances demonstrated to date are largely based on electromechanical coupling, a robust and customizable coherent optical coupling would unlock mature and powerful cavity optomechanical control techniques and an efficient optical pathway for long-distance quantum links. Here we demonstrate direct and robust coherent optical coupling to surface acoustic wave cavities through a Brillouin-like optomechanical interaction. In high-frequency SAW cavities designed with curved metallic acoustic reflectors deposited on crystalline substrates, SAW modes are efficiently optically accessed in piezo-active directions that can be accessed through traditionally electromechanical techniques as well as non-piezo-active directions that cannot. The non-contact nature of the optical technique enables controlled analysis of dissipation mechanisms and access to pristine mechanical resonators with record-level quality factors (>10\({}^{5}\) measured here). The exceptional control of the optical probe beams also enables detailed transverse spatial mode spectroscopy, for the first time. These advantages combined with simple fabrication, small size, large power handling, and strong coupling to quantum systems make SAW optomechanical platforms particularly attractive for sensing, material science, and hybrid quantum systems.

## Main

Surface acoustic wave devices based on electrically driven piezoelectric materials are essential to modern technologies, including for communications[1, 2, 3] and chemical and biological sensors[4, 5]. SAWs have more recently emerged as an exciting resource for quantum systems[6, 7, 8, 9] because of their low loss, tight surface confinement, and strong coupling to a variety of quantum systems. As "universal quantum transducers,"[6, 7] SAWs and associated manipulation and probing techniques have been demonstrated in color centers[10], superconducting qubits[11, 12, 13, 14], semiconductor quantum dots[15, 16, 17], 2D materials[18, 19, 20], and superfluids[21, 22]. While electrical control of SAWs has matured over the past several decades[1], robust and customizable coherent optical coupling has not yet been demonstrated. Coherent optical coupling would enable powerful techniques established in cavity optomechanical systems, such as quantum transduction[23, 24], quantum-limited force and displacement sensing[25, 26, 27, 28, 29], generation of non-classical states of optical and acoustic fields[30, 31, 32], and ground state cooling of mechanical resonators[33, 34, 35, 36]. In addition, robust optomechanical coupling to SAW devices would enable an ideal optical pathway for long-distance quantum links, a longstanding goal of experimental quantum information science[37, 38, 39]. While optical surface Brillouin scattering has been successful for probing incoherent thermal surface phonons for the study of thin films[40, 41], many optomechanical applications, including for quantum science, require coherent and stimulated interactions. Brillouin processes have recently enabled coherent coupling to bulk acoustic modes with record lifetimes in shaped crystals, but with the low optomechanical coupling associated with larger bulk mode volumes [42, 43].
In nanoscale systems designed for cavity optomechanics, large coupling strengths are available, but often with complex designs, including subwavelength and suspended structures, which can be challenging to integrate into larger hybrid quantum devices. In addition, nanoscale confinement leads to undesirable heating effects [43, 44, 9], which limit photon numbers and acoustic quality factors and can require complex optical pumping schemes [45, 46]. SAW devices combine intrinsically tight confinement on the surface of bulk substrates with the potential for high power handling and simple fabrication. In recent SAW-based microwave-to-optical transduction schemes [47, 48, 9, 23], electrically generated SAWs are coupled to acoustic modes of a distinct resonator, such as a nanomechanical phononic crystal cavity. While demonstrating exceptional optomechanical coupling strengths, these devices achieve low net efficiencies, primarily limited by the phonon injection efficiency of the electrically generated SAWs into the acoustic cavity modes [49, 50, 7]. In an alternative approach, Okada et al. [51] examined cavity optomechanical systems mediated directly by SAWs. However, without optomechanical phase matching, efficient SAW confinement, and modal size matching between the optical and acoustic fields, the system is limited to lower phonon frequencies, mechanical quality factors, and coupling strengths. Optomechanical coupling to SAW whispering gallery modes in microresonators [52, 53] and SAWs driven by optical absorption and thermal relaxation of metallic electrodes [54, 55] also enable several new possibilities. However, these devices will be challenging to integrate with a range of qubit systems, which require specific geometries and minimal heating. Direct, efficient, coherent optical access to simple, high-quality, and high-power-handling SAW devices will be an important step toward realizing the full promise of classical and quantum SAW-based technologies. Here we establish a frequency-tunable Brillouin-like optomechanical coupling with integrable surface acoustic wave devices that is direct, coherent, power-tolerant, and efficient. The technique is demonstrated with simple single-crystalline substrates supporting long-lived Gaussian SAW cavity modes confined with deposited curved metallic grating mirrors. Strong optomechanical coupling is demonstrated by engineering phase-matched Brillouin-like interactions between the trapped acoustic modes and incident out-of-plane non-collinear optical fields. In contrast to previous studies on SAW resonators, the demonstrated technique does not require piezoelectricity and can be applied to practically any crystalline medium, which we demonstrate by optically driving piezo-inactive SAW devices. This approach, therefore, enables access to high-Q surface acoustic modes on quantum-critical materials which are not piezo-active, such as diamond and silicon. In addition, the absence of interdigital transducers or any other acousto-activating device enables cavity-limited quality factors, including the record \(\sim\)120,000 quality factors demonstrated in GaAs in this report. The presented cavities operating at 500 MHz can be tuned through several GHz by varying the incident angle of the optical beams through simple phase matching (momentum conservation) considerations.
The frequency and material versatility of this coupling technique is well suited for materials spectroscopy, as illustrated here through measuring and characterizing phonon dissipation mechanisms in GaAs cavities with metallic mirrors. The SAW optomechanical platform presented combines the simplicity and power-handling advantages of bulk optomechanical systems with the small acoustic mode volumes of nanoscale systems for enhanced interaction strengths, enabling a multi-functional integrated platform for sensing, quantum processing, and condensed matter physics.

## Non-Collinear Brillouin-like Optical Coupling to Surface Acoustic Waves

The SAW device consists of a Fabry-Perot Gaussian surface acoustic wave cavity on a single-crystalline substrate formed by two acoustic mirrors composed of regularly spaced curved metallic reflectors. Two non-collinear optical beams, a pump field and a Stokes field, are incident in the region enclosed by the acoustic mirrors (Fig. 1a). The confined Gaussian surface acoustic mode can mediate energy transfer between the two optical fields provided phase-matching (momentum conservation) and energy conservation relations are satisfied, as is the case with Brillouin scattering from bulk acoustic waves[56]. For pump and Stokes fields with wavevector (frequency) \(\overrightarrow{k_{p}}(\omega_{p})\) and \(\overrightarrow{k_{s}}(\omega_{s})\), respectively, that subtend equal but opposite angles, \(\theta\), with respect to the surface normal (z-axis), the optical wavevector difference can be approximated as \(\overrightarrow{\Delta k}\approx 2k_{0}\sin\theta\;\hat{x}\), assuming \(k_{p}\approx k_{s}=k_{0}\) and with \(\hat{x}\) a unit vector parallel to the surface; the corresponding optical frequency difference is \(\Delta\omega=\omega_{p}-\omega_{s}\) (Fig. 1b). Note that the magnitude of the optical wavevector difference is tunable by the optical angle of incidence.

Figure 1: Parametric optomechanical interactions mediated by Gaussian SAW resonators. a) Two non-collinear traveling optical fields are incident on a Fabry-Perot type Gaussian SAW resonator; interaction between the two optical fields is mediated by a Gaussian SAW cavity mode confined to the surface of the substrate. b) Phase-matching diagram of the parametric process. The vectorial optical wavevector difference, \(\Delta\vec{k}=\overrightarrow{k_{p}}-\overrightarrow{k_{s}}=2k_{0}\sin\theta\;\hat{x}\), is angle-dependent and points along the direction of the SAW cavity axis. c) The acoustic dispersion relation \(\Omega(q)\) is discretized in the presence of a SAW cavity. The final optomechanical response is determined by the modes which lie within both the phase-matching and the acoustic mirror bandwidths (blue dots), while radiating longitudinal modes excluded by the acoustic mirror and the optical phase matching (grey dots) do not yield an optomechanical response. d) Finite element calculation of the acoustic displacement magnitude, \(|u|\), in a SAW cavity along the [100] direction on [100] GaAs illustrating the Gaussian mode (upper panel) with the designed acoustic waist of \(w_{a}=3\lambda_{a}\) and an approximate penetration depth of \(\sim\lambda_{a}\) (lower panel). Panels e) and f) display YX cross-sections of the acoustic displacement for e) anti-symmetric and f) symmetric higher-order transverse modes of the SAW cavity.

For the case of freely propagating surface acoustic waves, the acoustic dispersion relation is linear and can be
expressed as \(\Omega=q\,\mathrm{v}_{\mathrm{R}}\), where \(\Omega\), \(q\), and \(\mathrm{v}_{\mathrm{R}}\) are the phonon frequency, phonon wavevector, and Rayleigh SAW velocity, respectively. The phase-matched phonon wavevector (\(q_{0}\)) and frequency (\(\Omega_{0}\)) are then given by the relations \(q_{0}=\Delta k=2k_{0}\sin\theta\) and \(\Omega_{0}=q_{0}\mathrm{v}_{\mathrm{R}}\). A propagating SAW, therefore, yields a single-frequency optomechanical response, similar to the standard Brillouin response in bulk materials from propagating longitudinal waves. However, the accessible phonon spectrum is significantly modified in the presence of a surface acoustic cavity and optical beams with finite beam sizes, as illustrated by the modified acoustic dispersion plot in Fig. 1c. First, because standing SAW cavity modes are formed, the phonon wavevectors and frequencies become discretized to specific values \(q_{m}=\frac{m\pi}{L_{\text{eff}}}\) and \(\Omega_{m}=q_{m}\mathrm{v}_{\mathrm{R}}\), respectively, characterized by mode number \(m\), where the free spectral range of the cavity is \(\Delta\Omega=\frac{\pi\mathrm{v}_{\mathrm{R}}}{L_{\text{eff}}}\) and \(L_{\text{eff}}\) is the effective cavity length. Second, unlike ideal mirrors, acoustic Bragg mirrors only efficiently confine a finite number of longitudinal modes (blue circles in Fig. 1c), determined by the reflectance and periodicity of the metallic reflectors[1]. Finally, because the optical fields are Gaussian beams with finite spatial extents, appreciable optomechanical coupling exists over a range of optical wavevector values centered around the phase-matched configuration, \(\Delta k=q_{m}\). The effective optomechanical coupling rate to the cavity mode \(m\), \(g_{0}\), varies as a function of optical wavevector mismatch as \(g_{0}\left(\Delta k\right)\propto\exp\left(-(\Delta k-q_{m})^{2}/\delta k^{2}\right)\), where \(\delta k=2\sqrt{2}/r_{0}\) and \(r_{0}\) is the radius of the incident optical fields. Equivalently, the coupling rate can be expressed as a function of the angle of incidence as \(g_{0}(\theta)\propto\exp\left(-(\theta-\theta_{m})^{2}/\delta\theta^{2}\right)\), for small angles such that \(\sin\theta\approx\theta\) and where \(\theta_{m}\) is the phase-matching angle of the acoustic cavity mode given by \(\theta_{m}=\frac{q_{m}}{2k_{0}}\). The corresponding angular bandwidth is given by \(\delta\theta=\frac{\delta k}{2k_{0}}=\frac{\sqrt{2}}{r_{0}k_{0}}\) (see Section S2 of supplementary information). The resultant optomechanical spectrum consists of several discrete resonances from SAW cavity modes which lie within both the acoustic mirror and optical phase-matching bandwidths (unconfined, radiative, longitudinal modes are indicated by grey circles in Fig. 1c). Gaussian SAW cavities are designed to achieve small acoustic mode volumes and appreciable coupling strengths (see Methods and section S3 of supplementary information). Diffraction losses are mitigated by accounting for the anisotropy of the acoustic group velocity of the underlying crystalline substrate[57]. GaAs is chosen because of its large photoelastic response, ease of fabrication, and integration with other quantum systems such as qubits. Three-dimensional numerical finite element simulations are performed for a SAW cavity on [100]-cut GaAs oriented along the [100] direction.
The acoustic wavelength of \(\lambda_{a}=5.7~{}\mu m\) and Gaussian waist, \(w_{a}=3\lambda_{a}\), are near-identical to those of the experimental devices described below, while the number of reflectors and the mirror spacing are reduced to maintain computational feasibility (see section S3 of supplementary information). The simulated cavities display a series of stable SAW cavity modes with Hermite-Gaussian-like transverse profiles (Fig. 1d-f) separated by the free spectral range of the cavity. As expected, the modes are confined to the surface and decay steeply into the bulk of the substrate (e.g., lower panel of Fig. 1d). The observed beam waist of the fundamental Gaussian mode agrees well with the designed full-waist (\(2w_{a}\)) of \(6\lambda_{a}\). Higher-order anti-symmetric (Fig. 1e) and symmetric (Fig. 1f) mode solutions are also observed.

## Optomechanical spectroscopy of SAW cavities

To demonstrate coherent optical coupling to SAW devices, Gaussian SAW cavities are fabricated (see Methods and section S4 of supplementary information) on a single-crystal GaAs substrate (inset of Fig. 2a). Optomechanical measurements are made for two sets of cavities, one oriented along the crystalline [110] direction, which is piezo-active, and one along the [100] direction, which is piezo-inactive. The cavities are designed for an acoustic wavelength of \(\lambda_{a}\approx 5.7\,\mu m\) (\(\theta\approx 7.8^{\circ}\)), an acoustic waist of \(w_{a}\approx 4\lambda_{a}\), and a mirror spacing of \(L\sim 500\,\mu m\). The cavity parameters are chosen to optimize for practical constraints, including finite optical apertures, electronics bandwidths, and the optical beam sizes. The effective cavity length (\(L_{\text{eff}}\)) is calculated to be \(\sim 610\,\mu m\) by accounting for the penetration depth into the mirrors[1, 58]. The large mirror separation relative to the optical beam size minimizes absorptive effects arising from the spatial overlap of the optical fields with the metallic acoustic reflectors (see Section S9 of supplementary information).

Figure 2: Optically measured SAW devices. Optomechanical response of 470 MHz SAW cavities oriented along the [100]-direction on [100]-cut GaAs at 4 K with a) a wide frequency sweep revealing several discrete SAW cavity resonances separated by the 2.3 MHz cavity free spectral range (a microscope image of the SAW device is inset), and b) a high-resolution frequency sweep revealing an acoustic Q-factor of 120,000, or a spectral linewidth of \(\sim\)4 kHz. c) and d) show similar measurements for the SAW cavity oriented along the [110]-direction on [100]-cut GaAs. c) A wide scan reveals SAW cavity modes separated by the free spectral range of 2.4 MHz, and d) the SAW modes on the [110]-oriented devices exhibit a maximum quality factor of 7000 with a corresponding linewidth of 72 kHz.

The stimulated optomechanical response is measured with the sensitive phonon-mediated four-wave mixing technique described in Methods and section S5 of supplementary information. The spectral response of the piezo-inactive (active) cavity along the [100] ([110]) direction in Fig. 2a-b (c-d) reveals several equally spaced resonances over a wide spectral range centered at 480 MHz (506 MHz), separated by 2.3 MHz (2.4 MHz), which corresponds to the free spectral range (\(\mathrm{v}_{R}/(2L_{\mathrm{eff}})\)) of the SAW cavity. The observed resonances span \(\sim\)10 MHz, which is consistent with the designed acoustic mirror bandwidth.
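These numbers can be cross-checked against the phase-matching relations with a few lines of arithmetic; the 1550 nm optical wavelength below is our inference from \(\lambda_{a}=\lambda_{opt}/(2\sin\theta)\) with the reported \(\theta\approx 7.8^{\circ}\), not a value stated in the text:

```python
import numpy as np

lam_opt = 1.55e-6                      # assumed optical wavelength (m)
theta = np.deg2rad(7.8)                # reported angle of incidence
lam_a = lam_opt / (2 * np.sin(theta))  # phase-matched SAW wavelength, ~5.7 um
f_saw = 480e6                          # measured [100]-cavity frequency (Hz)
v_R = lam_a * f_saw                    # implied Rayleigh velocity, ~2.7 km/s
L_eff = 610e-6                         # reported effective cavity length (m)
fsr = v_R / (2 * L_eff)                # free spectral range, ~2.2 MHz
print(lam_a * 1e6, v_R, fsr / 1e6)
```

The implied free spectral range of \(\sim\)2.2 MHz agrees with the measured 2.3 MHz spacing to within rounding.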
High-resolution spectral analysis of one of the observed SAW cavity resonances of the piezo-inactive (piezo-active) cavity (Fig. 2b(d)) reveals a spectral width, \(\Gamma/2\pi\), of 4 kHz (72 kHz), corresponding to an acoustic quality factor of 120,000 (7000). The measured traveling-wave zero-point coupling rate [59, 60], \(g_{0}\), for the piezo-inactive (active) cavity of \(2\pi\times 1.4\ kHz\) (\(2\pi\times 1.7\ kHz\)) is consistent with the predicted values of \(2\pi\times 1.9\ kHz\) (\(2\pi\times 1.8\ kHz\)) obtained using known material parameters in conjunction with the device geometry (see sections S1 and S7 of supplementary information). As expected, no measurable acoustic response is observed when either of the optical drive tones is turned off. Additionally, as predicted by theoretical coupling calculations (see section S1 of supplementary information), no resonance is observed when the acoustic drives are orthogonally polarized to each other, or when the LO is orthogonally polarized to the incident probe (see section S11 of supplementary information). The demonstrated acoustic quality factors of the SAW devices are among the highest measured for focused SAW cavities on any substrate, corresponding to an \(fQ\) product of \(6\times 10^{13}\) Hz, which is also comparable to that of the best electromechanical SAW devices [15, 58, 61]. Moreover, the accessed SAW cavity modes are along electromechanically inaccessible directions, demonstrating a key merit of the coherent optical coupling in enabling access to long-lived SAW modes regardless of their piezoelectric properties. The larger relative loss in the [110]-oriented cavities is consistent with excess ohmic loss in the metallic reflectors owing to non-uniform strain from the Gaussian modes and the resulting piezoelectric potential on the reflectors [62, 63, 64].

Figure 3: Measured spatial mode spectrum and angular dependence. a) The optomechanical response from higher-order acoustic modes is observed in a [100]-oriented cavity with a mirror spacing of \(L\approx 350\ \mu m\) by laterally displacing the optical fields, as illustrated in the inset figure. The higher-order frequency spacing of 1.4 MHz is consistent with the theoretical estimate. b) Finite element simulations of the corresponding acoustic mode profiles. c) Optomechanical coupling strength as a function of angle of incidence with a Gaussian fit overlayed. d) A graphical representation of the positions of the phase-matching envelope relative to the acoustic mirror response (not to scale) for illustrative angles of incidence indicated with the same color outline as for the respective points in c). Left: The phase-matching envelope coincides with the mirror response for maximal optomechanical coupling strength. Right: The phase-matching envelope is detuned from the mirror-defined SAW mode, resulting in a weaker optomechanical response.

While the higher-order spatial modes of a Gaussian SAW resonator are challenging to probe electromechanically, the coherent optomechanical technique allows for precise and direct excitation of spatial modes through fine control of the optical spatial overlap with specific acoustic mode profiles. By laterally displacing the optical beams away from the cavity axis, optomechanical coupling to a specific higher-order SAW cavity mode is observed (green in Fig. 3a-b) in addition to the response from the fundamental mode (red in Fig. 3a-b).
The frequency separation between the fundamental and the corresponding higher-order mode of 1.4 MHz is consistent with the predicted difference of 1.4 MHz. The exquisite spatial control available through optical techniques could form the basis of novel SAW-based spatially resolved sensing and metrology. Finally, the accessible phonon-mode bandwidth determined by phase matching is characterized through measurements of the Brillouin coupling coefficient (\(G_{B}\propto|g_{0}|^{2}\)) as a function of the angle of incidence of the optical fields (see section S2 of supplementary information). The coupling strength exhibits a Gaussian dependence on the angle (Fig. 3c), with peak coupling centered at \(\theta_{0}=7.8^{\circ}\) and an angular bandwidth of \(0.9^{\circ}\), which agrees with the predicted bandwidth of \(0.96^{\circ}\). The peak coupling at \(\theta_{0}=7.8^{\circ}\) is determined by the angle of incidence of the optical fields and the resultant wavevector difference, but the acoustic mode frequencies are independently fixed by the cavity geometry and the acoustic mirror response. The effective optomechanical coupling rate is maximized when the center of the optical phase-matching envelope coincides with the peak reflection frequency of the acoustic mirrors (point outlined with a purple circle in Fig. 3c and illustrated in the left panel of Fig. 3d) and decreases as they are mismatched (cyan circle in Fig. 3c and illustrated in the right panel of Fig. 3d). Because the optomechanical gain bandwidth and the associated driven acoustic modes result from the optical spatial profiles, this technique presents the unique capability of tailoring the optomechanical gain profile for specific applications, from multi-mode optomechanics to tunable single-frequency applications.
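Inverting the angular bandwidth relation \(\delta\theta=\sqrt{2}/(r_{0}k_{0})\) gives a rough estimate of the optical beam radius implied by the quoted bandwidth (again assuming a 1550 nm wavelength, our inference rather than a stated value):

```python
import numpy as np

lam_opt = 1.55e-6                 # assumed optical wavelength (m)
k0 = 2 * np.pi / lam_opt          # optical wavenumber
d_theta = np.deg2rad(0.96)        # predicted angular bandwidth from the text
r0 = np.sqrt(2) / (k0 * d_theta)  # implied optical beam radius, ~21 um
print(r0 * 1e6)
```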
## Non-contact Probing of SAW Cavity Dissipation Mechanisms

Acoustic dissipation is typically measured using electromechanical techniques, which also include the dissipation from external device structures such as the electrodes, electrical ports, and impedance matching circuits, limiting insights into material and structural dissipation mechanisms. In contrast, the coherent optical interaction is contact-free and not limited by these extrinsic effects. A direct probe of phonon loss mechanisms will be valuable for basic material science as well as for optimizing novel SAW device technologies. Here the coherent optical technique is used to determine the dominant loss mechanisms, between SAW propagation and mirror losses, in Gaussian resonators and to extract the temperature dependence of the dissipation. The acoustic quality factor is measured as a function of mirror separation (i.e., cavity length) and temperature for both the [100]-oriented (piezo-inactive) and [110]-oriented (piezo-active) cavities. The measured cavities are all designed to have identical parameters except for the mirror separation, which varies from \(150\,\mu m\) to \(500\,\mu m\).

Figure 4: Characterizing loss in SAW devices. Quality factor as a function of cavity length for cavities oriented along the a) [100] and b) [110] directions. Both sets of cavities display a linear dependence of quality factor on cavity length, indicating that cavity loss is dominated by the acoustic mirrors. Quality factor as a function of temperature for cavities oriented along the c) [100] and d) [110] directions. The two orientations display qualitatively distinct dependencies, suggesting differences in acoustic loss mechanisms.

The cavity lengths are chosen to minimize effects resulting from optical absorption in the metallic reflectors (see Section S9 of supplementary information). For both cavity orientations, the acoustic quality factor displays a linear dependence on cavity length (Fig. 4a, Fig. 4b), with the quality factor increasing for larger cavity lengths. A linear dependence on cavity length suggests that the losses in the SAW cavities primarily occur within the acoustic mirrors, through mechanisms such as scattering into the bulk, ohmic losses, and acoustic losses within the reflectors (see section S8 of supplementary information). The dependence of the quality factor on temperature is also investigated from \(T=4\) K to \(160\) K (Fig. 4c-d) for fixed mirror separations (\(L=500\,\mu m\) for the [100]-oriented cavities and \(L=350\,\mu m\) for the [110]-oriented cavities). The [100]-oriented cavities exhibit a sharp fall and a subsequent plateau at \(Q\sim 20{,}000\) within the measured temperature range. In contrast, the [110] cavities exhibit a linear decrease of \(Q\) with temperature. These measurements suggest that, while both cavity types are limited by losses occurring within the acoustic mirrors, the specific mirror loss mechanisms likely differ. Previous measurements of Gaussian SAW cavities on GaAs without the potential for ohmic losses in the mirrors (through superconducting reflectors[65] as well as non-metallic reflectors[61]), demonstrating large acoustic quality factors (\(2\times 10^{4}\)), suggest that the losses observed in the [110]-oriented, piezo-active, cavities primarily result from ohmic losses within the metallic reflectors. This is also consistent with the observations that the [100]-oriented cavities, without piezoelectricity, support much higher quality factors and have a distinct temperature-dependent behavior. Additional insights could be derived from temperature-dependent quality factor measurements at additional cavity lengths, including longer lengths where the effects of mirror loss are reduced, from cavities where ohmic losses are reduced, such as superconducting-mirror cavities, as well as from alternative cuts and material types. Importantly, because of the non-contact nature of the coherent optical coupling, these measurements directly reflect intrinsic device properties, as opposed to details of the probe, providing a rich source of information across a wide range of relevant SAW device parameters. This is also illustrated in Section S9 of supplementary information, where the effects of optical absorption are clearly delineated from those of electrostriction through controlled measurements.
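A standard cavity-lifetime argument (our addition, not a derivation from the text) makes the observed linear scaling plausible: if a fixed fraction \(\alpha_{m}\) of the acoustic energy is lost at each mirror reflection, the phonon lifetime is the round-trip time \(T_{\mathrm{rt}}=2L_{\mathrm{eff}}/\mathrm{v}_{\mathrm{R}}\) divided by the round-trip loss \(2\alpha_{m}\), so

\[\tau\approx\frac{T_{\mathrm{rt}}}{2\alpha_{m}}=\frac{L_{\mathrm{eff}}}{\mathrm{v}_{\mathrm{R}}\,\alpha_{m}},\qquad Q=\Omega\,\tau\approx\frac{\Omega\,L_{\mathrm{eff}}}{\mathrm{v}_{\mathrm{R}}\,\alpha_{m}}\propto L_{\mathrm{eff}},\]

and mirror-dominated dissipation therefore yields a quality factor that grows linearly with mirror separation, as observed in Fig. 4a-b.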
The optomechanical coupling rate can also be improved significantly through reduced acoustic mode volumes in cavities with smaller acoustic waists (see section S10 of supplementary information). Moreover, because the acoustic mode volume of the Gaussian SAW cavities scales inversely with the acoustic frequency, GHz SAW cavities naturally offer increased coupling strengths. Acoustic cavity losses can be further reduced by adopting etched-groove reflectors in place of metallic strips, eliminating both ohmic losses within the reflectors on piezoelectric substrates and additional acoustic losses within the reflectors. A natural extension of the technique presented here would be to enclose the system within optical cavities. A SAW-mediated cavity optomechanical system with an operation frequency of \(\sim 4\) GHz, Q-factors well exceeding \(10^{5}\), and coupling rates comparable to nanomechanical systems (\(g_{0}/2\pi\sim 10\) kHz) could be achieved through straightforward improvements detailed in supplementary information section S10. The power-handling capability of this system, limited only by material damage, allows for large intracavity photon numbers (\(n_{c}>10^{9}\)), which can consequently enable large optomechanical cooperativities (\(C_{\mathrm{om}}>1000\)) (see section S10 of supplementary information). This platform therefore yields the high power-handling capability of bulk optomechanical systems[59, 66] while also offering large coupling rates, small sizes, and ideal integrability with quantum systems and sensing devices. A SAW cavity-optomechanical platform may have several straightforward applications. Strain fields of surface acoustic phonon modes can be readily coupled to a range of qubit systems, including spin qubits, quantum dots, and superconducting qubits, enabling novel quantum transduction strategies. Optical coupling to several other strain-sensitive quantum systems, including superfluids and 2D materials, can also be realized, which could yield new fundamental insights into novel condensed matter phenomena. The SAW-based cavity optomechanical system could also serve as an alternative platform for microwave-to-optical transduction schemes, circumventing conventional challenges such as poor phonon-injection efficiencies, low power-handling capabilities, and fabrication challenges[7, 49, 50]. Beyond novel devices for quantum systems, the demonstrated techniques and devices also represent an attractive strategy for realizing a new class of non-contact, all-optical SAW-based sensors with targets ranging from small molecules to large biological entities, including viruses and bacteria, without electrical contacts or constraints. Moreover, in contrast to prior electromechanical techniques, the material versatility of the demonstrated optomechanical coupling enables broadly applicable material spectroscopy for basic studies of phonons and material science. In summary, here we demonstrate coherent optical coupling to surface acoustic cavities on crystalline substrates. A novel non-collinear Brillouin-like parametric interaction accesses high-frequency Gaussian SAW cavity modes without the need for piezoelectric coupling, enabling record cavity quality factors. Optomechanical coupling in SAW cavities could be enabling for hybrid quantum systems, condensed matter physics, SAW-based sensing, and material spectroscopy.
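For orientation, the quoted cooperativity follows from the standard cavity-optomechanics expression \(C_{\mathrm{om}}=4g_{0}^{2}n_{c}/(\kappa\Gamma_{m})\). A back-of-the-envelope sketch using the projected figures above and an assumed optical linewidth \(\kappa\) (not specified in the text):

```python
import numpy as np

g0      = 2*np.pi*10e3     # projected optomechanical coupling rate (rad/s)
n_c     = 1e9              # intracavity photon number (power-handling limited)
Omega_m = 2*np.pi*4e9      # acoustic frequency (rad/s)
Q_m     = 1e5              # projected acoustic quality factor
Gamma_m = Omega_m/Q_m      # acoustic linewidth (rad/s)
kappa   = 2*np.pi*100e6    # assumed optical cavity linewidth (rad/s)

C_om = 4*g0**2*n_c/(kappa*Gamma_m)
print(f"C_om ~ {C_om:.2g}")   # well above 1000 for these assumptions
```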
For hybrid quantum systems, this interaction, in conjunction with demonstrated techniques for strong coupling of SAWs to quantum systems (e.g., qubits, 2D materials, and superfluids), could form the basis for the next generation of hybrid quantum platforms. For sensing, this platform could enable a new class of SAW sensors agnostic to piezoelectric properties and free of electrical constraints and resulting parasitic effects. Finally, the coherent coupling technique enables detailed phonon spectroscopy of intrinsic mechanical loss mechanisms for a wide array of materials without the limitations of extrinsic probing devices.

## Methods

### Device Fabrication:

To fabricate the GaAs devices, a single-crystal [100]-cut GaAs substrate is coated with a PMMA polymer layer and the required reflector profiles are drawn on the polymer with an e-beam lithography tool. Subsequently, the required thickness of metal, in this case 200 nm of aluminum, is deposited using an ultra-high-vacuum e-beam evaporation system. Finally, the excess polymer is removed in an acetone bath to obtain the experimental devices. A more detailed description of the device fabrication is provided in section S4 of supplementary information.

### Numerical Methods:

Determining the exact acoustic reflector profiles requires the SAW group velocity as a function of angle from the chosen SAW cavity axis, i.e., the anisotropy of the substrate. This is calculated by numerically solving the acoustic wave equations with appropriate boundary conditions. To efficiently confine SAW fields, the shape of the reflector must match the radius of curvature of the confined Gaussian mode. The calculated group velocity can then be used to determine the radius of curvature of the reflectors as a function of the axial location and angle from the cavity axis (\(R(x,\theta)\)). These reflector profiles are imported into finite element software to validate the cavity designs by verifying the stability of high-Q Gaussian-like SAW modes (Fig. 1d-1f). A detailed description of the FEM simulation procedure is provided in section S3 of supplementary information.

### Phonon Spectroscopy:

A sensitive phonon-mediated four-wave mixing measurement technique is developed, building on related techniques for measuring conventional Brillouin interactions. The SAW cavity mode is driven with two optical tones, which are incident at angles designed to target specific phonon frequencies. A probe beam at a disparate wavelength, incident collinear with one of the drive tones, scatters off the optically driven SAW cavity mode to generate the measured response. The angle of incidence of the optical fields is controlled through off-axis incidence on a well-calibrated aspheric focusing lens (see section S6 of supplementary information). The optomechanically scattered signal is collected on a single-mode collimator and spectrally filtered using a fiber-Bragg grating to reject excess drive light. The resulting signal is combined with a local oscillator (LO) and measured with a balanced detector (see section S5 of supplementary information). The measured signal is a coherent sum of frequency-independent Kerr four-wave mixing in the bulk of the crystalline substrate and the optomechanical response, giving rise to Fano-like resonances. This spectroscopy technique can resolve optomechanical responses with sub-femtowatt optical powers. A detailed description of the experimental apparatus and the angle-tuning technique is provided in sections S5 and S6 of supplementary information, respectively.
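For intuition, the Fano-like lineshape arises because the frequency-flat Kerr background and the Lorentzian optomechanical response add coherently before detection. A minimal model of the detected spectrum follows; all parameter values are illustrative, not fitted to the data:

```python
import numpy as np

def fwm_spectrum(Omega, Omega_m, Gamma_m, a_kerr, a_om, phase):
    """|Kerr background + optomechanical Lorentzian|^2, a Fano-like resonance."""
    lorentzian = a_om/(1 - 2j*(Omega - Omega_m)/Gamma_m)
    return np.abs(a_kerr*np.exp(1j*phase) + lorentzian)**2

Omega  = 2*np.pi*np.linspace(499e6, 504e6, 2000)   # frequency sweep (rad/s)
signal = fwm_spectrum(Omega, Omega_m=2*np.pi*501.5e6,
                      Gamma_m=2*np.pi*72e3,         # ~Omega_m/Q for Q ~ 7000
                      a_kerr=0.3, a_om=1.0, phase=0.8)
print(f"peak near {Omega[np.argmax(signal)]/(2*np.pi)/1e6:.2f} MHz")
```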
## References

* 1. Morgan, D. _Surface Acoustic Wave Filters_ (2007). doi:10.1016/B978-0-12-372537-0.X5000-6.
* 2. Ruppel, C. C. W. & Fjeldly, T. A. _Advances in Surface Acoustic Wave Technology, Systems and Applications_ Vol. 20 (2001).
* 3. Ruppel, C. C. W. Acoustic Wave Filter Technology–A Review. _IEEE Trans. Ultrason. Ferroelectr. Freq. Control_ **64**, 1390–1400 (2017).
* 4. Lange, K. Bulk and Surface Acoustic Wave Sensor Arrays for Multi-Analyte Detection: A Review. _Sensors_ **19**, 5382 (2019).
* 5. Pan, Y. _et al._ Interface and Sensitive Characteristics of the Viscoelastic Film Used in a Surface Acoustic Wave Gas Sensor. _ACS Sensors_ **7**, 612–621 (2022).
* 6. Schuetz, M. J. A. _et al._ Universal quantum transducers based on surface acoustic waves. _Phys. Rev. X_ **5** (2015).
* 7. Delsing, P. _et al._ The 2019 surface acoustic waves roadmap. _J. Phys. D: Appl. Phys._ **52**, 353001 (2019).
* 8. Moores, B. A., Sletten, L. R., Viennot, J. J. & Lehnert, K. W. Cavity Quantum Acoustic Device in the Multimode Strong Coupling Regime. _Phys. Rev. Lett._ **120**, 227701 (2018).
* 9. Forsch, M. _et al._ Microwave-to-optics conversion using a mechanical oscillator in its quantum ground state. _Nat. Phys._ **16**, 69–74 (2019).
* 10. Whiteley, S. J. _et al._ Spin–phonon interactions in silicon carbide addressed by Gaussian acoustics. _Nat. Phys._ **15**, 490–495 (2019).
* 11. Aref, T. _et al._ Quantum Acoustics with Surface Acoustic Waves. In _Superconducting Devices in Quantum Optics_ (eds. Hadfield, R. H. & Johansson, G.) 217–244 (Springer International Publishing, 2016). doi:10.1007/978-3-319-24091-6_9.
* 12. Manenti, R. _et al._ Circuit quantum acoustodynamics with surface acoustic waves. _Nat. Commun._ **8**, 975 (2017).
* 13. Bienfait, A. _et al._ Phonon-mediated quantum state transfer and remote qubit entanglement. _Science_ **364**, 368–371 (2019).
* 14. Satzinger, K. J. _et al._ Quantum control of surface acoustic wave phonons. (2018).
* 15. Imany, P. _et al._ Quantum phase modulation with acoustic cavities and quantum dots. _Optica_ **9**, 501–504 (2022).
* 16. Metcalfe, M., Carr, S. M., Muller, A., Solomon, G. S. & Lawall, J. Resolved sideband emission of InAs/GaAs quantum dots strained by surface acoustic waves. _Phys. Rev. Lett._ **105**, 037401 (2010).
* 17. Weiss, M. _et al._ Optomechanical wave mixing by a single quantum dot. _Optica_ **8**, 291–300 (2021).
* 18. Peng, R. _et al._ Long-range transport of 2D excitons with acoustic waves. _Nat. Commun._ **13**, 1–7 (2022).
* 19. Fandan, R. _et al._ Dynamic Local Strain in Graphene Generated by Surface Acoustic Waves. _Nano Lett._ **20**, 402–409 (2020).
* 20. Fandan, R., Pedros, J. & Calle, F. Exciton–Plasmon Coupling in 2D Semiconductors Accessed by Surface Acoustic Waves. _ACS Photonics_ **8**, 1698–1704 (2021).
* 21. Byeon, H. _et al._ Anomalous Attenuation of Piezoacoustic Surface Waves by Liquid Helium Thin Films. _J. Low Temp. Phys._ **195**, 336–342 (2019).
* 22. Byeon, H. _et al._ Piezoacoustics for precision control of electrons floating on helium. _Nat. Commun._ **12**, 1–7 (2021).
* 23. Balram, K. C., Davanco, M. I., Song, J. D. & Srinivasan, K. Coherent coupling between radiofrequency, optical and acoustic waves in piezo-optomechanical circuits. _Nat. Photonics_ **10**, 346–352 (2016).
* 24. Mirhosseini, M., Sipahigil, A., Kalaee, M. & Painter, O. Superconducting qubit to optical photon transduction. _Nature_ **588**, 599–603 (2020).
* 25. Barzanjeh, S. _et al._ Optomechanics for quantum technologies. _Nat. Phys._ **18**, 15–24 (2021).
* 26. Mason, D., Chen, J., Rossi, M., Tsaturyan, Y. & Schliesser, A. Continuous force and displacement measurement below the standard quantum limit. _Nat. Phys._ **15**, 745–749 (2019).
* 27. Schreppler, S. _et al._ Optically measuring force near the standard quantum limit. _Science_ **344**, 1486–1489 (2014).
* 28. Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: Beating the standard quantum limit. _Science_ **306**, 1330–1336 (2004).
* 29. Teufel, J. D., Donner, T., Castellanos-Beltran, M. A., Harlow, J. W. & Lehnert, K. W. Nanomechanical motion measured with an imprecision below that at the standard quantum limit. _Nat. Nanotechnol._ **4**, 820–823 (2009).
* 30. Andrews, R. W. _et al._ Bidirectional and efficient conversion between microwave and optical light. _Nat. Phys._ **10**, 321–326 (2014).
* 31. Wollman, E. E. _et al._ Quantum squeezing of motion in a mechanical resonator. _Science_ **349** (2015).
* 32. del Pino, J., Slim, J. J. & Verhagen, E. Non-Hermitian chiral phononics through optomechanically induced squeezing. _Nature_ **606**, 82–87 (2022).
* 33. Genes, C., Vitali, D., Tombesi, P., Gigan, S. & Aspelmeyer, M. Ground-state cooling of a micromechanical oscillator: generalized framework for cold damping and cavity-assisted cooling schemes. (2007). doi:10.1103/PhysRevA.77.033804.
* 34. Chan, J. _et al._ Laser cooling of a nanomechanical oscillator into its quantum ground state. _Nature_ **478**, 89–92 (2011).
* 35. Xu, H., Jiang, L., Clerk, A. A. & Harris, J. G. E. Nonreciprocal control and cooling of phonon modes in an optomechanical system. _Nature_ **568**, 65–69 (2019).
* 36. Schliesser, A., Arcizet, O., Riviere, R., Anetsberger, G. & Kippenberg, T. J. Resolved-sideband cooling and position measurement of a micromechanical oscillator close to the Heisenberg uncertainty limit. _Nat. Phys._ **5**, 509–514 (2009).
* 37. Kimble, H. J. The quantum internet. _Nature_ **453**, 1023–1030 (2008).
* 38. Wehner, S., Elkouss, D. & Hanson, R. Quantum internet: A vision for the road ahead. _Science_ **362** (2018).
* 39. Simon, C. Towards a global quantum network. _Nat. Photonics_ **11**, 678–680 (2017).
* 40. Loudon, R. Theory of surface-ripple Brillouin scattering by solids. _Phys. Rev. Lett._ **40**, 581–583 (1978).
* 41. Mishra, S. & Bray, R. Surface-ripple mechanism for Brillouin scattering of reflected light from bulk acoustic waves. _Phys. Rev. Lett._ **39**, 222–225 (1977).
* 42. Renninger, W. H., Kharel, P., Behunin, R. O. & Rakich, P. T. Bulk crystalline optomechanics. _Nat. Phys._ **14**, 601–607 (2018).
* 43. Kharel, P. _et al._ High-frequency cavity optomechanics using bulk acoustic phonons. _Sci. Adv._ **5**, 1–9 (2019).
* 44. Hill, J. T., Safavi-Naeini, A. H., Chan, J. & Painter, O. Coherent optical wavelength conversion via cavity optomechanics. _Nat. Commun._ **3**, 1–7 (2012).
* 45. Meenehan, S. M. _et al._ Pulsed excitation dynamics of an optomechanical crystal resonator near its quantum ground state of motion. _Phys. Rev. X_ **5**, 041002 (2015).
* 46. Ren, H. _et al._ Two-dimensional optomechanical crystal cavity with high quantum cooperativity. _Nat. Commun._ **11**, 1–10 (2020).
* 47. Bochmann, J., Vainsencher, A., Awschalom, D. D. & Cleland, A. N. Nanomechanical coupling between microwave and optical photons. _Nat. Phys._ **9**, 712–716 (2013).
* 48. Arnold, G. _et al._ Converting microwave and telecom photons with a silicon photonic nanomechanical interface. _Nat. Commun._ **11** (2020).
* 49. Balram, K. C. & Srinivasan, K. Piezoelectric Optomechanical Approaches for Efficient Quantum Microwave-to-Optical Signal Transduction: The Need for Co-Design. _Adv. Quantum Technol._ **5**, 2100095 (2022).
* 50. Wu, M., Zeuthen, E., Balram, K. C. & Srinivasan, K. Microwave-to-Optical Transduction Using a Mechanical Supermode for Coupling Piezoelectric and Optomechanical Resonators. _Phys. Rev. Appl._ **13**, 014027 (2020).
* 51. Okada, A. _et al._ Cavity Enhancement of Anti-Stokes Scattering via Optomechanical Coupling with Surface Acoustic Waves. _Phys. Rev. Appl._ **10** (2018).
* 52. Bahl, G., Zehnpfennig, J., Tomes, M. & Carmon, T. Stimulated optomechanical excitation of surface acoustic waves in a microdevice. _Nat. Commun._ **2**, 1–6 (2011).
* 53. Matsko, A. B., Savchenkov, A. A., Ilchenko, V. S., Seidel, D. & Maleki, L. Optomechanics with surface-acoustic-wave whispering-gallery modes. _Phys. Rev. Lett._ **103**, 257403 (2009).
* 54. Katzman, M. _et al._ Surface acoustic microwave photonic filters in standard silicon-on-insulator. _Optica_ **8**, 697–707 (2021).
* 55. Munk, D. _et al._ Surface acoustic wave photonic devices in silicon on insulator. _Nat. Commun._ **10**, 1–9 (2019).
* 56. Boyd, R. W. _Nonlinear Optics_, 2nd ed. (Academic Press, 2003).
* 57. Msall, M. E. & Santos, P. V. Focusing Surface-Acoustic-Wave Microcavities on GaAs. _Phys. Rev. Appl._ **13**, 014037 (2020).
* 58. Decrescent, R. A. _et al._ Large Single-Phonon Optomechanical Coupling between Quantum Dots and Tightly Confined Surface Acoustic Waves in the Quantum Regime. _Phys. Rev. Appl._ **18**, 034067 (2022).
* 59. Renninger, W. H., Kharel, P., Behunin, R. O. & Rakich, P. T. Bulk crystalline optomechanics. _Nat. Phys._ (2018). doi:10.1038/s41567-018-0090-3.
* 60. Wiederhecker, G. S., Dainese, P. & Alegre, T. P. M. Brillouin optomechanics in nanophotonic structures. _APL Photonics_ **4** (2019).
* 61. Imany, P. _et al._ Etched-groove focusing GaAs surface acoustic wave cavities for enhanced coupling to quantum emitters. In _Conference on Lasers and Electro-Optics_ (2021), paper STh1D.7. doi:10.1364/CLEO_SI.2021.STh1D.7.
* 62. Gordon, K. & Farnell, G. W. Resistive Losses in Acoustic Surface Wave Multistrip Couplers. _IEEE Trans. Sonics Ultrason._ **22**, 358–368 (1975).
* 63. Ingebrigtsen, K. A. Normal mode representation of surface wave multistrip couplers. 163–167 (1973). doi:10.1109/ULTSYM.1973.196173.
* 64. Lakin, K. M. Electrode Resistance Effects in Interdigital Transducers. _IEEE Trans. Microw. Theory Tech._ **22**, 418–424 (1974).
* 65. Andersson, G. _et al._ Squeezing and Multimode Entanglement of Surface Acoustic Wave Phonons. _PRX Quantum_ **3**, 010312 (2022).
* 66. Kharel, P. _et al._ Multimode Strong Coupling in Cavity Optomechanics. _Phys. Rev. Appl._ **18**, 024054 (2022).

## Supplementary Information for: Coherent Optical Coupling to Surface Acoustic Wave Devices

Arjun Iyer1, Yadav P. Kandel2, Wendao Xu1, John M. Nichol2 and William H. Renninger1,*

1Institute of Optics, University of Rochester, Rochester, NY 14627, USA. 2Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA.
*email: [email protected] **S1. Theoretical Estimation of Optomechanical Coupling Strength** In this section, we derive an analytic expression for the coupling rate for parametric coupling between traveling-wave non-collinear optical fields and a standing wave SAW cavity mode. The assumptions made during the derivation are minimal and provide a useful alternative to computationally intensive FEM calculations. A simple analytical expression allows the extraction of dependencies between coupling rate and material parameters. The system is modeled as follows (Fig. S1)- a semi-infinite crystalline medium occupies the region z < 0 and supports surface acoustic waves confined to the material interface, z = 0. Periodic acoustic reflectors along the x-axis confine surface acoustic fields, resulting in a Gaussian SAW cavity. Two non-collinear optical waves, pump and stokes fields, which subtend equal but opposite angles (0) with the z-axis, are incident from outside the medium (z > 0) in the region enclosed by the two acoustic mirrors. Because optomechanical scattering is a vectorial process, the resulting optomechanical coupling is a function of the polarization of incident optical fields. We derive optomechanical coupling strengths for the following cases -1) both optical fields are TE polarized, 2) the pump is TE-polarized, and the Stokes field is TM-polarized (TE-TM), and 3) both fields are TM-polarized (TM-TM). #### A) TE-polarized optical fields In the case of TE-polarized optical fields, the electric field for the pump (\(\rm E_{p}\)) and Stokes (\(\rm E_{s}\)) fields outside the medium close to the surface (\(\rm z\to 0^{+}\)) are given as- \[\rm E_{p}(x,y,z=0^{+}\ )=A_{p}\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{oy} ^{2}}\right)\exp(ik_{p}\sin\theta\,x)\,\hat{y} \tag{1}\] \[\rm E_{s}(x,y,z=0^{+}\ )=A_{s}\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_ {oy}^{2}}\right)\exp(-ik_{s}\sin\theta\,x)\,\hat{y} \tag{2}\] Where \(\rm A_{p}(A_{s})\) and \(\rm k_{p}(k_{s})\) refer to the amplitude and wavevector of the pump (Stokes) field, respectively. Since the acoustic frequencies are much smaller than optical frequencies, we assume \(\rm k_{p}\approx k_{s}=k_{0}\). \(\rm r_{ox}\) and \(\rm r_{oy}\) refer to the effective optical beam radius along the x and y-axis, respectively. Since the fields are incident at an angle, the resultant distribution on the surface is not symmetric along the x and y-axes. The effective beam waist along the x-axis (\(\rm r_{ox}\)) can be expressed as \(\rm r_{ox}=r_{0}/\cos\theta\), while the beam waist along the y-axis remains unchanged, \(\rm r_{oy}=r_{0}\), where \(\rm r_{0}\) is the incident beam radius. The electric fields on the other side (\(\rm z\to 0^{-}\)) of the interface can be readily derived from Eqn.1 and Eqn. 2 by multiplying the appropriate Fresnel transmission coefficients (\(\rm\tau(\theta)\) ) as follows. 
\[E_{p}(x,y,z=0^{-})=A_{p}\tau(\theta)\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{oy}^{2}}\right)\exp(ik_{0}\sin\theta\,x)\,\hat{y} \tag{3}\]

\[E_{s}(x,y,z=0^{-})=A_{s}\tau(\theta)\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{oy}^{2}}\right)\exp(-ik_{0}\sin\theta\,x)\,\hat{y} \tag{4}\]

Accounting for the Gaussian mode profile of the \(m^{th}\) SAW cavity mode, the acoustic field can be expressed as[1, 2]

\[u_{x}=U_{0}(\exp(\eta q_{m}z-i\phi)+c.c.)\exp\left(-\frac{x^{2}}{r_{ax}^{2}}-\frac{y^{2}}{r_{ay}^{2}}\right)\cos(q_{m}x) \tag{5}\]

\[u_{z}=-\frac{U_{0}}{i}(\gamma\exp(\eta q_{m}z-i\phi)+c.c.)\exp\left(-\frac{x^{2}}{r_{ax}^{2}}-\frac{y^{2}}{r_{ay}^{2}}\right)\cos(q_{m}x) \tag{6}\]

where \(u_{x},u_{z}\) are the x- and z-components of the acoustic displacement, respectively, and the fields decay into the substrate (z < 0). \(U_{0},q_{m},r_{ax},r_{ay}\) refer to the amplitude, the wavevector of the \(m^{th}\) cavity mode, the waist of the cavity mode along the x-axis, given by \(r_{ax}=L_{eff}/2\), and the waist along the y-axis, respectively. \(\eta,\phi,\gamma\) are material-dependent parameters obtained by solving the acoustic wave equation with appropriate boundary conditions[1, 2]. 'c.c.' refers to the complex conjugate of the preceding term. Given the acoustic and electric fields, we define the traveling-wave coupling rate (\(g_{0}\)), based on conventional definitions, as[3]

\[g_{0}=-\frac{\omega x_{zpf}}{2}\frac{\left<f\cdot u\right>}{N_{p}N_{s}} \tag{7}\]

where \(\omega,f,u\) refer to the optical frequency, the optical force distribution, and the acoustic field distribution. The acousto-optic overlap is defined through an overlap integral given by \(\left<f\cdot u\right>=\int f\cdot u^{*}dV\). \(x_{zpf}\) is the zero-point displacement of the mechanical mode, defined as \(x_{zpf}=\sqrt{\frac{\hbar}{2m_{eff}\Omega_{m}}}\), where \(\Omega_{m}\) is the frequency of the acoustic mode, \(m_{eff}=\int\rho|u|^{2}dV\) is the effective mass of the mechanical mode, and \(\rho\) is the density of the substrate. \(N_{p(s)}=\sqrt{\frac{1}{2}\epsilon_{0}\int\epsilon\left|E_{p(s)}\right|^{2}dV}\) serves as the power normalization factor. The volume integrals are performed over the effective interaction volume, which in the case of SAWs is an acoustic-wavelength-sized slice of material near the interface within which the energy of the SAW is confined. Note that the traveling-wave coupling rate, as defined in this work, is similar in form to the coupling rate defined in conventional cavity optomechanics[3, 4], except that the integrals are only performed over the interaction volume and not over the entire optical cavity, as would be the case in standard cavity-optomechanical calculations[4, 5]. An equivalent cavity-optomechanical coupling rate (\(g_{0}^{c}\)) can be derived from \(g_{0}\) as follows:

\[g_{0}^{c}=g_{0}\left(\frac{l_{a}}{l_{opt}}\right) \tag{8}\]

where \(l_{a}\) is the effective length of the interaction volume as used when determining \(g_{0}\), and \(l_{opt}\) is the length of the optical cavity. Alternatively, the traveling-wave coupling rate \(g_{0}\) can be understood as the largest possible cavity-optomechanical coupling rate achievable in a SAW-based cavity optomechanical system, achieved when the optical cavity mode and the acoustic mode perfectly overlap, i.e., \(l_{opt}=l_{a}\).
The strength of the interactions studied in this work can also be quantified through a Brillouin-like gain coefficient (G), defined as[6]

\[G=\frac{\omega\,Q_{m}}{2\Omega_{m}^{2}P_{p}P_{s}}\frac{|\langle f\cdot u^{*}\rangle|^{2}}{\langle u\cdot\rho u\rangle} \tag{9}\]

where \(P_{p},P_{s}\) are the incident pump and Stokes powers, and \(Q_{m}\) is the mechanical quality factor. The coupling rate and the Brillouin gain coefficient are related as

\[|g_{0}|=\left(\frac{v_{p}v_{s}\hbar\omega_{p}\Omega_{m}G}{4\Omega_{m}^{2}Q_{m}}\right)^{1/2} \tag{10}\]

Next, we derive the acousto-optic contributions from the optical forces, namely radiation pressure and electrostriction, on the surface and in the bulk of the medium.

**Radiation Pressure**

The radiation pressure force, denoted \(P_{rp}\), is given by[7]

\[P_{rp}=\frac{1}{2}\epsilon_{0}\,E_{pt}E_{st}^{*}(\epsilon-1)-\frac{1}{2}\epsilon_{0}^{-1}D_{pn}D_{sn}^{*}\left(\frac{1}{\epsilon}-1\right) \tag{11}\]

where \(E_{p(s)t(n)},D_{p(s)t(n)}\) refer to the tangential (normal) pump (Stokes) electric and displacement fields, and \(\epsilon_{0},\epsilon\) refer to the dielectric permittivities of vacuum and the material, respectively. The radiation pressure force points along the surface normal, i.e., along the positive z-axis. For TE fields \(E_{pn}=E_{sn}=0\), and the resultant expression for \(P_{rp}\) simplifies to

\[P_{rp}(x,y)=\frac{1}{2}\epsilon_{0}(\epsilon-1)|\tau|^{2}\,A_{p}A_{s}^{*}e^{-2x^{2}/r_{ox}^{2}}\,e^{-2y^{2}/r_{0}^{2}}\,e^{i2k_{0}\sin\theta\,x}\,\hat{z} \tag{12}\]

The resulting radiation pressure force only overlaps with the z-component of the acoustic displacement. The acousto-optic overlap can be expressed as

\[\langle f\cdot u\rangle_{rp}=\int_{-\infty}^{\infty}dx\,dy\;P_{rp}(x,y)u_{z}^{*}(z=0,x,y) \tag{13}\]

\[=\epsilon_{0}\frac{\epsilon-1}{2}|\tau|^{2}\,A_{p}A_{s}^{*}\,\frac{U_{0}}{i}\,2Re\big{(}e^{-i\phi}\gamma\big{)}\int_{-\infty}^{\infty}dx\,e^{-ax^{2}}e^{i\Delta kx}\cos q_{m}x\;\int_{-\infty}^{\infty}dy\,e^{-by^{2}} \tag{14}\]

where we define the additional parameters \(a,\Delta k\) and \(b\) as

\[a=\frac{2}{r_{ox}^{2}}+\frac{1}{r_{ax}^{2}} \tag{15}\]

\[b=\frac{2}{r_{0}^{2}}+\frac{1}{r_{ay}^{2}} \tag{16}\]

\[\Delta k=2k_{0}\sin\theta \tag{17}\]

Since the integrals over the spatial variables \(x\) and \(y\) are independent, we calculate the two integrals separately:

\[I_{1}=\int_{-\infty}^{\infty}dy\;e^{-by^{2}}=\sqrt{\frac{\pi}{b}} \tag{18}\]

\[I_{2}=\int_{-\infty}^{\infty}dx\;e^{-ax^{2}}e^{i\Delta kx}\cos q_{m}x=\int_{-\infty}^{\infty}dx\;e^{-ax^{2}}e^{i\Delta kx}\,\frac{1}{2}(e^{-iq_{m}x}+e^{+iq_{m}x})=\frac{1}{2}e^{-\frac{(\Delta q)^{2}}{4a}}\sqrt{\frac{\pi}{a}} \tag{19}\]

where \(\Delta q=\Delta k-q_{m}\) is the wavevector difference between the optical forces and the SAW cavity mode. While deriving Eqn. 19, the term corresponding to the part of the standing-wave acoustic mode not phase matched to the optical forces (\(\exp(i(\Delta k+q_{m})x)\)) is neglected, similar to the rotating-wave approximation in atomic physics. Note in Eqn. 19 that the resulting acousto-optic overlap is a function of the phase mismatch \(\Delta q\) and is maximum when the optical forces are perfectly phase matched with the acoustic cavity mode (\(\Delta q=0\), i.e., \(\Delta k=q_{m}\)). This dependence results in the phase-matching effects outlined in the main text. The next section will explore further implications of the dependence of the acousto-optic overlap and coupling rate on phase mismatch. For the subsequent calculations in this section, we assume \(\Delta q=0\).
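A quick numerical sanity check of Eqn. 19 follows; this is a sketch with representative values for \(a\), \(q_{m}\) and \(\Delta q\), not part of the derivation:

```python
import numpy as np

a, qm, dq = 1.8e9, 1.102e6, 2.0e4      # 1/m^2 and 1/m, representative device values
dk = qm + dq                            # optical force wavevector, Delta k

# Integrate Re(I2) over ~8.5 Gaussian widths on a grid resolving the oscillations
x = np.linspace(-2e-4, 2e-4, 400001)
numeric = np.trapz(np.exp(-a*x**2)*np.cos(dk*x)*np.cos(qm*x), x)
closed  = 0.5*np.exp(-dq**2/(4*a))*np.sqrt(np.pi/a)   # closed form of Eqn. 19
print(numeric, closed)  # agree, since the anti-phase-matched term is negligible
```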
The acousto-optic overlap resulting from radiation pressure forces can now be expressed as

\[\langle f\cdot u\rangle_{rp}=\epsilon_{0}\,\frac{\epsilon-1}{2i}|\tau|^{2}\;U_{0}A_{p}A_{s}^{*}\;Re\big{(}e^{-i\phi}\gamma\big{)}\frac{\pi}{\sqrt{ab}} \tag{20}\]

**Photoelastic forces**

Time-varying electric fields within a dielectric material generate time-varying photoelastic optical forces. The photoelastic stresses resulting in optical forces are derived from the photoelastic tensor. For a material with a cubic crystalline lattice whose principal axes are oriented along the assumed Cartesian axes, the stress tensor in Voigt notation is given by[7, 8]

\[\begin{pmatrix}\sigma_{xx}\\ \sigma_{yy}\\ \sigma_{zz}\\ \sigma_{yz}\\ \sigma_{zx}\\ \sigma_{xy}\end{pmatrix}=-\tfrac{1}{2}\epsilon_{0}n^{4}\begin{pmatrix}p_{11}&p_{12}&p_{12}&0&0&0\\ p_{12}&p_{11}&p_{12}&0&0&0\\ p_{12}&p_{12}&p_{11}&0&0&0\\ 0&0&0&p_{44}&0&0\\ 0&0&0&0&p_{44}&0\\ 0&0&0&0&0&p_{44}\end{pmatrix}\begin{pmatrix}E_{px}E_{sx}^{*}\\ E_{py}E_{sy}^{*}\\ E_{pz}E_{sz}^{*}\\ E_{pz}E_{sy}^{*}+E_{py}E_{sz}^{*}\\ E_{pz}E_{sx}^{*}+E_{px}E_{sz}^{*}\\ E_{py}E_{sx}^{*}+E_{px}E_{sy}^{*}\end{pmatrix} \tag{21}\]

Here we have invoked the crystal symmetry to assume \(p_{12}=p_{13}=p_{32}\); this may not hold if the coordinate system does not coincide with the principal crystal axes. The pump and Stokes electric fields are evaluated within the material. For TE fields, \(E_{px}=E_{sx}=E_{pz}=E_{sz}=0\) and the resultant stresses are:

\[\sigma_{xx}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{12}E_{py}E_{sy}^{*} \tag{22}\]

\[\sigma_{yy}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{11}E_{py}E_{sy}^{*} \tag{23}\]

\[\sigma_{zz}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{12}E_{py}E_{sy}^{*} \tag{24}\]

\[\sigma_{xy}=\sigma_{yz}=\sigma_{zx}=0 \tag{25}\]

In a system comprising homogeneous materials, photoelastic forces can exist inside each material, resulting in body forces in the bulk of the medium, and at material interfaces where discontinuous stresses are present, resulting in surface pressure (analogous to radiation pressure). We separately calculate the contributions to the acousto-optic overlap of photoelastic forces on the surface and within the bulk of the medium. The excess photoelastic surface force on the interface (\(z=0\)) is given as

\[P_{es}=\sigma_{xz}(z=0)\,\hat{x}+\sigma_{yz}(z=0)\,\hat{y}+\sigma_{zz}(z=0)\,\hat{z}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{12}E_{py}E_{sy}^{*}\,\hat{z} \tag{26}\]

The acousto-optic overlap resulting from the photoelastic surface pressure is given by

\[\langle f\cdot u\rangle_{es}=\int_{-\infty}^{\infty}dx\,dy\;P_{es}(x,y)u_{z}^{*}(z=0,x,y) \tag{27}\]

\[\langle f\cdot u\rangle_{es}=-\tfrac{1}{2i}n^{4}p_{12}\epsilon_{0}|\tau|^{2}\;U_{0}A_{p}A_{s}^{*}\;Re\big{(}e^{-i\phi}\gamma\big{)}\tfrac{\pi}{\sqrt{ab}} \tag{28}\]
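A short symbolic check of Eqns. 21-25 can be made with the sketch below; the Voigt contraction simply reproduces which stress components survive for TE-TE fields:

```python
import sympy as sp

Ey, p11, p12, p44, e0, n = sp.symbols('E p11 p12 p44 eps0 n')

# Photoelastic matrix for a cubic crystal in Voigt notation (Eqn. 21)
P = sp.Matrix([
    [p11, p12, p12, 0,   0,   0  ],
    [p12, p11, p12, 0,   0,   0  ],
    [p12, p12, p11, 0,   0,   0  ],
    [0,   0,   0,   p44, 0,   0  ],
    [0,   0,   0,   0,   p44, 0  ],
    [0,   0,   0,   0,   0,   p44]])

# TE-TE field-product vector: only E_py*E_sy^* is non-zero
Evec  = sp.Matrix([0, Ey, 0, 0, 0, 0])
sigma = -sp.Rational(1, 2)*e0*n**4*(P*Evec)
print(sigma.T)  # only sigma_xx, sigma_yy, sigma_zz survive, as in Eqns. 22-25
```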
Next, the overlap resulting from bulk photoelastic forces is calculated. The photoelastic body forces are determined from the divergence of the stress components, with vectorial components given as[8, 9]

\[f_{x}=-\partial_{x}\sigma_{xx}-\partial_{y}\sigma_{xy}-\partial_{z}\sigma_{xz}=-\partial_{x}\sigma_{xx} \tag{29}\]

\[f_{y}=-\partial_{x}\sigma_{xy}-\partial_{y}\sigma_{yy}-\partial_{z}\sigma_{yz}=-\partial_{y}\sigma_{yy} \tag{30}\]

\[f_{z}=-\partial_{x}\sigma_{xz}-\partial_{y}\sigma_{zy}-\partial_{z}\sigma_{zz}=-\partial_{z}\sigma_{zz}\propto\partial_{z}(E_{p}E_{s}^{*}) \tag{31}\]

Since the SAW cavity modes of interest have no displacement along the y-axis, the only forces of concern are \(f_{x}\) and \(f_{z}\), along which the acoustic displacements are non-zero. Note that the forces along the z-direction result from an electric field gradient along the z-axis. For the beam sizes used in the experiments (\(\sim 30\ \mu m\) at 1550 nm), which have typical optical Rayleigh lengths of \(x_{R}^{0}\sim 1\) mm, the optical fields diffract minimally along the z-axis within the decay length of the SAW cavity mode (\(\sim 5\ \mu m\)), and therefore \(f_{z}\approx 0\). Only the forces along the x-axis result in a non-zero overlap with the SAW cavity mode. Further expanding Eqn. 29, we get

\[f_{x}=-\tfrac{1}{2}\epsilon_{0}\,\epsilon^{2}\,p_{12}A_{p}A_{s}^{*}|\tau|^{2}e^{-\tfrac{2x^{2}}{r_{ox}^{2}}}e^{-\tfrac{2y^{2}}{r_{0}^{2}}}e^{i2k_{0}\sin\theta\,x}\Big{(}-i\Delta k+\frac{4x}{r_{ox}^{2}}\Big{)} \tag{32}\]

The optomechanical overlap contribution from the bulk electrostrictive forces is now given by

\[\langle f\cdot u\rangle_{eb}=-\tfrac{1}{2}\epsilon_{0}\,\epsilon^{2}\,p_{12}A_{p}A_{s}^{*}|\tau|^{2}U_{0}\int_{-\infty}^{\infty}dy\,e^{-by^{2}}\int_{-\infty}^{0}dz\,(\exp(\eta qz-i\phi)+c.c.)^{*}\int_{-\infty}^{\infty}dx\,\tfrac{1}{2}\Big{(}-iq_{m}+\frac{4x}{r_{ox}^{2}}\Big{)}e^{i\Delta qx-ax^{2}} \tag{33}\]

Eqn. 33 can be further simplified to

\[\langle f\cdot u\rangle_{eb}=-\tfrac{1}{2i}\epsilon_{0}\,\epsilon^{2}\,p_{12}A_{p}A_{s}^{*}|\tau|^{2}U_{0}\,Re\Big{(}\tfrac{e^{-i\phi}}{\eta}\Big{)}\tfrac{\pi}{\sqrt{ab}} \tag{34}\]

The total optomechanical overlap can now be written as

\[\langle f_{tot}\cdot u^{*}\rangle=\langle f\cdot u\rangle_{rp}+\langle f\cdot u\rangle_{es}+\langle f\cdot u\rangle_{eb}=\tfrac{i}{2}\,\epsilon_{0}A_{p}A_{s}^{*}|\tau|^{2}U_{0}\tfrac{\pi}{\sqrt{ab}}\,\alpha \tag{35}\]

Here we define \(\alpha\) as

\[\alpha=\left(-(\epsilon-1)Re\big{(}e^{-i\phi}\gamma\big{)}+p_{12}\epsilon^{2}Re\big{(}e^{-i\phi}\gamma\big{)}+p_{12}\epsilon^{2}Re\left(\tfrac{e^{-i\phi}}{\eta}\right)\right) \tag{36}\]

Next we calculate the effective mass of the acoustic mode, given by

\[m_{eff}=\langle u,\rho u\rangle=\int dV\,\rho\,(|u_{x}|^{2}+|u_{z}|^{2}) \tag{37}\]
Inserting the expressions for the acoustic displacements from Eqn. 5 and Eqn. 6 into Eqn. 37 and performing the requisite integrals, one obtains

\[\langle u,\rho u\rangle_{V}=|U_{0}|^{2}\rho\,\tfrac{\pi}{4q}\,r_{ay}r_{ax}\left(\tfrac{1}{Re(\eta)}+Re\left(\tfrac{e^{-2i\phi}}{\eta}\right)+\tfrac{|\gamma|^{2}}{Re(\eta)}+Re\left(\tfrac{\gamma^{2}}{\eta}e^{-i2\phi}\right)\right) \tag{38}\]

We rewrite Eqn. 38 compactly as

\[\langle u,\rho u\rangle_{V}=|U_{0}|^{2}\rho\,\tfrac{\pi}{4q}\,r_{ay}r_{ax}\,\delta \tag{39}\]

where we define \(\delta\) as

\[\delta=\tfrac{1}{Re(\eta)}+Re\left(\tfrac{e^{-2i\phi}}{\eta}\right)+\tfrac{|\gamma|^{2}}{Re(\eta)}+Re\left(\tfrac{\gamma^{2}}{\eta}e^{-i2\phi}\right) \tag{40}\]

Next, we calculate the pump and Stokes power normalization factors \(N_{p}\) and \(N_{s}\):

\[N_{p(s)}=\sqrt{\tfrac{1}{2}\epsilon_{0}\int\epsilon\left|E_{p(s)}\right|^{2}dV}=\left(\tfrac{1}{2}\epsilon_{0}\epsilon|A_{p(s)}|^{2}|\tau|^{2}\left(\tfrac{\pi r_{0}^{2}}{2}\right)l_{a}\right)^{\tfrac{1}{2}} \tag{41}\]

where \(l_{a}\) is the acoustic decay length, defined as \(l_{a}=(2Re(\eta)q)^{-1}\). The acoustic decay length can be understood as the characteristic length within which the energy of the surface acoustic field decays within the bulk of the substrate. Combining Eqns. 7, 35, 39 and 41, the coupling rate can be expressed as

\[g_{0}=\frac{-i\omega\alpha}{\epsilon r_{0}^{2}l_{a}}\sqrt{\frac{2\hbar q_{m}}{\pi\,\Omega_{m}\,\rho\,\delta\,ab\,r_{ay}r_{ax}}} \tag{42}\]

Equivalently, the Brillouin gain coefficient can be given as

\[G_{TE}=\frac{8q\omega Q}{\Omega_{m}^{2}c^{2}\rho\pi}\times\frac{|\alpha|^{2}}{\epsilon\delta}\times\frac{1}{ab\,r_{ay}r_{ax}r_{0}^{4}} \tag{43}\]

#### B) TE-polarized pump and TM-polarized Stokes optical fields

For the case of cross-polarized optical fields, the pump field is assumed to be TE-polarized, as in Case A, and the Stokes field is TM-polarized. The electric fields inside the medium, close to the surface (\(z\to 0^{-}\)), are given as

\[E_{p}(x,y,z=0^{-})=A_{p}\tau\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{0}^{2}}\right)\exp(ik_{0}\sin\theta\,x)\,\hat{y} \tag{44}\]

\[E_{s}(x,y,z=0^{-})=A_{s}\tau(-\cos\theta\,\hat{x}+\sin\theta\,\hat{z})\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{0}^{2}}\right)\exp(-ik_{0}\sin\theta\,x) \tag{45}\]

**Radiation Pressure**

Since the pump and Stokes optical fields do not have overlapping non-zero electric field components (they are perpendicularly polarized), the net radiation pressure is zero, and by extension, the acousto-optic overlap is zero:
\[\left<f\cdot u\right>_{rp}=0 \tag{46}\]

**Photoelastic forces**

The photoelastic stresses for cross-polarized optical fields are given, as in Case A, by

\[\begin{pmatrix}\sigma_{xx}\\ \sigma_{yy}\\ \sigma_{zz}\\ \sigma_{yz}\\ \sigma_{zx}\\ \sigma_{xy}\end{pmatrix}=-\tfrac{1}{2}\epsilon_{0}n^{4}\begin{pmatrix}p_{11}&p_{12}&p_{12}&0&0&0\\ p_{12}&p_{11}&p_{12}&0&0&0\\ p_{12}&p_{12}&p_{11}&0&0&0\\ 0&0&0&p_{44}&0&0\\ 0&0&0&0&p_{44}&0\\ 0&0&0&0&0&p_{44}\end{pmatrix}\begin{pmatrix}E_{px}E_{sx}^{*}\\ E_{py}E_{sy}^{*}\\ E_{pz}E_{sz}^{*}\\ E_{pz}E_{sy}^{*}+E_{py}E_{sz}^{*}\\ E_{pz}E_{sx}^{*}+E_{px}E_{sz}^{*}\\ E_{py}E_{sx}^{*}+E_{px}E_{sy}^{*}\end{pmatrix} \tag{47}\]

Here the Fresnel transmission coefficients for TE and TM polarization are assumed to be approximately equal. The resultant photoelastic stresses are

\[\sigma_{xx}=\sigma_{yy}=\sigma_{zz}=\sigma_{zx}=0 \tag{48}\]

\[\sigma_{zy}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{44}E_{py}E_{sz}^{*} \tag{49}\]

\[\sigma_{xy}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{44}E_{py}E_{sx}^{*} \tag{50}\]

Analogous to Eqn. 26, the photoelastic surface force is now given by

\[P_{ess}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{44}E_{py}E_{sz}^{*}\,\hat{y} \tag{51}\]

Since the SAW cavity modes of interest have no displacement component along the y-axis, the resulting acousto-optic overlap from the electrostrictive surface forces is

\[\left<f\cdot u\right>_{es}=0 \tag{52}\]

The vectorial components of the photoelastic body forces are determined as in Eqns. 29-31:

\[f_{x}=-\partial_{x}\sigma_{xx}-\partial_{y}\sigma_{xy}-\partial_{z}\sigma_{xz}=-\partial_{y}\sigma_{xy} \tag{53}\]

\[f_{y}=-\partial_{x}\sigma_{xy}-\partial_{y}\sigma_{yy}-\partial_{z}\sigma_{yz}=-\partial_{x}\sigma_{xy}-\partial_{z}\sigma_{zy} \tag{54}\]

\[f_{z}=-\partial_{x}\sigma_{xz}-\partial_{y}\sigma_{zy}-\partial_{z}\sigma_{zz}=-\partial_{y}\sigma_{zy} \tag{55}\]

As before, only the x- and z-components of the photoelastic forces have non-zero overlap with the acoustic displacements, since the SAW cavity mode has no displacement along the y-axis.
The forces along the x- and z-axes can be further expanded as

\[f_{x}=\tfrac{1}{2}\epsilon_{0}\,\epsilon^{2}\,p_{44}A_{p}A_{s}^{*}|\tau|^{2}\cos\theta\left(-\tfrac{4y}{r_{0}^{2}}\right)e^{-\tfrac{2x^{2}}{r_{ox}^{2}}}\,e^{-\tfrac{2y^{2}}{r_{0}^{2}}} \tag{56}\]

\[f_{z}=\tfrac{1}{2}\epsilon_{0}\,\epsilon^{2}\,p_{44}A_{p}A_{s}^{*}|\tau|^{2}\sin\theta\left(-\tfrac{4y}{r_{0}^{2}}\right)e^{-\tfrac{2x^{2}}{r_{ox}^{2}}}\,e^{-\tfrac{2y^{2}}{r_{0}^{2}}} \tag{57}\]

The acousto-optic overlap corresponding to the force along the x-axis can now be determined as

\[\langle f_{x}\cdot u_{x}^{*}\rangle=\tfrac{1}{2}\epsilon_{0}\,\epsilon^{2}\,p_{44}A_{p}A_{s}^{*}|\tau|^{2}\cos\theta\,U_{0}\int_{-\infty}^{\infty}dy\left(\frac{4y}{r_{0}^{2}}\right)e^{-by^{2}}\int_{-\infty}^{0}dz\,(\exp(\eta qz-i\phi)+c.c.)^{*}\int_{-\infty}^{\infty}dx\;e^{-ax^{2}} \tag{58}\]

Since \(\int_{-\infty}^{\infty}dy\left(\frac{4y}{r_{0}^{2}}\right)e^{-by^{2}}=0\), the contribution resulting from the x-component of the photoelastic forces is \(\langle f_{x}\cdot u_{x}\rangle=0\). Similarly, the z-component of the photoelastic body force also yields no overlap, \(\langle f_{z}\cdot u_{z}\rangle=0\). As a consequence, the total bulk electrostriction overlap is

\[\langle f\cdot u\rangle_{eb}=0 \tag{59}\]

As a result, the total overlap and, consequently, the optomechanical coupling rate for this configuration is

\[g_{0}^{TE-TM}=0 \tag{60}\]

The absence of optomechanical coupling for TE-TM scattering is primarily a result of the assumed crystal symmetry (cubic) and the resulting symmetry of the photoelastic tensor. In crystal structures with reduced symmetry, such as crystalline quartz and LiNbO\(_{3}\), SAW-mediated optomechanical processes can couple orthogonal polarizations.
**C) TM-polarized optical fields**

For the case where both pump and Stokes fields are TM-polarized, the electric fields inside the medium close to the surface (\(z\to 0^{-}\)) are given as

\[E_{p}(x,y,z=0^{-})=A_{p}\tau(-\cos\theta\,\hat{x}-\sin\theta\,\hat{z})\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{0}^{2}}\right)\exp(ik_{0}\sin\theta\,x) \tag{61}\]

\[E_{s}(x,y,z=0^{-})=A_{s}\tau(-\cos\theta\,\hat{x}+\sin\theta\,\hat{z})\exp\left(-\frac{x^{2}}{r_{ox}^{2}}-\frac{y^{2}}{r_{0}^{2}}\right)\exp(-ik_{0}\sin\theta\,x) \tag{62}\]

**Radiation Pressure**

The radiation pressure force, as in cases A and B, can be expressed as

\[P_{rp}=\tfrac{1}{2}\epsilon_{0}(\epsilon-1)A_{p}A_{s}^{*}|\tau|^{2}\exp\left(-\frac{2x^{2}}{r_{ox}^{2}}-\frac{2y^{2}}{r_{0}^{2}}\right)e^{i2k_{0}\sin\theta\,x}(\cos^{2}\theta-\epsilon\sin^{2}\theta)\,\hat{z} \tag{63}\]

The corresponding acousto-optic overlap is given as

\[\langle f\cdot u\rangle_{rp}=\int_{-\infty}^{\infty}dx\,dy\;P_{rp}(x,y)u_{z}^{*}(z=0,x,y) \tag{64}\]

\[\langle f\cdot u\rangle_{rp}=\epsilon_{0}\frac{(\epsilon-1)(\cos^{2}\theta-\epsilon\sin^{2}\theta)}{2i}|\tau|^{2}\,U_{0}A_{p}A_{s}^{*}\;Re\big{(}e^{-i\phi}\gamma\big{)}\frac{\pi}{\sqrt{ab}} \tag{65}\]

**Photoelastic forces**

The components of the photoelastic stress tensor are

\[\sigma_{xx}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{11}E_{px}E_{sx}^{*}-\tfrac{1}{2}\epsilon_{0}n^{4}p_{12}E_{pz}E_{sz}^{*} \tag{66}\]

\[\sigma_{yy}=0 \tag{67}\]

\[\sigma_{zz}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{12}E_{px}E_{sx}^{*}-\tfrac{1}{2}\epsilon_{0}n^{4}p_{11}E_{pz}E_{sz}^{*} \tag{68}\]

\[\sigma_{xz}=-\tfrac{1}{2}\epsilon_{0}n^{4}p_{44}\big{(}E_{pz}E_{sx}^{*}+E_{px}E_{sz}^{*}\big{)}=0 \tag{69}\]

\[\sigma_{xy}=\sigma_{yz}=0 \tag{70}\]

The electrostrictive surface force on the interface is given by

\[P_{ess}=-\tfrac{1}{2}\epsilon_{0}n^{4}A_{p}A_{s}^{*}|\tau|^{2}(p_{12}\cos^{2}\theta-p_{11}\sin^{2}\theta)\,\hat{z} \tag{71}\]

and the resulting overlap with the acoustic mode is

\[\langle f\cdot u\rangle_{es}=\int_{-\infty}^{\infty}dx\,dy\;P_{ess}(x,y)u_{z}^{*}(z=0,x,y)=\frac{-1}{2i}n^{4}(p_{12}\cos^{2}\theta-p_{11}\sin^{2}\theta)\epsilon_{0}|\tau|^{2}\,U_{0}A_{p}A_{s}^{*}\;Re\big{(}e^{-i\phi}\gamma\big{)}\frac{\pi}{\sqrt{ab}} \tag{72}\]

The electrostrictive body forces in the bulk of the substrate are given by

\[f_{x}=-\partial_{x}\sigma_{xx}-\partial_{y}\sigma_{xy}-\partial_{z}\sigma_{xz}\approx-iq_{m}\sigma_{xx} \tag{73}\]

\[f_{y}=-\partial_{x}\sigma_{xy}-\partial_{y}\sigma_{yy}-\partial_{z}\sigma_{yz}=0 \tag{74}\]

\[f_{z}=-\partial_{x}\sigma_{xz}-\partial_{y}\sigma_{zy}-\partial_{z}\sigma_{zz}=-\partial_{z}\sigma_{zz}\approx 0 \tag{75}\]

The z-component of the body force, \(f_{z}\), is assumed to be zero following the reasoning from case A (TE-TE scattering). The resulting bulk overlap is expressed as

\[\langle f\cdot u\rangle_{eb}=\int_{V}dV\,(f_{x}u_{x}^{*}+f_{y}u_{y}^{*}+f_{z}u_{z}^{*})=\int_{V}dV\,f_{x}u_{x}^{*}=\frac{i}{2}\epsilon_{0}\epsilon^{2}(p_{11}\cos^{2}\theta-p_{12}\sin^{2}\theta)A_{p}A_{s}^{*}|\tau|^{2}U_{0}\,Re\left(\frac{e^{-i\phi}}{\eta}\right)\frac{\pi}{\sqrt{ab}} \tag{76}\]

The total overlap is then given as

\[\langle f_{tot}\cdot u^{*}\rangle=\frac{i}{2}\epsilon_{0}A_{p}A_{s}^{*}|\tau|^{2}U_{0}\frac{\pi}{\sqrt{ab}}\Bigg{(}(\epsilon-1)(\cos^{2}\theta-\epsilon\sin^{2}\theta)Re\big{(}e^{-i\phi}\gamma\big{)}-(p_{12}\cos^{2}\theta-p_{11}\sin^{2}\theta)\,\epsilon^{2}Re\big{(}e^{-i\phi}\gamma\big{)}-(p_{11}\cos^{2}\theta-p_{12}\sin^{2}\theta)\,\epsilon^{2}Re\left(\frac{e^{-i\phi}}{\eta}\right)\Bigg{)} \tag{77}\]

The optomechanical coupling rate for the TM-TM scattering process is then given by

\[g_{0}=\frac{-i\omega\alpha_{TM}}{\epsilon r_{0}^{2}l_{a}}\sqrt{\frac{2\hbar q_{m}}{\pi\,\Omega_{m}\,\rho\,\delta\,ab\,r_{ay}r_{ax}}} \tag{78}\]

where \(\alpha_{TM}\) is defined as

\[\alpha_{TM}=(\epsilon-1)(\cos^{2}\theta-\epsilon\sin^{2}\theta)Re\left(e^{-i\phi}\gamma\right)-\epsilon^{2}(p_{12}\cos^{2}\theta-p_{11}\sin^{2}\theta)Re\left(e^{-i\phi}\gamma\right)-\epsilon^{2}(p_{11}\cos^{2}\theta-p_{12}\sin^{2}\theta)Re\left(\frac{e^{-i\phi}}{\eta}\right) \tag{79}\]

The corresponding Brillouin gain coefficient is given by

\[G_{TM}=\frac{8q\omega Q}{\Omega_{m}^{2}c^{2}\rho\pi}\times\frac{|\alpha_{TM}|^{2}}{\epsilon\delta}\times\frac{1}{ab\,r_{ay}r_{ax}r_{0}^{4}} \tag{80}\]

Note that the strength of the TM-TM scattering process depends strongly on the optical angle of incidence and, at larger angles, can be significantly stronger than the TE-TE scattering process.
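A brief numerical sketch of Eqn. 79 illustrating this angle dependence, using the [100]-direction GaAs parameters tabulated below:

```python
import numpy as np

eta, gam, phi = 0.5+0.48j, 0.68-1.16j, 1.05   # SAW parameters, [100] direction
p11, p12, eps = -0.165, -0.140, 3.37**2       # photoelastic constants and n^2
E = np.exp(-1j*phi)

def alpha_TM(theta_deg):
    c2 = np.cos(np.deg2rad(theta_deg))**2
    s2 = 1 - c2
    return ((eps-1)*(c2-eps*s2)*np.real(E*gam)
            - eps**2*(p12*c2-p11*s2)*np.real(E*gam)
            - eps**2*(p11*c2-p12*s2)*np.real(E/eta))

for deg in (5, 15, 30, 45):
    print(deg, round(abs(alpha_TM(deg)), 1))   # |alpha_TM| grows with angle
```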
The following material parameter values are used for GaAs to calculate the coupling rates quoted in the main text.

| Parameter | Value | Description |
| --- | --- | --- |
| \(\lambda_{o}\) | 1550.05 nm | Optical wavelength |
| \(n\) | 3.37 | [100]-cut GaAs refractive index |
| \(r_{0}\) | 33 \(\mu m\) | Optical beam radius |
| \(\omega/2\pi=c/\lambda_{o}\) | 193.55 THz | Optical frequency |
| \(p_{11}\) | \(-0.165\) | Photoelastic constant of GaAs |
| \(p_{12}\) | \(-0.140\) | Photoelastic constant of GaAs |
| \(\rho\) | 5307 kg m\(^{-3}\) | GaAs density |
| \(\theta\) | \(7.8^{\circ}\) | Optical angle of incidence |
| \(\theta_{m}=\theta\) | \(7.8^{\circ}\) | Phase-matched angle corresponding to the SAW mode |
| \(\lambda_{a}\) | 5.7 \(\mu m\) | Acoustic wavelength |
| \(L\) | 505 \(\mu m\) | Acoustic mirror separation |
| \(L_{p}\) | \(7\lambda_{a}\) | SAW penetration depth |
| \(L_{eff}=L+2L_{p}\) | 620 \(\mu m\) | Effective SAW cavity length |
| \(r_{ax}=L_{eff}/2\) | 292.6 \(\mu m\) | Effective radius of SAW mode along the x-axis |

The acoustic parameters characterizing SAWs along the [100]-direction on [100]-cut GaAs are[2]:

| Parameter | Value | Description |
| --- | --- | --- |
| \(v_{R}\) | 2865 m/s | Rayleigh SAW velocity |
| \(\eta\) | \(0.5+0.48\)i | SAW decay parameter |
| \(\gamma\) | \(0.68-1.16\)i | Parameter quantifying the ratio of SAW displacements along the x- and z-axis |
| \(\phi\) | 1.05 | Phase lag between x- and z-components of the acoustic displacement |
| \(Q\) | 7000 | Acoustic quality factor |

### [110]-oriented cavities

The acoustic parameters characterizing SAWs along the [110]-direction on [100]-cut GaAs are[2]:

| Parameter | Value | Description |
| --- | --- | --- |
| \(v_{R}\) | 2615 m/s | Rayleigh SAW velocity |
| \(\eta\) | \(0.40+0.56\)i | SAW decay parameter |
| \(\gamma\) | \(0.37-1.1\)i | Parameter quantifying the ratio of SAW displacements along the x- and z-axis |
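A minimal numerical sketch evaluating the closed-form coupling rate (Eqn. 42) with these tabulated values follows; \(r_{ay}\) is taken to be the design waist \(3\lambda_{a}\) from the design section below, since the tables omit it, so the result should be read as order-of-magnitude:

```python
import numpy as np

hbar = 1.0546e-34
lam0, r0, theta = 1550.05e-9, 33e-6, np.deg2rad(7.8)
lam_a, vR, rho  = 5.7e-6, 2865.0, 5307.0
eta, gam, phi   = 0.5+0.48j, 0.68-1.16j, 1.05
p12, eps        = -0.140, 3.37**2

omega = 2*np.pi*2.9979e8/lam0          # optical angular frequency
q     = 2*np.pi/lam_a                  # acoustic wavevector
Om    = q*vR                           # acoustic angular frequency
la    = 1/(2*eta.real*q)               # acoustic decay length (Eqn. 41)
r_ox, r_ax, r_ay = r0/np.cos(theta), 292.6e-6, 3*lam_a   # r_ay assumed = 3*lam_a
a = 2/r_ox**2 + 1/r_ax**2              # Eqn. 15
b = 2/r0**2 + 1/r_ay**2                # Eqn. 16

E = np.exp(-1j*phi)
alpha = (-(eps-1)*np.real(E*gam) + p12*eps**2*np.real(E*gam)
         + p12*eps**2*np.real(E/eta))                                    # Eqn. 36
delta = (1/eta.real + np.real(np.exp(-2j*phi)/eta)
         + abs(gam)**2/eta.real + np.real(gam**2*np.exp(-2j*phi)/eta))   # Eqn. 40

g0 = omega*abs(alpha)/(eps*r0**2*la)*np.sqrt(
        2*hbar*q/(np.pi*Om*rho*r_ay*r_ax*delta*a*b))                     # Eqn. 42
print(f"g0/2pi ~ {g0/(2*np.pi):.2g} Hz")   # ~2e3 Hz, the order of the quoted value
```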
The resultant TE-TE coupling rate and Brillouin gain coefficient for [100]-oriented cavities can be evaluated to be \(\frac{g_{0}}{2\pi}=1.73\times 10^{3}\) Hz and \(G=2.3\times 10^{-5}\) W\({}^{-1}\). ### S2. Phase matching envelope When calculating the acousto-optic overlaps in Section 2, as in Eqn. 19, optical fields were assumed to be perfectly phase-matched to acoustic modes, \(\Delta k=q_{m}\). In the absence of this assumption, the dependence of the optomechanical coupling rate on phase mismatch can be expressed as \[g_{0}\propto\langle f\cdot u\rangle\propto\ e^{-\frac{(\Delta q)^{2}}{4a}},\] (81) where \(\Delta q=q_{m}-\Delta k\) quantifies the phase mismatch, and the parameter \(a\) is defined in Eqn. 15. The Gaussian dependence of the coupling rate on phase mismatch is evident in Eqn. 81. The characteristic width is quantified by the parameter \(\delta k=2\sqrt{a}=2\left(\frac{2}{r_{ox}^{2}}+\frac{1}{r_{ax}^{2}}\right)^{1/2}\). For the experimental cavities investigated in this work, the acoustic cavity length (\(r_{ax}\)) is much larger than the optical beam size, i.e. \(r_{ox}\ll r_{ax}\), and as a result, \(\delta k=2\sqrt{a}\approx\frac{2\sqrt{2}}{r_{ox}}=\frac{2\sqrt{2}\cos\theta}{r_{0}}\). For small \(\theta\), \(\cos\theta\approx 1\) and \(\delta k\approx 2\sqrt{2}/r_{0}\). As detailed in the main text, for small angles, the coupling rate has a Gaussian dependence on phase mismatch, with a characteristic width given by the inverse of the optical beam size. The dependence of the coupling rate on the angle of incidence can be derived by expanding the phase mismatch as \[\Delta q=q_{m}-2k_{0}\sin\theta=2k_{0}(\sin\theta_{m}-\sin\theta),\] (82) where the phase-matching angle \(\theta_{m}\) is defined as \[\theta_{\rm m}=\sin^{-1}\left(\frac{q_{\rm m}}{2k_{0}}\right) \tag{83}\] For small angles of incidence \(\theta\), \(\sin\theta\approx\theta\), and the phase mismatch can be approximated as \[\Delta q\approx 2k_{0}(\theta_{\rm m}-\theta).\] The dependence of the coupling rate can now be expressed as \[g_{0}\propto\exp\left(-\frac{(\theta-\theta_{\rm m})^{2}}{\delta\theta^{2}}\right) \tag{84}\] where \(\delta\theta=\frac{2\sqrt{a}}{2k_{0}}=\frac{\sqrt{2}}{k_{0}r_{0}}\). For optical fields with a free-space wavelength of \(\lambda_{0}=1550\) nm and an optical beam size of \(r_{0}=30\) \(\mu\)m, the angular bandwidth is \(\delta\theta=0.66^{\circ}\), corresponding to a full width of \(2\delta\theta=1.32^{\circ}\). Since the Brillouin gain coefficient \(G\) varies as the square of the coupling rate, \(G\sim g_{0}^{2}\), the corresponding angular bandwidth is given by \(\delta\theta_{G}=\frac{1}{k_{0}r_{0}}=0.46^{\circ}\) with a full width of \(2\delta\theta_{G}=0.92^{\circ}\), agreeing well with the experimental results detailed in the main text. ### S3. Gaussian SAW cavity design and simulation Gaussian SAW cavities in this work are based on a Fabry-Perot cavity design where two acoustic Bragg mirrors, each consisting of numerous metallic strips, confine surface acoustic modes in the region enclosed between them. Each metallic strip reflects a small portion of the incident acoustic field, and the cumulative interference from all the reflectors achieves the desired acoustic confinement.
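As a quick numerical check of the phase-matching bandwidths quoted in Section S2 above, the short sketch below (our illustration, not part of the original analysis; variable names are ours) reproduces the quoted angular widths from \(\lambda_{0}\) and \(r_{0}\):

```python
import numpy as np

# Parameters quoted in Section S2
lam0 = 1550e-9                # free-space optical wavelength (m)
r0 = 30e-6                    # optical beam size (m)
k0 = 2 * np.pi / lam0         # optical wavenumber (1/m)

# Angular bandwidth of the coupling rate: delta_theta = sqrt(2) / (k0 * r0)
dtheta = np.sqrt(2) / (k0 * r0)
# Gain bandwidth is narrower since G ~ g0^2: delta_theta_G = 1 / (k0 * r0)
dtheta_G = 1.0 / (k0 * r0)

print(np.degrees(dtheta))     # ~0.66 deg, full width ~1.32 deg
print(np.degrees(dtheta_G))   # ~0.47 deg, cf. 0.46 deg quoted in the text
```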
The geometry of the cavity is specified by four independent parameters: the direction of the cavity axis relative to a principal crystal axis (shown along the x-axis in Fig. S2a), the acoustic wavelength (\(\lambda_{a}\)), the acoustic beam waist at the center of the cavity (\(w_{a}\)), and the mirror separation (\(L\)) (Fig. S2a). For the chosen cavity axis, the acoustic group velocity (\(v_{g}(\Theta)\)) is calculated as a function of the angle relative to the axis (\(\Theta\)) by numerically solving the acoustic wave equations with appropriate boundary conditions[10]. Accounting for this anisotropy of the SAW velocity of the underlying substrate is essential to designing an efficient SAW cavity. For a Gaussian beam with a beam waist \(w_{a}\) and wavelength \(\lambda_{a}\), the acoustic Rayleigh range (\(x_{R}\)) is given by \(x_{R}=\pi w_{a}^{2}/2\lambda_{a}\), and the corresponding phase along the propagation axis (x-axis) can be expressed as \(\Phi(x)=k_{a}x+\frac{1}{2}\tan^{-1}(x/x_{R})\). The first and second terms refer to the propagation and Gouy phases, respectively. The locations of the reflectors (\(x_{i}\)) can now be determined by calculating the nodes of the acoustic displacement, i.e., \(\Phi(x_{i})=n\pi\) (implemented in the sketch below). The location of the reflector closest to the center (\(x_{1}\)) is chosen such that the separation between the first reflectors of the two mirrors is approximately \(L\), i.e. \(x_{1}\approx L/2\). The mirror separation \(L\) is chosen to be large enough to accommodate the optical beams incident on the device with minimal optical overlap with the acoustic mirrors. For efficient confinement of the acoustic field, the curvature of each metallic reflector must coincide with the local phase front of the desired Gaussian SAW mode. Ignoring the anisotropy in SAW velocity, the local phase fronts of a Gaussian beam at any reflector location \(x_{i}\) would be circular arcs with a radius of curvature given by \(R(x_{i})=x_{i}\left(1+\left(\frac{x_{R}}{x_{i}}\right)^{2}\right)\). To account for the anisotropy of the substrate, a correction factor[11] \(v_{g}(\Theta)/v_{g}(0)\) is introduced to obtain an angle-dependent radius of curvature function \(R^{\prime}(x_{i},\Theta)=R(x_{i})v_{g}(\Theta)/v_{g}(0)\). To validate the design principles, we perform 3D numerical finite element simulations (COMSOL 5.6) of SAW cavities on [100]-cut GaAs oriented along the [100]-direction with \(\lambda_{a}=5.7\ \mu m\), \(L\sim 100\ \mu m\) and \(w_{a}=3\lambda_{a}\). The thickness of the metallic reflectors is specified as a fraction of the acoustic wavelength and is set to \(\frac{t}{\lambda_{a}}=0.035\). This thickness was found to be a good balance between achieving tight confinement with small acoustic mode volumes (thick electrodes) and mitigating acoustic scattering into the bulk of the substrate (thin electrodes); this value is in agreement with other similar works[12, 13]. To minimize the computational resources required, we leverage the symmetry of the device and simulate one-fourth of the entire device. The FEM geometry of the device (Fig. S2b) consists of a substrate with a thickness of \(3\lambda_{a}\) surrounded by perfectly matched layers with a thickness of \(2\lambda_{a}\). In all areas of the device, the mesh size is ensured to be less than or equal to \(\lambda_{a}/4\). Simulated devices have only 50 reflectors per mirror, as opposed to 200 in the fabricated devices, to limit the computational resources required for the simulation.
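To make the mirror-placement recipe above concrete, here is a minimal Python sketch (isotropic case only, using the simulation parameters quoted above; the anisotropy correction would additionally require the numerically computed \(v_{g}(\Theta)\), which is not reproduced here):

```python
import numpy as np
from scipy.optimize import brentq

lam_a = 5.7e-6                       # acoustic wavelength (m)
w_a = 3 * lam_a                      # beam waist at the cavity centre (m)
L = 100e-6                           # approximate mirror separation (m)
k_a = 2 * np.pi / lam_a
x_R = np.pi * w_a**2 / (2 * lam_a)   # acoustic Rayleigh range (m)

def phase(x):
    """On-axis phase: propagation term plus Gouy term."""
    return k_a * x + 0.5 * np.arctan(x / x_R)

# Reflector positions x_i solve Phi(x_i) = n*pi with x_1 ~ L/2; the phase
# is strictly increasing, so each node can be found by bracketing.
n_first = int(np.ceil(phase(L / 2) / np.pi))
nodes = [brentq(lambda x, n=n: phase(x) - n * np.pi, 0.0, 10 * L)
         for n in range(n_first, n_first + 50)]   # 50 reflectors per mirror

def radius(x):
    """Isotropic radius of curvature of the local Gaussian phase front."""
    return x * (1 + (x_R / x)**2)

curvatures = [radius(x) for x in nodes]
# The anisotropy-corrected curvature would be
# R'(x_i, Theta) = radius(x_i) * v_g(Theta) / v_g(0).
```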
Given the large number of mesh nodes in the simulation, the simulations are run on a supercomputing cluster with 56 nodes and 500-1000 GB RAM. ### S4. Device Fabrication SAW resonator designs are transferred onto a GaAs substrate via a standard e-beam lithography process (Fig. S3). First, we coat a double-side-polished GaAs chip with a \(\sim 500\) nm thick PMMA polymer. The design is drawn onto the polymer with a beam of electrons using an electron-beam lithography tool (Fig. S3a). In the subsequent step, the polymer exposed by the e-beam is washed away, resulting in a negative image of the pattern (Fig. S3b). Next, a 200 nm thick Al film is deposited on the chip in an ultra-high-vacuum e-beam evaporation system (Fig. S3c). Finally, the chip is removed from the chamber and submerged in a hot acetone bath, which removes the PMMA and metal film from unwanted areas, leaving behind the Al electrodes (Fig. S3d). ### S5. Optomechanical Spectroscopy Setup This section presents additional details of the experimental spectroscopy apparatus used for measuring the optomechanical response from SAWs (Fig. S4a). A continuous-wave (CW) laser at 1550 nm (the carrier, \(\omega_{\text{C}}\)) is divided into two fiber paths. Along one path, the optical field is modulated by a null-biased intensity modulator with a fixed frequency of \(\omega_{1}=2\pi\times 11\) GHz, followed by a narrow fiber Bragg grating (FBG) to filter out the upshifted optical frequency sideband. The remaining lower-frequency optical sideband serves as one of the acoustic drive tones, drive\({}_{1}\), with frequency \(\omega_{\text{d1}}=\omega_{\text{C}}-\omega_{1}\). Similarly, along the second path, the carrier is modulated with a frequency of \(\omega_{2}=2\pi\times\left(11+\Omega\right)\) GHz and subsequently filtered with an FBG to generate the second acoustic drive, drive\({}_{2}\), with frequency \(\omega_{\text{d2}}=\omega_{\text{c}}+\omega_{2}\). \(\omega_{\text{d1}}\) and \(\omega_{\text{d2}}\) are chosen such that the difference between the two (\(\Omega=\omega_{\text{d2}}-\omega_{\text{d1}}\)) can be continuously swept through the targeted acoustic resonance frequencies (Fig. S4b). A portion of the optical field along the second path is combined with the final signal and serves as a local oscillator (LO) to detect optomechanically scattered signals. The three optical fields (drive\({}_{1}\), drive\({}_{2}\), and probe) are amplified by erbium optical amplifiers to optical powers of about \(\sim 150-350\) mW each before impinging on the SAW cavity. Two sets of polarization controllers ensure that the optical fields are linearly polarized. The optical fields are collimated and incident off-axis on a focusing aspheric lens. The off-axis displacement is controlled through a linear stage, providing fine control over the incident angle. The dependence of the incident angle on the off-axis displacement is carefully pre-calibrated, as detailed in the following section. The incident probe, which is collinear with drive\({}_{1}\), optomechanically scatters via the driven SAW cavity mode into a signal collinear with drive\({}_{2}\). This optomechanical signal and drive\({}_{2}\) are collected with a single-mode collimator and filtered with an FBG filter to reject the excess acoustic drive. The filtered signal is combined with the local oscillator and detected on a balanced detector. The two acoustic drive tones (drive\({}_{1}\), drive\({}_{2}\)) travel through distinct optical paths with different lengths and components.
As a result, the optical path length between the two drive fields can differ by as much as tens of meters of single-mode fiber. The uncorrelated noise along the two optical paths resulting from environmental fluctuations (vibrations, temperature, etc.) could lead to excess relative optical frequency noise and limit the measurable linewidths of the optomechanical response. To mitigate the effects of this noise, we implement an optoelectronic feedback loop to lock the two acoustic drive tones. A small fraction (\(\sim 1\%\)) of the two acoustic drives is extracted before they are incident on the device under test and mixed on a photodetector. The phase of the resultant signal is measured on a lock-in amplifier, and a PID controller produces a correction signal for a voltage-controlled oscillator (VCO) used for feedback. This phase-locked loop ensures that the two drive tones are stabilized to a relative frequency stability of \(\sim 1\) Hz. ### S6. Calibrating Optical Angle of Incidence In the paraxial limit, off-axis optical rays passing through an ideal lens with a focal length of f intersect the optical axis at the focal point with an angle of incidence given by \(\theta=\frac{\mathrm{s}}{\mathrm{f}}\), where \(\mathrm{s}\) is the off-axis displacement (Fig. S5a). This elementary relation forms the basis of the angle-tuning technique employed in this work. This relation, in general, is not valid for real lenses, whose aberrations, together with the diffraction of the incident optical beams, could result in significant deviations from the paraxial relation. To accurately determine the angle of incidence as a function of off-axis displacement, we develop an apparatus to image the focus of two intersecting beams and calculate the angle of incidence by observing the resulting interference pattern (Fig. S5b). An optical beam, named beam\({}_{1}\), is first incident along the optical axis of the lens under test, which subsequently focuses it on a partially reflective sample placed at the focal plane of the lens. The focused beam is aligned to the surface normal of the sample by maximizing the back-reflected beam. This configuration is taken to represent \(\theta=0^{\circ}\). Next, beam\({}_{1}\) is laterally displaced by a known distance away from the optical axis. A second beam, beam\({}_{2}\), is aligned such that the partially reflected beam\({}_{1}\) maximally couples into the beam\({}_{2}\) collimator. This alignment ensures the two beams are focused on the same spot on the sample with equal but opposite angles of incidence. A 90:10 beam splitter samples a small portion of the back-reflected beams, which are focused using an imaging lens onto a near-infrared camera. The image observed on the camera consists of spatial fringes resulting from the interference of beam\({}_{1}\) and beam\({}_{2}\) (inset Fig. S5c). The spatial periodicity (\(\Lambda\)) of the observed fringes can then be used to infer the angle of incidence on the sample through the relation \(\Lambda=\frac{\lambda_{0}}{2\sin\theta}\). The angle of incidence measured as a function of the off-axis displacement of beam\({}_{1}\) shows excellent agreement with predictions from paraxial theory (Fig. S5c) for an aspheric lens with a focal length of f = 75 mm. The observed results confirm that geometric aberrations within the lens and other optical components of the system are small, and paraxial analysis is warranted.
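A minimal sketch of the fringe-based angle extraction described above (function names are ours, and the example fringe period is hypothetical):

```python
import numpy as np

lam0 = 1550e-9        # optical wavelength (m)
f = 75e-3             # focal length of the aspheric lens under test (m)

def angle_from_fringes(period):
    """Invert Lambda = lam0 / (2 sin(theta)) for the angle of incidence."""
    return np.arcsin(lam0 / (2 * period))

def paraxial_displacement(theta):
    """Paraxial relation s = f * theta for the off-axis displacement."""
    return f * theta

# Example: a (hypothetical) 10 um fringe period corresponds to ~4.4 deg,
# which the paraxial relation maps to an off-axis displacement of ~5.8 mm.
theta = angle_from_fringes(10e-6)
print(np.degrees(theta), paraxial_displacement(theta) * 1e3)
```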
### S7. Estimating Optomechanical Coupling Rate This section discusses the theory of estimating the optomechanical coupling rate (\(g_{0}\)) from experimentally measured spectra. The system under consideration is as follows (Fig. S4b): two optical drive fields with amplitudes \(a_{d1}\) and \(a_{d2}\) resonantly drive a SAW cavity mode with amplitude \(b\). A third optical probe with an amplitude \(a_{pr}\) scatters off the driven phonon mode into optomechanically scattered Stokes (\(a_{S}\)) and anti-Stokes (\(a_{AS}\)) sidebands. Here two simplifying assumptions are made. First, the system is operated in the small-gain limit, in which pump depletion does not occur and, as a result, the incident optical fields \(a_{d1}\), \(a_{d2}\) and \(a_{pr}\) do not evolve in space. Second, the weak phonon drive generated by the scattered signals (\(a_{S}\) and \(a_{AS}\)) and the incident probe is neglected. In the strong-gain limit, for instance within a cavity optomechanical system, both of these assumptions would break down and require a more general analysis. Additionally, we assume that for small angles of incidence (near normal incidence), which is the case in this work, the optical beams approximately travel along the z-axis. In the limit of large angles, this analysis can be suitably modified. The equations of motion for the driven cavity phonon amplitude (\(b\)) and the scattered signals (\(a_{S}\), \(a_{AS}\)) are given by[3, 14, 15] \[\frac{\partial b}{\partial t}=-i(\Omega_{0}-\Omega)b-\frac{\Gamma}{2}b-i\int g_{0}^{*}a_{d1}^{*}a_{d2}\,dz\] (85) \[v_{g}\frac{\partial a_{S}}{\partial z}+\frac{\partial a_{S}}{\partial t}=-ig_{0}^{*}b^{*}a_{pr}\] (86) \[v_{g}\frac{\partial a_{AS}}{\partial z}+\frac{\partial a_{AS}}{\partial t}=-ig_{0}ba_{pr}^{*}\] (87) where \(v_{g},\Omega_{0},\Omega,\Gamma\) refer to the optical group velocity, the resonant phonon frequency, the frequency difference between the optical drives, and the acoustic dissipation rate. Note that the integral in Eqn. 85 is performed only over the interaction volume, i.e., the decay length of the SAW cavity mode within the bulk of the substrate, \(l_{a}=\frac{1}{2\mathrm{Re}(\eta)q}\sim 2\ \mu m\), as defined in section S1. Assuming steady-state operation (\(\partial_{t}=0\)) and resonant driving by the optical drives (\(\Omega=\Omega_{0}\)), the phonon-field amplitude can be expressed as \[b=-i\frac{2}{\Gamma}g_{0}^{*}l_{a}a_{d1}^{*}a_{d2}\] (88) Inserting Eqn. 88 into Eqn. 86 and Eqn. 87 and assuming \(a_{S(AS)}(0)=0\), one obtains \[a_{S}=\frac{2|g_{0}|^{2}l_{a}^{2}}{\Gamma v_{g}}a_{d1}a_{d2}^{*}a_{pr}\] (89) \[a_{AS}=\frac{2|g_{0}|^{2}l_{a}^{2}}{\Gamma v_{g}}a_{d1}^{*}a_{d2}a_{pr}^{*}\] (90) The optical power in terms of the field amplitude is given by \[P_{i}^{op}=\hbar\omega_{i}v_{g}|a_{i}|^{2}\] (91) Using Eqns. 89, 90 and 91, the optomechanically scattered sideband powers can be expressed as \[\mathrm{P_{AS}=P_{S}=\frac{\beta^{2}}{\hbar^{2}\omega_{d1}\omega_{d2}v_{g}^{2}}P_{d1}P_{d2}P_{pr}}\] (92) where \(\beta=\frac{2|g_{0}|^{2}l_{a}^{2}}{\Gamma v_{g}}\). These optomechanically scattered sidebands are spectrally separated by \(\Omega_{0}\) on either side of the incident probe.
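For clarity, the intermediate step leading from Eqn. 88 to Eqn. 89 can be spelled out (a sketch, assuming \(b\) is constant over the interaction length \(l_{a}\)): substituting \(b^{*}=i\frac{2}{\Gamma}g_{0}l_{a}a_{d1}a_{d2}^{*}\) into Eqn. 86 with \(\partial_{t}=0\) gives \[v_{g}\frac{\partial a_{S}}{\partial z}=-ig_{0}^{*}b^{*}a_{pr}=\frac{2|g_{0}|^{2}l_{a}}{\Gamma}a_{d1}a_{d2}^{*}a_{pr},\] and integrating over \(0\leq z\leq l_{a}\) with \(a_{S}(0)=0\) yields Eqn. 89; Eqn. 90 follows analogously from Eqn. 87.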
Assuming a local oscillator with an optical power \(P_{LO}\), the resulting heterodyne beat note oscillating at a frequency \(\Omega_{0}\) is given by \[\mathrm{P_{het}=2\sqrt{P_{LO}P_{S}}+2\sqrt{P_{LO}P_{AS}}=4\sqrt{P_{LO}P_{AS}}=\frac{8|g_{0}|^{2}l_{a}^{2}}{\hbar\omega_{d1}\Gamma v_{g}^{2}}\sqrt{P_{d1}P_{d2}P_{pr}P_{LO}}}\] (93) The coupling rate can then be estimated by inverting Eqn. 93 as \[\mathrm{g_{0}=\left(\frac{P_{het}\,\hbar\omega_{d1}\Gamma v_{g}^{2}}{8l_{a}^{2}\sqrt{P_{d1}P_{d2}P_{pr}P_{LO}}}\right)^{\frac{1}{2}}}\] (94) **[100]-oriented cavities:** For cavities oriented along the [100]-direction, the optical drive tones and the probe tone have free-space wavelengths of \(\lambda_{d}=1550.05\) nm and \(\lambda_{pr}=1550.25\) nm, respectively. The optical fields are incident at an angle of \(7.8^{\circ}\). The drive\({}_{1}\), drive\({}_{2}\), and probe powers before the sample are 325 mW, 117 mW, and 434 mW, respectively. The optical reflectivity at the front surface resulting from the refractive index mismatch is 29.41%. Since reflected fields do not contribute to the optomechanical process, the effective powers mediating the optomechanical process are \(P_{d1}=228\) mW, \(P_{d2}=82\) mW and \(P_{pr}=304\) mW. All the optical fields are ensured to be TE-polarized (p-polarization) by using a polarizing beam splitter. The local oscillator power is \(P_{LO}=3.7\) mW. The measured heterodyne power is \(P_{het}=0.34\) nW, corresponding to an experimentally estimated coupling rate of \(\frac{g_{0}}{2\pi}=1.43\times 10^{3}\) Hz. This agrees well with the theoretically estimated value of \(1.73\times 10^{3}\) Hz. Residual errors could be a result of uncalibrated rf losses, polarization mismatch between the optical tones, and errors in the optical beam positions and sizes. **[110]-oriented cavities:** For cavities oriented along the [110]-direction, the optical drive tones and the probe tone have free-space wavelengths of \(\lambda_{d}=1550.05\) nm and \(\lambda_{pr}=1550.25\) nm, respectively. The optical fields are incident at an approximate angle of \(7.8^{\circ}\). The drive\({}_{1}\), drive\({}_{2}\), and probe powers before the sample are 332 mW, 126 mW, and 443 mW, respectively. The optical reflectivity at the front surface resulting from the refractive index mismatch is 29.41%. Since reflected fields do not contribute to the optomechanical process, the effective powers mediating the optomechanical process are \(\mathrm{P_{d1}}=232\;\mathrm{mW}\), \(\mathrm{P_{d2}}=88.7\;\mathrm{mW}\) and \(\mathrm{P_{pr}}=310\;\mathrm{mW}\). All the optical fields are ensured to be TE-polarized (p-polarization) by using a polarizing beam splitter. The local oscillator power is \(\mathrm{P_{LO}}=3.7\;\mathrm{mW}\). The measured heterodyne power is \(\mathrm{P_{het}}=21.8\;\mathrm{nW}\), corresponding to an experimentally estimated coupling rate of \(\frac{\mathrm{g_{0}}}{2\pi}=1.90\times 10^{3}\;\mathrm{Hz}\). This agrees well with the theoretically estimated value of \(1.8\times 10^{3}\;\mathrm{Hz}\). ### S8. Quality factor vs. length The quality factor of an acoustic cavity (\(Q\)) can be expressed as a function of the round-trip loss (\(\alpha_{l}\)), resonant frequency (\(f_{0}\)), acoustic velocity (\(\mathrm{v_{R}}\)), cavity length (\(L\)), and linewidth (\(\Delta f\)) as[12] \[Q=\frac{f_{0}}{\Delta f}=\frac{2f_{0}L}{\mathrm{v_{R}}\alpha_{l}} \tag{97}\] The acoustic round-trip loss can be expressed as a sum of the propagation loss and the losses occurring in the acoustic mirrors.
Propagation losses, which scale with the propagation length, can be characterized through an attenuation coefficient (\(\alpha_{P}\)), while mirror losses (\(\alpha_{M}\)) are independent of length. The total loss can now be expressed as \[\alpha_{l}=2(\alpha_{p}L+\alpha_{M}) \tag{98}\] The factor of two in Eqn. 98 is a result of the acoustic fields propagating over a total round-trip length of \(2L\) and encountering the acoustic mirrors twice, once on each side of the cavity. Inserting Eqn. 98 into Eqn. 97, we get \[Q=\frac{f_{0}L}{\mathrm{v_{R}}}\left(\frac{1}{\alpha_{p}L+\alpha_{M}}\right) \tag{99}\] For small cavity lengths, assuming \(\alpha_{P}L\ll\alpha_{M}\), that is, losses dominated by mirror losses, we get \[Q\approx\frac{f_{0}L}{\mathrm{v_{R}}\alpha_{M}} \tag{100}\] The quality factor, as observed in the main text, then depends linearly on the cavity length. ### S9. Absorption-mediated optomechanical effects In addition to optomechanical interactions enabled by nonlinear optical forces (photoelastic and radiation pressure), the devices investigated in this work, by virtue of having metallic reflectors, also display optomechanical interactions mediated by absorption within the metallic reflectors. The mechanism of this interaction is as follows: the two drive fields separated by the resonant cavity frequency act as an intensity-modulated source which, when absorbed by the acoustic reflectors, excites SAWs through thermoelastic expansion. Subsequently, the optical probe field can scatter off the excited SAW cavity mode to produce an analogous optomechanical response[16, 17]. Note that such processes do not require phase matching of the optical drives which generate the acoustic fields, since the acoustic fields are driven solely by time-modulated absorptive effects. Since absorptive effects are typically accompanied by thermal effects, such residual thermal effects are used to discriminate between the two optomechanical effects (parametric and absorptive). The optomechanical response in the [100]-oriented devices is measured as a function of incident optical power when the optical beams are in the center of the cavity (parametric) (Fig. S6a), and when the optical fields have significant overlap with the surrounding metallic reflectors (absorption-mediated) (Fig. S6b). For the case where parametric interactions dominate the optomechanical response (Fig. S6a), negligible power-dependent effects are observed (Fig. S6c-S6d). In stark contrast, the resonant frequency and the quality factor vary significantly as a function of incident power for the absorption-mediated response (Fig. S6b-d). The observed differences suggest that excess optical absorption in the metal strips modifies the characteristics of the resonant mode, consistent with spurious heating of the substrate and associated changes in the local elastic properties of the SAW cavity. As demonstrated in previous works, such absorptive effects could be employed for various classical applications, including optical signal processing[16, 17]. However, given their incoherent nature, absorption-mediated effects would generally be undesirable for applications requiring coherent interactions, including quantum control, transduction, and sensing. Additionally, spurious heating resulting from absorption could prevent the robust ground-state operation of quantum systems such as qubits. These parasitic thermal effects are minimized for the devices investigated in this work by ensuring that the mirror separation is much larger than the incident optical beam waist.
For example, for the devices employed in this work, with an approximate mirror separation of 500 \(\mu m\) and an optical beam waist diameter of 60 \(\mu m\), the fraction of optical power spatially overlapping with the acoustic mirrors can be reduced to the level of \(\sim\)10\({}^{-15}\). Alternatively, any residual thermal effects can be eliminated by replacing the metallic stripe reflectors with etched grooves to confine SAWs[18, 13, 19]. **S10. SAW mediated cavity optomechanical devices** Here, we propose a possible iteration of a SAW-mediated cavity optomechanical system. For the SAW cavity we assume a cavity on [100]-cut GaAs optimized for an acoustic wavelength (frequency) of \(\lambda_{a}=700\) nm (\(\Omega_{0}\sim 4\) GHz) with a Gaussian waist size of \(w_{0}=2\lambda_{a}\) and a cavity length of \(L_{\rm eff}\sim 30\lambda_{a}\). This SAW cavity can be phase-matched to \(\lambda_{o}=1\ \mu m\) optical fields incident at \(\theta=45^{\circ}\). Assuming the fields are TM-polarized, Eqn. 78 can be used to estimate a traveling-wave coupling rate of \(g_{0}\sim 2\pi\times 400\ kHz\). This estimated coupling rate is approximately 250x that of the experimentally measured cavities, which had a frequency of 500 MHz. This large enhancement is a result of the acoustic mode volume scaling with the wavelength. Next we propose a possible optical cavity compatible with SAW cavities. Consider a coated-DBR fiber-optic optical cavity enclosing a 4-GHz SAW cavity. Optical cavities like these have been commonly used in membrane-type cavity optomechanical systems and CQED systems[20, 21]. The ability to miniaturize these optical cavities is ideal for obtaining small optical mode volumes and consequently larger cavity optomechanical coupling strengths. We assume optical cavity lengths of tens of microns (\(l_{opt}\sim 10-100\ \mu m\)) and a conservative optical finesse of \(\mathcal{F}=10^{4}\) (these could be as large as \(10^{6}\)). Using Eqn. 8, we estimate the cavity optomechanical coupling rate as (for \(l_{opt}=10\ \mu m\)) \[g_{0}^{c}=g_{0}\left(\frac{l_{a}}{l_{opt}}\right)\approx 2\pi\times 5.5\ kHz \tag{101}\] The reduction in the coupling rate when compared to the travelling-wave coupling rate quoted previously (\(g_{0}\approx 2\pi\times 400\ kHz\)) results from the relative modal size mismatch of the acoustic (\(\sim 0.5\ \mu m\)) and optical cavity (\(\sim 10-100\ \mu m\)) modes. The extraordinary power-handling capability of this system, limited only by bulk material thresholds, could support large intracavity photon numbers (\(n_{c}>10^{9}\)), typically employed in bulk cavity optomechanical systems[22]. The optically loaded cavity optomechanical cooperativity is given by[4] \[C_{\rm om}=\frac{4g_{0}^{2}n_{c}}{\Gamma_{a}\kappa} \tag{102}\] where \(\Gamma_{a}\) and \(\kappa\) refer to the acoustic and optical cavity decay rates. Assuming the experimentally observed quality factor \(Q\approx 10^{5}\) and \(g_{0}^{c}\approx 2\pi\times 5.5\) kHz, the cavity optomechanical cooperativity is estimated to be \[C_{\rm om}\approx 2500 \tag{103}\] This platform retains the high-power-handling capability of bulk optomechanical systems[5, 23] while offering much larger coupling rates (50-500x), devices with a smaller footprint, and simpler integrability with other quantum systems and sensing devices. ### S11. Additional experimental data Here additional experimental data are presented for the [100]-oriented device on [100]-cut GaAs (Fig.
S7) for the cases where one of the acoustic drives is turned off (green trace, Fig. S7a), where the two acoustic drives are orthogonally polarized with respect to each other (TE-TM scattering, purple trace, Fig. S7a), and where the optical LO is polarized orthogonally to the probe field (yellow trace, Fig. S7a). These results are in excellent agreement with theoretical predictions and strongly suggest that the observed resonance results from optomechanical processes. The TM-TM scattering trace is also shown (Fig. S7b), displaying a resonance with an estimated quality factor of 120,000.
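As a closing illustration of the Section S7 procedure, the Python sketch below implements Eqn. 94 directly with the [110]-device powers quoted above. The acoustic linewidth, optical group velocity, and interaction length entered here are our placeholder assumptions (not calibrated values from this work), so the printed coupling rate is indicative only:

```python
import numpy as np

hbar = 1.054571817e-34                  # reduced Planck constant (J s)

def g0_from_heterodyne(P_het, P_d1, P_d2, P_pr, P_LO, omega_d1, Gamma, v_g, l_a):
    """Invert Eqn. 94 for the optomechanical coupling rate g0 (rad/s)."""
    return np.sqrt(P_het * hbar * omega_d1 * Gamma * v_g**2
                   / (8.0 * l_a**2 * np.sqrt(P_d1 * P_d2 * P_pr * P_LO)))

# [110]-device powers quoted in Section S7 (W)
P_het, P_LO = 21.8e-9, 3.7e-3
P_d1, P_d2, P_pr = 232e-3, 88.7e-3, 310e-3
omega_d1 = 2 * np.pi * 3e8 / 1550.05e-9  # drive angular frequency (rad/s)

# Placeholder assumptions (not measured values from this work):
Gamma = 2 * np.pi * 500e6 / 1e5          # acoustic linewidth for f0 ~ 0.5 GHz, Q ~ 1e5
v_g = 3e8 / 3.4                          # optical group velocity in GaAs, assumed index 3.4
l_a = 2e-6                               # SAW decay length (~2 um, Section S7)

g0 = g0_from_heterodyne(P_het, P_d1, P_d2, P_pr, P_LO, omega_d1, Gamma, v_g, l_a)
print(g0 / (2 * np.pi), "Hz")            # order-of-magnitude: ~1e3 Hz
```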
2306.17070
Interdisciplinary Methods in Computational Creativity: How Human Variables Shape Human-Inspired AI Research
The word creativity originally described a concept from human psychology, but in the realm of computational creativity (CC), it has become much more. The question of what creativity means when it is part of a computational system might be considered core to CC. Pinning down the meaning of creativity, and concepts like it, becomes salient when researchers port concepts from human psychology to computation, a widespread practice extending beyond CC into artificial intelligence (AI). Yet, the human processes shaping human-inspired computational systems have been little investigated. In this paper, we question which human literatures (social sciences, psychology, neuroscience) enter AI scholarship and how they are translated at the port of entry. This study is based on 22 in-depth, semi-structured interviews, primarily with human-inspired AI researchers, half of whom focus on creativity as a major research area. This paper focuses on findings most relevant to CC. We suggest that which human literature enters AI bears greater scrutiny because ideas may become disconnected from context in their home discipline. Accordingly, we recommend that CC researchers document the decisions and context of their practices, particularly those practices formalizing human concepts for machines. Publishing reflexive commentary on human elements in CC and AI would provide a useful record and permit greater dialogue with other disciplines.
Nadia M. Ady, Faun Rice
2023-06-29T16:17:04Z
http://arxiv.org/abs/2306.17070v1
# Interdisciplinary Methods in Computational Creativity: How Human Variables Shape Human-Inspired AI Research ###### Abstract The word _creativity_ originally described a concept from human psychology, but in the realm of computational creativity (CC), it has become much more. The question of what creativity means when it is part of a computational system might be considered core to CC. Pinning down the meaning of creativity, and concepts like it, becomes salient when researchers port concepts from human psychology to computation, a widespread practice extending beyond CC into artificial intelligence (AI). Yet, the human processes shaping human-inspired computational systems have been little investigated. In this paper, we question _which_ human literatures (social sciences, psychology, neuroscience) enter AI scholarship and _how_ they are translated at the port of entry. This study is based on 22 in-depth, semi-structured interviews, primarily with human-inspired AI researchers, half of whom focus on creativity as a major research area. This paper focuses on findings most relevant to CC. We suggest that _which_ human literature enters AI bears greater scrutiny because ideas may become disconnected from context in their home discipline. Accordingly, we recommend that CC researchers document the decisions and context of their practices, particularly those practices formalizing human concepts for machines. Publishing reflexive commentary on human elements in CC and AI would provide a useful record and permit greater dialogue with other disciplines. ## Introduction Computational creativity (CC) is informed by many human literatures, including psychology, sociology, cognitive science, and philosophy (Ackerman et al., 2017, p. 11; McGregor, Wiggins, and Purver, 2014). There is a long history of reflection on the relationship between CC's parent, AI, and other disciplines (Newell, 1970), which continues today (Lieto and Radicioni, 2016; MacPherson et al., 2021; Cassenti, Veksler, and Ritter, 2022). Social sciences also offer relevant commentary: Science, Technology, and Society (STS) is concerned with how scientific methods produce knowledge and shape the world, calling attention to the human processes inherent in scientific work using a broad methodological toolkit (Jasanoff, 2013; Lippert and Mewes, 2021; Law, 2004; Suchman and Trigg, 1993). In alignment with these conversations, our project explores the human processes involved when researchers draw inspiration from concepts from human psychology for computational systems. In this paper, we present early findings centred on CC. This work responds to calls to articulate the "methodological and conceptual barriers... [which] confront attempts to work across disciplinary boundaries" (MacLeod, 2018, p. 697). Our dataset is 22 in-depth, semi-structured interviews with CC and AI researchers working closely with concepts from human psychology (see Methodology). For 11 interviewees, the concept of creativity is a key thread in their research; the other 11 engaged with concepts such as curiosity, forgetting, or mental time travel. We use "human-inspired" as shorthand for this heterogeneous group throughout the paper, and transcripts from non-CC participants refine our understanding of each finding, though our focus here is on CC. We build on existing scholarship by suggesting that human and social factors impact _which_ human literature enters AI and _how_ it is translated for computation at its port of entry.
Further, we suggest that human and social processes in CC are productive areas of inquiry, and that qualitative methods offer fruitful ways of exploring these topics, in agreement with scholars like Pérez y Pérez and Ackerman (2020). As a demonstration, we outline two phenomena related to the challenges of interdisciplinary work, followed by an example of intellectual influence on human-inspired AI that emerged from qualitative interviews. ## Methodology This study has used a grounded theory approach to conception, data collection, and analysis. Aligned with grounded theory methodologies, we began with a broad interest rather than a hypothesis (Qureshi and Unlu, 2020); prioritized inductive findings from primary qualitative research (Glaser and Strauss, 1967); and participated collaboratively in transcription, line-by-line coding, memoing, focused coding, and forming early-stage conceptual categories (Wiener, 2007, p. 301; Charmaz, 2014). We began with purposive sampling of human-inspired CC and AI researchers. We used interviewees' publications to assess their relevance to study aims, and proceeded via snowball sampling. In one-hour-long semi-structured interviews, we asked participants how they defined the hu
2305.05124
Lifespan estimates for semilinear damped wave equation in a two-dimensional exterior domain
Lifespan estimates for semilinear damped wave equations of the form $\partial_t^2u-\Delta u+\partial_tu=|u|^p$ in a two-dimensional exterior domain endowed with the Dirichlet boundary condition are dealt with. For the critical case of the semilinear heat equation $\partial_tv-\Delta v=v^2$ with the Dirichlet boundary condition and the initial condition $v(0)=\varepsilon f$, the corresponding lifespan can be estimated from below and above by $\exp(\exp(C\varepsilon^{-1}))$ with different constants $C$. This paper clarifies that the same estimates hold even for the critical semilinear damped wave equation in the exterior of the unit ball under the restriction of radial symmetry. To achieve this result, a new technique to control the $L^1$-type norm and a new Gagliardo--Nirenberg type estimate with logarithmic weight are introduced.
Masahiro Ikeda, Motohiro Sobajima, Koichi Taniguchi, Yuta Wakasugi
2023-05-09T01:49:26Z
http://arxiv.org/abs/2305.05124v1
# Lifespan estimates for semilinear damped wave equation in a two-dimensional exterior domain ###### Abstract Lifespan estimates for semilinear damped wave equations of the form \(\partial_{t}^{2}u-\Delta u+\partial_{t}u=|u|^{p}\) in a two-dimensional exterior domain endowed with the Dirichlet boundary condition are dealt with. For the critical case of the semilinear heat equation \(\partial_{t}v-\Delta v=v^{2}\) with the Dirichlet boundary condition and the initial condition \(v(0)=\varepsilon f\), the corresponding lifespan can be estimated from below and above by \(\exp(\exp(C\varepsilon^{-1}))\) with different constants \(C\). This paper clarifies that the same estimates hold even for the critical semilinear damped wave equation in the exterior of the unit ball under the restriction of radial symmetry. To achieve this result, a new technique to control the \(L^{1}\)-type norm and a new Gagliardo-Nirenberg type estimate with logarithmic weight are introduced. _Mathematics Subject Classification_ (2020): Primary: 35L20, Secondary: 35L71. _Key words and phrases_: Damped wave equations, two-dimensional exterior problems, lifespan estimates. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Abstract version of Matsumura estimates * 2.2 Dirichlet heat semigroup on the exterior domain \(B^{c}\) * 2.3 Some functional inequalities * 3 Linear decay estimates for \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) * 3.1 A Matsumura estimate with logarithmic weight * 3.2 A positivity-preserving property * 3.3 An \(L^{1}_{d\mu}\)-estimate for \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) * 4 Estimates for semilinear problem * 4.1 A Gagliardo-Nirenberg type inequality with logarithmic weight * 4.2 Lower bound for lifespan of semilinear problem * 5 ## 1 Introduction In this paper, we consider the initial-boundary value problem of the semilinear damped wave equation in the exterior of the two-dimensional closed unit ball \(B=\{x\in\mathbb{R}^{2}\;;\;|x|\leq 1\}\), that is, \[\begin{cases}\partial_{t}^{2}u(x,t)-\Delta u(x,t)+\partial_{t}u(x,t)=|u(x,t)|^{p}&\text{in }B^{c}\times(0,T),\\ u(x,t)=0&\text{on }\partial B^{c}\times(0,T),\\ (u,\partial_{t}u)(x,0)=(0,\varepsilon g(x)),&\text{in }B^{c}.\end{cases} \tag{1.1}\] Here \(p>1\) indicates the structure of the nonlinear term. The function \(g:B^{c}\to\mathbb{R}\) is given and \(u:B^{c}\times[0,T)\to\mathbb{R}\) is unknown. The constant \(\varepsilon>0\) is the parameter describing the smallness of the initial data. Our interest is the behavior of solutions to (1.1) with small initial data. For the semilinear heat equation \[\begin{cases}\partial_{t}v(x,t)-\Delta v(x,t)=v(x,t)^{p}&\text{in }\mathbb{R}^{N}\times(0,T),\\ v(x,0)=f(x)\geq 0,&\text{in }\mathbb{R}^{N},\end{cases} \tag{1.2}\] since the pioneering work [4] by Fujita, there have been many papers dealing with the existence/nonexistence of global solutions to (1.2) (see Quittner-Souplet [17]). Nowadays, the exponent \(p_{F}(N)=1+\frac{2}{N}\) is well known as the threshold dividing the situations of existence/nonexistence of nonnegative global solutions, namely, * if \(1<p\leq p_{F}(N)\), then (1.2) does not possess non-trivial global solutions; * if \(p>p_{F}(N)\), then (1.2) possesses a non-trivial global solution.
A similar phenomenon also occurs for the semilinear damped wave equation \[\begin{cases}\partial_{t}^{2}u(x,t)-\Delta u(x,t)+\partial_{t}u(x,t)=|u(x,t)|^{p}&\text{in }\mathbb{R}^{N}\times(0,T),\\ (u,\partial_{t}u)(x,0)=(u_{0}(x),u_{1}(x)),&\text{in }\mathbb{R}^{N}.\end{cases} \tag{1.3}\] The pioneering work for the problem (1.3) is the paper [12] by Matsumura, via the analysis of the profile of linear solutions through the following inequalities \[\|\partial_{t}^{k}\partial_{x}^{\alpha}u(t)\|_{L^{\infty}(\mathbb{R}^{N})} \leq C(1+t)^{-\frac{N}{2}-k-\frac{|\alpha|}{2}}(\|u_{0}\|_{H^{m+1}\cap L^{1}(\mathbb{R}^{N})}+\|u_{1}\|_{H^{m}\cap L^{1}(\mathbb{R}^{N})}), \tag{1.4}\] \[\|\partial_{t}^{k}\partial_{x}^{\alpha}u(t)\|_{L^{2}(\mathbb{R}^{N})} \leq C(1+t)^{-\frac{N}{4}-k-\frac{|\alpha|}{2}}(\|u_{0}\|_{H^{\widetilde{m}+1}\cap L^{1}(\mathbb{R}^{N})}+\|u_{1}\|_{H^{\widetilde{m}}\cap L^{1}(\mathbb{R}^{N})}) \tag{1.5}\] (with \(m=[\frac{N}{2}]+k+|\alpha|\) and \(\widetilde{m}=k+|\alpha|-1\)), which are the so-called "Matsumura estimates". By Todorova-Yordanov [20] and Zhang [21], (a rough description of) the situation of the existence/nonexistence of non-trivial (small) global solutions to (1.3) was clarified as follows: * if \(1<p\leq p_{F}(N)\) and \(\int_{\mathbb{R}^{N}}(u_{0}+u_{1})\,dx>0\), then (1.3) does not possess non-trivial global solutions (a kind of smallness does not provide global solutions); * if \(p>p_{F}(N)\), then (1.3) possesses a non-trivial global solution. After that, the precise estimates of the lifespan \(T_{\varepsilon}\) (maximal existence time) of solutions to (1.3) with the small initial data \((\varepsilon f,\varepsilon g)\) became the subject of interest. This was first discussed in Li-Zhou [11], and by Lai-Zhou [10] the sharp lifespan estimates were clarified as follows: for sufficiently small \(\varepsilon>0\), \[\begin{cases}c_{p}\varepsilon^{-(\frac{1}{p-1}-\frac{N}{2})^{-1}}\leq T_{\varepsilon}\leq C_{p}\varepsilon^{-(\frac{1}{p-1}-\frac{N}{2})^{-1}}&\text{ if }1<p<p_{F}(N),\\ \exp(c_{p}\varepsilon^{-(p-1)})\leq T_{\varepsilon}\leq\exp(C_{p}\varepsilon^{-(p-1)})&\text{ if }p=p_{F}(N)\end{cases} \tag{1.6}\] for some positive constants \(c_{p}\) and \(C_{p}\) (independent of \(\varepsilon\)); note that these estimates are exactly the same as the ones for the blowup time of solutions to the semilinear heat equation (1.2) with the initial condition \(v(x,0)=\varepsilon f(x)\) (having the small parameter \(\varepsilon\)). In the case of the problem of the semilinear heat equation in an \(N\)-dimensional exterior domain \(\Omega\) with the Dirichlet boundary condition: \[\begin{cases}\partial_{t}v(x,t)-\Delta v(x,t)=v(x,t)^{p}&\text{in }\Omega\times(0,T),\\ v(x,t)=0&\text{on }\partial\Omega\times(0,T),\\ v(x,0)=f(x)\geq 0,&\text{in }\Omega,\end{cases} \tag{1.7}\] the situation is almost the same as in the case of the whole space when \(N\geq 3\). Actually, for the linear case, Grigor'yan and Saloff-Coste [5] discussed the asymptotic behavior of the Dirichlet heat kernel for the exterior of a compact set in Riemannian manifolds. It is shown that the Dirichlet heat kernel for an \(N\)-dimensional exterior domain (\(N\geq 3\)) behaves like the one for \(\mathbb{R}^{N}\) in the far field. In contrast, the Dirichlet heat kernel in two-dimensional exterior domains cannot be approximated by the one for \(\mathbb{R}^{2}\). This fact can be explained by the transience (for \(N\geq 3\)) and recurrence (for \(N=2\)) of the Brownian motion in \(\mathbb{R}^{N}\).
This significant difference reflects the difficulty of the two-dimensional exterior problem. By using Kaplan's method via the Dirichlet heat kernel found in [5], Pinsky [15] tried to draw the complete picture of the existence/nonexistence of global solutions to (1.7); however, the proof for the critical case \(p=p_{F}(2)=2\) seems to have a gap. Later, nonexistence of global solutions for the case \(p=2\) was proved in Ikeda-Sobajima [6] via a sharpened test function method, with a distinctive shape of the (double exponential type) lifespan estimate \[T_{\varepsilon}\leq\exp(\exp(C\varepsilon^{-1})) \tag{1.8}\] for the solution with the initial condition \(v(x,0)=\varepsilon f(x)\). One can prove that this is actually the sharp lifespan estimate via the supersolution-subsolution method (explained in [17, Section 20]) with a supersolution \[U(x,t)=\alpha(t)e^{t\Delta_{\Omega}}f,\quad\alpha(t)=\varepsilon\left(1-(p-1)\varepsilon^{p-1}\int_{0}^{t}\|e^{s\Delta_{\Omega}}f\|_{L^{\infty}(\Omega)}^{p-1}\,ds\right)^{-\frac{1}{p-1}}\] together with the \(L^{\infty}\)-estimate of the Dirichlet heat semigroup \(e^{t\Delta_{\Omega}}\): \[\|e^{t\Delta_{\Omega}}f\|_{L^{\infty}(\Omega)}\leq Ch(t)\big{(}\|(\log|x|)f\|_{L^{1}(\Omega)}+\|f\|_{L^{\infty}(\Omega)}\big{)},\quad t>0 \tag{1.9}\] with the decay rate involving the logarithmic function \[h(t)=\frac{1}{(1+t)(1+\log(1+t))}. \tag{1.10}\] This is valid only in the case of two-dimensional (general) exterior domains. For the semilinear damped wave equation (1.1), the existence/nonexistence of solutions to (1.1) has also been dealt with in the literature (e.g., Ikehata [7] for the existence and Ogawa-Takeda [13] for the nonexistence). Although the effect of the recurrence of the Brownian motion in \(\mathbb{R}^{2}\) could also appear in the analysis of the damped wave equation, studies from such a viewpoint are few. Only in [6] can one find that the lifespan estimate of the solution \(u\) to the semilinear damped wave equation (1.1) has the same upper bound (as in (1.8)) as in the case of the semilinear heat equation. In this connection, the question about the sharpness of this lifespan estimate naturally arises. The purpose of the present paper is to address this problem, that is, to clarify the sharpness of the (double exponential type) lifespan estimate for the two-dimensional exterior problem of the semilinear damped wave equation (1.1). To state the result, we clarify the definition of solutions to (1.1) as follows. **Definition 1.1**.: For a general open set \(\Omega\) in \(\mathbb{R}^{N}\) (with a smooth boundary), we denote by \(\Delta_{\Omega}\) the Laplacian endowed with the domain \(D(\Delta_{\Omega})=H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) (which describes the Laplace operator \(\Delta\) with the Dirichlet boundary condition). Then we say that \(u:B^{c}\times[0,T)\to\mathbb{R}\) is a weak solution of (1.1) in \((0,T)\) with the initial condition \((u,\partial_{t}u)(0)=(u_{0},u_{1})\in H^{1}_{0}(B^{c})\times L^{2}(B^{c})\) if \(u\in C^{1}([0,T);L^{2}(B^{c}))\cap C([0,T);H^{1}_{0}(B^{c}))\) and \[\big{(}u(t),\partial_{t}u(t)\big{)}=e^{t\mathcal{L}_{B^{c}}}(u_{0},u_{1})+\int_{0}^{t}e^{(t-s)\mathcal{L}_{B^{c}}}(0,|u(s)|^{p})\,ds,\quad t\in(0,T),\] where \((e^{t\mathcal{L}_{B^{c}}})_{t\geq 0}\) is the \(C_{0}\)-semigroup on \(\mathcal{H}=H^{1}_{0}(B^{c})\times L^{2}(B^{c})\) generated by \(\mathcal{L}_{B^{c}}(u,v)=(v,\Delta_{B^{c}}u-v)\) with the domain \(D(\mathcal{L}_{B^{c}})=D(\Delta_{B^{c}})\times H^{1}_{0}(B^{c})\).
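For orientation, the Duhamel formula in Definition 1.1 is simply the mild-solution formulation of the first-order system associated with (1.1): setting \(v=\partial_{t}u\), \[\frac{d}{dt}\begin{pmatrix}u\\ v\end{pmatrix}=\mathcal{L}_{B^{c}}\begin{pmatrix}u\\ v\end{pmatrix}+\begin{pmatrix}0\\ |u|^{p}\end{pmatrix},\qquad\mathcal{L}_{B^{c}}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}v\\ \Delta_{B^{c}}u-v\end{pmatrix},\] so that the formula in the definition is the integrated (variation-of-constants) form of this system.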
_Remark 1.1_.: It turns out that the following representation of the solution \(u\) of (1.1) with initial condition \((u,\partial_{t}u)(0)=(u_{0},u_{1})\) is also valid: \[u(t)=\partial_{t}S(t)u_{0}+S(t)(u_{0}+u_{1})+\int_{0}^{t}S(t-s)|u(s)|^{p}\,ds,\] where \(S(t)g:=P_{1}e^{t\mathcal{L}_{B^{c}}}(0,g)\) with the projection \(P_{1}(u,v)=u\). Our argument in the present paper relies on this representation. Existence and uniqueness of solutions to (1.1) and also the blowup alternative are well known (see e.g., Ikehata [7]). **Proposition 1.1**.: _The following assertions hold:_ * _For every_ \((u_{0},u_{1})\in H^{1}_{0}(B^{c})\times L^{2}(B^{c})\)_, there exist a positive constant_ \(T\) _and_ \(u\in C^{1}([0,T);L^{2}(B^{c}))\cap C([0,T);H^{1}_{0}(B^{c}))\) _such that_ \(u\) _is a unique weak solution of (_1.1_) in_ \((0,T)\) _with the initial condition_ \((u,\partial_{t}u)(0)=(u_{0},u_{1})\)_._ * _The weak solution_ \(u\) _of (_1.1_) in a bounded interval_ \((0,T)\) _cannot be extended to a solution in a wider interval if and only if_ \[\lim_{t\to T}\Big{(}\|\partial_{t}u(t)\|_{L^{2}(B^{c})}+\|\nabla u(t)\|_{L^{2}(B^{c})}+\|u(t)\|_{L^{2}(B^{c})}\Big{)}=+\infty.\] (1.11) By virtue of Proposition 1.1, we can define the lifespan of solutions to (1.1). **Definition 1.2**.: Define the lifespan \(T_{\max}(u_{0},u_{1})\in(0,\infty]\) as the maximal existence time of weak solutions to (1.1) with the initial condition \((u,\partial_{t}u)(0)=(u_{0},u_{1})\). Namely, \[T_{\max}(u_{0},u_{1})=\sup\big{\{}T>0\;;\;\text{(1.1) has a weak solution in }(0,T)\big{\}}.\] In Ikeda-Sobajima [6], blowup of solutions to (1.1) with small initial data is proved under the initial condition \((u,\partial_{t}u)(x,0)=(\varepsilon f,\varepsilon g)\) with \[\int_{B^{c}}\big{(}f(x)+g(x)\big{)}\log|x|\,dx>0.\] We should point out that the weight function \(\log|x|\) has been chosen as the positive harmonic function satisfying the Dirichlet boundary condition. To reflect the above situation and also the \(L^{\infty}\)-decay estimate (1.9) for \(e^{t\Delta_{B^{c}}}\) in our consideration in the present paper, we need to introduce the following \(L^{p}\)-spaces with a weighted measure involving \(\log|x|\). **Definition 1.3**.: Define the measure \(d\mu=(1+\log|x|)\,dx\) and, for \(1\leq p<\infty\), \[L^{p}_{d\mu}=\Big{\{}f\in L^{p}(B^{c})\;;\;\|f\|_{L^{p}_{d\mu}}<+\infty\Big{\}}\,,\quad\|f\|_{L^{p}_{d\mu}}=\left(\int_{B^{c}}|f(x)|^{p}\,d\mu\right)^{\frac{1}{p}}.\] Now we are in a position to state our result on the lower bound for the lifespan of solutions to (1.1) under the radially symmetric setting.
The following assertion is formulated in the subspace of radially symmetric functions in \(L^{2}(B^{c})\): \[L^{2}_{\mathrm{rad}}=\{f\in L^{2}(B^{c})\;;\;f\text{ is radially symmetric}\}.\] **Theorem 1.2**.: _If \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\), then the following assertions hold:_ **(i)**: _If_ \(1<p\leq 2\)_, then there exist positive constants_ \(\varepsilon_{0}>0\) _and_ \(c>0\) _such that for_ \(\varepsilon\in(0,\varepsilon_{0}]\)_,_ \[T_{\max}(0,\varepsilon g)\geq\begin{cases}c\left(\frac{1}{\varepsilon}\log\frac{1}{\varepsilon}\right)^{\frac{p-1}{2-p}}&\text{if }1<p<2,\\ \exp(\exp(c\varepsilon^{-1}))&\text{if }p=2.\end{cases}\] **(ii)**: _If_ \(2<p<\infty\)_, then there exist positive constants_ \(\delta\) _and_ \(C\) _such that if_ \(\|g\|_{L^{2}(B^{c})}+\|g\|_{L^{1}_{d\mu}}\leq\delta\)_, then_ \(T_{\max}(0,g)=+\infty\) _with_ \[\|u(t)\|_{L^{1}_{d\mu}} \leq C\delta,\quad t>0,\] \[\|\nabla u(t)\|_{L^{2}(B^{c})} \leq\frac{C\delta}{(1+t)(1+\log(1+t))},\quad t>0,\] \[\|\partial_{t}u(t)\|_{L^{2}(B^{c})} \leq\frac{C\delta}{(1+t)^{\frac{3}{2}}(1+\log(1+t))},\quad t>0.\] The lower bounds for the lifespan in Theorem 1.2 **(i)** are derived from the following relation: \[c_{p}\leq\varepsilon^{p-1}\int_{0}^{T_{\max}(0,\varepsilon g)}h(t)^{p-1}\,dt\] for some small constant \(c_{p}\), where \(h(t)\) is given in (1.10). Combining Theorem 1.2 **(i)** with the upper bound of the lifespan given in Ikeda-Sobajima [6], we obtain the following: **Corollary 1.3**.: _Let \(g\in L^{2}_{\rm rad}\cap L^{1}_{d\mu}\) and let \(T_{\max}\) be as in Definition 1.2. If \(g\geq 0\) and \(g\not\equiv 0\), then for every \(1<p\leq 2\), one has_ \[0<\liminf_{\varepsilon\to 0}\Big{(}\varepsilon^{p-1}\int_{0}^{T_{\max}(0,\varepsilon g)}h(t)^{p-1}\,dt\Big{)}\leq\limsup_{\varepsilon\to 0}\Big{(}\varepsilon^{p-1}\int_{0}^{T_{\max}(0,\varepsilon g)}h(t)^{p-1}\,dt\Big{)}<+\infty.\] _Remark 1.2_.: We do not know whether the quantity \[\varepsilon^{p-1}\int_{0}^{T_{\max}(0,\varepsilon g)}h(t)^{p-1}\,dt\] converges to a constant as \(\varepsilon\to 0\) or not. As a counterpart concerning global existence, we also have an assertion on the total energy decay. **Corollary 1.4**.: _Let \(2<p<\infty\) and let \(u\) be the unique global solution of (1.1) obtained in Theorem 1.2. Then one has the energy decay estimate for \(u\):_ \[\int_{B^{c}}\Big{(}|\nabla u(t)|^{2}+(\partial_{t}u(t))^{2}\Big{)}\,dx\leq\frac{C^{2}\delta^{2}}{(1+t)^{2}(1+\log(1+t))^{2}},\quad t>0.\] _Remark 1.3_.: Ono [14] obtained the energy decay estimate \[\int_{B^{c}}\Big{(}|\nabla u(t)|^{2}+(\partial_{t}u(t))^{2}\Big{)}\,dx\leq\frac{C_{\delta}}{(1+t)^{2-\delta}},\quad t>0\] for the solution \(u\) of the linear damped wave equation with initial data \((u,\partial_{t}u)(0)=(u_{0},u_{1})\in[H^{1}_{0}\cap L^{1}(B^{c})]\times[L^{2}\cap L^{1}(B^{c})]\). After that, Ikehata [7] removed the loss \(\delta\) in the decay rate by assuming \(|x|\log|x|(u_{0}+u_{1})\in L^{2}(B^{c})\). The decay estimate in Corollary 1.4 (even though it concerns the semilinear problem) is faster than these, under the different assumption \((u,\partial_{t}u)(0)=(0,g)\) with \(g\in L^{2}_{\rm rad}\cap L^{1}_{d\mu}\). Incidentally, one can find that the decay of the total energy in Corollary 1.4 is faster than the local energy decay proved in Dan-Shibata [2]. Now we shall describe the strategy of the present paper. Here we only focus our attention on the two-dimensional case.
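Before doing so, let us record the elementary computation behind the double-exponential rate in Theorem 1.2 **(i)**: for \(p=2\), by (1.10), \[\int_{0}^{T}h(t)\,dt=\int_{0}^{T}\frac{dt}{(1+t)(1+\log(1+t))}=\log\big{(}1+\log(1+T)\big{)},\] so the relation \(c_{2}\leq\varepsilon\int_{0}^{T_{\max}}h(t)\,dt\) gives \(1+\log(1+T_{\max})\geq e^{c_{2}/\varepsilon}\), that is, \(T_{\max}(0,\varepsilon g)\geq\exp(e^{c_{2}/\varepsilon}-1)-1\), which is a bound of the form \(\exp(\exp(c\varepsilon^{-1}))\) with a slightly smaller constant \(c\).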
If we move to the semilinear heat equation (of course in an exterior domain), we can employ the supersolution-subsolution method as explained before. However, it seems impossible to apply this argument to the hyperbolic equation (1.1). To compensate for this defect, in the case of the Cauchy problem of the semilinear damped wave equation (1.3), the positivity-preserving property of the linear solution map \(S_{*}(t)=P_{1}e^{t\mathcal{L}_{\mathbb{R}^{2}}}\) seems useful to control the \(L^{1}\)-type norm of the solution \(S_{*}(t)g\). Together with the basic energy functional of the form \(\|\partial_{t}u\|_{L^{2}(\mathbb{R}^{2})}^{2}+\|\nabla u\|_{L^{2}(\mathbb{R}^{2})}^{2}\), this consideration enables us to control the \(L^{1}\)-norm of the semilinear term \(|u|^{p}\) via the Gagliardo-Nirenberg inequality \[\|f\|_{L^{p}(\mathbb{R}^{2})}\leq C_{p}\|\nabla f\|_{L^{2}(\mathbb{R}^{2})}^{1-\frac{1}{p}}\|f\|_{L^{1}(\mathbb{R}^{2})}^{\frac{1}{p}},\quad f\in H^{1}(\mathbb{R}^{2})\cap L^{1}(\mathbb{R}^{2})\] (when \(p=2\), this is called the Nash inequality). We emphasize that the \(L^{1}\)-norm is a conserved quantity of the linear heat semigroup \(e^{t\Delta_{\mathbb{R}^{2}}}\) (for nonnegative initial data), which represents the asymptotic behavior of solutions to (1.2). Employing this procedure with the Matsumura estimate (1.5), we can reach the sharp lower bound for the lifespan of the solution to (1.3) with the initial condition \((u,\partial_{t}u)(0)=(0,\varepsilon g)\) with \(g\in L^{2}(\mathbb{R}^{2})\cap L^{1}(\mathbb{R}^{2})\). One of the novelties of the present paper is to introduce this kind of strategy in the analysis of the semilinear damped wave equation. Let us go back to the target problem (1.1). In this case, a conserved quantity for the Dirichlet heat semigroup \(e^{t\Delta_{B^{c}}}\) can be chosen as \[I[f]=\int_{B^{c}}f(x)\log|x|\,dx,\] where the weight \(\log|x|\) is chosen as a positive harmonic function satisfying the Dirichlet boundary condition. Then it turns out that the behavior of the above quantity for the problem (1.1) can be tracked by the ordinary differential equation \[\left(\frac{d^{2}}{dt^{2}}+\frac{d}{dt}\right)\int_{B^{c}}u(x,t)\log|x|\,dx=\int_{B^{c}}|u(x,t)|^{p}\log|x|\,dx.\] This suggests that the following version of the Gagliardo-Nirenberg type inequality seems reasonable: \[\int_{B^{c}}|f|^{p}\log|x|\,dx\leq\widetilde{C}_{p}\|\nabla f\|_{L^{2}(B^{c})}^{p-1}\int_{B^{c}}|f|\log|x|\,dx,\quad f\in H^{1}_{0}(B^{c})\cap L^{1}_{d\mu},\] which has not been discussed so far. To carry out this strategy, the positivity-preserving property of the linear solution map \(S(t)\) is essential. To justify this property, we additionally assume radial symmetry. Moreover, to reach the sharp lifespan estimate, we also use the corresponding Matsumura type estimate, which should have a logarithmic decay factor as in (1.9). This can be proved via the diffusion phenomenon, together with the decay estimate for the Dirichlet heat semigroup \(e^{t\Delta_{B^{c}}}\). The present paper is organized as follows. In section 2, we collect the important tools of this paper. More precisely, an abstract version of the Matsumura estimates, decay estimates for the Dirichlet heat semigroup \(e^{t\Delta_{B^{c}}}\) (an alternative proof is written in the Appendix), the usual Gagliardo-Nirenberg estimates and the critical Hardy inequality are listed. In section 3, the linear solution \(S(t)g\) is analysed.
Here we prove a modified Matsumura estimate with a decay rate involving a logarithmic factor, the positivity-preserving property, and an \(L^{1}_{d\mu}\)-estimate for \(S(t)g\). Section 4 is devoted to the proof of the lifespan estimate from below. A Gagliardo-Nirenberg inequality with logarithmic weight is proved by the use of the critical Hardy inequality at the beginning of Section 4. ## 2 Preliminaries In this section we collect several important tools to analyse the target problem (1.1). ### Abstract version of Matsumura estimates We first state an abstract version of the Matsumura estimates which is valid for the second order differential equation \[\begin{cases}u^{\prime\prime}(t)+Au(t)+u^{\prime}(t)=0,\quad t>0,\\ (u,u^{\prime})(0)=(u_{0},u_{1})\end{cases} \tag{2.1}\] in a Hilbert space \(H\). Here \(A\) is a nonnegative selfadjoint operator in \(H\) endowed with domain \(D(A)\). Existence and uniqueness of solutions to (2.1) are verified via the well-known Hille-Yosida theorem. Here we denote by \(S_{A}(t)g\) the solution \(u\in C^{1}([0,\infty);H)\cap C([0,\infty);D(A^{1/2}))\) of (2.1) with the initial condition \((u,u^{\prime})(0)=(0,g)\). Note that by using \(S_{A}(t)\), one finds that the solution of (2.1) has the representation \[u(t)=\frac{d}{dt}[S_{A}(t)u_{0}]+S_{A}(t)[u_{0}+u_{1}].\] The following lemma provides the asymptotic profile of \(S_{A}(t)g\) (in the sense of the energy functional), which can be described by using the \(C_{0}\)-semigroup \((e^{-tA})_{t\geq 0}\) on \(H\) generated by \(-A\). **Lemma 2.1** (Radu-Todorova-Yordanov [18]).: _The following assertions hold:_ * _There exists a positive constant_ \(C_{\mathrm{M},1}\) _such that for every_ \(g\in H\) _and_ \(t\geq 1\)_,_ \[\left\|A^{1/2}(S_{A}(t)-e^{-tA})g\right\|_{H}\leq C_{\mathrm{M},1}\Big{(}t^{-\frac{3}{2}}\|e^{-\frac{t}{2}A}g\|_{H}+e^{-\frac{t}{16}}\|(A^{1/2}+1)^{-1}A^{1/2}g\|_{H}\Big{)}.\] * _There exists a positive constant_ \(C_{\mathrm{M},2}\) _such that for every_ \(g\in H\) _and_ \(t\geq 1\)_,_ \[\left\|\frac{d}{dt}(S_{A}(t)-e^{-tA})g\right\|_{H}\leq C_{\mathrm{M},2}t^{-2}\Big{(}\|g\|_{H}+e^{-\frac{t}{4}}\|g\|_{H}\Big{)}.\] _Remark 2.1_.: The assertion **(ii)** in Lemma 2.1 is not written explicitly in [18], but the same strategy also provides the estimate for the derivative in \(t\). In our situation, we employ Lemma 2.1 with the (negative) Dirichlet Laplacian \(-\Delta_{B^{c}}\) in the Hilbert space \(L^{2}(B^{c})\). The corresponding linear damped wave equation is as follows: \[\begin{cases}\partial_{t}^{2}u(x,t)-\Delta u(x,t)+\partial_{t}u(x,t)=0&\text{in }B^{c}\times(0,\infty),\\ u(x,t)=0&\text{on }\partial B^{c}\times(0,\infty),\\ (u,\partial_{t}u)(x,0)=(0,g(x)),&\text{in }B^{c}.\end{cases} \tag{2.2}\] The classical energy identity for the solution \(S(t)g=P_{1}e^{t\mathcal{L}_{B^{c}}}(0,g)\) (given in Remark 1.1) provides the following basic inequality. **Lemma 2.2**.: _For every \(g\in L^{2}(B^{c})\), one has_ \[\|\nabla S(t)g\|_{L^{2}(B^{c})}^{2}+\|\partial_{t}S(t)g\|_{L^{2}(B^{c})}^{2}\leq\|g\|_{L^{2}(B^{c})}^{2},\quad t\geq 0.\] The following is the result of [18] applied to the case (2.2).
**Lemma 2.3**.: _There exist positive constants \(C^{\prime}_{\mathrm{M},1}\) and \(C^{\prime}_{\mathrm{M},2}\) such that for every \(g\in L^{2}(B^{c})\) and \(t\geq 1\),_ \[\|\nabla(S(t)-e^{t\Delta_{B^{c}}})g\|_{L^{2}(B^{c})} \leq C^{\prime}_{\mathrm{M},1}t^{-\frac{3}{2}}\|g\|_{L^{2}(B^{c})},\] \[\|\partial_{t}(S(t)-e^{t\Delta_{B^{c}}})g\|_{L^{2}(B^{c})} \leq C^{\prime}_{\mathrm{M},2}t^{-2}\|g\|_{L^{2}(B^{c})}.\]

### Dirichlet heat semigroup on the exterior domain \(B^{c}\)

Here we state the \(L^{p}\)-\(L^{q}\) type estimates for \(e^{t\Delta_{B^{c}}}\) with logarithmic weight. Although they can be obtained via the heat kernel estimate in [5] (as explained in the Introduction), in the Appendix we give an alternative proof based on techniques from the analysis of parabolic partial differential equations. In the discussion of the present paper, the following estimates are crucial.

**Lemma 2.4**.: _For every \(q\in[1,2]\), there exists a positive constant \(C_{\mathrm{H},q}>0\) such that if \(f\in L^{q}_{d\mu}\), then for every \(t>0\), one has \(e^{t\Delta_{B^{c}}}f\in L^{2}(B^{c})\) with_ \[\|e^{t\Delta_{B^{c}}}f\|_{L^{2}(B^{c})}\leq\frac{C_{\mathrm{H},q}}{t^{\frac{1}{q}-\frac{1}{2}}(1+\log(1+t))^{\frac{1}{q}}}\|f\|_{L^{q}_{d\mu}}.\]

### Some functional inequalities

In the present paper, the following form of the Gagliardo-Nirenberg inequalities will be used (cf. Friedman [3]); note that the following inequality with \(q=2\) is also called the Nash inequality.

**Lemma 2.5**.: _For every \(1<q<\infty\), one has \(H^{1}_{0}(B^{c})\cap L^{1}(B^{c})\subset L^{q}(B^{c})\) and there exists a positive constant \(C_{\mathrm{GN},q}\) such that_ \[\|f\|_{L^{q}(B^{c})}\leq C_{\mathrm{GN},q}\|\nabla f\|_{L^{2}(B^{c})}^{1-\frac{1}{q}}\|f\|_{L^{1}(B^{c})}^{\frac{1}{q}},\quad\forall f\in H^{1}_{0}(B^{c})\cap L^{1}(B^{c}).\]

To elicit the effect from the boundary (in two dimensions), we also use the critical case of the Hardy inequality (cf. Ladyzhenskaya [9] and also Dan-Shibata [2]). For the reader's convenience, we give a short proof.

**Lemma 2.6**.: _If \(f\in H^{1}_{0}(B^{c})\), then_ \[\frac{1}{4}\int_{B^{c}}\frac{f^{2}}{|x|^{2}(1+\log|x|)^{2}}\,dx\leq\int_{B^{c}}|\nabla f|^{2}\,dx.\]

Proof.: By density, it suffices to discuss the estimate for \(f\in C^{\infty}_{0}(B^{c})\). To simplify the notation, we set \(H(x)=1+\log|x|\), which is positive and harmonic in \(B^{c}\). Then by using the transform \(g=H^{-\frac{1}{2}}f\in C^{\infty}_{0}(B^{c})\) and integration by parts, we can calculate as follows: \[\int_{B^{c}}|\nabla f|^{2}\,dx =\int_{B^{c}}\left(|\nabla g|^{2}H+g\nabla g\cdot\nabla H+\frac{|\nabla H|^{2}}{4H}g^{2}\right)dx\] \[=\int_{B^{c}}|\nabla g|^{2}H\,dx+\int_{B^{c}}\left(-\frac{\Delta H}{2H}+\frac{|\nabla H|^{2}}{4H^{2}}\right)f^{2}\,dx.\] Since \(\Delta H=0\) and \(|\nabla H|=|x|^{-1}\) in \(B^{c}\), the second integral equals \(\frac{1}{4}\int_{B^{c}}\frac{f^{2}}{|x|^{2}(1+\log|x|)^{2}}\,dx\), while the first integral is nonnegative. This gives the desired inequality.

## 3 Linear decay estimates for \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\)

In this section we study the decay property of the solution \(u(t)=S(t)g\) of the linear damped wave equation \[\begin{cases}\partial_{t}^{2}u(x,t)-\Delta u(x,t)+\partial_{t}u(x,t)=0&\text{in }B^{c}\times(0,T),\\ u(x,t)=0&\text{on }\partial B^{c}\times(0,T),\\ (u,\partial_{t}u)(x,0)=(0,g(x)),&\text{in }B^{c},\end{cases} \tag{3.1}\] under the additional integrability condition \(g\in L^{1}_{d\mu}\). Under this condition, we can see how a compact obstacle affects the behavior of solutions to the damped wave equation. The results differ from those for the usual damped wave equation in the whole space.
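Before proceeding, we record the elementary ODE computation that underlies the \(L^{1}_{d\mu}\)-bounds of this section and of Section 4 (see Lemmas 3.3 and 4.4 below); it is implicit in, but not displayed by, the arguments there, so we write out a worked version here for orientation:
\[w^{\prime\prime}(t)+w^{\prime}(t)=F(t),\quad w(0)=0,\ w^{\prime}(0)=a\quad\Longrightarrow\quad w(t)=(1-e^{-t})a+\int_{0}^{t}\big{(}1-e^{-(t-s)}\big{)}F(s)\,ds,\]
obtained by integrating \((e^{t}w^{\prime}(t))^{\prime}=e^{t}F(t)\) twice. In particular, \(F\leq 0\) yields \(w(t)\leq(1-e^{-t})a\), which is exactly the shape of the bound in Lemma 3.3, while the crude bound \(1-e^{-(t-s)}\leq 1\) gives \(w(t)\leq(1-e^{-t})a+\int_{0}^{t}\max\{F(s),0\}\,ds\), the shape of the bound in Lemma 4.4.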
### A Matsumura estimate with logarithmic weight

Here we state the decay estimate for solutions to the damped wave equation (3.1) with the initial condition \((u,\partial_{t}u)(0)=(0,g)\), where \(g\in L^{2}(B^{c})\cap L^{1}_{d\mu}\). We emphasize that, as an effect from the boundary, the following Matsumura type estimates have a decay rate involving the logarithmic function. This fact can be shown via the use of the \(L^{q}_{d\mu}\)-\(L^{2}\) estimates for the Dirichlet heat semigroup \(e^{t\Delta_{B^{c}}}\).

**Lemma 3.1**.: _For every \(q\in[1,2]\), there exist positive constants \(C^{\sharp}_{\mathrm{M},1,q}\) and \(C^{\sharp}_{\mathrm{M},2,q}\) such that if \(g\in L^{2}(B^{c})\cap L^{q}_{d\mu}\), then for every \(t>0\),_ \[\|\nabla S(t)g\|_{L^{2}(B^{c})} \leq C^{\sharp}_{\mathrm{M},1,q}h(t)^{\frac{1}{q}}(\|g\|_{L^{q}_{d\mu}}+\|g\|_{L^{2}(B^{c})}),\] \[\|\partial_{t}S(t)g\|_{L^{2}(B^{c})} \leq C^{\sharp}_{\mathrm{M},2,q}(1+t)^{-\frac{1}{2}}h(t)^{\frac{1}{q}}(\|g\|_{L^{q}_{d\mu}}+\|g\|_{L^{2}(B^{c})}).\]

Proof.: The case \(0<t\leq 1\) is obvious via Lemma 2.2. Let \(t\geq 1\) be arbitrary. Using Lemma 2.3 and writing \(s=t/2\), we see that \[\|\nabla S(t)g\|_{L^{2}(B^{c})} \leq\|\nabla e^{t\Delta_{B^{c}}}g\|_{L^{2}(B^{c})}+\|\nabla(S(t)-e^{t\Delta_{B^{c}}})g\|_{L^{2}(B^{c})}\] \[\leq\frac{1}{s^{\frac{1}{2}}}\|e^{s\Delta_{B^{c}}}g\|_{L^{2}(B^{c})}+\frac{C^{\prime}_{\mathrm{M},1}}{t^{\frac{3}{2}}}\|g\|_{L^{2}(B^{c})}\] \[\leq\frac{C_{\mathrm{H},q}}{s^{\frac{1}{q}}(1+\log(1+s))^{\frac{1}{q}}}\|g\|_{L^{q}_{d\mu}}+\frac{C^{\prime}_{\mathrm{M},1}}{t^{\frac{3}{2}}}\|g\|_{L^{2}(B^{c})}\] and also \[\|\partial_{t}S(t)g\|_{L^{2}(B^{c})} \leq\|\partial_{t}e^{t\Delta_{B^{c}}}g\|_{L^{2}(B^{c})}+\|\partial_{t}(S(t)-e^{t\Delta_{B^{c}}})g\|_{L^{2}(B^{c})}\] \[\leq\frac{1}{s}\|e^{s\Delta_{B^{c}}}g\|_{L^{2}(B^{c})}+\frac{C^{\prime}_{\mathrm{M},2}}{t^{2}}\|g\|_{L^{2}(B^{c})}\] \[\leq\frac{C_{\mathrm{H},q}}{s^{\frac{1}{2}+\frac{1}{q}}(1+\log(1+s))^{\frac{1}{q}}}\|g\|_{L^{q}_{d\mu}}+\frac{C^{\prime}_{\mathrm{M},2}}{t^{2}}\|g\|_{L^{2}(B^{c})}.\] These imply the desired estimates.

### A positivity-preserving property

In this subsection, we study a positivity-preserving property of the solution map \(S(t)\) for the linear damped wave equation (3.1) under radial symmetry.

**Proposition 3.2**.: _If \(g\in L^{2}_{\mathrm{rad}}\) is nonnegative, then the solution \(S(t)g\) of (3.1) is also radially symmetric and nonnegative for \(t>0\)._

Proof.: It is enough to show the assertion for functions belonging to \(D(\Delta_{B^{c}})=H^{2}(B^{c})\cap H^{1}_{0}(B^{c})\). Indeed, since the resolvent operator \((1-\frac{1}{n}\Delta_{B^{c}})^{-1}\) preserves nonnegativity and radial symmetry (for nonnegativity see e.g. Brezis [1, Section 9.7]; radial symmetry is a consequence of rotational invariance), if \(0\leq g\in L^{2}_{\mathrm{rad}}\), then for each \(n\in\mathbb{N}\), \(g_{n}=(1-\frac{1}{n}\Delta_{B^{c}})^{-1}g\in D(\Delta_{B^{c}})\) is also nonnegative and radially symmetric. If the assertion for \(D(\Delta_{B^{c}})\) holds, then we have the nonnegativity of \(S(t)g_{n}\) for \(n\in\mathbb{N}\).
Noting that \(g_{n}\to g\) in \(L^{2}(B^{c})\) as \(n\to\infty\) and recalling \(S(t)g_{n}=P_{1}e^{t\mathcal{L}_{B^{c}}}(0,g_{n})\) with the projection \(P_{1}(u,v)=u\), we see by the continuity of \(e^{t\mathcal{L}_{B^{c}}}\) in \(H^{1}_{0}(B^{c})\times L^{2}(B^{c})\) that \(S(t)g_{n}\to S(t)g\) in \(H^{1}_{0}(B^{c})\) as \(n\to\infty\), which implies the nonnegativity and the radial symmetry of \(S(t)g\).

Now we assume \(g\in D(\Delta_{B^{c}})=H^{2}(B^{c})\cap H^{1}_{0}(B^{c})\). Then, the corresponding solution \(u=S(t)g\) has the regularity \[u\in C([0,\infty);H^{3}(B^{c}))\cap C^{1}([0,\infty);H^{2}(B^{c}))\cap C^{2}([0,\infty);H^{1}(B^{c}))\] with the boundary condition \(u,\Delta u\in C([0,\infty);H^{1}_{0}(B^{c}))\), and satisfies the equation of (2.2) in \(H^{1}_{0}(B^{c})\). Since \(g\) is radially symmetric, so is \(u(\cdot,t)=S(t)g\) (as a consequence of rotational invariance) and hence, noting \(\Delta u=\partial_{r}^{2}u+\frac{1}{r}\partial_{r}u\), we can find that the new function \(U(r,t)=e^{t/2}r^{1/2}u(x,t)\) with \(r=|x|\) satisfies the equation \[\partial_{t}^{2}U-\partial_{r}^{2}U=\frac{1}{4}\left(\frac{1}{r^{2}}+1\right)U\quad\text{in }(1,\infty)\times(0,\infty)\] with the initial condition \(U(r,0)=0\) and \(\partial_{t}U(r,0)=g_{1}(r)=r^{1/2}g(r)\). Then, the above regularity of \(u\) shows \(U\in C^{2}([1,\infty)\times[0,\infty))\) with \(U(1,t)=\partial_{r}^{2}U(1,t)=0\), which justifies the following argument in the classical sense. By reflection about the boundary point \(r=1\), we extend \(g_{1}(\cdot)\) and \(U(\cdot,t)\) to odd functions. We set \(\widetilde{g}_{1}\) and \(\widetilde{U}\) as \[\widetilde{g}_{1}(y)=\begin{cases}g_{1}(1+y)&(y>0),\\ 0&(y=0),\\ -g_{1}(1-y)&(y<0),\end{cases}\quad\widetilde{U}(y,t)=\begin{cases}U(1+y,t)&(y>0,t>0),\\ 0&(y=0,t>0),\\ -U(1-y,t)&(y<0,t>0),\end{cases}\] which satisfy the initial value problem \[\begin{cases}\partial_{t}^{2}\widetilde{U}-\partial_{y}^{2}\widetilde{U}=m\widetilde{U}&\text{in }\mathbb{R}\times(0,\infty),\\ (\widetilde{U},\partial_{t}\widetilde{U})(y,0)=(0,\widetilde{g}_{1}(y))&\text{in }\mathbb{R}\times\{t=0\},\end{cases}\] where \(m(y)=\frac{1}{4}\big{(}\frac{1}{(|y|+1)^{2}}+1\big{)}\). Note that \(\widetilde{g}_{1}\in C^{1}(\mathbb{R})\) and \(\widetilde{U}\in C^{2}(\mathbb{R}\times(0,\infty))\) by virtue of the behavior at the boundary, \(g_{1}(1)=0\) and \(U(1,t)=\partial_{r}^{2}U(1,t)=0\).

We prove the nonnegativity of \(u\) by a contradiction argument similar to [16, Chapter 4]. We assume that there exists a point \((x_{0},t_{0})\in B^{c}\times(0,\infty)\) such that \(u(x_{0},t_{0})<0\). Then clearly, setting \(y_{0}=|x_{0}|-1>0\), we have \[\widetilde{U}(y_{0},t_{0})=U(|x_{0}|,t_{0})=e^{t_{0}/2}|x_{0}|^{\frac{1}{2}}u(x_{0},t_{0})<0.\] Now we fix the parameter \(\varepsilon>0\) satisfying \(\widetilde{U}(y_{0},t_{0})+\varepsilon e^{t_{0}}=0\) and put \[V(y,t)=\widetilde{U}(y,t)+\varepsilon e^{t},\quad(y,t)\in\mathbb{R}\times(0,\infty).\] Then, the new function \(V\) satisfies \[\begin{cases}\partial_{t}^{2}V-\partial_{y}^{2}V=mV+\varepsilon(1-m)e^{t}&\text{in }\mathbb{R}\times(0,\infty),\\ (V,\partial_{t}V)(y,0)=(\varepsilon,\widetilde{g}_{1}(y)+\varepsilon)&\text{in }\mathbb{R}\times\{t=0\}.\end{cases}\] Then we consider the triangular region \[D_{0}=\{(y,t)\in\mathbb{R}\times(0,\infty);t+|y-y_{0}|<t_{0}\}.\] By \(V(y,0)=\varepsilon>0\) and continuity, \(V\) is positive in \(\overline{D_{0}}\) for sufficiently small \(t\).
Thus, by considering a zero of \(V\) in \(\overline{D_{0}\cap\{y>0\}}\) with the smallest time (the zero set of \(V\) in \(\overline{D_{0}\cap\{y>0\}}\) is not empty due to \(V(y_{0},t_{0})=0\)), there exists a point \((y_{1},t_{1})\in\overline{D_{0}\cap\{y>0\}}\) such that \[V(y_{1},t_{1})=0\quad\text{and}\quad V>0\text{ in }D_{1}\cap\{y>0\},\] where \(D_{1}\) is the triangular region \(D_{1}=\{(y,t)\in\mathbb{R}\times(0,\infty);t+|y-y_{1}|<t_{1}\}(\subset D_{0})\). We further define the subregion \(D_{1}^{\prime}=\{(y,t)\in D_{1};t+|y|<t_{1}-y_{1}\}\). Note that \(D_{1}^{\prime}=\emptyset\) if \(t_{1}\leq y_{1}\), and \(D_{1}\setminus D_{1}^{\prime}\) is the trapezoidal region with the vertices \((y_{1},t_{1}),(0,t_{1}-y_{1}),(t_{1}-y_{1},0),(y_{1}+t_{1},0)\) if \(t_{1}>y_{1}\). Applying the d'Alembert formula with the above notation, we have \[0=V(y_{1},t_{1}) =\frac{1}{2}\left(V(y_{1}-t_{1},0)+V(y_{1}+t_{1},0)\right)+\frac{1}{2}\int_{y_{1}-t_{1}}^{y_{1}+t_{1}}\partial_{t}V(y,0)\,dy\] \[\quad+\frac{1}{2}\iint_{D_{1}}m(y)V(y,t)\,dydt+\frac{\varepsilon}{2}\iint_{D_{1}}(1-m(y))e^{t}\,dydt\] \[=(1+t_{1})\varepsilon+\frac{1}{2}\int_{|y_{1}-t_{1}|}^{y_{1}+t_{1}}\widetilde{g}_{1}(y)\,dy+\frac{\varepsilon}{2}\iint_{D_{1}^{\prime}}m(y)e^{t}\,dydt\] \[\quad+\frac{1}{2}\iint_{D_{1}\setminus D_{1}^{\prime}}m(y)V(y,t)\,dydt+\frac{\varepsilon}{2}\iint_{D_{1}}(1-m(y))e^{t}\,dydt,\] where we have used that \(\widetilde{g}_{1}\) and \(\widetilde{U}(\cdot,t)\) are odd functions and \(m\) is an even function. Then using the conditions \[\widetilde{g}_{1}\geq 0\text{ on }(0,\infty),\quad V>0\text{ on }D_{1}\cap\{y>0\},\quad 0\leq m\leq\frac{1}{2}\text{ on }\mathbb{R}\] and the fact \(D_{1}\setminus D_{1}^{\prime}\subset D_{1}\cap\{y>0\}\), we find that the right-hand side of the above identity is positive, which is a contradiction. The proof is complete.

### An \(L^{1}_{d\mu}\)-estimate for \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\)

By virtue of the positivity-preserving property of \(S(t)\) on radially symmetric functions, we can discuss the validity of \(L^{1}_{d\mu}\)-estimates for \(S(t)g\). The following lemma is essentially an analogue of the analysis of the ordinary differential equation \(y^{\prime\prime}+y^{\prime}=0\).

**Lemma 3.3**.: _For every \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\), one has \(S(t)g\in C([0,\infty);L^{1}_{d\mu})\) with_ \[\|S(t)g\|_{L^{1}_{d\mu}}\leq(1-e^{-t})\|g\|_{L^{1}_{d\mu}},\quad t>0.\] _In particular, \(S(t)\) can be extended to a bounded operator from \(L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) to itself._

Proof.: To shorten the notation, we use \(u=S(t)g\). We divide the proof into three steps as follows:

**Step 1.**: the case where \(0\leq g\in L^{2}_{\mathrm{rad}}\) has bounded support,

**Step 2.**: the case where \(0\leq g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\), without any assumption on the support,

**Step 3.**: the case where \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) admits a change of sign.

**(Step 1)** Let \(R>1\) satisfy \(\operatorname{supp}g\subset B(0,R)=\{x\in\mathbb{R}^{2}\;;\,|x|<R\}\). Then by the finite propagation property, we have \(\operatorname{supp}u(t)\subset B(0,R+t)\). Here we fix \(\zeta\in C^{\infty}(\mathbb{R}^{2}\times[0,\infty))\) as \(\zeta(x,t)=\zeta_{0}((R+t)^{2}-|x|^{2})\) with \(\zeta_{0}\in C^{\infty}(\mathbb{R})\) satisfying \(\zeta_{0}\equiv 1\) on \([0,\infty)\) and \(\zeta_{0}\equiv 0\) on \((-\infty,-1)\). Here we note that \(\zeta(\cdot,t)\in C^{\infty}_{0}(\mathbb{R}^{2})\) and \(\zeta\equiv 1\) on \(\operatorname{supp}u\).
By the nonnegativity of \(u(t)\) (provided by Proposition 3.2), we can see from \(u\in C([0,\infty);L^{2}(B^{c}))\) and \(\zeta(x,t)(1+\log|x|)\in C([0,\infty);L^{2}(B^{c}))\) that \(u(x,t)=\zeta(x,t)u(x,t)\in C([0,\infty);L^{1}_{d\mu})\). Moreover, setting \(\varphi_{n}(x)=1+\log|x|-|x|^{-n}\) (for \(n\in\mathbb{N}\)), we have \(\varphi_{n}(x)\to 1+\log|x|\) as \(n\to\infty\) and \(\zeta(x,t)\varphi_{n}(x)\in C^{2}([0,\infty);H^{1}_{0}(B^{c}))\), which can be used as a test function for the equation in (2.2) (understood in \(H^{-1}(B^{c})\)). Hereafter, all integrals on \(B^{c}\) can be justified by using the relation \(\zeta(x,t)u(x,t)=u(x,t)\). Noting that \[\Delta\varphi_{n}=-\Delta|x|^{-n}=n\operatorname{div}(x|x|^{-n-2})=-n^{2}|x|^{-n-2}\leq 0,\] we see from the nonnegativity of \(u(t)\) that \[\langle\Delta u(t),\zeta(t)\varphi_{n}\rangle_{H^{-1},H^{1}_{0}}=\int_{B^{c}}u(t)\Delta\varphi_{n}\,dx\leq 0.\] Therefore the equation in (2.2) gives \[\frac{d^{2}}{dt^{2}}\int_{B^{c}}u(t)\varphi_{n}\,dx+\frac{d}{dt}\int_{B^{c}}u(t)\varphi_{n}\,dx=\langle\Delta u(t),\zeta(t)\varphi_{n}\rangle_{H^{-1},H^{1}_{0}}\leq 0,\] which implies, by integrating \((e^{t}w^{\prime}(t))^{\prime}\leq 0\) twice for \(w(t)=\int_{B^{c}}u(t)\varphi_{n}\,dx\) (note \(w(0)=0\) and \(w^{\prime}(0)=\int_{B^{c}}g\varphi_{n}\,dx\)), \[\int_{B^{c}}u(t)\varphi_{n}\,dx\leq(1-e^{-t})\int_{B^{c}}g\varphi_{n}\,dx.\] Letting \(n\to\infty\), we obtain the desired inequality.

**(Step 2)** We use the cut-off approximation \(g_{n}=\chi_{B^{c}\cap B(0,n)}(x)g(x)\), where \(\chi_{K}\) is the indicator function of \(K\). Put \(u_{n}=S(t)g_{n}\). Then \(g_{n}\to g\) in \(L^{2}(B^{c})\) as \(n\to\infty\) and hence \(u_{n}(t)\to u(t)\) in \(H^{1}_{0}(B^{c})\) as \(n\to\infty\). For each \(n\in\mathbb{N}\), we can apply the claim in Step 1 to \(g_{n}\) and also \(g_{m}-g_{n}\) (\(m>n\)), and then \(u_{n}\in C([0,\infty);L^{1}_{d\mu})\) and \[\|u_{n}(t)\|_{L^{1}_{d\mu}}\leq(1-e^{-t})\|g_{n}\|_{L^{1}_{d\mu}}\leq(1-e^{-t})\|g\|_{L^{1}_{d\mu}},\] \[\|u_{m}(t)-u_{n}(t)\|_{L^{1}_{d\mu}}\leq(1-e^{-t})\|g_{m}-g_{n}\|_{L^{1}_{d\mu}}.\] Therefore \(u_{n}\) is a Cauchy sequence in the Banach space \(C(I;L^{1}_{d\mu})\) endowed with the sup norm for any compact interval \(I\subset[0,\infty)\). Since \(u_{n}(t)\) converges to \(u(t)\) for each \(t\), we obtain \(u\in C([0,\infty);L^{1}_{d\mu})\) and the desired inequality.

**(Step 3)** In this case we use the decomposition \[g=g_{+}-g_{-},\quad g_{\pm}:=\max\{\pm g,0\}\geq 0.\] Then as a consequence of Step 2, we have \(S(t)g=S(t)g_{+}-S(t)g_{-}\in C([0,\infty);L^{1}_{d\mu})\) and \[\|S(t)g\|_{L^{1}_{d\mu}} \leq\|S(t)g_{+}\|_{L^{1}_{d\mu}}+\|S(t)g_{-}\|_{L^{1}_{d\mu}}\] \[\leq(1-e^{-t})\Big{(}\|g_{+}\|_{L^{1}_{d\mu}}+\|g_{-}\|_{L^{1}_{d\mu}}\Big{)}\] \[=(1-e^{-t})\|g\|_{L^{1}_{d\mu}}.\] The proof is complete.

## 4 Estimates for the semilinear problem

In this section, we discuss the estimate from below for the lifespan of the solution \[u(t)=\varepsilon S(t)g+\int_{0}^{t}S(t-s)[|u(s)|^{p}]\,ds,\quad t\in(0,T)\] to (1.1). The argument depends on the continuity method based on the blowup alternative (Proposition 1.1 **(ii)**). The quantity \[\|v\|_{X_{T}}=\sup_{0\leq t<T}\left(\|v(t)\|_{L^{1}_{d\mu}}+\frac{\|\nabla v(t)\|_{L^{2}(B^{c})}}{h(t)}\right),\quad v\in C([0,T);H^{1}_{0}(B^{c})\cap L^{1}_{d\mu}) \tag{4.1}\] plays a crucial role. We first note that the above quantity for the linear solution \(S(t)g\) with \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) is finite.
**Lemma 4.1**.: _There exists a positive constant \(C_{1}\) such that for every \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\),_ \[\|S(t)g\|_{X_{\infty}}\leq C_{1}\Big{(}\|g\|_{L^{1}_{d\mu}}+\|g\|_{L^{2}(B^{c})}\Big{)}.\]

Proof.: We can choose \(C_{1}=1+C^{\sharp}_{\mathrm{M},1,1}\); this is a consequence of Lemmas 3.1 and 3.3.

Before treating the lifespan of \(u\), we state a modified assertion about the blowup alternative from the viewpoint of the quantity \(\|u\|_{X_{T}}\).

**Lemma 4.2**.: _Let \(u\) be the weak solution of (1.1) in \((0,T)\) with \(g\) having compact support. Then \(T=T_{\max}(0,\varepsilon g)\) if and only if \(T=+\infty\) or \(T<+\infty\) with \(\lim_{t\to T}\|u\|_{X_{t}}=+\infty\)._

Proof.: Suppose that \(T_{\max}(0,\varepsilon g)=T<+\infty\). If \(\lim_{t\to T}\|u\|_{X_{t}}<+\infty\), then Lemma 2.5 with \(q=2\) yields \[\|u(t)\|_{L^{2}(B^{c})}\leq C_{\mathrm{GN},2}\|\nabla u(t)\|_{L^{2}(B^{c})}^{\frac{1}{2}}\|u(t)\|_{L^{1}(B^{c})}^{\frac{1}{2}}\leq C_{\mathrm{GN},2}\|u\|_{X_{T}}h(t)^{\frac{1}{2}}\] and Lemmas 2.2 and 2.5 with \(q=2p\) give for every \(t\in(0,T)\), \[\|\partial_{t}u(t)\|_{L^{2}(B^{c})} \leq\varepsilon\|\partial_{t}S(t)g\|_{L^{2}(B^{c})}+\left\|\int_{0}^{t}\partial_{t}S(t-s)|u(s)|^{p}\,ds\right\|_{L^{2}(B^{c})}\] \[\leq\varepsilon\|g\|_{L^{2}(B^{c})}+\int_{0}^{t}\|u(s)\|_{L^{2p}(B^{c})}^{p}\,ds\] \[\leq\varepsilon\|g\|_{L^{2}(B^{c})}+(C_{\mathrm{GN},2p})^{p}\int_{0}^{t}\|\nabla u(s)\|_{L^{2}(B^{c})}^{p-\frac{1}{2}}\|u(s)\|_{L^{1}(B^{c})}^{\frac{1}{2}}\,ds\] \[\leq\varepsilon\|g\|_{L^{2}(B^{c})}+(C_{\mathrm{GN},2p})^{p}\|u\|_{X_{T}}^{p}\int_{0}^{t}h(s)^{p-\frac{1}{2}}\,ds.\] These inequalities yield that \(\|u\|_{H^{1}_{0}(B^{c})}+\|\partial_{t}u\|_{L^{2}(B^{c})}\) is bounded on \((0,T)\). By Proposition 1.1, we have \(T<T_{\max}(0,\varepsilon g)\), which is a contradiction.

On the contrary, suppose \(T<+\infty\) with \(\lim_{t\to T}\|u\|_{X_{t}}=+\infty\). We fix \(R>1\) such that \(\operatorname{supp}g\subset B(0,R)\). By the finite propagation property we also have \(\operatorname{supp}u(t)\subset B(0,R+T)\). We see from the Hölder inequality that \[\|u(t)\|_{L^{1}_{d\mu}} =\int_{B^{c}\cap B(0,R+T)}|u(t)|(1+\log|x|)\,dx\] \[\leq\left(\int_{B^{c}\cap B(0,R+T)}(1+\log|x|)^{2}\,dx\right)^{\frac{1}{2}}\|u(t)\|_{L^{2}(B^{c})}\] and hence \(\lim_{t\to T}\|u(t)\|_{H^{1}_{0}(B^{c})}=+\infty\), which means that \(T=T_{\max}(0,\varepsilon g)\).

### A Gagliardo-Nirenberg type inequality with logarithmic weight

As explained in the Introduction, to estimate the \(L^{1}_{d\mu}\)-norm of the solution \(u\) to the problem (1.1), we have to control the quantity \[\left\||u|^{p}\right\|_{L^{1}_{d\mu}}=\int_{B^{c}}|u(x,t)|^{p}(1+\log|x|)\,dx,\] which comes from the semilinear term \(|u|^{p}\). The following inequality enables us to control such a quantity via the ingredients in \(\|u\|_{X_{T}}\). The form is close to the classical Gagliardo-Nirenberg inequality but involves the logarithmic weight. Here we do not need to assume radial symmetry.

**Lemma 4.3**.: _For every \(1<q<\infty\), one has \(H^{1}_{0}(B^{c})\cap L^{1}_{d\mu}\subset L^{q}_{d\mu}\) and there exists a positive constant \(C^{\sharp}_{\mathrm{GN},q}\) such that_ \[\|f\|_{L^{q}_{d\mu}}\leq C^{\sharp}_{\mathrm{GN},q}\|\nabla f\|_{L^{2}(B^{c})}^{1-\frac{1}{q}}\|f\|_{L^{1}_{d\mu}}^{\frac{1}{q}},\quad f\in H^{1}_{0}(B^{c})\cap L^{1}_{d\mu}.\]

Proof.: Let \(f\in H^{1}_{0}(B^{c})\cap L^{1}_{d\mu}\) be fixed.
Take \(\eta\in C^{\infty}_{0}(\mathbb{R})\) satisfying \[\eta(s)\begin{cases}>0&\text{if }s\in(1/2,2),\\ =0&\text{if }s\notin(1/2,2).\end{cases}\] Note that for every \(a>0\) and \(\sigma\in[1,\infty)\), \[\int_{0}^{\infty}\eta\left(\frac{a}{R}\right)^{\sigma}\,dR=aK_{\sigma},\quad K_{\sigma}=\int_{0}^{\infty}\frac{\eta(\rho)^{\sigma}}{\rho^{2}}\,d\rho<\infty. \tag{4.2}\] Now we define the (localized) functions \(f_{R}\in H^{1}_{0}(B^{c})\cap L^{1}(B^{c})\) for \(R>0\) as \[f_{R}(x)=\eta\left(\frac{1+\log|x|}{R}\right)f(x),\quad x\in B^{c}.\] We see from the Fubini-Tonelli theorem and (4.2) that for \(1\leq\sigma<\infty\), \[\int_{0}^{\infty}\|f_{R}\|_{L^{\sigma}(B^{c})}^{\sigma}\,dR=\int_{B^{c}}|f(x)|^{\sigma}\int_{0}^{\infty}\eta\left(\frac{1+\log|x|}{R}\right)^{\sigma}\,dR\,dx=K_{\sigma}\|f\|_{L^{\sigma}_{d\mu}}^{\sigma}. \tag{4.3}\] On the other hand, by Lemma 2.5 we have \[\|f_{R}\|_{L^{q}(B^{c})}^{q}\leq(C_{\mathrm{GN},q})^{q}\|\nabla f_{R}\|_{L^{2}(B^{c})}^{q-1}\|f_{R}\|_{L^{1}(B^{c})}. \tag{4.4}\] Noting that \(\frac{1}{2}R\leq 1+\log|x|\leq 2R\) on \(\operatorname{supp}f_{R}\), one can compute as \[|\nabla f_{R}(x)|^{2} =\left|\eta\left(\frac{1+\log|x|}{R}\right)\nabla f(x)+\eta^{\prime}\left(\frac{1+\log|x|}{R}\right)\frac{x}{R|x|^{2}}f(x)\right|^{2}\] \[\leq 2\|\eta\|_{L^{\infty}}^{2}|\nabla f(x)|^{2}+8\|\eta^{\prime}\|_{L^{\infty}}^{2}\frac{|f(x)|^{2}}{|x|^{2}(1+\log|x|)^{2}}.\] Combining the above inequality with Lemma 2.6, we deduce \[\|\nabla f_{R}\|_{L^{2}(B^{c})}^{2}\leq\left(2\|\eta\|_{L^{\infty}}^{2}+32\|\eta^{\prime}\|_{L^{\infty}}^{2}\right)\|\nabla f\|_{L^{2}(B^{c})}^{2}. \tag{4.5}\] Consequently, using (4.4), (4.5) and (4.3) with \(\sigma=q\) and also \(\sigma=1\), we obtain \[K_{q}\|f\|_{L^{q}_{d\mu}}^{q} =\int_{0}^{\infty}\|f_{R}\|_{L^{q}(B^{c})}^{q}\,dR\] \[\leq(C_{\mathrm{GN},q})^{q}\int_{0}^{\infty}\|\nabla f_{R}\|_{L^{2}(B^{c})}^{q-1}\|f_{R}\|_{L^{1}(B^{c})}\,dR\] \[\leq(C_{\mathrm{GN},q})^{q}\left(2\|\eta\|_{L^{\infty}}^{2}+32\|\eta^{\prime}\|_{L^{\infty}}^{2}\right)^{\frac{q-1}{2}}\|\nabla f\|_{L^{2}(B^{c})}^{q-1}\int_{0}^{\infty}\|f_{R}\|_{L^{1}(B^{c})}\,dR\] \[=K_{1}(C_{\mathrm{GN},q})^{q}\left(2\|\eta\|_{L^{\infty}}^{2}+32\|\eta^{\prime}\|_{L^{\infty}}^{2}\right)^{\frac{q-1}{2}}\|\nabla f\|_{L^{2}(B^{c})}^{q-1}\|f\|_{L^{1}_{d\mu}}.\] The proof is complete.

### Lower bound for the lifespan of the semilinear problem

Here we discuss an a priori estimate for \(u\) via the quantity \(\|u\|_{X_{T}}\). The first step is the derivation of an \(L^{1}_{d\mu}\)-estimate, which is a benefit of the positivity-preserving property of \(S(t)\) for radially symmetric functions.

**Lemma 4.4**.: _Let \(u\) be the weak solution of (1.1) in \((0,T)\) with \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) and let \(\|u\|_{X_{T}}\) be given in (4.1).
Then there exists a positive constant \(C_{2}\) (independent of \(\varepsilon,g\) and \(T\)) such that for every \(t\in(0,T)\),_ \[\|u(t)\|_{L^{1}_{d\mu}}\leq\varepsilon\|g\|_{L^{1}_{d\mu}}+C_{2}\|u\|_{X_{T}}^{p}\int_{0}^{t}h(s)^{p-1}\,ds.\]

Proof.: Employing Lemma 4.3 with \(q=p\), we have for every \(s\in(0,T)\), \[\|u(s)\|_{L^{p}_{d\mu}}^{p} \leq(C^{\sharp}_{\mathrm{GN},p})^{p}\|\nabla u(s)\|_{L^{2}(B^{c})}^{p-1}\|u(s)\|_{L^{1}_{d\mu}}\] \[\leq(C^{\sharp}_{\mathrm{GN},p})^{p}\|u\|_{X_{T}}^{p}h(s)^{p-1}.\] Combining the above inequality with Lemma 3.3, we deduce that for every \(t\in(0,T)\), \[\|u(t)\|_{L^{1}_{d\mu}} \leq\varepsilon\|S(t)g\|_{L^{1}_{d\mu}}+\int_{0}^{t}\|S(t-s)|u(s)|^{p}\|_{L^{1}_{d\mu}}\,ds\] \[\leq\varepsilon\|g\|_{L^{1}_{d\mu}}+\int_{0}^{t}\|u(s)\|_{L^{p}_{d\mu}}^{p}\,ds\] \[\leq\varepsilon\|g\|_{L^{1}_{d\mu}}+(C^{\sharp}_{\mathrm{GN},p})^{p}\|u\|_{X_{T}}^{p}\int_{0}^{t}h(s)^{p-1}\,ds.\] This is the desired inequality.

**Lemma 4.5**.: _Let \(u\) be the weak solution of (1.1) in \((0,T)\) with \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) and let \(\|u\|_{X_{T}}\) be given in (4.1). Then there exists a positive constant \(C_{3}\) (independent of \(\varepsilon,g\) and \(T\)) such that for every \(t\in(0,T)\),_ \[\|\nabla u(t)\|_{L^{2}(B^{c})}\leq C_{3}h(t)\left(\varepsilon(\|g\|_{L^{1}_{d\mu}}+\|g\|_{L^{2}(B^{c})})+\|u\|_{X_{T}}^{p}\int_{0}^{t}h(s)^{p-1}\,ds\right).\]

Proof.: Set \[J_{1}(t)=\int_{0}^{\frac{t}{2}}S(t-s)|u(s)|^{p}\,ds,\quad J_{2}(t)=\int_{\frac{t}{2}}^{t}S(t-s)|u(s)|^{p}\,ds.\] Then it is obvious that \(u(t)=\varepsilon S(t)g+J_{1}(t)+J_{2}(t)\). The estimate for the linear part \(\varepsilon S(t)g\) has already been obtained in Lemma 4.1. For the estimate of \(J_{1}(t)\), applying Lemma 3.1 with \(q=1\), we see that \[\|\nabla J_{1}(t)\|_{L^{2}(B^{c})} \leq\int_{0}^{\frac{t}{2}}\|\nabla S(t-s)|u(s)|^{p}\|_{L^{2}(B^{c})}\,ds\] \[\leq C^{\sharp}_{\mathrm{M},1,1}\int_{0}^{\frac{t}{2}}h(t-s)\Big{(}\|u(s)\|_{L^{p}_{d\mu}}^{p}+\|u(s)\|_{L^{2p}(B^{c})}^{p}\Big{)}\,ds.\] The integrands in the right-hand side of the above inequality are estimated as follows: via Lemma 4.3 with \(q=p\), one has \[\|u(s)\|_{L^{p}_{d\mu}}^{p} \leq(C^{\sharp}_{\mathrm{GN},p})^{p}\|\nabla u(s)\|_{L^{2}(B^{c})}^{p-1}\|u(s)\|_{L^{1}_{d\mu}}\leq(C^{\sharp}_{\mathrm{GN},p})^{p}\|u\|_{X_{T}}^{p}h(s)^{p-1}\] and via Lemma 2.5 with \(q=2p\), one has \[\|u(s)\|_{L^{2p}(B^{c})}^{p} \leq(C_{\mathrm{GN},2p})^{p}\|\nabla u(s)\|_{L^{2}(B^{c})}^{p-\frac{1}{2}}\|u(s)\|_{L^{1}(B^{c})}^{\frac{1}{2}}\leq(C_{\mathrm{GN},2p})^{p}\|u\|_{X_{T}}^{p}h(s)^{p-\frac{1}{2}}.\] Noting that \(h(s)^{p-\frac{1}{2}}\leq h(s)^{p-1}\), we can deduce that \[\|\nabla J_{1}(t)\|_{L^{2}(B^{c})} \leq C^{\sharp}_{\mathrm{M},1,1}\big{(}(C^{\sharp}_{\mathrm{GN},p})^{p}+(C_{\mathrm{GN},2p})^{p}\big{)}\|u\|_{X_{T}}^{p}\int_{0}^{\frac{t}{2}}h(t-s)h(s)^{p-1}\,ds\] \[\leq C^{\sharp}_{\mathrm{M},1,1}\big{(}(C^{\sharp}_{\mathrm{GN},p})^{p}+(C_{\mathrm{GN},2p})^{p}\big{)}\|u\|_{X_{T}}^{p}h(t/2)\int_{0}^{\frac{t}{2}}h(s)^{p-1}\,ds.\] For the estimate of \(J_{2}(t)\), applying Lemma 3.1 with \(q=2\) and Lemma 4.3 with \(q=2p\), we can also compute in a similar way: \[\|\nabla J_{2}(t)\|_{L^{2}(B^{c})} \leq 2C^{\sharp}_{\mathrm{M},1,2}\int_{\frac{t}{2}}^{t}h(t-s)^{\frac{1}{2}}\|u(s)\|_{L^{2p}_{d\mu}}^{p}\,ds\] \[\leq 2C^{\sharp}_{\mathrm{M},1,2}(C^{\sharp}_{\mathrm{GN},2p})^{p}\|u\|_{X_{T}}^{p}\int_{\frac{t}{2}}^{t}h(t-s)^{\frac{1}{2}}h(s)^{p-\frac{1}{2}}\,ds\]
\[\leq k_{0}C^{\sharp}_{\mathrm{M},1,2}(C^{\sharp}_{\mathrm{GN},2p})^{p}\|u\|_{X_{T}}^{p}\,t\,h(t/2)^{p}\] \[\leq 2k_{0}C^{\sharp}_{\mathrm{M},1,2}(C^{\sharp}_{\mathrm{GN},2p})^{p}\|u\|_{X_{T}}^{p}h(t/2)\int_{0}^{\frac{t}{2}}h(s)^{p-1}\,ds,\] where we have used the inequality \[\int_{0}^{\tau}h(s)^{\frac{1}{2}}\,ds\leq k_{0}\tau h(\tau)^{\frac{1}{2}},\quad\tau\in(0,\infty)\] for some \(k_{0}>0\). Combining these inequalities and noting \(h(t/2)\leq 4h(t)\), we arrive at the desired estimate.

Summarizing the above two lemmas, we conclude the following.

**Proposition 4.6**.: _Let \(u\) be the weak solution of (1.1) in \((0,T)\) with \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) and let \(\|u\|_{X_{T}}\) be given in (4.1). Then there exists a positive constant \(C_{4}\) (independent of \(\varepsilon,g\) and \(T\)) such that for every \(t\in(0,T)\),_ \[\|u\|_{X_{t}}\leq C_{4}\left(\varepsilon\big{(}\|g\|_{L^{1}_{d\mu}}+\|g\|_{L^{2}(B^{c})}\big{)}+\|u\|_{X_{t}}^{p}\int_{0}^{T}h(s)^{p-1}\,ds\right).\]

Proof of Theorem 1.2.: Put \(\left|\!\left|\!\left|g\right|\!\right|\!\right|=\|g\|_{L^{1}_{d\mu}}+\|g\|_{L^{2}(B^{c})}\) to shorten the notation. We first prove the assertion in Theorem 1.2 when \(g\) has compact support. In this case, by a discussion with \(\zeta(x,t)\) similar to that in Lemma 3.3, we can find that \(u\in C([0,T);L^{1}_{d\mu})\) and therefore the function \(t\in(0,T_{\max}(0,\varepsilon g))\mapsto\|u\|_{X_{t}}\) is non-decreasing and continuous. In view of Lemma 4.2, we can take \[T_{*}=\sup\{t\in(0,T_{\max}(0,\varepsilon g)]\;;\;\|u\|_{X_{t}}\leq 2C_{4}\varepsilon\left|\!\left|\!\left|g\right|\!\right|\!\right|\}.\] Then by Proposition 4.6 we have \[\|u\|_{X_{T_{*}}}\leq C_{4}\varepsilon\left|\!\left|\!\left|g\right|\!\right|\!\right|\left(1+(2C_{4})^{p}\left|\!\left|\!\left|g\right|\!\right|\!\right|^{p-1}\varepsilon^{p-1}\int_{0}^{T_{*}}h(s)^{p-1}\,ds\right).\] This yields that \(T_{*}\) has the lower bound \(T_{*}\geq T_{\varepsilon}\) with \[T_{\varepsilon}=\sup\left\{t\in(0,\infty]\;;\;(2C_{4})^{p}\left|\!\left|\!\left|g\right|\!\right|\!\right|^{p-1}\int_{0}^{t}h(s)^{p-1}\,ds\leq\frac{1}{\varepsilon^{p-1}}\right\}. \tag{4.6}\] Since \[\int_{0}^{t}h(s)^{p-1}\,ds\begin{cases}\leq k_{p}(1+t)^{2-p}(1+\log(1+t))^{1-p}&\text{if }1<p<2,\\ =\log(1+\log(1+t))&\text{if }p=2,\\ <+\infty&\text{if }p>2\end{cases}\] (for some positive constants \(k_{p}\)), the lifespan estimates for \(1<p\leq 2\) and the existence of a global weak solution of (1.1) for \(p>2\) (with sufficiently small \(\varepsilon\)) are proved.

Next we consider the case where \(g\in L^{2}_{\mathrm{rad}}\cap L^{1}_{d\mu}\) does not have compact support. In this case, we use a cut-off argument. Put \(g_{n}=g\chi_{B^{c}\cap B(0,n)}\). Note that \(\left|\!\left|\!\left|g_{n}\right|\!\right|\!\right|\leq\left|\!\left|\!\left|g\right|\!\right|\!\right|\). By the first step, we have the respective solutions \[u_{n}\in C([0,T_{\varepsilon});H^{1}_{0}(B^{c}))\cap C^{1}([0,T_{\varepsilon});L^{2}(B^{c}))\cap C([0,T_{\varepsilon});L^{1}_{d\mu})\] satisfying \[\|u_{n}\|_{X_{T_{\varepsilon}}}\leq 2C_{4}\varepsilon\left|\!\left|\!\left|g_{n}\right|\!\right|\!\right|\leq 2C_{4}\varepsilon\left|\!\left|\!\left|g\right|\!\right|\!\right|.\] To show that \(\{u_{n}\}_{n}\) converges, fix \(m>n\) and set \[y(t)=\|u_{n}(t)-u_{m}(t)\|_{L^{1}_{d\mu}}+\|\nabla u_{n}(t)-\nabla u_{m}(t)\|_{L^{2}(B^{c})}.\] By Lemma 4.3 and the above bound, there exist positive constants \(C_{5,q}\) and \(C^{\prime}_{5,q}\) (\(q=p,2p\), depending also on \(\left|\!\left|\!\left|g\right|\!\right|\!\right|\)) such that \(\|u_{n}(s)\|_{L^{q}_{d\mu}}\leq C_{5,q}\varepsilon\) and \(\|u_{n}(s)-u_{m}(s)\|_{L^{q}_{d\mu}}\leq C^{\prime}_{5,q}\,y(s)\) for \(s\in(0,T_{\varepsilon})\). Then we see from the elementary inequality \(\big{|}|z_{1}|^{p}-|z_{2}|^{p}\big{|}\leq p(|z_{1}|+|z_{2}|)^{p-1}|z_{1}-z_{2}|\) (\(z_{1},z_{2}\in\mathbb{R}\)) that \[\|u_{n}(t)-u_{m}(t)\|_{L^{1}_{d\mu}} \leq\varepsilon\|g_{n}-g_{m}\|_{L^{1}_{d\mu}}+\int_{0}^{t}\big{\|}|u_{n}(s)|^{p}-|u_{m}(s)|^{p}\big{\|}_{L^{1}_{d\mu}}\,ds\] \[\leq\varepsilon\|g_{n}-g_{m}\|_{L^{1}_{d\mu}}+p\int_{0}^{t}\big{(}\|u_{n}(s)\|_{L^{p}_{d\mu}}+\|u_{m}(s)\|_{L^{p}_{d\mu}}\big{)}^{p-1}\|u_{n}(s)-u_{m}(s)\|_{L^{p}_{d\mu}}\,ds\] \[\leq\varepsilon\|g_{n}-g_{m}\|_{L^{1}_{d\mu}}+p(2C_{5,p}\varepsilon)^{p-1}C^{\prime}_{5,p}\int_{0}^{t}y(s)\,ds\] and \[\|\nabla u_{n}(t)-\nabla u_{m}(t)\|_{L^{2}(B^{c})} \leq\varepsilon\|g_{n}-g_{m}\|_{L^{2}(B^{c})}+\int_{0}^{t}\big{\|}|u_{n}(s)|^{p}-|u_{m}(s)|^{p}\big{\|}_{L^{2}(B^{c})}\,ds\] \[\leq\varepsilon\|g_{n}-g_{m}\|_{L^{2}(B^{c})}+p\int_{0}^{t}\big{(}\|u_{n}(s)\|_{L^{2p}_{d\mu}}+\|u_{m}(s)\|_{L^{2p}_{d\mu}}\big{)}^{p-1}\|u_{n}(s)-u_{m}(s)\|_{L^{2p}_{d\mu}}\,ds\] \[\leq\varepsilon\|g_{n}-g_{m}\|_{L^{2}(B^{c})}+p(2C_{5,2p}\varepsilon)^{p-1}C^{\prime}_{5,2p}\int_{0}^{t}y(s)\,ds.\] These imply \[y(t)\leq\varepsilon\left|\!\left|\!\left|g_{n}-g_{m}\right|\!\right|\!\right|+C_{6,p}\varepsilon^{p-1}\int_{0}^{t}y(s)\,ds,\quad t\in[0,T_{\varepsilon})\] with \(C_{6,p}=p(2C_{5,p})^{p-1}C^{\prime}_{5,p}+p(2C_{5,2p})^{p-1}C^{\prime}_{5,2p}\). By the Gronwall inequality we obtain \[y(t)\leq\varepsilon\left|\!\left|\!\left|g_{n}-g_{m}\right|\!\right|\!\right|e^{C_{6,p}\varepsilon^{p-1}t},\quad t\in[0,T_{\varepsilon}),\] which ensures that \(\{u_{n}\}_{n}\) is a Cauchy sequence in the Banach space \(C(I;H^{1}_{0}(B^{c})\cap L^{1}_{d\mu})\) for any compact interval \(I\subset[0,T_{\varepsilon})\). Since the limit \(u\) satisfies \[u(t)=\varepsilon S(t)g+\int_{0}^{t}S(t-s)|u(s)|^{p}\,ds,\quad t\in[0,T_{\varepsilon}),\] the proof of this part (\(g\) with non-compact support) is complete.

It only remains to show the decay estimate for \(\partial_{t}u\) when \(u\) is the global weak solution of (1.1) with \(p>2\) and \(\left|\!\left|\!\left|g\right|\!\right|\!\right|\) sufficiently small, so that \(T_{\varepsilon}=\infty\). Then using Lemma 3.1, we have \[\|\partial_{t}u(t)\|_{L^{2}(B^{c})} \leq\varepsilon\|\partial_{t}S(t)g\|_{L^{2}(B^{c})}+\int_{0}^{t}\|\partial_{t}S(t-s)|u(s)|^{p}\|_{L^{2}(B^{c})}\,ds\] \[\leq C^{\sharp}_{\mathrm{M},2,1}\varepsilon(1+t)^{-\frac{1}{2}}h(t)\left|\!\left|\!\left|g\right|\!\right|\!\right|+C^{\sharp}_{\mathrm{M},2,1}\int_{0}^{\frac{t}{2}}(1+t-s)^{-\frac{1}{2}}h(t-s)\left|\!\left|\!\left||u(s)|^{p}\right|\!\right|\!\right|\,ds\] \[\quad+2C^{\sharp}_{\mathrm{M},2,2}\int_{\frac{t}{2}}^{t}(1+t-s)^{-\frac{1}{2}}h(t-s)^{\frac{1}{2}}\||u(s)|^{p}\|_{L^{2}_{d\mu}}\,ds.\] The rest of the proof of the boundedness is similar to the proof of Lemma 4.5 (the only difference is the use of the bound \(\|u\|_{X_{\infty}}\leq 2C_{4}\varepsilon\left|\!\left|\!\left|g\right|\!\right|\!\right|\)). The proof is complete.

## Appendix

Here we give an alternative proof of the \(L^{p}\)-\(L^{q}\) type estimate (Lemma 2.4) involving the logarithmic weight for the Dirichlet heat semigroup \(e^{t\Delta_{B^{c}}}\), which describes the peculiarity of the two-dimensional exterior domain. Here we shall discuss it via the classical comparison principle for parabolic equations. Although all statements here can be shown for general exterior domains, we restrict our attention to the typical case \(B^{c}\). A similar treatment can also be found in Sobajima [19].
**Lemma A.1**.: _For every \(q\in[1,\infty]\), there exists a positive constant \(C_{\mathrm{A},1,q}\) such that if \(f\in L^{q}(B^{c})\), then_ \[|e^{t\Delta_{B^{c}}}f(x)|\leq\frac{C_{\mathrm{A},1,q}\|f\|_{L^{q}(B^{c})}}{t^{\frac{1}{q}}(1+\log(1+t))}(1+\log|x|),\quad(x,t)\in B^{c}\times(0,\infty).\]

Proof.: Since \(e^{t\Delta_{B^{c}}}\) is a positive operator, we may assume \(f\geq 0\) without loss of generality. The standard (two-dimensional) \(L^{\infty}\)-\(L^{q}\) estimate shows for every \((x,t)\in B^{c}\times(0,\infty)\), \[e^{t\Delta_{B^{c}}}f(x)\leq e^{t\Delta_{\mathbb{R}^{2}}}f(x)\leq\frac{\kappa_{q}\|f\|_{L^{q}(B^{c})}}{t^{\frac{1}{q}}},\quad\kappa_{q}=\Big{(}1-\frac{1}{q}\Big{)}^{1-\frac{1}{q}}\frac{1}{(4\pi)^{\frac{1}{q}}},\] which immediately gives for every \(0<t\leq\tau=4\), \[e^{t\Delta_{B^{c}}}f(x)\leq\frac{(1+\log 5)\kappa_{q}\|f\|_{L^{q}(B^{c})}}{t^{\frac{1}{q}}(1+\log(1+t))}(1+\log|x|),\quad x\in B^{c}.\] Therefore we focus our attention on the case \(t\geq\tau=4\). Put \(\mathcal{C}_{t}=B(0,t^{1/2})(\supset B)\) and \[\mathcal{Q}_{1}=\bigcup_{t\geq\tau}(B^{c}\cap\mathcal{C}_{t})\times\{t\},\quad\mathcal{Q}_{2}=(B^{c}\times[\tau,\infty))\setminus\mathcal{Q}_{1}.\] We see from \(1+\log(1+t)\leq 2(1+\log|x|)\) on \(\mathcal{Q}_{2}\) that \[e^{t\Delta_{B^{c}}}f(x)\leq\frac{\kappa_{q}\|f\|_{L^{q}(B^{c})}}{t^{\frac{1}{q}}}\times\frac{2(1+\log|x|)}{1+\log(1+t)},\quad(x,t)\in\mathcal{Q}_{2}.\] For the estimate on \(\mathcal{Q}_{1}\), we employ the comparison principle. Put for \((x,t)\in\overline{\mathcal{Q}_{1}}\), \[\Phi(x,t)=\frac{1+\log|x|^{2}}{2+\log t+\log|x|^{2}},\quad U(x,t)=\frac{\Phi(x,t)}{t^{\frac{1}{q}}}e^{-\frac{|x|^{2}}{4t}}.\] Observing that \[\partial_{t}\Phi=-\frac{1+\log|x|^{2}}{t\Theta(x,t)^{2}},\quad\nabla\Phi=\frac{2(1+\log t)x}{\Theta(x,t)^{2}|x|^{2}},\quad\Delta\Phi=-\frac{8(1+\log t)}{\Theta(x,t)^{3}|x|^{2}}\] with \(\Theta(x,t)=2+\log t+\log|x|^{2}\), we can deduce \[\partial_{t}U-\Delta U\geq t^{-1-\frac{1}{q}}e^{-\frac{|x|^{2}}{4t}}\left[\frac{1+2(\log t-\log|x|)}{\Theta^{2}}+\Big{(}1-\frac{1}{q}\Big{)}\Phi\right]\geq 0,\quad(x,t)\in\mathcal{Q}_{1}.\] Moreover, noting that \(\frac{1}{2+\log t}\leq\Phi(x,t)\leq\frac{1}{2}\) on \(\mathcal{Q}_{1}\), we can find the comparison on the parabolic boundary of \(\mathcal{Q}_{1}\) as follows: \[\begin{cases}e^{\tau\Delta}f(x)\leq 4e^{\frac{1}{4}}(1+\log 2)\kappa_{q}\|f\|_{L^{q}(B^{c})}U(x,\tau),&x\in B^{c}\cap\mathcal{C}_{\tau},\\ e^{t\Delta}f(x)=0\leq U(x,t),&x\in\partial\mathcal{C}_{t},\ t\geq\tau,\\ e^{t\Delta}f(x)\leq 2e^{\frac{1}{4}}\kappa_{q}\|f\|_{L^{q}(B^{c})}U(x,t),&x\in\partial B^{c},\ t\geq\tau.\end{cases}\] Therefore the comparison principle shows that \[e^{t\Delta}f(x)\leq 4e^{\frac{1}{4}}(1+\log 2)\kappa_{q}\|f\|_{L^{q}(B^{c})}U(x,t),\quad(x,t)\in\mathcal{Q}_{1}.\] Since \(U(x,t)\leq\frac{2(1+\log|x|)}{t^{1/q}(2+\log t)}\) and \(2+\log t\) is comparable to \(1+\log(1+t)\) for \(t\geq\tau\), this yields the asserted estimate on \(\mathcal{Q}_{1}\). The proof is complete.

Taking the adjoint in Lemma A.1, we have

**Lemma A.2**.: _For every \(q\in[1,\infty]\), there exists a positive constant \(C_{\mathrm{A},2,q}\) such that if \(f\in L^{1}_{d\mu}\), then_ \[\|e^{t\Delta_{B^{c}}}f\|_{L^{q}(B^{c})}\leq\frac{C_{\mathrm{A},2,q}}{t^{1-\frac{1}{q}}(1+\log(1+t))}\|f\|_{L^{1}_{d\mu}},\quad t>0.\]

Proof.: Taking \(g\in L^{q^{\prime}}(B^{c})\) with \(\frac{1}{q}+\frac{1}{q^{\prime}}=1\), we have \[\int_{B^{c}}(e^{t\Delta_{B^{c}}}f)g\,dx=\int_{B^{c}}f(e^{t\Delta_{B^{c}}}g)\,dx\leq\frac{C_{\mathrm{A},1,q^{\prime}}\|g\|_{L^{q^{\prime}}(B^{c})}}{t^{\frac{1}{q^{\prime}}}(1+\log(1+t))}\int_{B^{c}}|f|(1+\log|x|)\,dx.\] Since \(g\) is arbitrary, we obtain the desired inequality.
The following is the \(L^{p}\)-\(L^{q}\) type estimate with logarithmic weight which is essentially used in this paper.

**Lemma A.3**.: _For every \(p\in[1,\infty)\) and \(q\in[p,\infty]\), there exists a positive constant \(C_{\mathrm{A},3,p,q}\) such that if \(f\in L^{p}_{d\mu}\), then_ \[\|e^{t\Delta_{B^{c}}}f\|_{L^{q}(B^{c})}\leq\frac{C_{\mathrm{A},3,p,q}}{t^{\frac{1}{p}-\frac{1}{q}}(1+\log(1+t))^{\frac{1}{p}}}\|f\|_{L^{p}_{d\mu}},\quad t>0.\]

Proof.: The case \(p=1\) is already proved in Lemma A.2. Let \(1<p<\infty\) and \(t>0\) be fixed. Then combining the \(L^{\infty}\)-contraction property for \(e^{t\Delta_{B^{c}}}\) written as \[\|e^{t\Delta_{B^{c}}}f\|_{L^{\infty}(B^{c})}\leq\|f\|_{L^{\infty}(B^{c})},\quad f\in L^{\infty}(B^{c})\] with the inequality of Lemma A.2 of the form \[\|e^{t\Delta_{B^{c}}}f\|_{L^{\frac{q}{p}}(B^{c})}\leq\frac{C_{\mathrm{A},2,\frac{q}{p}}}{t^{1-\frac{p}{q}}(1+\log(1+t))}\|f\|_{L^{1}_{d\mu}},\quad f\in L^{1}_{d\mu},\] we see from the Riesz-Thorin theorem with the parameter \(\theta\in(0,1)\) satisfying \[\frac{\theta}{\infty}+\frac{(1-\theta)p}{q}=\frac{1}{r_{1}},\quad\frac{\theta}{\infty}+\frac{1-\theta}{1}=\frac{1}{r_{2}}\] that \(e^{t\Delta_{B^{c}}}\) can be regarded as a bounded operator from \(L^{r_{2}}_{d\mu}\) to \(L^{r_{1}}(B^{c})\) with \[\|e^{t\Delta_{B^{c}}}f\|_{L^{r_{1}}(B^{c})}\leq\left(\frac{C_{\mathrm{A},2,\frac{q}{p}}}{t^{1-\frac{p}{q}}(1+\log(1+t))}\right)^{1-\theta}\|f\|_{L^{r_{2}}_{d\mu}},\quad f\in L^{r_{2}}_{d\mu}.\] Choosing \(\theta=1-\frac{1}{p}\), we obtain the desired inequality.

### Acknowledgements

This work was supported by JSPS KAKENHI Grant Numbers 20K14346, 22H00097 and 23K03174.
2303.08956
Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases
The development of privacy-enhancing technologies has made immense progress in reducing trade-offs between privacy and performance in data exchange and analysis. Similar tools for structured transparency could be useful for AI governance by offering capabilities such as external scrutiny, auditing, and source verification. It is useful to view these different AI governance objectives as a system of information flows in order to avoid partial solutions and significant gaps in governance, as there may be significant overlap in the software stacks needed for the AI governance use cases mentioned in this text. When viewing the system as a whole, the importance of interoperability between these different AI governance solutions becomes clear. Therefore, it is imminently important to look at these problems in AI governance as a system, before these standards, auditing procedures, software, and norms settle into place.
Emma Bluemke, Tantum Collins, Ben Garfinkel, Andrew Trask
2023-03-15T21:56:59Z
http://arxiv.org/abs/2303.08956v2
Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases

###### Abstract

The development of privacy-enhancing technologies has made immense progress in reducing trade-offs between privacy and performance in data exchange and analysis. Similar tools for structured transparency could be useful for AI governance by offering capabilities such as external scrutiny, auditing, and source verification. It is useful to view these different AI governance objectives as a system of information flows in order to avoid partial solutions and significant gaps in governance, as there may be significant overlap in the software stacks needed for the AI governance use cases mentioned in this text. When viewing the system as a whole, the importance of interoperability between these different AI governance solutions becomes clear. Therefore, it is imminently important to look at these problems in AI governance as a system, before these standards, auditing procedures, software, and norms settle into place.

## 1 Introduction

Sensitive information is essential for many socially valuable activities, including medical research, public health policies, political coordination, and personalised digital services. However, sharing such data brings risks: allowing others to use this information can open the door to misuse, including manipulation, public exposure, theft, discrimination, and threats to national security [1, 2]. These trade-offs can seem unavoidable: we can benefit from data analysis or retain data privacy, but not do both. Similarly, as algorithms grow more capable and exhibit increasing potential for broad, significant societal impact, scholars, activists and others have voiced concerns about the ability of external researchers or auditors to evaluate biases and other harmful behaviours [3]. Although releasing a model open-source can allow such access, doing so can also proliferate harmful capabilities and compromise proprietary IP [3, 4, 5]. These scenarios demonstrate the tensions that exist between transparency, privacy, and accountability in the governance of data and algorithms. At its core, the issue is allowing the appropriate use of information while avoiding its inappropriate use - the term 'structured transparency' is used to describe this aim [6]. In 2020, we provided a general framework and vocabulary which characterise the fundamental components of structured transparency [6]. Since the release of that paper, advancements in privacy-enhancing technologies (PETs), such as secure computation and differential privacy techniques, have enabled levels of structured transparency that were previously impractical [7, 8]. These tools have already been used to make progress on addressing trade-offs in the realm of data privacy, and similar tools (and skill sets) could play useful roles in AI governance. In this report, we briefly review the components of structured transparency from the perspective of AI governance, and present several use cases that illustrate the applicability of structured transparency.

## 2 Using an 'information flow' framing rather than 'privacy' or 'access'

Structured transparency focuses on enabling a _desired information flow_, answering: who should be able to know what, when they should be able to know it, and what they should be able to do with this knowledge.
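To make this framing concrete, one can write a desired information flow down as an explicit, machine-readable policy. The following sketch is purely illustrative - the class and field names are our own invention, not part of any system cited in this paper - and uses the age-verification scenario discussed under the bundling problem below:

```python
from dataclasses import dataclass

@dataclass
class InformationFlow:
    """Illustrative description of a desired information flow:
    who may learn what, when, and for which approved purposes."""
    sender: str
    recipient: str
    revealed: list            # what the recipient may learn
    withheld: list            # what must remain hidden from the recipient
    approved_purposes: list   # context-relative 'approved' uses
    expires: str = "never"    # when the permission lapses

# Example: an age check that reveals a single boolean rather than the
# full bundle of attributes printed on an identity document.
age_check = InformationFlow(
    sender="licence holder",
    recipient="venue",
    revealed=["is_over_legal_age"],
    withheld=["date_of_birth", "address", "name"],
    approved_purposes=["entry decision"],
)
```

Framing governance questions this way makes the later discussion concrete: each of the tools in Section 3 can be read as enforcing one or more fields of such a specification.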
Technologies for structured transparency can allow us to design and enforce more precise information flows that satisfy the requirements of a specified collaborative objective while minimising the potential for alternative uses of the data involved. The basis for thinking about 'privacy' as a matter of information flow has its foundation in the 'contextual integrity' framework proposed by Nissenbaum _et al_. [9, 10], which posits that people care most about ensuring that certain information _flows appropriately_ (rather than simply _restricting access_ to it). Across social contexts such as education, healthcare, and politics, societies have developed norms that regulate the flow of personal information [10]. These norms help to protect individuals and groups from harm and to balance power distributions among competing actors. In the framework of contextual integrity, an ideal information flow enables parties to collaborate via this digital information while ensuring that information only supports agreed-upon, context-relative 'approved' purposes.

### Four hurdles to maintaining precise information flows:

**The copy problem:** When a bit of information is shared, the recipient gains control over its use, and they are generally not constrained by any technical limitations that would inhibit further sharing or other misuse.

**The bundling problem:** The information that we want to convey is often bundled with other pieces of information that we don't want to share. For example, consider a driver's licence, which reveals a host of details (e.g. height, eye colour) in order to verify a single piece of information, namely whether the individual is old enough to enter a given venue. Put another way, while one could show only one's date-of-birth, this would not suffice to enter an age-restricted establishment, because the legitimacy of that information cannot be verified without being able to check the other information (e.g. comparing visual appearance as represented on the ID with the person presenting it).

**The edit problem:** If the entity that stores a piece of information makes an edit before transmitting it to another party, the recipient has no way of knowing that the information was altered. For example, the recipient of an approved/audited model may want verification that the model sent to them has not been modified since it was audited.

The use of third-party oversight institutions can solve issues caused by the copy, bundling, and edit problems. In doing so, however, this solution presents a fourth issue:

**The recursive oversight problem:** The oversight of an information flow by a given party creates another, even more knowledgeable entity that could potentially misuse that information. This raises the question of 'who watches the watchers?' - in other words, how can we ensure that the oversight institution itself is trustworthy and accountable?

In summary, breaking concerns around 'privacy' or 'access' down into these problems can be very useful when discussing data or algorithms. For a more in-depth discussion of each of these problems, see Trask _et al_. [6, 11].

## 3 Tools for Structured Transparency

Structured transparency revolves around five sub-problems: input privacy, output privacy, input verification, output verification, and flow governance. Not every situation requires that all be explicitly addressed, but most trade-offs can be reduced to some combination of these issues.
For example, achieving input and output privacy alone can conflict with the need for recipients to verify the accuracy and trustworthiness of that information. Therefore, privacy must often be balanced with verification to ensure that recipients can rely on the information they receive. Below, we explain each sub-problem and briefly note which technological tools help address them.

### 3.1 Input Privacy

**What it is:** Input privacy refers to the ability to process information without gaining interpretable access to it.

**Technical tools:** Technical input privacy tools come primarily from the field of cryptography: public-key cryptography, end-to-end encryption, secure multi-party computation, homomorphic encryption, functional encryption, garbled circuits, oblivious RAM, federated learning, on-device analysis, and secure enclaves are several popular (and overlapping) techniques capable of providing input privacy [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. Many of these techniques can theoretically facilitate any arbitrary computation (also known as 'Turing-complete computation') while keeping the computation inputs secret from all parties involved [24]. These methods differ in terms of performance: homomorphic encryption requires heavy computation even for relatively simple information flows, while secure multi-party computation requires less computation but greater message volume between the various parties in the flow (increased network overhead) [24]. The field still lacks the general-purpose software implementations necessary for widespread use, but this is an active and quickly-maturing area of research.

### 3.2 Output Privacy

**What it is:** Output privacy allows a user to receive the output of an information flow without being able to infer further information about the input and, symmetrically, to contribute to the input of an information flow without worrying that the later output could be reverse-engineered to learn about the input.

**Technical tools:** Technical output privacy tools (chiefly, differential privacy and related techniques) can provide strict upper bounds on the likelihood that a data point could be reverse-engineered [25, 26, 27]. This capability is useful in many settings, but it has particular significance in aggregator flows where the actor processing the information is performing statistical analysis; with differential privacy, aggregator flows can reveal high-level insights without ever observing individuals' data in detail. This holds great promise for preserving privacy in the context of scientific inquiry, census statistics, and particular use cases of surveillance (such as public-health surveillance used to track the progression of COVID-19).

### 3.3 Input Verification

**What it is:** Input verification allows a user to verify that information received from an information flow is sourced from trusted entities and, symmetrically, it enables the sending of information such that the output can be verifiably associated with a given party. Novel input verification techniques make it possible to verify _specific_ attributes of an input to an information flow, such as that it came from a trusted source or that it was produced within a specific date range.

**Tools:** Most input verification techniques use some combination of public-key infrastructure (SSI, key transparency, etc.), cryptographic signatures, input privacy techniques with active security, and zero-knowledge proofs [28, 29, 30].
These methods can allow an actor to verify a specific attribute such that the information flow output contains cryptographic proof of this verification. For example, consider a driver's licence: a barman inspecting a driver's licence must view the card in its entirety in order to verify that someone is above the legal drinking age - showing them the date-of-birth removed from the rest of the card would carry no weight from a verification perspective. Technical input verification tools do not suffer from this constraint [31], since these tools can verify and reveal individual attributes within an information flow (e.g. reliably verify that someone's age is above the legal drinking age without exposing their date-of-birth, address, or name). Critically, this allows for high levels of both input privacy and input verification.

Output watermarking and other types of cryptographic or steganographic [32] embeddings which could prove the source of information are all types of input verification that are of particular relevance to AI governance: for example, it may become important to reliably verify whether a piece of information was generated by a model [33] or, vice versa, to verify that it was generated by a human. These all have important applications in guarding against negative societal impacts of generative AI [34]. One particular group dedicated to this issue is the Coalition for Content Provenance and Authenticity (C2PA), which is developing technical standards for certifying the source and history (or provenance) of media content [35].

### 3.4 Output Verification

**What it is:** Output verification allows a user to verify attributes of any information processing (computation) within an information flow. This is relevant for reducing the 'recursive oversight' problem (see Section 2).

**Tools:** When combined with input privacy techniques, technical tools for output verification can overcome the recursive oversight problem. An external auditor could verify properties of an information flow without learning anything beyond the output of targeted tests (e.g. searching for patterns reflective of fraud) while also ensuring that the tests ran correctly. This has significant implications for increasing the precision, scale, and security of auditing institutions, potentially facilitating new types of checks and balances and fairer distributions of power. In addition, output verification relates to ongoing research for auditing or evaluating models for fairness, bias, or emerging capabilities [36, 37, 38, 39].

### 3.5 Flow Governance

**What it is:** Flow governance is satisfied if each party with concern/standing over how information should be used has guarantees that the information flow will adhere to the intended use. This is important because even if a flow satisfies the necessary criteria of input and output privacy and input and output verification, questions still remain concerning who holds the authority to _modify_ the flow.

**Tools:** Traditional analog methods of information flow governance rely heavily on legal and physical measures. For example, multi-key safety-deposit boxes for holding secure documents accomplish similar goals. However, technical tools for flow governance offer distinct advantages in terms of scalability and efficiency over their analog counterparts (see 'policy enforcement' in [24]). Secure multi-party computation (SMPC) serves as an excellent example of this: with SMPC, several parties can be selected to govern the flow of any given information.
Rather than relying solely on legal repercussions for violations, SMPC can implement hard cryptographic limitations to prevent unauthorised behaviour, establishing trust in the system.

### An Analogy: Sending a Letter

To summarise these characteristics in the context of sending a letter, where the letter is the "data" and sending it from one place to another is the "computation", we can describe input privacy as the protective envelope that prevents unauthorised access, while output privacy involves the withholding of sensitive personal information from the letter. Input verification can be likened to the signature on the letter, while output verification is demonstrated through a wax seal that assures the recipient that the letter has not been tampered with. Finally, flow governance can be represented by the concept of shipping the letter in a secure safe with a combination lock, where only the intended recipient has access.

## 4 Specific Illustrations of Use-Cases of Structured Transparency

Together, these tools help address the question of how to build and enforce a desired information flow. Most importantly, they provide:

1. The ability to **unbundle** information such that one needs to share only the bits necessary for a collaboration
2. A solution to the recursive oversight problem such that small actors can **audit information to which they do not themselves have access.**

Here are use cases that illustrate the use of structured transparency for specific topics in AI governance.

### External Scrutiny of Frontier Models

As the AI research frontier advances and the potential benefits and harms of state-of-the-art models grow, researchers with a diverse range of focus areas should be allowed to evaluate model capabilities, biases, dangers, and failures [40]. While open-source release allows for such assessment, it sits in tension with three common concerns: commercial interests, safety, and governance. First, companies may want to maintain a competitive edge and profitability by keeping their models private and not allowing them to be copied and distributed [3]. Second, once a model is released open-source, the originating lab retains no way of verifying how it is being used, and no technical way of preventing dangerous or unethical uses of that model.⁴ Lastly, as Seger notes in her breakdown of what it means to 'democratise AI', releasing a model open-source creates a situation in which 'a single actor or company makes a major AI governance decision: the decision that a dual-use AI system should be made freely accessible to all' - with no safeguards to prevent future malicious adaptations or misuse [41, 42]. These three concerns arise due to the copy and bundling problems.

Footnote 4: Some open-source licences do allow the developer to retain the right to take legal action if the use-licence is broken; however, this does not _prevent_ the unethical use from occurring, and does not guarantee that the developer would be aware that the misuse occurred.

In _The Gradient of Generative AI Release_, Solaiman has broken down AI system components into the following broad and partially overlapping categories of information [43]:

1. **The model itself** (weights, ability to query, adapt, or otherwise examine and conduct further research into a model);
2. **Components for risk analysis** (parts of system development that could provide further insight into the model; the model's capabilities; results from any evaluations that the model owner may have run on the base model);
Recall that structured transparency tools allow for unbundling certain parts of information from the parts strictly necessary for a particular computation. Solaiman's breakdown sheds light on model release from a structured transparency lens because it helps researchers specify exactly what information is necessary for their research (interpretability, safety, bias, performance, fairness, etc.). Solaiman outlines a gradient of options for model release, ranging from open-source to fully closed, with some of the middling options using structured transparency techniques [43]. Additionally, the 'structured access' paradigm advocated by Shevlane proposes options for enabling API access to models without full release [3, 44], which may fulfil the information flow needs of some types of external scrutiny [45, 46]. By sharing the information necessary to evaluate the capabilities, biases, and other traits of the model, rather than the full set of model weights, API access solves the copy and bundling problems, allowing the owner to enforce rules on how the model should be used, and to withdraw access to the model if harmful capabilities are found. Additional considerations may have a bearing on determining and achieving the intended information flow. For example, a model owner might use encryption techniques to prevent users from being able to reverse engineer an algorithm from its outputs [3, 47, 48], thereby enforcing input and output privacy. One form of structured transparency has already been adopted for enabling external scrutiny of recommender systems - the Christchurch Call Initiative on Algorithmic Outcomes (CCIAO) is a joint project between Twitter, Microsoft, and the US, France, and New Zealand governments [49]. The CCIAO project will use a structured transparency software library called PySyft [50] (originally developed for the purpose of federated learning) to allow external civil society researchers to audit proprietary recommender systems at Twitter and Microsoft. (The programme reserves room for expansion to additional AI partners upon successful completion.) Depending on the level of trust between parties and the nature of the scrutiny, combinations of the following techniques could make a workflow for external scrutiny possible. **Technical input privacy techniques** could enable model owners to grant researchers the ability to perform specific computations over their model (e.g. evaluating model behaviour and bias) without providing access for any other operations such as distributing the model for misuse. Additionally, using **technical output privacy techniques**, model owners could prevent reverse engineering of the model via computation outputs. **Input verification techniques** could allow model owners to prove to researchers various attributes of the model, such as which version of the model they evaluated. In situations that feature a competitive relationship between institutions or research groups, **output verification** could help verify that the result of the evaluation was actually computed by the model owner using the exact computations requested by the researcher (i.e. showing that the model owner did not alter any of the evaluation code). Finally, **flow governance** could distribute control across third parties (e.g. funding bodies, stakeholders in a collaboration network, groups safeguarding rights for vulnerable populations), thereby making especially sensitive models available for appropriate research only when approved by several parties.
Doing this could minimise the risk of misuse [51].

### Robust Verification of Model Auditing

As consumer-facing AI applications grow, more domain-specific auditing procedures may become necessary. For instance, once an audited model is deployed into a production setting, the recipients of its predictions might ask, "can I know with confidence that these predictions are actually coming from the original audited model?" Verification processes that confirm that a deployed model is the exact version previously approved would minimise the risk of unauthorised changes and manipulation. A trusted registry listing models, audits, and findings could enable **input verification techniques** to provide cryptographic evidence that the expected inputs are fed into the model making predictions. **Output verification techniques** can provide cryptographic evidence that the user of a deployed model is receiving predictions from the model they expect (e.g. a specific model from a public audit registry). As AI models are deployed in environments of increasing importance, ensuring their quality will become a critical concern. Guaranteeing that a model in use has undergone auditing helps protect users from compromised or unaudited models and can give them greater confidence in the quality of the predictions.

### Monitoring Model Safety in Sensitive or Commercial Applications

As the use of AI in sensitive applications expands, models will increasingly interact with private information (e.g. language models used in therapy apps). This, in turn, will likely trigger pressure to ensure models' adherence to quality guidelines, while also maintaining the customer's privacy. This will require model owners to have effective ways of monitoring and verifying that the output of their model does not break any quality guidelines, while also guaranteeing that they cannot view personal information about the user (either through the user's input or the model's response). Privacy-preserving techniques can enable this by allowing model owners to monitor and validate the output without learning specific information pertaining to any particular user. Ultimately, the ability to balance privacy and quality/safety monitoring will be key to building trust and confidence in AI systems that deal with sensitive or commercial information.

Figure 1: This simplified illustration highlights the similarities in the general set-up of data privacy and AI governance problems. External scrutiny is most analogous to the common data-privacy focus on federated learning.

### Enabling Collective Governance of AI Models

In some cases, it may be desirable for an AI model to be governed by multiple parties. For example, if multiple actors bear the cost of creation by pooling their datasets, computational resources, or AI research talent, they may wish to ensure that subsequent commercialization of the asset distributes profits amongst the group. Such governance might also add value when actors have diverging interests regarding model development and deployment (e.g. the public and a for-profit company might have different preferences).
An AI firm could elect to share governance with an outside party as an exercise in proving compliance with a norm or law, owing to the ability of that outside party to subsequently know how and when the model is used. In addition to helping with **flow governance**, structured transparency could increase the bargaining power of data consortia [52] to influence the variety of AI models. For example, a consortium might only agree to participate if the model creator consented to pool information about model safety and accidents. Through such collective bargaining, data consortia could help to build a more comprehensive understanding of the risks and benefits of AI models. Naturally, in order for such a consortium to be effective, it would be imperative that the datasets continue to reside with the consortium and that AI developers only access such information via structured-transparency-compliant APIs that preserve **input and output privacy** (to avoid the copy problem). In addition, consortia may seek to enable participants to pool information about model safety and accidents in a manner that guarantees source privacy and anonymity, while also providing **verification** that the report comes from a legitimate source (e.g. a verified employee of an institution). The need for this became apparent when Twitter was flooded with unverified screenshots of Bing Chat's behaviour, some of which were photoshopped, causing confusion and obfuscating underlying issues.

### Enabling Agile Regulatory Markets

Understanding of AI evaluation standards will continue to evolve. In order for regulatory systems to keep up with AI development, they must exhibit sufficient agility to incorporate new standards from multiple interest groups covering issue areas ranging from norms for models interacting with minors to assessments of manipulative behaviour, fairness and bias, etc. Clark and Hadfield have stated that a critical challenge for ensuring that AI follows a safe and beneficial development path lies in ensuring that regulatory systems can handle AI's complexity, global reach and pace of change. To achieve this, Clark and Hadfield suggest a new approach to regulation: global regulatory markets [53]. These markets could take the form of a digital network that enables model evaluations (e.g. locally on the model owner's cloud), the subsequent sharing of evaluation results back to the evaluators/auditors, and verification by auditors of whether models passed their standards. This is similar to the federated learning networks in the data governance space. Since models will likely be fine-tuned and improved rapidly, this network could provide a record of models that have passed the standards/audits, enabling the end-users of the models to actually verify (e.g. cryptographically) that the model they're using in their use case has been approved for that specific use. For example, someone building an education app that incorporates an LLM would need to have a guarantee that the version of the model is approved for interacting with children (note that in this example the 'end user' is the app developer). This sort of agile digital regulatory network would become increasingly important for many reasons: 1. to allow quick auditing after small model alterations, 2. to allow different interest groups to be able to submit evaluations/standards to be run, and 3. to be able to provide hard verification to the end-users that the model they're using is approved for their use-case.
## 5 The Importance of Taking a System-Level View

AI and data governance is now a wide field with increasingly different subtopics and concerns. It is important to consider the various problems in AI governance as a whole, as a system, for the following reasons. First, there will likely be large similarities and overlaps in the software stacks needed to achieve these different objectives in AI governance, and the software must be interoperable to avoid partial solutions and significant gaps in governance. For example, the software stack for enabling external scrutiny of models should be interoperable with official auditing procedures, and these audits need to be recorded in such a way that it's possible for downstream users to verify that the version of the model they're receiving has undergone auditing. When viewing the system as a whole, we can think of the network of infrastructure and protocols needed for these purposes as very similar to the internet. Our internet relies on the existence of open-source, non-proprietary standards which are critical to allowing devices, services, and applications to work together across a wide and dispersed network [54, 55, 56, 57]. From this perspective, it is likely important to ensure a similar level of interoperability in the protocols and standards underlying our model governance mechanisms.

### Why is this important right now?

There will likely be a rush to build the software for enabling these markets - AI-auditing software startups already exist, and it will be important for them to be interoperable with future broader regulatory markets. However, it may be important that the underlying network is not based on proprietary software, and is instead built with a focus on open protocols and interoperability, much like the internet. This will be important to prevent an ecosystem of un-interoperable, siloed, and incomplete governance solutions, and also to prevent single points-of-failure within the AI governance ecosystem, or one actor effectively 'owning' the AI governance ecosystem. It is important to look at these problems in AI governance as a whole, as a system, before all of these standards, auditing procedures, software, and norms for enabling external scrutiny settle into place.

## 6 Conclusion

The development of privacy-enhancing technologies has made immense progress on addressing the use-misuse trade-offs in the realm of data privacy. Highlighting the applications of these tools for AI governance is important since there may be significant overlap in the software stacks needed for the AI governance use cases mentioned in this text. When viewing the system as a whole, the importance of interoperability between these different AI governance solutions becomes clear. Therefore, it is eminently important to look at these problems in AI governance from a system-level view before these standards and norms settle into place.

## Acknowledgements

These ideas are the fruit of years of discussion about 'structured transparency' between a wide community of researchers within the AI ethics, governance, safety, and privacy communities around the Centre for the Governance of AI and OpenMined.
We thank Eric Drexler for proposing the name 'structured transparency', and the following people for their input at various stages of these papers on structured transparency: Iason Gabriel, Allan Dafoe, Toby Shevlane, William Isaac, Phil Blunsom, Jan Leike, Vishal Maini, Kenneth Cukier, Helen Nissenbaum, Markus Anderljung, Elizabeth Seger, Georgios Kaissis, Claudia Ghezzou Cuervas-Mons.
2307.04443
Search-time Efficient Device Constraints-Aware Neural Architecture Search
Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Creating manual architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints such as model size and floating-point operations. It incorporates weight sharing and channel bottleneck techniques to speed up the search time. Based on our experiments, we see that DCA-NAS outperforms manual architectures for similar sized models and is comparable to popular mobile architectures on various image classification datasets like CIFAR-10, CIFAR-100, and Imagenet-1k. Experiments with search spaces -- DARTS and NAS-Bench-201 show the generalization capabilities of DCA-NAS. On further evaluating our approach on Hardware-NAS-Bench, device-specific architectures with low inference latency and state-of-the-art performance were discovered.
Oshin Dutta, Tanu Kanvar, Sumeet Agarwal
2023-07-10T09:52:28Z
http://arxiv.org/abs/2307.04443v1
# Search-time Efficient Device Constraints-Aware Neural Architecture Search

###### Abstract

Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Creating manual architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints such as model size and floating-point operations. It incorporates weight sharing and channel bottleneck techniques to speed up the search time. Based on our experiments, we see that DCA-NAS outperforms manual architectures for similarly sized models and is comparable to popular mobile architectures on various image classification datasets like CIFAR-10, CIFAR-100, and Imagenet-1k. Experiments with search spaces--DARTS and NAS-Bench-201--show the generalization capabilities of DCA-NAS. On further evaluating our approach on Hardware-NAS-Bench, device-specific architectures with low inference latency and state-of-the-art performance were discovered.

Keywords: Neural Architecture Search, DARTS, Meta-Learning, Edge Inference, Constrained Optimization

## 1 Introduction

In recent years, there has been significant progress in developing Deep Neural Network (DNN) architectures [33, 47, 34] for edge and mobile devices. However, designing DNN architectures for specific hardware constraints and tasks is a time-consuming and computationally expensive process [3]. To address this, Neural Architecture Search (NAS) [2, 32, 49] has become popular as it discovers optimal architectures given a task and network operations. Despite its success, traditional NAS techniques cannot guarantee optimal architectures for specific devices with hardware constraints such as storage memory and maximum supported FLOPs. To address this concern, researchers have developed hardware-aware algorithms [36, 4] that find optimal device architectures with low resource training overhead and search time. These methods often use inference latency [4], FLOPs [36] or a combination of hardware metrics [36] as constraints scaled by a tunable factor. However, the time to tune the scaling factor is often not considered within the NAS search time and can be ten times the reported search time. To address these issues, we propose the Device Constraints-Aware NAS (DCA-NAS), a principled differentiable NAS method that introduces the total allowable model size or floating-point operations (FLOPs) as constraints within the optimization problem, with minimal hyper-parameter tuning. Unlike inference latency, which is task dependent, FLOPs and memory are specified for a given hardware and thus are appropriate for our generic method. The approach is adaptable to other hardware metrics such as energy consumption or inference latency using additional metric-measuring functions. The paper makes the following significant contributions: * It introduces a fast method that uses weight sharing among operations in the search space and channel bottleneck, along with a differentiable resource constraint, for continuous exploration of the search space.
* A training pipeline that allows a user to input device memory or FLOPs and search for an optimal architecture with minimal hyper-parameter tuning. * Our extensive experimentation on vision datasets- CIFAR-10, CIFAR-100, TinyImagenet, Imagenet-1k and inference-latency comparisons of trained models on Hardware-NAS-Bench demonstrate the efficiency of our method. The generalization of our method to different search spaces is shown with experiments on DARTS and NAS-Bench.

## 2 Related Work

**Neural Architecture Search** Popular approaches [12, 22, 1] designed architectures for high performance on specific tasks or datasets with the traditional deep learning perspective that bigger is better, resulting in computationally and memory-intensive inference on edge devices. Network pruning [13], channel removal [26, 34] and weights/activations quantization [8, 50] can compress architectures, but require pre-training and hyperparameter tuning, and often lack transferability. Neural Architecture Search (NAS) methods such as Reinforcement Learning [30, 4], Evolutionary Learning [11, 21] and Differentiable Neural Architecture Search (DNAS) [25, 43] can automatically search for architectures without user intervention, and can transfer across similar tasks. DNAS with surrogate metrics [42, 48] has also been used to explore the architecture search space. However, architectures found by DNAS methods are not optimized for deployment on edge devices, and smaller models obtained by reducing layers or channels are often sub-optimal.

Figure 1: DCA-NAS framework: Weight sharing in the search space and derived cells lower the search time compared to other DNAS methods. The target device constraint is used to query the search constraint from a look-up graph for constrained optimization.

**Hardware-aware Neural Architecture Search** Certain NAS methods optimize [4, 40, 3, 19] for constraints such as latency, inference speed [41], FLOPs [36, 37], and memory usage [24]. Some use a separate DNN to predict constraint metrics and evolutionary search to obtain hardware-aware optimal models [36, 3], while others consider real-time latencies of edge devices or provide specific architectures for specific devices [27, 7]. However, these methods require significant search time and tuning of the scaling factors controlling the trade-off between the performance and the constraint, and do not always guarantee optimal architectures. In contrast, we use a differentiable hardware-aware objective function with generic hardware metrics, and do not require a tunable scaling factor. Certain methods [3, 29, 9] train a supernet first and then search for a smaller architecture, but this is only efficient when there are more than fifteen different edge devices with different limitations or deployment scenarios [3], as training the supernet takes huge resources (32 V100s for about 1,200 GPU hours). A search stage followed by evaluation, as done in our approach, is more efficient when the number of possible target edge devices is fewer than fifteen.

## 3 DCA-NAS: Device Constraints Aware Fast Neural Architecture Search

We present the preliminary gradient-based NAS objective function in section 3.1, then formulate the problem of incorporating hardware-awareness in NAS as a constrained optimization problem in section 3.2, followed by techniques to reduce the search time in section 3.3. The framework of our approach is illustrated in Figure 1.

### Gradient-based NAS Objective Function

Popular DNAS techniques [25, 46] have two stages, the search phase and the evaluation phase.
During the search phase, given a task or a dataset, these techniques search for a network of cells, which are directed acyclic graphs with \(N\) nodes. The edges of the graph are network layers, whose operations are to be selected from a pre-defined set \(\mathcal{O}\) containing operations such as 3x3 separable convolution and identity operations with trainable weights \(w_{o}\). The search is made differentiable by making the choice of a particular operation a softmax of the architecture weights \(\alpha\) of all operations. Thus, the intermediate output \(z_{j}\) at node \(j\) is given by, \[z_{j}=\sum_{o\in\mathcal{O}}\frac{\exp\left\{\alpha_{o}^{i,j}\right\}}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left\{\alpha_{o^{\prime}}^{i,j}\right\}}\cdot o\left(w_{o}^{i,j},\mathbf{z}_{i}\right) \tag{1}\]

### DCA-NAS formulation

Previous DNAS approaches [25; 45; 46] did not focus on searching architectures specifically for inference on resource-constrained devices. In contrast, we formulate the DNAS objective function as a constrained optimization problem by incorporating device resource constraints (memory or FLOPs) in the search objective function. The constrained bi-level optimization problem is written as, \[\begin{array}{rl}\min_{\alpha}&\mathcal{L}_{\mathrm{val}}\ (w^{*}(\alpha),\alpha)\\ \mathrm{s.t.}&w^{*}(\alpha)=\operatorname*{argmin}_{w}\mathcal{L}_{\mathrm{train}}\ (w,\alpha)\\ \mathrm{s.t.}&k_{s}(\alpha)\leq K_{d}\end{array} \tag{2}\] where the training dataset is split into \(train\) and \(val\) to optimize \(w\) and \(\alpha\) simultaneously in each iteration, subject to the constraint that the architecture's number of parameters or FLOPs \(k_{s}\) must be less than or equal to the device resource constraint \(K_{d}\). The following equation calculates the architecture's number of parameters or FLOPs during search, given the number of cells \(c_{n}\) and the resource cost \(b(o)\) of each candidate operation \(o\). Our method can also be adapted to use other metrics such as latency and energy consumption with additional metric-measuring functions. \[k_{s}(\alpha)=c_{n}\sum_{(i,j)\in N}\sum_{o\in\mathcal{O}}\frac{\exp\{\alpha_{o}^{i,j}\}*b\left(o\right)}{\sum_{o^{\prime}\in\mathcal{O}}\exp\{\alpha_{o^{\prime}}^{i,j}\}} \tag{3}\]

#### 3.2.1 Tackling the difference in search and evaluation networks

The size of the architecture in the search phase \(k_{s}\) is different from the architecture size in the evaluation phase due to the softmax weighting factor in Equation 3 (a demonstration can be found in the appendix). To address this, we introduce a tighter bound on the search constraint \(K_{d^{\prime}}\), which is less than the device resource constraint \(K_{d}\). A lookup graph (LUG) needs to be made for each dataset by varying \(K_{d^{\prime}}\) within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint \(K_{d}\). The computation time of the LUG can be reduced by running the searches in parallel.
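For illustration, Equation 3 amounts to a softmax-weighted sum of per-operation costs and can be sketched in a few lines of PyTorch (an illustrative sketch, not the released implementation; names such as `op_cost`, which stands in for \(b(o)\), are our own):

```python
# Differentiable resource estimate of Eq. 3: the expected cost of the
# architecture under the softmax distribution over candidate operations.
import torch

def search_cost(alpha: torch.Tensor, op_cost: torch.Tensor, n_cells: int) -> torch.Tensor:
    """alpha: [n_edges, n_ops] architecture logits; op_cost: [n_ops] per-op cost b(o)."""
    weights = torch.softmax(alpha, dim=-1)      # mixing weights for each edge (i, j)
    return n_cells * (weights * op_cost).sum()  # k_s(alpha) of Eq. 3

alpha = torch.randn(14, 8, requires_grad=True)  # e.g. 14 edges, 8 candidate ops
op_cost = torch.tensor([0.00, 0.05, 0.11, 0.11, 0.25, 0.25, 0.40, 0.40])  # in millions
k_s = search_cost(alpha, op_cost, n_cells=8)
k_s.backward()  # k_s is differentiable w.r.t. the architecture logits
```

Because \(k_{s}\) is differentiable in \(\alpha\), a violation of the (tightened) constraint can be penalized directly inside the gradient-based search objective.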
Thus, on incorporating the tighter constraint, by looking up the graph for the given device resource constraint \(K_{d}\), along with the trainable Lagrange multiplier \(\lambda\) in Equation 2, the objective function is re-written as, \[\begin{array}{rl}\widetilde{\mathcal{L}}=\mathcal{L}_{\mathrm{val}}&(w^{*}(\alpha),\alpha)+\lambda(k_{s}(\alpha)-LUG(K_{d}))\\ \mathrm{s.t.}&w^{*}(\alpha)=\operatorname*{argmin}_{w}\mathcal{L}_{\mathrm{train}}\ (w,\alpha)\end{array} \tag{4}\]

### Techniques to reduce search time

**Channel Bottleneck.** We use convolutional layers with 1x1 kernels to reduce the depth of the output channels of operations in the search space, saving computation time and memory overhead.

**Derived Cell and Weight sharing.** During architecture search, only one cell with trainable \(\alpha\) is used to optimize architecture parameters. The target network for inference is built by stacking cells with architectures derived from the highest-weighted operations. This can be done during search by deriving the other cell architectures from the first at each iteration [46]. The arrangement of the cells for search is given in the appendix. This derived cell saves computation and memory overhead. A weight sharing strategy [46] among same operations from the same originating node \(i\) to all nodes \(i<j<N\) has been applied within a cell. This is motivated by the observation that non-parametric operations operating on the representation of a node produce the same feature map irrespective of the output node; the strategy is thereby extended to parametric operations. Thus, Equation 1 may be re-written as, \[z_{j}=\sum_{o\in\mathcal{O}}\frac{\exp\left\{\alpha_{o}^{i,j}\right\}}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left\{\alpha_{o^{\prime}}^{i,j}\right\}}\cdot o\left(w_{o}^{i},\mathbf{z}_{i}\right) \tag{5}\]
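A minimal sketch of this sharing rule (illustrative PyTorch, not the authors' code; the operation names and tensor shapes are placeholders) is:

```python
# Eq. 5 weight sharing: every edge (i, j) leaving node i reuses a single
# module w_o^i per operation, instead of a separate w_o^{i,j} per edge.
import torch
import torch.nn as nn

N_NODES = 4
OPS = ["sep_conv_3x3", "skip_connect"]

def make_op(name: str) -> nn.Module:
    # stand-in constructors; a real search space builds each candidate op
    return nn.Conv2d(16, 16, 3, padding=1) if "conv" in name else nn.Identity()

# one module per (source node, operation), shared by all edges (i, j), j > i
shared_ops = nn.ModuleDict(
    {f"{i}_{name}": make_op(name) for i in range(N_NODES) for name in OPS}
)

def apply_op(i: int, j: int, name: str, z_i: torch.Tensor) -> torch.Tensor:
    assert i < j, "edges point from earlier to later nodes"
    return shared_ops[f"{i}_{name}"](z_i)  # same weights regardless of j

z0 = torch.randn(2, 16, 8, 8)
out = apply_op(0, 2, "sep_conv_3x3", z0)  # reuses the same module as edge (0, 1)
```

Sharing the parametric operations in this way shrinks the supernet held in memory during search, which is what allows the larger batch sizes reported in the ablation study.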
## 4 Experimental Results

Our approach is evaluated on two search spaces- DARTS and NAS-Bench- with vision datasets- CIFAR-10, TinyImagenet, Imagenet-16-120 and Imagenet-1k. The details of the search space and implementation are given in the appendix.

\begin{table} \begin{tabular}{c c c c c c} \hline **Dataset** & **Search Strategy** & **Method** & **Accuracy (\%)** & **Params (Mil)** & **Search Cost (GPU days)** \\ \hline CIFAR-10 & manual & PyramidNet-110 (2017) [12] & 95.74 & 3.8 & - \\ & manual & VGG-16 pruned (2017) [16] & 93.4 & 5.4 & - \\ & evolution & Evolution + Cutout (2019) [39] & 96.43 & 5.8 & 12 \\ & random & NAO Random-WS (2019) [31] & 96.08 & 3.9 & 7.2 \\ & gradient & ENAS + micro + Cutout (2018) [30] & 96.46 & 4.6 & 12 \\ & gradient & DARTS + Cutout (2nd) (2018) [25] & 97.24\(\pm\)0.09 & 3.3 & 24 \\ & gradient & SNAS + Cutout (2018) [43] & 97.15 & 2.8 & 36 \\ & gradient & PC-DARTS (2019) [45] & 97.43\(\pm\)0.07 & 3.6 & 2.4 \\ & gradient & SGAS (2020) [23] & 97.34 & 3.7 & 6 \\ & gradient & DNAS (2020) [6] & 97.46\(\pm\)0.03 & 4.0 & 9.6 \\ & gradient & DARTS+PT (2021) [38] & 97.39\(\pm\)0.08 & 3.0 & 19.2 \\ & gradient & Shapley-NAS (2022) [42] & 97.53\(\pm\)0.04 & 3.4 & 7.2 \\ \hline & RCAS & DCA-NAS-3.5M (CIFAR-10) & 97.2\(\pm\)0.09 & **3.4** & **1.37** \\ \hline Tiny ImageNet & manual & SqueezeNet (2016) [18] & 54.00 & - & - \\ & manual & PreActResNet18 (2020) [22] & 63.48 & - & - \\ & manual & ResNet18 (2016) [15] & 58.4 & 6.4 & - \\ & manual & DenseNet (2020) [1] & 62.73 & 11.8 & - \\ & gradient & DARTS + Cutout (2018) [25] & 62.15\(\pm\)0.15 & 7.3 & 219 \\ & RCAS & DCA-NAS-3.5M (TinyImagenet) & 61.4\(\pm\)0.15 & **3.5** & **12.5** \\ & RCAS & DCA-NAS-3.5M (CIFAR-10) & 61.4\(\pm\)0.15 & **3.4** & **1.37** \\ \hline \end{tabular} \end{table} Table 1: Performance comparison of architectures evaluated on visual datasets CIFAR-10 and TinyImagenet. '(CIFAR-10)' indicates search with CIFAR-10. 'X M' in 'DCA-NAS-X M' denotes the input memory constraint. RCAS: Resource Constrained Architecture Search.

Figure 2: Plots show that the DCA-NAS method discovers models with fewer parameters than other NAS methods and manual architectures without sacrificing prediction performance to a large extent.

### Results on DARTS search space

#### 4.1.1 Transferability: learning of coarse features during search

We transfer the architecture searched on CIFAR-10 to train and evaluate the model weights on TinyImagenet in Table 1 and ImageNet-1k in Table 2. This transferred model yields higher performance than manually designed architectures [33, 28] for the target dataset. It is observed that the performance of the transferred model is comparable to that of the architecture searched on the target dataset itself, which can be attributed to the architecture learning coarse features rather than specific objects during search.

#### 4.1.2 Performance versus Device-Constraints trade-off

DCA-NAS discovers 2 to 4% better-performing architectures than manual designs with a memory constraint of 3.5 million parameters on CIFAR-10, and similar performance on TinyImagenet, as in Table 1. On Imagenet-1k, DCA-NAS yields models with similar performance to other NAS methods [42, 6, 45] with a constraint of 5.5 million parameters (taken to yield similarly sized models as other NAS methods), as in Table 2. We vary the input device resource constraint and plot the performance of the searched models against the number of parameters in Figure 2. As observed, DCA-NAS can yield models 15x smaller than manual architectures like PyramidNet-272 [12] with at most 1% reduction in accuracy on CIFAR-10. On TinyImagenet, DCA-NAS yields models similar in performance but 6x smaller in size than the manual ResNet variant.
In comparison to ProxylessNAS [4] for Imagenet-1k, DCA-NAS yields a 32% smaller model in terms of model parameters for similar accuracy. In comparison to DNAS methods [25, 45] for each of the three datasets, we observe that the performance of the DCA-NAS searched models is retained to a certain extent as resources are further limited, after which the model performance degrades.

\begin{table} \begin{tabular}{l c c c c c c} \hline **Method** & \multicolumn{2}{c}{**Test Error (\%)**} & **Parameters** & **FLOPs** & **Search Cost** & **Search** \\ & **top-1** & **top-5** & **(Mil)** & **(Mil)** & **(GPU days)** & **Strategy** \\ \hline Inception-v1 (2015) [35] & 30.2 & 10.1 & 6.6 & 1448 & - & manual \\ MobileNet-V1 (2017) [17] & 29.4 & 10.5 & 4.2 & 569 & - & manual \\ MobileNet-V2 (2018) [33] & 72.0 & 91.0 & 3.4 & 300 & - & manual \\ ShuffleNet 2\(\times\) (v2) (2018) [28] & 25.1 & - & 5 & 591 & - & manual \\ \hline MnasNet-92 (2020) [14] & 25.2 & 8.0 & 4.4 & 388 & - & RL \\ AmoebaNet-C (2019) [31] & 24.3 & 7.6 & 6.4 & 570 & 3150 & evolution \\ \hline DARTS+Cutout (2018) [25] & 26.7 & 8.7 & 4.7 & 574 & 1.0 & gradient \\ SNAS (2018) [43] & 27.3 & 9.2 & 4.3 & 522 & 1.5 & gradient \\ GDAS (2019) [10] & 26.0 & 8.5 & 5.3 & 545 & 0.3 & gradient \\ BayesNAS (2019) [49] & 26.5 & 8.9 & 3.9 & - & 0.2 & gradient \\ P-DARTS (2018) [30] & 24.4 & 7.4 & 4.9 & 557 & 0.3 & gradient \\ SGAS (Crit. 1 best) (2020) [23] & **24.2** & **7.2** & 5.3 & 585 & 0.25 & gradient \\ SDARTS-ADV (2020) [5] & 25.2 & 7.8 & 6.1 & - & 0.4 & gradient \\ Shapley-NAS (2022) [42] & 24.3 & - & 5.1 & 566 & 0.3 & gradient \\ \hline RC-DARTS (2019) [20] & 25.1 & 7.8 & 4.9 & 590 & 1 & RCAS \\ \hline **DCA-NAS** & **25.1** & 8.1 & **5.1** & 578 & **0.06** & **RCAS** \\ \hline ProxylessNAS (GPU) (2019) [4] (Imagenet) & 24.9 & 7.5 & 7.1 & 465 & 8.3 & gradient \\ PC-DARTS (2019) [45] (Imagenet) & 24.2 & 7.3 & 5.3 & 597 & 3.8 & gradient \\ DNAS (2020) [6] (Imagenet) & 24.2 & 7.3 & 5.2 & 644 & 3.9 & gradient \\ DARTS+PT (2021) [38] (Imagenet) & 25.5 & - & 4.7 & 538 & 3.4 & gradient \\ Shapley-NAS (2022) [42] (Imagenet) & 23.9 & - & 5.4 & 582 & 4.2 & gradient \\ \hline RCNet-B (2019) [44] (ImageNet) & 25.3 & 8.0 & 4.7 & 471 & 9 & RCAS \\ \hline **DCA-NAS-5.5M (Imagenet)** & **24.4** & **7.2** & **5.3** & **597** & **1.9** & **RCAS** \\ \hline \end{tabular} \end{table} Table 2: Performance comparison of architectures evaluated on Imagenet-1k. The label "(Imagenet)" indicates that the architecture has been searched and evaluated on Imagenet-1k; otherwise it is searched on CIFAR-10. 'X M' in 'DCA-NAS-X M' denotes the input memory constraint.

A DCA-NAS model of similar size has the advantage of better performance (by 1%) over MobileNet-v2 [33], a manually designed network, on Imagenet-1k, while being automatically searched.

**Search time comparison** For evaluation on TinyImagenet in Table 1, the architecture searched on CIFAR-10 with DCA-NAS yields a model in the lowest search time, which indicates the search-time efficiency of the transferability property. Our method requires about 4x lower search cost than SGAS [23], which performs the best among the other transferred architectures, and 16x lower search time than the other resource-constrained approach [20] for similar performance, as seen in Table 2. Moreover, ProxylessNAS [4] takes about 4x more search time than DCA-NAS, whereas PC-DARTS takes about 2x more search time with no capability to constrain model size.
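To connect these results back to the formulation in Section 3, one iteration of the constrained search can be sketched as follows (illustrative PyTorch pseudocode under our own naming, not the released implementation; `train_loss` and `val_loss` are placeholder closures that evaluate the supernet on the respective data split):

```python
# One search iteration (sketch): descend on the weights w and the architecture
# logits alpha, then ascend on the Lagrange multiplier so that the penalty of
# Eq. 4 enforces k_s(alpha) <= LUG(K_d).
import torch

def search_step(train_loss, val_loss, alpha, lmbda, w_opt, a_opt,
                op_cost, n_cells, lug_bound, lr_lmbda=0.01):
    # inner problem: one descent step on the network weights (training split)
    w_opt.zero_grad()
    train_loss().backward()
    w_opt.step()

    # outer problem: one descent step on alpha (validation split), Eq. 4
    a_opt.zero_grad()
    k_s = n_cells * (torch.softmax(alpha, dim=-1) * op_cost).sum()  # Eq. 3
    (val_loss() + lmbda * (k_s - lug_bound)).backward()
    a_opt.step()

    # projected gradient ascent on the multiplier: violating the budget
    # raises lambda, making large architectures costlier on later steps
    with torch.no_grad():
        lmbda += lr_lmbda * (k_s - lug_bound)
        lmbda.clamp_(min=0.0)
```

Ascent on \(\lambda\) raises the price of violating the size budget, so the search is pushed toward architectures satisfying \(k_{s}(\alpha)\leq LUG(K_{d})\) without a hand-tuned scaling factor.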
### Results on NAS-Bench-201 search space

#### 4.2.1 Performance and Latency comparisons on different devices

Our method reports the mean by averaging over five runs with different random seeds. Figure 3 compares the performance of models searched with DCA-NAS and PC-DARTS by varying the latency constraints. It shows that, unlike PC-DARTS, DCA-NAS can search for more efficient models which have lower inference latency for similar test accuracy. Moreover, we observe that models with similar performance have lower latency when tested on Pixel 3 than on Raspberry Pi 4 due to the faster RAM in Pixel 3. DCA-NAS takes the lowest search time among all the NAS methods due to the addition of search-time-efficient techniques, while being at par in terms of performance across all datasets.

Figure 3: Plots show DCA-NAS searched models with similar performance but lower inference latency (on two devices- Pixel 3 and Raspberry Pi 4) compared to the previous SOTA NAS method- PC-DARTS when evaluated on the NAS-Bench dataset.

## 5 Ablation Study

#### 5.0.1 Effectiveness of various algorithmic augmentations for faster search:

We analyze the effectiveness of the algorithmic augmentations mentioned previously in Section 3.3 to reduce search cost. We sequentially add weight sharing, channel bottleneck, and derived cells to the baseline DARTS [25] method and measure search time and accuracy. Weight sharing, channel bottleneck, and derived cells were observed to significantly reduce search memory overhead, enabling us to use larger batch sizes and reducing the overall search cost, as seen in Figure 4(a). Adding the resource constraint in the final DCA-NAS method negligibly increases search cost while maintaining performance.

**Stability of the approach:** We test stability by running the search algorithm independently five times with different initial seeds and the same constraints and hyperparameters. The architectures found during each run have similar performance when re-trained and evaluated, as shown in Fig. 4(b). Smaller models have lower performance due to restrictions in model complexity compared to larger models.

## 6 Conclusion

We present DCA-NAS, a device constraints-aware neural architecture search framework which discovers architectures optimized to the memory and computational constraints of an edge device in a time-efficient manner. It does so by incorporating a constraint in terms of the number of parameters or floating point operations (FLOPs) in the objective function with the help of a Lagrange multiplier. DCA-NAS in essence searches for a Pareto-optimal solution given the edge device memory or FLOPs constraint. Moreover, it enables architecture search with a search cost 4 to 17 times lower than previous state-of-the-art hardware-aware NAS approaches. DCA-NAS can discover models about 10 to 15 times smaller than manually designed architectures with similar performance. In comparison to DARTS and its other NAS variants, DCA-NAS can discover models up to 3x smaller in size with similar performance. This hardware-aware approach can be generalized to any future updates to differentiable neural architecture search, and possibly to training-free methods of NAS with some adaptation.

## Acknowledgement

We thank the anonymous reviewers; Profs. Surendra Prasad and Brejesh Lall of IIT Delhi; and colleagues at Cadence India for their valuable feedback and inputs. This research is supported by funding from Cadence India; the first author is also supported by a fellowship from the Ministry of Education, India.
Figure 4: (a) Ablation study with the CIFAR-10 dataset: each component added to DARTS leads to a reduction in the search cost of DCA-NAS while performance is retained. WS: Weight Sharing, CB: Channel Bottleneck, DC: Derived Cell, RC: Resource Constraint, BS: Batch Size. (b) Stability of performance of DCA-NAS searched models for runs with varying seeds on the CIFAR-10 dataset.
2306.12108
Complex accident, clear responsibility
The problem of allocating accident responsibility for autonomous driving is a difficult issue in the field of autonomous driving. Due to the complexity of autonomous driving technology, most of the research on the responsibility of autonomous driving accidents has remained at the theoretical level. When encountering actual autonomous driving accidents, a proven and fair solution is needed. To address this problem, this study proposes a multi-subject responsibility allocation optimization method based on the RCModel (Risk Chain Model), which analyzes the responsibility of each actor from a technical perspective and promotes a more reasonable and fair allocation of responsibility.
Dexin Yi
2023-06-21T08:47:35Z
http://arxiv.org/abs/2306.12108v1
# Complex accident, clear responsibility

###### Abstract

The problem of allocating accident responsibility for autonomous driving is a difficult issue in the field of autonomous driving. Due to the complexity of autonomous driving technology, most of the research on the responsibility of autonomous driving accidents has remained at the theoretical level. When encountering actual autonomous driving accidents, a proven and fair solution is needed. To address this problem, this study proposes a multi-subject responsibility allocation optimization method based on the RCModel (Risk Chain Model), which analyzes the responsibility of each actor from a technical perspective and promotes a more reasonable and fair allocation of responsibility.

## I Introduction

In recent years, due to the development of autonomous driving technology, the existing methods of allocating responsibility for traffic accidents have become obsolete. As autonomous driving technology is still in its early stages of development, its safety cannot be considered absolutely reliable. Once an accident occurs, it becomes difficult to clarify accident responsibility due to the complexity of the driving model and environment. Furthermore, autonomous driving technology alters traditional driving styles, making it difficult for the current system of allocating accident responsibility to adapt. Therefore, society requires a more reasonable rule for allocating responsibility. There are five methods of allocating responsibility for autonomous driving accidents, depending on the subject of responsibility. * First, the responsibility lies with the automobile manufacturer. One perspective asserts that autonomous driving accidents fall under general tort liability rules, as automobile manufacturers cannot prove their lack of fault in the occurrence of accidents.[1] Another perspective suggests that product liability rules should apply to autonomous driving accidents since the driver does not engage in driving behavior during vehicle operation, holding the manufacturer accountable for product liability.[2] A third perspective is that autonomous vehicles, which do not need to be operated, are similar to elevators, and that the manufacturer should be held responsible for them by analogy with "elevator tort accidents".[3] * Second, the responsibility lies with the vehicle users. This method contends that the allocation of responsibility in autonomous driving accidents should follow the principles of tort liability in traffic accidents. As autonomous vehicles inherently pose certain risks, human drivers should assume responsibility, which implies the expectation that they supervise autonomous driving to enhance risk awareness and minimize the occurrence of risks.[4] * Third, the responsibility lies with the vehicle owners. One perspective treats the autonomous driving system as an employee, applying the rules of vicarious liability.[5] Another perspective likens the autonomous driving system to a pet, applying the rules of animal liability.[6] * Fourth, the responsibility lies with the establishment of an external risk-sharing mechanism.
One perspective is to adopt a dual-level liability insurance framework based on the "nuclear accident tort liability rules."[7] Another perspective is to establish a large-scale relief fund based on the "vaccine accident tort liability rules."[7] * Fifth, the responsibility lies with granting legal personhood to the autonomous driving system, making it responsible for its own actions.[8] The above methods clearly show the reasons for assigning responsibility for the accident to different responsible subjects. However, these methods have the following drawbacks: First, the above methods lack a clear distinction between "responsibility" and "liability". The establishment of legal liability requires satisfying two conditions: the "existence of an unlawful act" and "having the reason and ability to take responsibility". If the "unlawful act exists", the subject is considered to have "responsibility" for the accident. However, the existence of "liability" also requires consideration of whether the subject has subjective intent, whether the subject has reached the legal age of liability, and many other aspects. This study focuses on the "responsibility" of the subject. Second, the above methods lack a comprehensive grasp of the level and status of autonomous driving. In a broad sense, vehicles equipped with autonomous driving functions can be referred to as "autonomous vehicles (AVs)." However, the term "autonomous driving" in the first and third methods mentioned above actually describes unmanned driving and only covers the scenarios of Level 4 and Level 5 as defined in J3016 [9]. Because both the driver and the autonomous driving system may be involved in the operation of the vehicle in Levels 1 to 3 autonomous driving, it is unreasonable to assign all responsibility to the automobile manufacturer. Third, the above methods lack a multi-subject perspective on responsibility. Accident responsibility should be allocated in a way that recognizes the possible responsibility of the automobile manufacturer, the driver and the third party involved in the accident, and denies the responsibility of the autonomous driving system. In terms of the driving process, on the one hand, safe driving requires that a normal driving environment be maintained inside and outside the vehicle, and a safe driving environment requires that third parties (other traffic participants) do not interfere with the driver or the vehicle. On the other hand, safe driving requires the proper operation or mutual cooperation of driving subjects. Therefore, the automobile manufacturer, the driver and the third party involved in the accident all have the potential to be held responsible. However, it should be denied that the autonomous driving system itself can be held responsible. This is because the autonomous driving system is only an external expression of the manufacturer's level of technology; it is not self-aware and does not have the ability to bear responsibility. Thus, it can also be inferred that it is not appropriate to consider it as a thinking "hired man" or "pet". Fourth, appropriate consideration should be given to the fact that the automobile manufacturer should be assigned the primary responsibility. When an accident occurs as a result of the combined actions of the automobile manufacturer and other subjects, the manufacturer should be considered, within reason, to be more responsible.
This is because manufacturers generally have a greater capacity to compensate than other subjects, and allocating responsibility to them is consistent with the principle of risk-income consistency. Fifth, external risk-sharing mechanisms should not be the primary method of risk avoidance. Although the introduction of more responsible parties can spread the risk, it can also increase the burden on society to some extent. Compared with nuclear accidents or vaccine accidents, autonomous driving accidents are more numerous but have smaller social consequences. Therefore, it is unreasonable to allow external risk-sharing mechanisms to fully assume responsibility for accidents. In addition, completely transferring the accident risk reduces the manufacturer's responsibility and is not conducive to spurring manufacturers to improve their technology and strive to reduce accident risk. Considering the drawbacks of the above methods, a reasonable responsibility allocation system should have the following features: * Applicable to all levels of autonomous driving systems. This requires the method to be directly integrated with technical principles and to analyze the accident occurrence logic from the bottom up. * Use a multi-subject risk-sharing method to share the risk. This requires that the method be applicable to all responsible subjects. * The allocated risk must be reasonable. Each responsible subject only bears the risk due to its own negligence. Here, on the one hand, the method requires that the means of finding the accident risk be clear and traceable; on the other hand, the evaluation of each subject must be based on the facts of the accident and be logically sound.

## II Highlights

### _Universal method for all levels_

Applicable to various levels of autonomous vehicles. The accident responsibility evaluation method based on technological principles is universal.

### _Multi-subject risk allocation_

Multiple subjects share the responsibility for the risk. This is more practical and equitable than the practice of having a single subject bear the entire risk of an accident.

### _Traceable method_

It is beneficial for reconstructing the accident scenario and evaluating the appropriateness of actions taken by all subjects involved.

## III Method

As Figure 1 shows, the proposed AV-RCModel consists of three processes. The following gives a brief introduction to each process: * **Relevant factors**: Specify the level of driving automation according to J3016 [9], identify the involved subjects based on the accident, and categorize the types of accidents according to ISO 21448 and ISO 26262 [10]. * **RCModel**: Analyze the sequence of risk occurrences using RCModel, elucidate the relationships between risks, and assess the level of risk severity. * **AVModel**: Analyze the specific actions of the autonomous vehicle and the driver during the accident using AVModel, assess the contingency and inevitability of risk factors, and determine their influence on the accident occurrence.[11]

### _Step 1: relevant factors_

In the first step, three technical factors related to autonomous driving need to be identified. First, since the main driving subject varies from level to level, the level of automation of the autonomous vehicle should be clearly defined. Secondly, because multiple subjects may bear responsibility, it is necessary to preliminarily identify the responsible subjects involved in the accident.
Third, in order to facilitate the subsequent analysis, the accident types should be judged according to the known information about the accident.

Figure 1: The flow chart of AV-RCModel

**Level of driving automation:** Clearly define the level of automation of the involved vehicle's autonomous driving system to determine the actual controlling subject at the time of the accident. For instance, in the case of an L2 autonomous driving accident where it is difficult to determine the actual controlling subject, it can be initially presumed that the accident occurred while the driver was in control. **Multi-responsible subjects:** The responsible subjects for autonomous driving accidents include the automobile manufacturer, the driver, and third parties. The automobile manufacturer is accountable for any issues related to the autonomous driving system or vehicle components. Third parties refer to actors that may pose harm to the driver or the driving environment, also referred to as other traffic participants. **Types of accident:** To alleviate the complexity of human-vehicle interaction scenarios, accidents can be categorized into four types according to ISO 21448 and ISO 26262. These types are: functional safety accidents caused by damage to vehicle electronic components, safety of the intended functionality (SOTIF) accidents resulting from limitations of the autonomous driving system, driver-operated accidents due to driver errors, and accidents caused by the actions of third parties. It is important to note that an autonomous driving accident may combine several of these types. The above classification allows for a preliminary characterization of the accident.

### _Step 2: Analysis of RCModel_

The Risk Chain Model (RCModel) is used in this research to analyze accident risk. This is an artificial intelligence risk control research method that identifies "critical risks" by analyzing artificial intelligence systems, service providers, and users.[11] Figure 2 illustrates the flow chart of RCModel, which employs a comprehensive approach. RCModel incorporates the principles and key terms outlined in AI ethics and governance guidelines from both domestic and international sources. It classifies the risk factors associated with AI services into three distinct layers: (1) AI systems, (2) AI service providers, and (3) users. The first layer, known as the technical layer, encompasses elements such as AI models, data, rule-based applications, and system environments. The second layer, referred to as the service operation layer, encompasses not only the AI systems but also the code of conduct, operations, and communication pertaining to the provision of services. Finally, the third layer represents the users themselves and includes aspects such as user understanding, actions, and the user environment.[11] In this study, the AI system corresponds to the autonomous driving system, and the AI service provider corresponds to the manufacturer of the vehicle and the autonomous driving system. Consider that in autonomous driving, third parties can introduce uncertainty into the driving environment. At the same time, the operation of the autonomous vehicle may also create uncertainty for the life and health of third parties. Therefore, when considering the "users" module of the RCModel, third parties should be recognized as users. RCModel is applied to analyze the risk factors in an accident.
First, the risks are analyzed in chronological order in relation to the types of accident identified in the first step, showing how they were generated and eventually led to the accident. Second, for each risk factor, the responsible subjects are identified. Finally, the degree of hazard of each risk factor to the accident and to society as a whole is assessed. RCModel mainly analyzes the nature and hazard of the behaviors, and cannot determine whether each risk factor had a necessary impact on the occurrence of the accident. Therefore, AVModel is used to analyze what actions each responsible subject in the accident actually took, and whether these behaviors had an impact on the occurrence of the accident.

### _Step 3: Analysis of AVModel_

The Autonomous Vehicle Model (AVModel), associated with the RCModel, is introduced to understand the behavior of each subject at the technical level, given the specificity of the autonomous driving model. The autonomous driving system itself is an artificial intelligence system with risk, and it works with the human driver in the driving process to accomplish the mission of safe driving, so it is necessary to analyze whether the collaboration between the autonomous driving system and the driver was normal when the accident occurred. In addition, third parties (other traffic participants) are also identified as users in this model. Figure 3 shows the composition diagram of AVModel. First, the operation of the autonomous driving system is divided into three aspects. Second, the manufacturer's autonomous vehicle product includes the autonomous driving system and in-vehicle software and hardware facilities.

Figure 3: AVModel factors and structure

Figure 2: RCModel factors and structure

Considering that in autonomous driving third parties can introduce uncertainty into the driving environment, and, at the same time, autonomous vehicles may also create uncertainty for third parties, the users part includes the behaviors of both the driver and other traffic participants. The influence of each risk factor on the accident result is then analyzed. First, using AVModel, analyze whether each risk behavior is an inevitable factor or a coincidental factor. Second, with the inevitable factors fixed, the coincidental factors are permuted and the results of the accident under each combination are analyzed.

## IV Case study

### _Case I_

_Case review_: In 2018, an Uber automated vehicle was involved in a traffic accident in Tempe, Arizona, colliding with a pedestrian crossing the street. The vehicle was in automatic control mode at the time of the accident. The accident was caused by a vehicle recognition error and the safety operator's failure to effectively take over. In addition, the vehicle's emergency braking function was disabled. The verdict in the case was that the safety operator was fully responsible. * Step 1: Relevant Factors * **Level of driving automation**: L2. * **Multi-responsible subjects**: vehicle manufacturer, safety operator, pedestrian. * **Types of accident**: safety of the intended functionality (SOTIF) accident, driver operation accident. * Step 2: Analysis of RCModel RCModel is used to analyze the accident, and the result is marked in the form of a red risk chain: Figure 4 shows the RCModel factors and structure of this case. According to the red risk chain shown in Figure 4, it was possible to find that the lack of consensus between the user and the service provider influenced the user's assessment of his or her responsibility.
In other words, the safety operator may have been in a completely relaxed state at the time of the accident because the car manufacturer did not train the safety operator sufficiently, leading to a misunderstanding of the safety operator's responsibility and of the autonomous driving technology. Analysis of the behaviors of the accident subjects: Table I shows the responsible subjects and risk factors corresponding to each risk behavior in the RCModel. At the same time, the hazards of each behavior are evaluated. This facilitates the final conclusion of the assessment of each subject. * Step 3: Analysis of AVModel Based on the risks shown in the RCModel, the operations of each subject at the time of the accident are mapped using AVModel in conjunction with the accident process (Figure 5), and the coincidental factors are permuted to consider the possible results of the accident. In this accident, first, the autonomous driving system proceeded to the decision-planning stage but was unable to control the autonomous vehicle; second, the driver was distracted during the accident; and third, the manufacturer had turned off the emergency brake, so no emergency intervention was available through vehicle management and HMI.

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{113.8pt}|} \hline **Indicator** & **Layer** & **Factor** & **Hazard** \\ \hline Autonomous driving fails to identify pedestrian & AI system & Data quality & Serious: the problem may exist in all autonomous driving cars of the same batch of systems. Moderate: the probability of possible misidentification exists for all autonomous driving systems \\ \hline Emergency brake function was disabled & Manufacturer (service provider) & Safety & Serious: causes a safety hazard \\ \hline Manufacturer does not train the safety operator & Manufacturer (service provider) & - & Slight: the manufacturer needs to give the safety operator relevant driving training and make them aware of the risks \\ \hline Safety operator not driving in compliance & Safety operator (Users) & User responsibility & Serious: a requirement of the driving code of conduct \\ \hline Safety operator did not take over & Safety operator (Users) & Proper use & Serious: timely takeover is a driver obligation \\ \hline \end{tabular} \end{table} TABLE I: Analysis of the behaviors of the accident subjects

Fig. 4: RCModel factors and structure of this case

Fig. 5: AVModel factors and structure of this case

Analysis of inevitable and coincidental factors in the driving environment: **Inevitable factor** * Vehicle has its emergency brake turned off. **Coincidental factors** * Autonomous driving system fails to identify pedestrians. * The driver did not take over. Permuting the two coincidental factors yields four possible scenarios under the above risks. They show that, because the automobile manufacturer turned off the emergency braking function of the vehicle, even if the autonomous driving system had been able to identify the pedestrian, the vehicle would not have been able to stop.
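A minimal sketch reproducing this four-scenario analysis (an illustrative reconstruction of the logic described in the text, not code from the study; the variable names are our own):

```python
# Enumerate the four scenarios: the inevitable factor (emergency braking
# disabled) is fixed while the two coincidental factors are permuted.
from itertools import product

EMERGENCY_BRAKE_DISABLED = True  # inevitable factor in this case

for system_identifies, operator_takes_over in product([False, True], repeat=2):
    # With emergency braking disabled, identifying the pedestrian alone
    # cannot stop the vehicle; only the safety operator's takeover can.
    avoided = operator_takes_over or (
        system_identifies and not EMERGENCY_BRAKE_DISABLED
    )
    print(f"system identifies pedestrian: {system_identifies!s:5} | "
          f"operator takes over: {operator_takes_over!s:5} | "
          f"accident avoided: {avoided}")
```

Running the sketch prints the four combinations and their outcomes.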
In other words, it is entirely up to the safety operator to prevent the accident from happening. * Conclusion In response to the accident, this study concludes that: * The manufacturer of the autonomous driving system: bears primary responsibility. First, the inevitable risk factor in this accident was generated by the manufacturer, and that factor caused the autonomous driving system to fail to take action even when it identified the pedestrian; second, the safety operator's non-compliant driving also reflected problems with the manufacturer's training of the safety operator. Manufacturers are responsible for ensuring that the design and functionality of their systems meet safety standards and should warn safety operators or drivers about any known defects or risks. * Safety operator: bears secondary responsibility. It is reasonable to expect a safety operator to know and follow the relevant driving rules. However, the safety operator's behavior was partly caused by the manufacturer's training mistakes, so the safety operator bears only secondary responsibility. * Pedestrian: no responsibility. There were no obvious violations of traffic rules by the pedestrian in this accident. ### _Case 2_ _Case review_: On December 29, 2019, a man was driving a Tesla Model S on the highway and sped into the city of Gardena, Los Angeles. The Tesla then ran a red light in downtown Gardena at 119 mph and crashed into a Honda at an intersection, killing the Honda's two occupants. Tesla engineers testified that the autonomous driving function was on at the time of the accident and that the driver's hands were not off the wheel at the time of the accident. However, there was no braking or slowing of the vehicle in the six minutes leading up to the accident. The verdict in the case was that the driver was fully responsible. * Step 1: Relevant Factors * **Level of driving automation**: L2. * **Multi-responsible subjects**: vehicle manufacturer, driver, object vehicle. * **Types of accident**: safety of the intended functionality accident, driver operation accident. * Step 2: Analysis of RCModel RCModel is used to analyze the accident, and the result is marked in the form of a red risk chain: Figure 6 shows the RCModel factors and structure of this case. First, the vehicle's autonomous driving system itself had limitations and was unable to recognize traffic lights and activate the deceleration function. Second, the accident vehicle's autonomous driving was significantly flawed, unable even to apply emergency braking for the object vehicle. However, Tesla exaggerated the autonomous driving technology in its early publicity and did not clearly inform consumers of the technical limitations of the autonomous vehicle, resulting in a discrepancy between the company's and consumers' perceptions of the autonomous driving system. Subsequently, consumers who had overly high expectations of the autonomous driving technology mistakenly believed that it could be less supervised (or even unsupervised) and therefore reacted incorrectly, leading to the accident. According to the RCModel, the hazards of the subjects' behaviors corresponding to each risk factor are analyzed by combining the order of occurrence of the risk factors and the relationships between them: Fig. 6: RCModel factors and structure of this case Table 3 shows the responsible subjects and risk factors corresponding to each risk behavior in the RCModel.
At the same time, the hazard of each behavior is evaluated. Step 3: Analysis of AVModel Based on the risks shown in the RCModel, the operations of each subject at the time of the accident are mapped using AVModel in conjunction with the accident process: Figure 7 shows the AVModel factors and structure of this case. In the accident, the autonomous driving system only proceeded to the perception & fusion stage, and the driver did not take over. Analysis of inevitable and coincidental factors in the driving environment: **Inevitable factors** * The autonomous driving system could not recognize traffic lights and did not slow down. * Tesla misrepresented the level of its autonomous driving system. **Coincidental factors** * The driver did not take over. * The autonomous driving system failed to recognize vehicles and did not apply the emergency brakes. The coincidental factors are permuted to consider the possible results of the accident. The following table illustrates the four possible scenarios under the above risks: Table 4 shows that it is possible to avoid the accident as long as either the autonomous driving system or the driver is able to operate the vehicle correctly. * Conclusion In response to the accident, this study concludes that: * Tesla (the manufacturer): bears primary responsibility. Tesla exaggerated the capability of its autonomous driving system and is to some extent responsible for the driver's driving violations. * The driver: bears secondary responsibility. On the one hand, the driver's failure to take over was caused to some extent by Tesla's exaggerated publicity, so there are mitigating reasons for this behavior. On the other hand, the fact that the driver took no measures at all to prevent the accident is not excusable. Considering these two factors and the enterprise's greater capacity to bear risk, the driver should assume secondary responsibility. * Object vehicle: no responsibility. The object vehicle did not violate traffic rules. ## V Conclusion This study uses a technology-based, multi-subject, traceable method for allocating accident responsibility. Its benefits are that it is applicable to all levels of driving automation, because it explicitly discusses the interaction between the driver and the autonomous driving system, and that it is fairer and more reasonable, because responsibility is shared among multiple subjects. The RCModel is used to sort the various risk factors in chronological order, so the allocation of responsibility is clear and traceable. This method can be used to identify the responsible subjects, and the primary and secondary responsible subjects, for different autonomous driving accidents. At the same time, it analyzes and explains the behavior of each responsible subject to justify why they need to take responsibility. However, this method still has some shortcomings. First, it cannot produce an exact proportion for responsibility allocation, because a specific proportion would have to be combined with a more detailed accident report. Second, the method needs to be validated through a large body of case practice. In view of these shortcomings, future research on responsibility allocation for autonomous driving accidents should be based on a large number of accident cases and should continually seek more reasonable allocation rules.
2310.17796
ControlLLM: Augment Language Models with Tools by Searching on Graphs
We present ControlLLM, a novel framework that enables large language models (LLMs) to utilize multi-modal tools for solving complex real-world tasks. Despite the remarkable performance of LLMs, they still struggle with tool invocation due to ambiguous user prompts, inaccurate tool selection and parameterization, and inefficient tool scheduling. To overcome these challenges, our framework comprises three key components: (1) a \textit{task decomposer} that breaks down a complex task into clear subtasks with well-defined inputs and outputs; (2) a \textit{Thoughts-on-Graph (ToG) paradigm} that searches the optimal solution path on a pre-built tool graph, which specifies the parameter and dependency relations among different tools; and (3) an \textit{execution engine with a rich toolbox} that interprets the solution path and runs the tools efficiently on different computational devices. We evaluate our framework on diverse tasks involving image, audio, and video processing, demonstrating its superior accuracy, efficiency, and versatility compared to existing methods. The code is at https://github.com/OpenGVLab/ControlLLM.
Zhaoyang Liu, Zeqiang Lai, Zhangwei Gao, Erfei Cui, Ziheng Li, Xizhou Zhu, Lewei Lu, Qifeng Chen, Yu Qiao, Jifeng Dai, Wenhai Wang
2023-10-26T21:57:21Z
http://arxiv.org/abs/2310.17796v3
# ControlLLM: Augment Language Models with Tools by Searching on Graphs ###### Abstract We present ControlLLM, a novel framework that enables large language models (LLMs) to utilize multi-modal tools for solving complex real-world tasks. Despite the remarkable performance of LLMs, they still struggle with tool invocation due to ambiguous user prompts, inaccurate tool selection and parameterization, and inefficient tool scheduling. To overcome these challenges, our framework comprises three key components: (1) a task decomposer that breaks down a complex task into clear subtasks with well-defined inputs and outputs; (2) a Thoughts-on-Graph (ToG) paradigm that searches the optimal solution path on a pre-built tool graph, which specifies the parameter and dependency relations among different tools; and (3) an execution engine with a rich toolbox that interprets the solution path and runs the tools efficiently on different computational devices. We evaluate our framework on diverse tasks involving image, audio, and video processing, demonstrating its superior accuracy, efficiency, and versatility compared to existing methods. + Footnote †: \({}^{\boxtimes}\)Corresponding authors ([email protected], [email protected]). [https://github.com/OpenGVLab/ControlLLM](https://github.com/OpenGVLab/ControlLLM) ## 1 Introduction Large-scale language models (LLMs) like ChatGPT [18] and the LLaMA series [29, 30] have demonstrated impressive capability in understanding and generating natural language. Their skills at interaction, planning, and reasoning have also been rapidly extended beyond NLP, propelling the advancement of studies in multi-modal interaction [1, 13, 14, 17, 31, 32, 40]. An emerging example of multi-modal interaction is tool-augmented language models [16, 25, 26, 36, 37], which strive to enhance the capabilities of language models beyond text to include varying modalities such as images, videos, audio, _etc_. These models employ LLMs as primary controllers and incorporate tools with diverse functionalities as plugins. This helps to solve a wide range of multi-modal tasks and opens the door for innovative applications in multi-modal interaction. However, challenges persist in this field, such as task decomposition, tool selection and invocation, argument and return value assignment to tools, and efficient tool execution scheduling. To address these challenges, recent methods [16, 26, 36, 38] use ChatGPT with techniques like Chain-of-Thought (CoT) [34], Tree-of-Thought (ToT) [38], self-consistency [33] and in-context learning [7] to solve complex tasks by breaking them into a chain or tree of sub-tasks. But these methods are based on the assumption that each sub-task has at most one preceding task. This presumption is inadequate for real-world applications, particularly multi-modal tasks, which usually require multiple inputs. For instance, to remove an object in an image, one needs to input the image path, the object's position, and the removal command. Therefore, a chain-shaped or tree-shaped paradigm may struggle to invoke tools accurately in complex situations and fail to manage intricate topological relationships among tools (see Fig. 1). In this work, we introduce ControlLLM, a new framework designed to assist Large Language Models (LLMs) in accurately and efficiently controlling multi-modal tools and providing comprehensive solutions for complex real-world tasks involving multi-modal inputs.
Our framework places particular emphasis on three aspects as follows: **Task Decomposition.** A task decomposer is introduced to analyze the user prompt and break it down into well-defined subtasks, each with clear fields such as task description, task domain, arguments, and returned output. By breaking down complex tasks into manageable subtasks, the task decomposer significantly enhances the system's ability to handle intricate user prompts. It paves the way for follow-up task planning and solution execution. **Task Planning.** This part handles tool selection and tool argument assignment. We propose a Thoughts-on-Graph (ToG) paradigm that traverses a topological tool graph to search for solutions. The nodes of the graph are tools that are connected based on their dependencies and relationships. ToG orchestrates the selected tools and controls the flow of resources among them to form possible solutions. ToG can find the optimal solution for each sub-task by applying diverse search strategies on the graph. Thanks to the concrete definition of each subtask and the explicit tool dependencies in the tool graph, ToG can effectively search all feasible solution paths in cases where the selected optimal solution fails to meet users' preferences. **Solution Execution.** We design an execution engine that can execute the solution generated by ToG and craft informative and well-formatted responses. The engine has access to a versatile toolbox consisting of various tools from different sources, such as locally deployed APIs or cloud services. The engine can also parallelize the tool executions according to the topology of the solution path to reduce the latency and provide feedback during the execution process. Our ControlLLM offers several advantages. (1) It can accurately handle complex real-world tasks that involve multi-modal inputs and outputs, while previous methods [4, 15, 36, 26, 37, 16] are usually limited by the modality or complexity of the tasks; (2) It can reduce the impact of ambiguity problems in natural language. With well-defined objectives for each subtask, our ToG can provide multiple alternative solutions; (3) It can overcome the token limitation of LLMs during task planning. This is because our method searches for the optimal solution path on the tool graph, instead of asking LLMs to generate solutions to meet user requirements. (4) It is easy to scale up the toolbox in our method. Since the optimal solution lies in the tool graph, when the toolbox changes, we only need to rebuild the graph without re-training LLMs or updating in-context prompts. To evaluate the effectiveness of ControlLLM on tasks of different complexity, we designed a benchmark with a series of tailored metrics. Specifically, we use the irrelevant tool inclusion rate and the necessary tool inclusion rate to measure tool selection. We employ the resource hallucination rate and the resource type consistency rate to assess argument assignment. We also split the test set into three difficulty levels based on the number of APIs involved: easy (\(<2\) APIs), medium (\(=2\) APIs), and hard (\(>2\) APIs). We conducted various experiments, both quantitative and qualitative, to compare our method with existing ones. The results show that ControlLLM achieves a higher success rate in tool invocation, especially for complicated instructions. In summary, the main contributions are as follows: (1) We propose ControlLLM, a framework that lets LLMs use various tools across different modalities to solve complex tasks in the real world.
With a powerful toolbox, ControlLLM can be easily extended to tasks with natural language, images, audio, video, or any mix of them. (2) We design three tailored components in ControlLLM: task decomposition, which breaks down the user prompt into subtasks with well-defined inputs and outputs; the ToG paradigm for task planning, which searches the optimal solution path on a graph that depicts tool dependencies; and an execution engine with a powerful toolbox, which efficiently schedules and executes the solution path. Figure 1: **Comparisons of different paradigms for task planning.** (a) Chain of Thought (CoT) [34], CoT with self-consistency [33] and (b) Tree of Thoughts [38] (ToT) essentially rely on the LLMs to perform task planning, where the edge is actually formed by LLMs at run time. (c) The Thoughts-on-Graph (ToG) paradigm in our method searches for solutions on a pre-built graph that captures the dependencies of tools, which avoids the hallucination problem in tool invocation. (3) We construct a benchmark to assess the efficacy of ControlLLM on tasks with different complexity levels. The experimental results demonstrate significant improvements in tool usage. Notably, ControlLLM achieves a success rate of 98% in the metric of overall solution evaluation on challenging tasks, while the best baseline only reaches 59%. ## 2 Related Work Planning, Reasoning, and Decision Making.It is a longstanding vision to empower autonomous agents with the abilities of planning, reasoning, and decision-making [12, 27, 35]. Despite progressive development, it was recent advancements in large language models (LLMs) [3, 5] that took a breakthrough step in addressing these problems for broad user requests. Nevertheless, it has been shown that LLMs still suffer from difficulties in dealing with knowledge-heavy and complex tasks [23]. To overcome these issues, Chain of Thoughts (CoT) [34] was introduced as a simple yet effective prompting technique to elicit the complex reasoning capabilities of LLMs. Following this line of work, CoT with self-consistency [33], Tree of Thoughts (ToT) [38], and other techniques [6, 10, 41] have been proposed to further improve the reasoning abilities. There are also several works [2, 39] that introduce techniques called Graph-of-Thought (GoT). They all share a common trait: they rely heavily on LLMs to generate thoughts for solving complicated NLP problems. In contrast, our ToG aims to endow the language model with the ability to use tools for a multi-modal dialogue system. It uses a search algorithm to form a complex thought network for task planning, and this paradigm evidently improves the performance of task planning. Tool-Augmented LLM.Drawing inspiration from the evolving planning and decision-making capabilities observed in Large Language Model (LLM) systems, a new wave of research [21] has started to enhance LLMs with external tools for accessing up-to-date information, reducing hallucination, multi-modal interactions, _etc._ Prominent examples include VisProg [8], Visual ChatGPT [36], HuggingGPT [26], InternGPT [16], AutoGPT1, and Transformers Agent2. A distinctive trait of this line of research is its reliance on the zero-shot or few-shot in-context learning capabilities inherent in LLMs [3]. These capabilities enable task decomposition, tool selection, and parameter completion without requiring explicit fine-tuning.
However, due to the inherent limitations of LLMs, issues such as hallucination and challenges in effective decomposition and deduction can arise with substantial frequency. In conjunction with this line of work, there are also instruction-tuning methods [9, 20, 22, 25, 37]. While these methods alleviate the aforementioned issues after being tuned on conversations involving tools, they are still limited when it comes to expanding the toolset, i.e., additional training is required to add tools. Footnote 1: [https://github.com/Significant-Gravitas/Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) Footnote 2: [https://huggingface.co/docs/transformers/transformers_agents](https://huggingface.co/docs/transformers/transformers_agents) Multi-Modal LLMs.Developing LLMs that inherently possess multi-modal capabilities is another approach to extending the usage boundary of LLMs for more complex real-world scenarios. For instance, BLIP-2 [13] binds a frozen image encoder to an LLM to enable vision-language understanding and generation. Similarly, VisionLLM [32] and LISA [11] empower LLMs with visual perception capabilities such as object detection and segmentation. Nevertheless, these methods can only cover a limited range of modalities and often require huge efforts in model fine-tuning. ## 3 ControlLLM The prevalence of LLMs has unprecedentedly boosted the development of human-computer interaction, and we can empower the LLMs with abilities to interact with broader modalities via tools. In response, we present an innovative framework, namely **ControlLLM**, characterized by its flexibility, extensibility, and analytical precision. As depicted in Fig. 2, our framework consists of four sequential stages, _i.e_., task decomposition, task planning, solution execution, and response generation. In the following, we illustrate the design of each stage in detail. ### Task Decomposition ControlLLM starts with task decomposition - a stage for decomposing the user request \(r\) into a list of parallel subtasks. We here can utilize a language model \(\mathcal{M}\), _e.g_., ChatGPT or an instruction-tuned LLaMA, to automatically decompose the user request as follows: \[\{s_{0},...,s_{i},...,s_{n}\}=\mathcal{M}(r), \tag{1}\] where \(s_{i}\) is the \(i\)-th subtask and \(n\) is the number of all subtasks. We will elaborate on the different choices of the language model \(\mathcal{M}\) in Sec. 3.4 and discuss their impacts in the experiments in Sec. 4.4. The result of task decomposition is in JSON format, and the protocol is given in Table 1. Task decomposition is different from task planning. It only breaks down the user's request into several parallel subtasks and summarizes the input resources for each subtask from the user request. It does not need to know what tools to use or how to use them. The objective of this stage is to achieve three aims. Firstly, it splits user requests into smaller and more manageable units, _i.e_., subtasks, thereby accelerating task planning. Secondly, it seeks to determine the search domain that is most relevant and appropriate for the given problem, thus further narrowing down the search space. Thirdly, it endeavors to infer the input and output resource types from the context of the instructions, which identifies the start and end nodes for ToG to search.
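For concreteness, a decomposition following the protocol in Table 1 might look like the snippet below; the user request and all field values are hypothetical examples rather than actual system outputs.

```
# Hypothetical decomposition of the request:
# "Remove the dog in image_1.png and generate some relaxing background music."
subtasks = [
    {
        "description": "Remove the dog from the given image.",
        "task_domain": "image-editing",
        "args": [
            {"type": "image", "value": "image_1.png"},
            {"type": "text", "value": "remove the dog in the picture"},
        ],
        "return": {"image": "(GEN)-0"},  # temporary placeholder, cf. Table 1
    },
    {
        "description": "Generate a piece of relaxing background music.",
        "task_domain": "audio-generation",
        "args": [{"type": "text", "value": "relaxing background music"}],
        "return": {"audio": "(GEN)-1"},
    },
]
```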
### Task Planning with Thoughts-on-Graph This stage constitutes the crux of the entire system. Given the results of task decomposition, we design a Thoughts-on-Graph (ToG) paradigm to heuristically find the solution on the graph. #### 3.2.1 Building the Tool Graph In this stage, we embark on constructing a tool graph \(G\), represented simply by an adjacency matrix, which serves as a fundamental module for analyzing and optimizing the interactions between tools and resources. Our endeavor is driven by observing a discernible topological structure that inherently exists between the input and output of diverse tools, as demonstrated in Fig. 2. This compelling insight propels us to craft a comprehensive tool graph that encapsulates this inherent relationship. A _resource_ node can be formally defined as a one-tuple \(\langle\texttt{type}\rangle\), where "type" represents the specific type of resource. A _tool_ node can be expressed as a three-tuple \(\langle\text{desc, args, ret}\rangle\), where each element carries significant implications for comprehending the functionalities of a tool. The desc field encapsulates the description of the tool, elucidating its purpose, methodology, and intended applications. The args field represents the list of input resource types that the tool accepts, thereby giving the prerequisites for its effective utilization. Finally, the ret field designates the resource type that the tool generates. \begin{table} \begin{tabular}{p{42.7pt}|p{284.5pt}} \hline \hline **Field** & **Description** \\ \hline description & a brief summary of what the subtask wants to achieve. It gives some guidance on how to approach the problem for ToG. \\ \hline task\_domain & the domain that this task belongs to. It helps ToG narrow down the search space and find the most relevant and suitable tools for the subtask. \\ \hline args & the inputs that the user provides for this subtask. It is usually in the form of key-value pairs, where the key is the type of the argument, and the value is the resource path or text you want to use. For example, [\{"type": "image", "value": "image\_1.png"\}, \{"type": "text", "value": "remove the dog in the picture"\}]. \\ \hline return & the expected output of the subtask. For example, the return is \{"image": "(GEN)-0"\}, which means the expected output is an image and "(GEN)-0" is just a temporary placeholder. \\ \hline \hline \end{tabular} \end{table} Table 1: **The output protocol of task decomposition.** We elaborate on each field in the output of task decomposition. Figure 2: **System design of ControlLLM.** The framework consists of three stages. The first stage is task decomposition, which parses the user input into several subtasks. Then, in Stage 2, ToG utilizes a depth-first search algorithm to find the optimal solution for each subtask. The execution engine in the last stage executes the solution and returns the output to users. We here use the example of generating a web page for the video to illustrate our method. **Edge Definitions.** Edges in the tool graph intricately connect the nodes, highlighting the relationships between different tools. We define two types of edges in the graph. (1) A _tool-resource edge_ is established from a tool to its returned resource type. This signifies that the tool is capable of generating resources of the corresponding type. Mathematically, a tool-resource edge is represented as: \[G(T_{j},R_{i})=\begin{cases}\text{true},&\text{if }R_{i}\text{ equals the ret of }T_{j}\\ \text{false},&\text{otherwise}\end{cases}, \tag{2}\] where \(T_{j}\) is the \(j\)-th tool node, \(R_{i}\) is the \(i\)-th resource node, true denotes that two nodes are connected, and false denotes that two nodes are disconnected. (2) A _resource-tool edge_ indicates that the resource type can be accepted as an input argument of its adjacent tool. This connection shows how the resources flow to the tool. The resource-tool edge is mathematically defined as: \[G(R_{i},T_{j})=\begin{cases}\text{true},&\text{if }R_{i}\text{ belongs to the args of }T_{j}\\ \text{false},&\text{otherwise}\end{cases}. \tag{3}\] Through the establishment of this graph, we can use diverse search strategies to make informed decisions regarding tool selection and input resource assignment.
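A rough sketch of how such a graph could be materialized from tool specifications, following Eqs. (2) and (3); the toy toolbox below is an illustrative assumption, not the framework's real tool registry.

```
# Build both edge types of the tool graph from a toy toolbox.
tools = {
    "object_detection": {"args": ["image"],        "ret": "bbox"},
    "select_bbox":      {"args": ["bbox", "text"], "ret": "bbox"},
    "count_objects":    {"args": ["bbox"],         "ret": "number"},
    "image_captioning": {"args": ["image"],        "ret": "text"},
}

tool_to_resource = {}   # Eq. (2): tool -> the resource type it returns
resource_to_tools = {}  # Eq. (3): resource type -> tools accepting it
for name, spec in tools.items():
    tool_to_resource[name] = spec["ret"]
    for arg_type in spec["args"]:
        resource_to_tools.setdefault(arg_type, []).append(name)
```

With this representation, looking up the neighbors needed during the search below reduces to a dictionary access.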
#### 3.2.2 Searching on the Graph The pseudocode of the solution search algorithm is shown in Algorithm 1. Our ToG is built upon a depth-first search (_DFS_) algorithm, where a tool selection function \(\mathcal{F}\) is used to sample the tool nodes on the tool graph. The searched solution is a sequence of tools that takes the input resources and returns the output resource needed to finish the user request. The algorithm starts from the input resource nodes and explores all possible paths to the output resource node while keeping track of the intermediate resources and tools along the way. The algorithm stops when it reaches the expected output node or when it exceeds a maximum length limit. In the end, the algorithm returns all the solutions it finds as a list of tool sequences. Each step from a _resource node_ to a _tool node_ represents a thought process, as it involves a decision that determines whether to use this tool and how to specify its input arguments from the available resources. ```
# Inputs:
#   t: subtask obtained by Eq. (1)
#   g: tool graph G constructed in Sec. 3.2.1
#   r: available resources, initialized with t["args"]
#   s: tools recorded along the current search path
# Output:
#   solutions: all potential solutions for the subtask

def DFS_Search(t, g, r, s):
    if len(s) > m:                     # bound the solution length
        return []
    # F finds all tool candidates, explained in Sec. 3.2.2
    available_tools = F(g, r)
    solutions = []
    for tool in available_tools:
        s.append(tool)
        r.append(tool["returns"])      # the tool's output becomes available
        if tool["returns"] == t["returns"]:
            solutions.append(s.copy())
        solutions.extend(DFS_Search(t, g, r, s))
        r.remove(tool["returns"])
        s.remove(tool)
    return solutions
``` **Algorithm 1** The Python pseudocode of depth-first solution search in Thoughts-on-Graph
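To make the search concrete, the following self-contained sketch runs a depth-first search over a toy toolbox; the tool specifications are hypothetical, and the simplistic `F` here only checks argument availability, whereas the full system additionally scores tools with the LLM-based assessment module described next.

```
# Toy depth-first solution search in the spirit of Algorithm 1.
TOOLS = {
    "object_detection": {"args": {"image"}, "returns": "bbox"},
    "select_bbox":      {"args": {"bbox"},  "returns": "bbox"},
    "count_objects":    {"args": {"bbox"},  "returns": "number"},
}
MAX_LEN = 3

def F(resources):
    # Every tool whose argument types are all currently available.
    return [name for name, t in TOOLS.items() if t["args"] <= resources]

def dfs_search(target, resources, path):
    if len(path) > MAX_LEN:
        return []
    solutions = []
    for name in F(resources):
        if name in path:  # do not reuse a tool on the same path
            continue
        path.append(name)
        new_resources = resources | {TOOLS[name]["returns"]}
        if TOOLS[name]["returns"] == target:
            solutions.append(path.copy())
        solutions.extend(dfs_search(target, new_resources, path))
        path.pop()
    return solutions

# Subtask: starting from an image, produce a number (e.g., an object count).
print(dfs_search("number", {"image"}, []))
# -> [['object_detection', 'select_bbox', 'count_objects'],
#     ['object_detection', 'count_objects']]
```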
To find a trade-off between time and space complexity, we develop a tool assessment module in which the language model is leveraged to score the tools at each search step and then filter out irrelevant tools. With this assessment module, we design four search strategies for the function \(\mathcal{F}\) to determine which tool nodes within the "task_domain" to visit among all adjacent nodes when searching on the graph: **Greedy Strategy.** This strategy selects the tool node with the highest score at each step, where the score indicates the relevance of the tool to the task. A higher score indicates that the tool is more helpful for solving the task. Greedy search is fast and simple, but it may not find the optimal solution, or even any solution at all. **Beam Strategy.** It only keeps the \(k\) best nodes according to their assessment score and discards the rest. Beam search can expand the search space but slightly reduces the search efficiency. **Adaptive Strategy.** This is a variant of beam search in which the beam size is adjusted dynamically by choosing the tools whose scores exceed a fixed threshold, trading off exploration against exploitation. It can widen the search space when there are many available choices and narrow it down when there are few confident choices. **Exhaustive Strategy.** This strategy explores all possible paths from the start node to the terminal node. The exhaustive search is guaranteed to find an optimal solution if one exists, but it may be very slow and consume a lot of memory during the search. The impacts of the different search strategies are studied in Sec. 4.4. By initiating a systematic traversal of this graph, commencing at the "args" nodes and culminating at the "return" node, a diverse list of conceivable solutions is synthesized. This diverse list, akin to a brainstorm or mind map, represents the spectrum of potential solutions. #### 3.2.3 Solution Expert In this section, we delve into the core concept of the solution expert, which streamlines the process of evaluating and selecting optimal solutions from all possible candidates. By systematically converting each solution into a formatted string description and harnessing the capabilities of prompt engineering, the solution expert enables us to make informed decisions based on the evaluated scores. **Solution Description Formatting.** To enable the solution expert to comprehend a solution, we need to generate a description for each solution candidate. This involves transforming raw solution data into structured, formatted string descriptions. These descriptions encapsulate the essence of each solution, highlighting its key features, including inputs, output, and docstring. **Solution Evaluation.** In the subsequent phase, the solution expert capitalizes on prompt engineering techniques to assess each solution based on the subtask description and the formatted solution descriptions. The designed prompts serve as a bridge, guiding the language model \(\mathcal{M}\) to evaluate the feasibility of each solution against the objective of the subtask. Through this process, we can assign scores to solutions, gauging their effectiveness and relevance to the task. Prompt engineering ensures that the evaluation process is focused, targeted, and aligned with the subtask. The prompt template is shown in Table 8. **Solution Ranking.** The final aim of this module is to select the top-performing solutions. The optimal solution is the one with the highest score assessed in the last step. Given that the selected optimal solution may sometimes fail to meet the user requirements, we also provide several alternative solutions by setting a threshold score of 3. These solutions, which exhibit a higher degree of alignment with the subtask's requirements, emerge as the most promising candidates for user preference. Through collaborative efforts, the solution expert ensures that solutions are appropriately tailored, optimized, and well-adapted to the task.
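The ranking step can be pictured with the following rough sketch, where `score_with_llm` is a hypothetical stand-in for the prompt-based evaluation described above, the solution representation is assumed, and the threshold of 3 mirrors the one used by the solution expert.

```
def format_solution(solution):
    # Render a tool chain as a compact description for the evaluator;
    # each step is assumed to carry a "name" and an "args" list.
    return " -> ".join(f"{t['name']}({', '.join(t['args'])})" for t in solution)

def rank_solutions(subtask_desc, candidates, score_with_llm, threshold=3):
    # score_with_llm(prompt) is assumed to return a numeric feasibility score.
    scored = [
        (score_with_llm(f"Subtask: {subtask_desc}\nSolution: {format_solution(s)}"), s)
        for s in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    best = scored[0][1] if scored else None
    alternatives = [s for score, s in scored if score >= threshold]
    return best, alternatives
```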
#### 3.2.4 Resource Expert In Algorithm 1, we encounter a challenge stemming from the potential presence of multiple instances of the same resource type within the available resource list. This challenge introduces complexity, making it difficult to straightforwardly deduce certain arguments for tools using predefined rules. As a result, we design a resource expert. This module transforms the task of completing missing arguments into a fill-in-the-blank exercise. To achieve this, the resource expert crafts prompts that not only incorporate the task description but also include the available resource list. In this manner, a language model \(\mathcal{M}\) is employed to dynamically complete the missing parameters within a solution by interacting with the contextual information presented. We put the prompt template in Table 9. ### Solution Execution After the task solutions are generated, they are passed to an execution engine, as shown in Fig. 2. During this stage, the execution engine first parses the solutions into a sequence of _Actions_. Each action is associated with particular tool services, which could be implemented via either handcrafted mapping tables or an automatic scheduler based on some strategy. Different from previous works [16, 36, 37] that adopt static tool mapping, our design empowers the system with the flexibility to schedule diverse tools (such as different object detection models) based on users' preferences in the trade-off between computational resources and execution duration. The parsed actions are executed by an interpreter that automatically dispatches each action to local, remote, or hybrid endpoints. Similar to HuggingGPT [26], multiple independent subtasks are executed in parallel to improve efficiency. Besides, our interpreter maintains a state memory storing all the intermediate results, including their values and types. This enables automatic run-time correction of the action parameters. **Response Generation.** With all the execution results in hand, we can respond to the user request. The unprocessed results may lack comprehensiveness and clarity, potentially making them difficult for users to understand. To this end, we introduce an additional stage to aggregate all the execution results and generate user-friendly responses. This is achieved by prompting the LLMs, such as ChatGPT, with the user request, action list, and execution results, and asking them to summarize the answers intelligently. The prompt can be found in Table 10. ### The Choices of Language Model One feasible and direct choice is to use off-the-shelf **large language models** (LLMs) such as ChatGPT and Llama 2, which are pre-trained on large-scale text corpora and can handle various NLP tasks. These LLMs are readily available. We design a series of elaborate prompts with in-context learning, as shown in Appendix 6.3, for task decomposition, tool assessment, the solution expert, and the resource expert. The advantage of using off-the-shelf LLMs is that they have strong zero-shot capabilities and no language model needs to be trained from scratch. However, the disadvantage is that they may lead to low performance, as they are not trained specifically for our requirements. Another alternative is to finetune a language model, _e.g_., LLaMA [29], using the self-instruct method [33]. More details on optimizing \(\mathcal{M}\) can be found in Appendix 6.1. The advantage of finetuning a language model is that it can achieve high performance by adapting to the data and the task. However, the disadvantage is that it requires computational resources to train the model and may suffer from overfitting or data scarcity issues. Regarding this issue, it is essential to carefully consider the trade-offs between readily available off-the-shelf LLMs with zero-shot capabilities and the potential for finetuning a model to achieve superior performance at the cost of computational resources. We will thus further discuss the impacts of different language models \(\mathcal{M}\) in Sec. 4.4 and explore the optimal settings for our framework.
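To picture how the interpreter might resolve placeholders from its state memory while stepping through an action list, consider this simplified, sequential sketch; the action format and the `run_tool` dispatcher are assumptions for illustration, and the real engine additionally parallelizes independent subtasks.

```
def execute_solution(actions, run_tool):
    """Execute a list of actions, resolving placeholder inputs from memory.

    `actions` is assumed to be a list of dicts with "tool", "inputs", and
    "outputs" fields; `run_tool(name, inputs)` is assumed to dispatch the
    call to a local or remote tool endpoint.
    """
    memory = {}  # state memory: placeholder -> concrete value
    for action in actions:
        inputs = {
            key: memory.get(value, value)  # substitute placeholders if known
            for key, value in action["inputs"].items()
        }
        result = run_tool(action["tool"], inputs)
        for placeholder in action["outputs"]:
            memory[placeholder] = result
    return memory
```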
## 4 Experiments ### Benchmark In this section, we build a benchmark that is used to evaluate our proposed framework against other state-of-the-art methods. In order to make fair comparisons, we only evaluate and test on the intersection of the toolsets of the different methods [36, 37, 26, 16]. As a result, the benchmark consists of a set of tasks that require various tools to solve complex problems. It is designed to cover different task domains, such as question answering, image generation, image editing, image perception, visual question answering, _etc_. In this benchmark, the tasks involve more than 20 tools across different modalities. This benchmark includes more than 100 instructions that specify the user's goals and preferences for each task. The instructions are classified into three levels of difficulty: easy (\(<2\) APIs), medium (\(=2\) APIs), and hard (\(>2\) APIs). The difficulty level reflects the complexity and specificity of the instructions, as well as the number and diversity of tools required to complete the task. We believe that this benchmark can provide a comprehensive comparison of the tool control capabilities of different methods. ### Evaluation Protocol Effectively evaluating the performance of tool-augmented Language Models (LLMs) remains a challenging task. This challenge stems from several factors, including the inherent ambiguities in defining standard answers and the absence of shared benchmarks and formatted solutions for systematically assessing different methods. Consequently, existing evaluation methods [36, 37, 26] provide only limited case studies to validate performance. We therefore only compare our method against models such as VisualChatGPT [36], HuggingGPT [26], GPT4Tools [37], and InternGPT [16], all of which share comparable toolsets. As the APIs of the tools in different methods are slightly inconsistent, it is hard to annotate all feasible solutions for each method. As such, we adopt an evaluation protocol based on a multi-person voting approach with three annotation experts. This protocol breaks down the evaluation into three main aspects: tool selection, argument assignment, and overall solution assessment. Please note that the evaluation protocol is independent of the tools' capabilities: when the tools and their input arguments are correct, we do not account for the case where the output fails to satisfy the user's expectations due to the limitations of the tools. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Features** & **ControlLLM** & **HuggingGPT** & **Visual ChatGPT** & **InternGPT** & **GPT4Tools** \\ & (our work) & [26] & [36] & [16] & [37] \\ \hline Image Perception & ✓ & ✓ & ✓ & ✓ & ✓ \\ Image Editing & ✓ & ✓ & ✓ & ✓ & ✓ \\ Image Generation & ✓ & ✓ & ✓ & ✓ & ✓ \\ Video Perception & ✓ & ✓ & ✗ & ✓ & ✗ \\ Video Editing & ✓ & ✓ & ✗ & ✓ & ✗ \\ Video Generation & ✓ & ✓ & ✗ & ✓ & ✗ \\ Audio Perception & ✓ & ✗ & ✗ & ✗ & ✗ \\ Audio Generation & ✓ & ✓ & ✗ & ✗ & ✗ \\ Multi-Solution & ✓ & ✗ & ✗ & ✗ & ✗ \\ Pointing Inputs & ✓ & ✗ & ✗ & ✓ & ✗ \\ Resource Type Awareness & ✓ & ✗ & ✗ & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparisons of features between different methods.** The table shows that our framework supports more features that facilitate the user experience of multi-modal interaction. It proves the high scalability of our framework.
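The difficulty split of Sec. 4.1 is a simple function of the number of APIs a ground-truth solution involves; as a trivial sketch:

```
def difficulty_level(num_apis: int) -> str:
    # Difficulty buckets used for the benchmark instructions.
    if num_apis < 2:
        return "easy"
    if num_apis == 2:
        return "medium"
    return "hard"
```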
#### 4.2.1 Metrics for Tool Selection A) Irrelevant Tool Inclusion Rate (_abbr._\(IR\)): This metric gauges the performance of method in excluding irrelevant tools. It measures the proportion of the predicted solutions that contain the irrelevant tools. A higher \(IR\) indicates that the method tends to include more unnecessary tools, potentially hindering effective task planning. B) Necessary Tool Inclusion Rate (_abbr._\(NR\)): This metric assesses the inclusion of necessary tools in the predicted solution but without considering whether the arguments of tools are correct. If \(NR\) is high, it indicates the method has strong capabilities in tool selection. #### 4.2.2 Metrics for Argument Assignment A) Resource Hallucination Rate (_abbr._\(HR\)): This indicator reveals the extent of hallucination in the method's responses when inferring the arguments for tools. It measures whether all arguments of the tools used in the predicted solution exist physically. A lower \(HR\) suggests that the method is less prone to generating hallucinated content. B) Resource Type Consistency Rate (_abbr._\(CR\)): This metric examines whether the types of resources used as inputs in the predicted solution match those of the corresponding tools. It evaluates the method's ability to ensure consistency between argument types and tools. #### 4.2.3 Solution Evaluation The Solution Evaluation (_abbr._\(SE\)) measures the success rate of all generated solutions on our benchmark. It only considers whether the output solution can effectively address the user's problem, irrespective of whether it contains irrelevant tools. It focuses on whether the tool call chain can resolve the user's request. A higher score in the solution evaluation indicates that the method is able to provide an effective solution to user requests. In summary, these intuitive metrics together provide a comprehensive assessment of tool-augmented LLMs in terms of tool selection, argument inference, and overall effectiveness in addressing user queries. The formal definition of these metrics can refer to Appendix 6.2.1 ### Performance Comparisons #### 4.3.1 Feature Comparisons Table 2 presents a comprehensive feature comparison among various methods [16, 36, 26, 37], highlighting ControlLLM's distinct advantages in the landscape of multi-modal interaction. Notably, "Multi-Solution" signifies the method's ability to provide multiple feasible solutions, granting users more options. "Pointing Inputs" signifies support for pointing devices during interaction, enhancing precision. "Resource Type Awareness" indicates the method's capability to discern the type of resource used in interaction, ensuring more context-aware responses. In summary, ControlLLM emerges as the standout choice, excelling in various features. It offers a comprehensive set of tools in domains of image, video, and audio. Moreover, its support for resource type awareness, multiple solutions, and pointing inputs demonstrates its adaptability and scalability, making it the highly versatile framework for diverse multi-modal interaction scenarios. #### 4.3.2 Quantitative Comparisons In this section, we provide a comprehensive analysis for ControlLLM to compare with state-of-the-art methods, as summarized in Table 3. 
### Performance Comparisons #### 4.3.1 Feature Comparisons Table 2 presents a comprehensive feature comparison among various methods [16, 36, 26, 37], highlighting ControlLLM's distinct advantages in the landscape of multi-modal interaction. Notably, "Multi-Solution" signifies the method's ability to provide multiple feasible solutions, granting users more options. "Pointing Inputs" signifies support for pointing devices during interaction, enhancing precision. "Resource Type Awareness" indicates the method's capability to discern the type of resource used in interaction, ensuring more context-aware responses. In summary, ControlLLM emerges as the standout choice, excelling in various features. It offers a comprehensive set of tools in the domains of image, video, and audio. Moreover, its support for resource type awareness, multiple solutions, and pointing inputs demonstrates its adaptability and scalability, making it a highly versatile framework for diverse multi-modal interaction scenarios. #### 4.3.2 Quantitative Comparisons In this section, we provide a comprehensive analysis of ControlLLM in comparison with state-of-the-art methods, as summarized in Table 3. We here provide three implementations of our method: a) ControlLLM-ChatGPT leverages ChatGPT-3.5 as the language model \(\mathcal{M}\); b) ControlLLM-LLaMA finetunes a LLaMA-7B as the language model \(\mathcal{M}\); c) ControlLLM-Mix, regarded as our default setting, finetunes LLaMA-7B as the task decomposer in Stage 1 while the remaining modules employ ChatGPT to finish the tasks. ControlLLM-Mix combines the advantages of the other two variants. Our evaluation is based on a set of metrics assessing tool selection and argument inference, as well as the overall effectiveness of the solutions. ControlLLM excels in several key aspects. Notably, it achieves the lowest Irrelevant Tool Inclusion Rate (\(IR\)) at a mere 0.03, indicating its exceptional ability to exclude irrelevant tools during task planning. This is a significant advantage, as minimizing irrelevant tools is crucial for efficient task execution. ControlLLM also boasts the highest Necessary Tool Inclusion Rate (\(NR\)) at 0.93, signifying its proficiency in selecting essential tools. Furthermore, ControlLLM demonstrates superior performance in argument inference, with the lowest Resource Hallucination Rate (\(HR\)) of 0.02 and the highest Resource Type Consistency Rate (\(CR\)) of 0.98. These results underscore its ability to generate accurate and consistent arguments, addressing a common challenge in language models. In the metric of solution evaluation, ControlLLM maintains its lead with a score of 0.94, indicating its effectiveness in resolving user requests. In summary, ControlLLM exhibits remarkable performance in tool selection, argument inference, and overall solution effectiveness, evidently outperforming the state-of-the-art methods in this field. ### Ablation Studies In this section, we delve into ablation studies to gain deeper insights into the performance of our method. #### 4.4.1 Ablation Studies on Different LLMs In this section, we conduct ablation studies to evaluate the impact of different LLMs on task planning. We also investigate the effects of incorporating prior knowledge into the subtask descriptions during task decomposition. The method without prior knowledge directly uses the user's request as the subtask description and does not offer any hints or suggestions on tool selection in the subtask description. In contrast, in the method with prior knowledge, we add prior knowledge to the subtask description, which is expected to guide the tool assessment. The results are presented in Table 4. The prior knowledge indeed improves the necessary tool inclusion rate (\(NR\)) and reduces the chance of selecting irrelevant tools (\(IR\)) when using the same large language model. Furthermore, we find that the capability of the language model plays a decisive role in tool selection: the more powerful the language model, the higher the score in solution evaluation. #### 4.4.2 Impact of Search Strategies Table 5 investigates the impact of different search strategies within our Thoughts-on-Graph. We observe that the exhaustive search strategy outperforms the others on most metrics, but this strategy is time-consuming. On the other hand, the greedy search strategy achieves the lowest performance. This is because it sometimes cannot find a feasible path through the highest-scoring tool, due to inaccurate tool assessment. It is thus efficient but often fails to find a solution, especially in hard cases.
In addition, the adaptive search strategy strikes a balance between the performance metrics, offering competitive results in most aspects. To trade off time against accuracy, we thus choose the adaptive strategy as our default search strategy. \begin{table} \begin{tabular}{c|l|c c|c c|c c c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{LLMs} & \multicolumn{2}{c|}{Tool} & \multicolumn{2}{c|}{Argument} & \multicolumn{4}{c}{Solution Evaluation \(\uparrow\)} \\ & & \(IR\downarrow\) & \(NR\uparrow\) & \(HR\downarrow\) & \(CR\uparrow\) & All & Easy & Medium & Hard \\ \hline \multirow{3}{*}{_w/o PK_} & Llama2-13B & \(0.28\) & \(0.71\) & \(0.01\) & \(0.99\) & \(0.68\) & \(0.87\) & \(0.50\) & \(0.38\) \\ & ChatGPT-3.5 & \(0.13\) & \(0.84\) & \(0.01\) & \(0.99\) & \(0.83\) & \(0.99\) & \(0.67\) & \(0.57\) \\ & ChatGPT-4 & \(0.06\) & \(0.91\) & \(0.03\) & \(0.97\) & \(0.91\) & \(0.98\) & \(0.83\) & \(0.81\) \\ \hline \multirow{3}{*}{_w/ PK_} & Llama2-13B & \(0.12\) & \(0.83\) & \(0.04\) & \(0.95\) & \(0.82\) & \(0.95\) & \(0.71\) & \(0.62\) \\ & ChatGPT-3.5 & \(0.03\) & \(0.93\) & \(0.02\) & \(0.98\) & \(0.93\) & \(0.98\) & \(0.96\) & \(0.81\) \\ & ChatGPT-4 & **0.01** & **0.98** & **0.02** & **0.98** & **0.98** & **1.00** & **1.00** & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 4: **The effects of task decomposition with regard to different LLMs.**_PK_ denotes prior knowledge. We find that adding prior knowledge, such as which tools might be used, to the subtask description evidently improves the performance of task planning. \begin{table} \begin{tabular}{l|c c|c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Tool} & \multicolumn{2}{c|}{Argument} & \multicolumn{4}{c}{Solution Evaluation \(\uparrow\)} \\ & \(IR\downarrow\) & \(NR\uparrow\) & \(HR\downarrow\) & \(CR\uparrow\) & All & Easy & Medium & Hard \\ \hline HuggingGPT [26] & \(0.45\) & \(0.64\) & \(0.16\) & \(0.69\) & \(0.59\) & \(0.73\) & \(0.50\) & \(0.33\) \\ Visual ChatGPT [36] & \(0.26\) & \(0.58\) & \(0.09\) & \(0.76\) & \(0.57\) & \(0.73\) & \(0.63\) & \(0.10\) \\ InternGPT [16] & \(0.12\) & \(0.51\) & \(0.49\) & \(0.43\) & \(0.44\) & \(0.60\) & \(0.46\) & \(0.00\) \\ GPT4Tools [37] & \(0.19\) & \(0.44\) & \(0.28\) & \(0.72\) & \(0.43\) & \(0.64\) & \(0.33\) & \(0.00\) \\ \hline ControlLLM-ChatGPT & \(0.16\) & \(0.63\) & \(0.83\) & \(0.83\) & \(0.64\) & \(0.71\) & \(0.67\) & \(0.43\) \\ ControlLLM-LLaMA & \(0.06\) & **0.95** & \(0.02\) & \(0.98\) & \(0.91\) & \(0.98\) & \(0.88\) & \(0.76\) \\ ControlLLM-Mix\({}^{*}\) & **0.03** & \(0.93\) & **0.02** & **0.98** & **0.93** & **0.98** & **0.96** & **0.81** \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparisons with other state-of-the-art methods.**\(\downarrow\) means the smaller the better, \(\uparrow\) means the larger the better. The results of the state-of-the-art methods [16, 26, 36, 37] are reproduced on our own benchmark. \(*\) denotes the default setting of ControlLLM if not stated. ### Qualitative Analyses In this section, we present extensive case studies to qualitatively assess our method. Fig. 3 shows two simple cases to illustrate the capabilities of our ControlLLM in task planning. In contrast to HuggingGPT [26], we find that our method is able to generate more diverse solutions to meet users' expectations, thanks to the proposed Thoughts-on-Graph paradigm. We then provide more cases across different modalities to validate the user experience of our method in practice. In Fig. 4, we show some cases of image perception, which involves analyzing and understanding the content of an image, such as detecting objects, counting objects,
finding objects, segmenting objects, answering questions about the image, _etc_. These tasks require the system to invoke tools to process visual information and extract relevant features and labels from the image. Fig. 5 gives examples of image processing and image editing, which assist users in processing or editing an image according to some criteria or instructions. Fig. 6 mainly focuses on image question answering and image generation, showing the graphic dialogue capability. In Fig. 7, we provide some multi-modal interaction cases in the image, video, and audio domains. In addition, we also illustrate the capabilities in complicated scenarios with solutions searched by ToG during task planning in Fig. 8 and Fig. 9. These complex tasks involve combining multiple tools to find a more advanced and creative solution path that can solve more challenging problems. This requires a system that can integrate different types of information and outputs from tools and generate comprehensive and meaningful responses based on the execution results. These figures demonstrate the strong capabilities of ControlLLM in task planning for both simple and complicated scenarios, which leads to a better user experience. \begin{table} \begin{tabular}{l|c c|c c|c c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Search \\ Strategies \end{tabular}} & \multicolumn{2}{c|}{Tool} & \multicolumn{2}{c|}{Argument} & \multicolumn{4}{c}{Solution Evaluation \(\uparrow\)} \\ & \(IR\downarrow\) & \(NR\uparrow\) & \(HR\downarrow\) & \(CR\uparrow\) & All & Easy & Medium & Hard \\ \hline Greedy & \(0.19\) & \(0.49\) & \(0.24\) & \(0.76\) & \(0.49\) & \(0.56\) & \(0.58\) & \(0.19\) \\ Beam & \(0.14\) & \(0.88\) & \(0.01\) & \(0.99\) & \(0.88\) & \(0.96\) & \(0.79\) & \(0.76\) \\ Adaptive & **0.03** & \(0.93\) & \(0.02\) & \(0.98\) & \(0.93\) & \(0.98\) & \(0.96\) & \(0.81\) \\ Exhaustive & \(0.06\) & **0.97** & **0.01** & **0.99** & **0.97** & **1.00** & **0.96** & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 5: **The evaluation for different search strategies.** As introduced in Sec. 3.2.2, although the exhaustive strategy achieves the best performance on most metrics, the adaptive strategy strikes a good balance between efficiency and effectiveness. Figure 3: **Qualitative comparisons of task planning.** We here use two simple cases to illustrate the differences between two different methods in task planning. Here, each output node is generated by different solution paths. **User:** Provide me with the count of bears in the input image **AI:** The count of bears in the input image is 3. Here is a summary of the workflow and results for your request: 1. The image was processed using an _object detection_ tool, which detected the presence of bears in the image. 2. The bounding boxes around the detected bears were selected by a _bounding boxes selection_ tool. 3. The count of bears was then determined using a _count objects_ tool. 4. The result of the count objects tool indicated that there are 3 bears in the image. **User:** I'm trying to find the umbrella in the image, can you assist me? **AI:** Yes, I can assist you in finding the umbrella in the image. After analyzing the image, I have identified the location of the umbrella and cropped it for better visibility. To achieve this, I used the following tools in my workflow: 1.
**Object Detection**: This tool was used to detect various objects in the image, including the umbrella. The tool provided a list of detected objects with their corresponding labels and bounding box coordinates. 2. **Select Bounding Box**: This tool was used to select the bounding box corresponding to the umbrella from the list of detected objects. 3. **Image Cropping**: This tool was used to crop the image based on the selected bounding box of the umbrella. The result is a new image that focuses specifically on the umbrella. I hope this explanation clarifies the steps I took to find and extract the umbrella from the image. If you have any further questions, feel free to ask! **User:** Give me the mask of the elephant in the image. **AI:** I have generated a mask of an elephant for you. To generate this mask, I used an _image instance segmentation_ tool on the image. The tool identified different objects in the image and created masks for them. Then, I used a _mask selection_ tool to specifically select the mask for the elephant based on the condition provided. Please note that the result may not be completely accurate, as it relies on the performance of the tools used. **User:** Can you highlight the giraffe for me in the image? **AI:** I have highlighted the giraffe for you. To fulfill your request, I followed the following workflow: 1. First, I used an _object detection_ tool to locate the giraffe in the image. 2. Then, I used a _selection tool_ to choose the bounding box that contains the giraffe. 3. Finally, I used an _image highlighting_ tool to highlight the selected bounding box on the image. The output of this workflow is the highlighted image, which shows the giraffe highlighted. Figure 4: **Case study on different task domains** (a). **AI:** I have successfully extracted the edge information from the image you provided. To achieve this, I used an image processing tool called _"image_to_edge"_. I passed your input image as input to the tool. The tool processed the image and generated a new image file containing only the edge information. **User:** Please get me the pose information for the image. **AI:** Sure! Based on your request, I have obtained the pose result for the input image. To obtain this result, I used the _"image_to_pose"_ tool. I provided the tool with the image file as input. The tool processed the image and generated the pose result. **User:** Please segment anything without label information for this image. **AI:** I have segmented anything without label information for the image you mentioned. Now, let me explain the workflow and the tools used for your request. I used a tool called _"segment_anything"_ to perform the segmentation. The input for this tool was the image file named "47c56e_None_input_image.jpg". The tool processed the image and generated a segmentation result. **User:** Change the color of the horse to blue in the picture. **AI:** I have edited the picture and changed the color of the horse to blue. To address your request, I used a _text_image_editing_ tool, which can edit the image guided by a textual prompt. **User:** Erase the sheep from the photo **AI:** I have successfully erased the sheep from the photo. Now, let me explain how I accomplished this task. First, I used an _image instance segmentation_ tool to identify the sheep in the photo. This tool generated several mask images highlighting the sheep in different parts of the photo. Then, I used a _select mask_ tool to choose the most accurate masks that represent the sheep. These selected masks were saved as separate images. Finally, I used an _image inpainting_ tool to remove the sheep from the original photo, using the selected masks as a guide. Please let me know if there's anything else I can help you with! Figure 5: **Case study on different task domains** (b).
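The "erase the sheep" dialogue above corresponds to a three-tool solution path. Written in the action format of Sec. 3.3, it might look roughly as follows; the tool names, argument keys, and placeholders are illustrative assumptions rather than the exact toolbox API.

```
# A hypothetical action list for the "erase the sheep" request.
actions = [
    {"tool": "instance_segmentation",
     "inputs": {"image": "photo.png"},
     "outputs": ["<GEN>-masks-0"]},
    {"tool": "select_mask",
     "inputs": {"masks": "<GEN>-masks-0", "condition": "sheep"},
     "outputs": ["<GEN>-mask-1"]},
    {"tool": "image_inpainting",
     "inputs": {"image": "photo.png", "mask": "<GEN>-mask-1"},
     "outputs": ["<GEN>-image-2"]},
]
```

Fed to an interpreter like the `execute_solution` sketch in Sec. 3.3, the placeholders would be resolved from the state memory in order.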
**User:** Why is this image so funny? **AI:** The image is funny because it features two toy frogs, one of which is holding a camera and taking a picture of the other. The scene is set in a grassy field, and the toy frogs are posed in a way that mimics human behavior, specifically a photographer capturing a subject. The humor comes from the unexpected and anthropomorphic portrayal of the toy frogs, as it is not common to see inanimate objects, such as toys, engaging in activities like photography. This playful and whimsical scene can evoke amusement in the viewer. I used an _image question answering_ tool. The tool analyzed the image and determined why this image is so funny. **User:** Given the image, is there going to be a potentially dangerous situation in the picture? **AI:** There is a potentially dangerous situation as a man is riding on the back of an elephant, while another elephant is walking through the water. The presence of two elephants in close proximity to each other and the man could pose a risk of accidents or injuries, especially if the elephants become agitated or frightened. It is essential for the man to maintain control over the elephant he is riding and ensure that the other elephant is calm and well-behaved to minimize any potential dangers. To solve your question, I used an _image question answering_ tool. **User:** How many people can be seen riding the elephant in the image? **AI:** In the image, there is one person riding on the back of the elephant. I used an _image question answering_ tool to answer your question. **User:** I want a new image that has mountains and sunset. Can you use the depth result from the input image to create it? **AI:** The new image is generated. The workflow involved two tools: _"image to depth"_ and _"depth text to image"_. The first tool, _"image to depth"_, used an input image to generate a depth result. The second tool, _"depth text to image"_, took a text input of "The new image should have mountains and sunset," together with the depth map generated by the previous tool, to create the new image. **User:** Please generate a new image based on the edge from this image. The new image shows a beautiful tower in the rain. **AI:** The new image is generated. Workflow: The image provided was processed using an _image-to-edge_ tool, which generated an edge representation of the image. The edge representation was then used along with the text "The new image shows a beautiful tower in the rain" in an _edge-text-to-image_ tool. This tool combined the edge representation and the text to generate a new image. Figure 6: **Case study on different task domains** (c).
The tool can also adjust the speed, pitch, and accent of the voice to suit different preferences. I hope you like the results. **User:** Generate an image where a woman is surfing. Then convert this image to a dynamic video. **AI:** I have generated an image where a woman is surfing and converted it to a dynamic video for you. To do this, I used two tools that can create and manipulate visual content. The first tool called _text to image_ can generate an image based on a text description. I gave it the input 'a woman is surfing and it returned an image of a woman surfing on a blue ocean wave. The second tool is _image to video_ that can convert an image to a video by adding motion and sound effects. I gave it the image from the first tool, and it returned a video of the woman surfing with realistic water and wind sounds. I hope you like the results. **User:** What happened in this video? **AI:** In this video, a basketball game is taking place, with a player attempting to shoot the ball. The player is holding the ball and getting ready to make a shot, while other players are on the court, possibly trying to defend or score. To solve your question, I used two tools: "_action_recognition_" and "_video_captioning_", which both take the video as input. **User:** Please generate music according to the following prompt: An 80s driving pop song with heavy drums and synth pads in the background **AI:** Sure! I have generated a piece of music based on your prompt. To create this music, I used a **text-to-music** tool. I provided the tool with the prompt you gave me, which was "An 80s driving pop song with heavy drums and synth pads in the background". The tool then generated a music composition that matches the description. **User:** Can you help me dub this video with uploaded audio? **AI:** Sure! Based on your request, I have dubbed the video with the audio for you to dub a video. I used a "_dub_video_" tool that can add or replace the audio track of a video. I gave it the input video file and the input audio file, and it returned a new video file with the audio synchronized with the video. **User:** Please dub the video.mp4 with a suitable background music and then generate a HTML web page to introduce this video. **Task planning** [Action(tool name="_video_captioning_", inputs={"video: video.mp4"}, outputs=[~TOOL-GENN-text-0"], Action(tool name="_text_to_music", inputs={"text": '<TOOL-GENN-text-0"}, outputs=[~TOOL-GENN-text_to_music-audio-0"]), Action(tool name="_dbdb_video_", inputs={"video: "video.mp4", "audio:" <TOOL-GENN-text_to_music-audio-0"}, outputs=[~TOOL-GENN-db_video-video-0"]) [Action(tool name="_title_generation_", inputs={"text: '<TOOL-GENN-text-0"}, outputs=[~TOOL-GENN-text-1"]), Action(tool name="_text_to_tags", inputs={"text": '<TOOL-GENN-text-0"}, outputs=[~TOOL-GENN-tags-0"]), Action(tool name="video_to_webpage", inputs={"video: '<TOOL-GENN-db_video-0", title: '<TOOL-GENN-text-1', 'tags': '<TOOL-GENN-tags-0"}, description: "<TOOL-GENN-text-0"}, outputs=[~<GENN-0"]) **AI:** Sure, I can generate a web page for your input video. I have executed a workflow using several tools to generate the desired web page. The _video_captioning_ tool was used to generate a text description for the input video. The _dbdb_video_ tool combined the input video with the generated audio to create a dubbed video. The _title_generation_ tool was used to generate a title for the video based on the generated text. The _text_to_tags_ tool generated a set of tags related to the video content. 
Finally, the _video_to_webpage_ tool was used to generate a web page for the video, including the video itself, the generated title, tags, and description. The resulting web page includes the following elements: 1) A title prompt asking for a title to be generated within 10 words. 2) A video player to play the input video. 3) Tags related to the video content. 4) A video description. **User:** Generate a new image based on prompt. Prompt: "A serene and beautiful landscape with a calm lake reflecting the blue sky and white clouds. There are some green hills and trees on the shore". Then help me generate a promotional video for this generated image. **Task planning** [Action(tool name="_text_to_image_", inputs={"text: 'A serene and beautiful landscape with a calm lake reflecting the blue sky and white clouds. There are some green hills and trees on the shore, and a small wooden boat floating on the water."}, outputs=[~TOOL-GENN-text_to_image-image-0"]), Action(tool name="_image_captioning_", inputs={"image": '<TOOL-GENN-text_to_image-image-0"}, outputs=[~ICOL-GENN-text-0"]), Action(tool name="_text_to_speech", inputs={"text: '<TOOL-GENN-text-0"}, outputs=[~ICOL-GENN-text_to_speech-audio-0"]) **Action(tool, name="_image_audio_to_video_", inputs={"image": '<TOOL-GENN-text_to_image-image-0"}, 'audio:"<TOOL-GENN-text_to_speech-audio-0"}, outputs=[~CGN-0"])** **AI:** Sure! I can help you generate a video to introduce an image. Based on the information provided, I have used a combination of tools to fulfill your request. First, I used the "_text_to_image_" tool to convert your description into an image. The tool created an image of a serene and beautiful landscape with a calm lake reflecting the blue sky and white clouds. There are also green hills and trees on the shore, as well as a small wooden boat floating on the water. Next, I used the "_image_captioning_" tool to generate a textual description of the image. The tool analyzed the image and produced a caption that best describes it. Then, I used the "_text_to_speech_" tool to convert the generated textual description into an audio file. This tool transformed the text into speech, allowing for a more engaging experience. Finally, I used the "_image_audio_to_video_" tool to combine the generated image and audio into a video. The tool merged the image and audio together, creating a video that introduces the image. The result of this workflow is a video file. This video showcases the image described in your initial request and includes the audio narration of its description. **A1:** Based on the information provided, the count of horses in the two images is eight. To provide you with this information, I followed a workflow using various tools. First, I used an _object detection_ tool to detect objects in the images. Then, I used a tool called "_select_bbox_" to select the bounding boxes that contain the desired object, which in this case is a horse. Next, I used a tool called "_count_objects_" to count the number of horses based on the selected bounding boxes. For both two images, I used the same solution to count the horses. The "_image_question_answering_" tool is also used to double check the result. The results of this workflow indicate that there are four horses in image1.png, three horses in image2.png and one horse in image3.png. So, there are eight horses in total. However, please note that these results may not be completely accurate, as they are generated using automated tools and may require human judgement for accuracy. 
Figure 9: **Case study on different task domains** (f).

## 5 Conclusion

In this paper, we propose **ControlLLM**, a multi-modal interaction framework that can accurately control tool usage across various domains, including text, image, audio, video, _etc_. The proposed framework consists of three key stages: (1) _task decomposition_ to make the objective of the task concrete, (2) a _Thoughts-on-Graph_ (ToG) paradigm to search for the optimal solution path on the constructed tool graph, and (3) an _execution engine_ with a versatile toolbox to execute the solution efficiently. We conduct extensive experiments and demonstrate that ControlLLM achieves superior performance regarding tool selection, argument inference, and overall solution effectiveness compared to existing methods. Nevertheless, this work still has some limitations. Since the goal of this work is to improve the accuracy of tool usage, even if a solution is theoretically feasible, we cannot guarantee that the output from the tools is always correct. On the other hand, due to the inherent ambiguity of natural language, it is difficult to ensure that the selected optimal solution is consistent with the user's goal. In this case, we can only provide the alternative solutions searched by ToG for users to choose from if the optimal solution fails.
2307.02032
ScalOTA: Scalable Secure Over-the-Air Software Updates for Vehicles
Over-the-Air (OTA) software updates are becoming essential for electric/electronic vehicle architectures in order to reduce recalls amid the increasing software bugs and vulnerabilities. Current OTA update architectures rely heavily on direct cellular repository-to-vehicle links, which makes the repository a communication bottleneck, and increases the cellular bandwidth utilization cost as well as the software download latency. In this paper, we introduce ScalOTA, an end-to-end scalable OTA software update architecture and secure protocol for modern vehicles. For the first time, we propose using a network of update stations, as part of Electric Vehicle charging stations, to boost the download speed through these stations, and reduce the cellular bandwidth overhead significantly. Our formalized OTA update protocol ensures proven end-to-end chain-of-trust including all stakeholders: manufacturer, suppliers, update stations, and all layers of in-vehicle Electric Control Units (ECUs). The empirical evaluation shows that ScalOTA reduces the bandwidth utilization and download latency up to an order of magnitude compared with current OTA update systems.
Ali Shoker, Fernando Alves, Paulo Esteves-Verissimo
2023-07-05T05:30:22Z
http://arxiv.org/abs/2307.02032v1
# ScalOTA: Scalable Secure Over-the-Air Software Updates for Vehicles

###### Abstract

Over-the-Air (OTA) software updates are becoming essential for electric/electronic vehicle architectures in order to reduce recalls amid the increasing software bugs and vulnerabilities. Current OTA update architectures rely heavily on direct cellular repository-to-vehicle links, which makes the repository a communication bottleneck, and increases the cellular bandwidth utilization cost as well as the software download latency. In this paper, we introduce ScalOTA, an end-to-end scalable OTA software update architecture and secure protocol for modern vehicles. For the first time, we propose using a network of update stations, as part of Electric Vehicle charging stations, to boost the download speed through these stations, and reduce the cellular bandwidth overhead significantly. Our formalized OTA update protocol ensures proven end-to-end chain-of-trust including all stakeholders: manufacturer, suppliers, update stations, and all layers of in-vehicle Electric Control Units (ECUs). The empirical evaluation shows that ScalOTA reduces the bandwidth utilization and download latency up to an order of magnitude compared with current OTA update systems.

Index Terms: Vehicle, Over-the-air update, security, update station

## I Introduction

Over-the-air (OTA) software/firmware update systems are witnessing a huge demand in the automotive market, with the unprecedented transformation towards software-defined vehicles [33]. An OTA update system is key to reducing the safety risks and the maintenance time and cost when post-market fleet anomalies and vulnerabilities are detected [9, 16, 18]. This is of paramount importance for Original Equipment Manufacturers (a.k.a. OEMs), which had to _recall_ 35 million vehicles in 2021 alone, and whose estimated recall losses are around half a trillion dollars by 2024 [21]. Nevertheless, the promised benefits of an OTA update system may be doubtful if the system itself is insecure and costly. Our work is motivated by two main problems. The first is driven by a request from a leading Japanese OEM, whose customers (vehicle owners) are complaining about the cost of OTA updates, due to the extensive bandwidth utilization (using a cellular LTE communication). Indeed, the enormous number of Software Lines of Code (SLoC)--estimated to exceed 100 million--in mainstream vehicles [8, 31] can yield up to 1 GB of daily updates. This makes the OTA system slow and expensive. It is expensive since current OTA update systems hinge on cellular LTE/4G connectivity--while the rates of future 5G solutions cannot be anticipated [13, 16]. It is slow since the vehicle is not always turned on, and thus not always connected and updating. In search for a convenient solution, we realized another facet of this problem, characterized by the centralization of updates on the OEM's repositories at daily peak hours. The experiments in [4, 6] pointed out that the interference caused by simultaneous vehicle updates in a zone at peak hours can reach the bandwidth limit of a telecommunication cell unit with fewer than 20 vehicles. In addition, the connection's quality deteriorates significantly for the vehicles and other (mobile) users. Unfortunately, only a few works have tackled this issue at a high level, using fog, edge, or even blockchain forwarding nodes [4, 10, 24, 36, 42], as we explain in Section III; however, none of them has studied the issue deeply.
The second motivation is driven by the need for automotive OTA update systems that are secure by design; otherwise, the consequences can be fatal--endangering human lives. In fact, automotive OTA update systems are more complex than computer or mobile update systems for several reasons [16, 19]; we mention two main ones here. First, there is a voluminous number of software producers (e.g., Tier 1, Tier 2, and Tier 3 third parties) throughout the supply chain, which makes it cumbersome for the OEM to handle the complex _chain-of-trust_, or to retain and maintain all the respective update repositories by itself. Second, current automotive architectures are composed of many layers of software-based _secondary_ ECUs, whereas only one ECU, i.e., the _primary_ ECU or Telecommunication Unit, has an interface to the cyberspace, e.g., via a cellular connection. This forces the secondary ECUs to download updates through the primary, which is problematic from a liability viewpoint, as these ECUs are often provided by different vendors. Despite the many works in the literature on OTA update security (see Section III), the most solid end-to-end direction we found is _Uptane_ [19, 38], which uses a _separation-of-roles_ scheme for security key signing and verification. The idea is to ensure the chain-of-trust across the many OEM _roles_ (tasks), including suppliers and secondary ECUs. Despite the security promise of Uptane, its system model and protocols are not rigorously formalized (publicly).

In this paper, we introduce ScalOTA: a scalable, efficient, and secure OTA update system for vehicles. ScalOTA is the first OTA update architecture that proposes using update stations as _Points-of-Presence_ for software updates, e.g., integrated with Electric Vehicle (EV) charging stations (EVU stations, see Fig. 1). This new architecture allows vehicles to reduce the bandwidth utilization and latency of updates by up to an order of magnitude, compared with cellular LTE/4G downloads. It also boosts the resilience of the system, as the update server is no longer a single point of attack or failure. In ScalOTA, the OEM notifies the vehicles of the meta-data of any new update, e.g., via a cellular network. The vehicle can then download the corresponding _cached_ update images via an update station, which is operated by a new business entity, possibly different from the OEM or suppliers. Updates in ScalOTA are much faster than over cellular LTE/4G, as vehicles can be wired to the update station through Fast Ethernet, fiber optics, or even a powerline connection [12] bundled with the EV cables. ScalOTA also avoids over-utilizing the general cellular bandwidth and significantly reduces the load on repository servers. On the other hand, this architecture opens a huge business model opportunity, establishing automotive software _marketplace_ operators, as in mobile systems--a direction only recently noticed in the industry [25]. In addition to the ScalOTA architecture, our work is the first published academic work that formalizes an end-to-end OTA update system model and protocol, presents _Liveness_ and _Safety_ proof sketches (in the Appendix), and conducts a scalable empirical evaluation. In particular, our experiments on a real cluster show that ScalOTA reduces the update latency and cellular bandwidth utilization by up to an order of magnitude, as expected theoretically, compared to cellular-based solutions like _Uptane_.
The experiments also show that ScalOTA is more resilient to attacks and overloads, as it no longer depends on a single update server. There are two main challenges in ScalOTA's architecture: security and storage scalability. The security challenge stems from introducing the update station as a new stakeholder in the software value chain. We draw on Uptane's separation-of-roles scheme to provide and prove a formal chain-of-trust model that covers the end-to-end workflow between the software producer and the secondary ECUs. However, contrary to Uptane, we formalize the protocol and security abstractions. The storage scalability challenge is caused by the need to support hundreds of vehicle models and brands, which would require a huge storage capacity at the EVU station. We solve this by caching only the relevant updates of vehicle models that are common in a zone, e.g., based on historical mobility patterns. The intuition is that similar vehicle brands and models are more likely to be common in a country or city, and therefore subsequent vehicles of the same model will already have the updates cached at the EVU station. In addition, updates are tagged and bundled at a fine-grained device level, e.g., identified by a supplier universal identifier (SUI). This avoids downloading duplicate updates of the same auto part used in different vehicles and models. This is reasonable since auto parts in most vehicle models are supplied by a handful of well-known (Tier 1 and Tier 2) suppliers, regardless of the brand/model.

The rest of the paper is organized as follows: Section II discusses the case for ScalOTA. Section III presents the key related works, and Section IV presents the system and threat models. Section V then presents the architecture and protocol formalization, which is then evaluated in Section VI. Finally, we conclude in Section VII, followed by proofs in the Appendix.

## II The Case for Using OTA Update Stations

ScalOTA uses update stations embedded in or collocated with EV stations, which represents a paradigm shift. We discuss the plausibility of this design decision by highlighting the issues current OTA update approaches have, and how using update stations can resolve them. A software-defined vehicle can be seen as a complex mobile phone. Software update sizes can range from a few KB to several GB [3, 14, 20, 30, 37]. Due to the slowness and cost of OTA download paradigms, OEMs tend to roll out updates in large packages on a weekly or monthly basis [2, 39]. We however argue that as OTA techniques mature, we should expect smaller update sizes with higher frequency, especially when critical safety or security measures are concerned. Updates in vehicles are, however, different from mobiles in two main ways of high relevance. The first is that a vehicle's usage pattern is very restrictive: a download can only happen while the ignition is turned on, and often requires the _Park_ gear mode. The second is that hundreds of software/firmware components (for tens of ECUs) are developed by various suppliers, which makes update distribution and delivery cumbersome for the OEM: it shifts the OEM from its core innovation business out of its comfort zone, e.g., into the cloud storage and secure update delivery business1. We explain how these two aspects affect the cellular and Wi-Fi download paradigms currently used by the industry.

Footnote 1: We have started to see this in the industry. E.g., https://www.magnsoft.com/ now pushes updates directly to cars on behalf of three OEMs.
_Cellular._ Performing OTA updates using cellular 3G/4G/LTE technology is slow, costly, and exhausts bandwidth resources. The slowness is due to two reasons. First, due to human mobility patterns, a car may be in use, and thus connected to a cellular network, for one hour daily, which is not sufficient to complete the current payloads required by major OEMs [3, 14, 20, 30, 37]. Worse, cars are mostly active during rush hours, when the network is also highly loaded [5]. Due to the limited cellular _Physical Resource Blocks_ (PRB), shared with mobiles too, cellular networks only scale (with reasonable interference) to almost 20 simultaneous car downloads per zone. As a consequence, a major update (\(>1GB\)) may take one week to complete [3, 6]. Last but not least, cellular-based downloads exhaust the telecommunication network resources, since the number of vehicle-to-server connections is linear in the number of vehicles.

_Wi-Fi._ Several OEMs realize the above cellular limitations and thus require using Wi-Fi for large updates. Unfortunately, using Wi-Fi for updates is very location-restricted, insecure, and exhausts the network's backhaul resources. In a nutshell, having access to Wi-Fi is not the common case in vehicle mobility: public Wi-Fi car access is not prominent worldwide, while private Wi-Fi coverage rarely reaches underground garages or remote parking lots [5]. Importantly, car owners are unlikely to keep their cars' ignition on for minutes to hours until downloads complete. These two issues also exist when home EV chargers are used. From a security perspective, we, as well as other authors [6], do not recommend hinging on the awareness and experience, or lack thereof, of average users to maintain their Wi-Fi access points, which widely have guessable passwords and open configurations [41]. Finally, as in cellular OTA updates, downloading through Wi-Fi access points exhausts the backhaul network resources, since the number of connections is linear in the number of Wi-Fi access points, which are roughly as numerous as vehicles.

Our design decision stems from understanding vehicles' mobility patterns and driver convenience. An EV driver can utilize the charging time at an EV station to perform the software updates. Since a direct network (fiber, Ethernet, powerline) can practically be an order of magnitude faster than a cellular/Wi-Fi connection, payloads can often be downloaded in a single charging instance (e.g., within a few minutes). At the EV station, a driver is likely to be attentive, and it is reasonable to keep the ignition on for such a short period. On the other hand, the number of connections between the download server and users is reduced to a logarithmic order, _i.e._, proportional to the number of update stations downloading (see Section VI). Interestingly, since update stations are now shared update sources, it is possible to optimize update distribution even further by downloading the common dependencies between different car models only once, while keeping the control with the OEM. For instance, popular telecommunication units, gateways, or intrusion detection systems from major suppliers currently appear in tens of car models from different OEMs. Instead of having update bundles encapsulated by the OEM, tagged and downloaded for each model, it is smarter to tag updates by the supplier and reference them in the OEM's bundle to keep control of updates. These tagged updates are downloaded only once by the update station, regardless of the OEM, as long as the supplier version is the same.
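To see why a wired update station changes the picture, consider a rough back-of-envelope comparison. The link rates below are illustrative assumptions on our part, not measurements from the paper; the point is only the relative order of magnitude:

```python
# Rough download-time comparison for a single update payload.
# All link rates are assumed, illustrative figures.
UPDATE_SIZE_GB = 1.0  # a major update bundle, as discussed above

LINK_RATE_MBPS = {
    "cellular LTE (congested shared cell)": 5,
    "home Wi-Fi access point": 50,
    "update station, Fast Ethernet": 100,
    "update station, Gigabit fiber/PLC": 1000,
}

def download_minutes(size_gb: float, rate_mbps: float) -> float:
    """Time to transfer size_gb gigabytes over a rate_mbps link."""
    bits = size_gb * 8e9
    return bits / (rate_mbps * 1e6) / 60.0

for link, rate in LINK_RATE_MBPS.items():
    print(f"{link:40s} {download_minutes(UPDATE_SIZE_GB, rate):7.1f} min")
```

Under these assumed rates, a 1 GB payload drops from roughly half an hour on a congested cell to well under a minute at a wired station, which matches the order-of-magnitude argument above.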
## III Related Work

The seminal Over-the-Air (OTA) architectures [1, 23] appeared with the introduction of wireless technologies like Wi-Fi, Bluetooth, cellular LTE, and later DSRC and C-V2X [40]. In [23], the authors discussed the main security building blocks for a wireless OTA update system, like using symmetric and asymmetric keys, SSL, and VPNs. Their architecture was very simple: the OEM sends the vehicle a software download link provided by the supplier. For integrity assurance, they suggested downloading a software update twice along with its message digest [17, 23], which was later shown to be redundant and unnecessary [16]. Nevertheless, basic blocks like using symmetric and asymmetric keys for authentication and message digests inspired most subsequent OTA architectures [16, 19, 26, 35, 36], including ours. On the other hand, Tesla (whose code is not published) took advantage of _Virtual Private Networks_ (VPN) over Wi-Fi and cellular networks to ensure authentication and confidentiality in their very first design in 2012 [36, 43], in addition to the extensive use of _code signing_ to ensure the integrity of updates [29]. The increasing complexity of the automotive software supply chain has called for secure end-to-end OTA update architectures that consider both the off-vehicle (i.e., vehicle to the surrounding) and on-vehicle (mainly ECU to ECU) parts. The community was inspired by The Update Framework (TUF) [34]--not tailored for automotive--which addresses the different stakeholders in the software update value chain. It considers the entire chain of trust by introducing the concept of _separation of roles_, thus making it easy to verify the authentication and integrity properties at any stage by the relevant stakeholders. TUF, however, has not tackled the automotive workflow, in which the OEM plays a major role in deciding updates, and the fact that updates should target many entities, i.e., ECUs, rather than a single entity. Uptane [19, 22, 38] bridged TUF's gaps by proposing an OEM _update director_, playing the role of the software update controller of the vehicle. Uptane also extended TUF's _separation-of-roles_ concept to ensure the end-to-end chain of trust, including that between ECUs, i.e., the _primary_ and _secondary_ ECUs, inside the vehicle. ScalOTA benefits from both TUF and Uptane by extending the _separation-of-roles_ concept to cover the suggested network of update stations. In this case, the update director in Uptane still defines the updates to be installed on a vehicle; however, it uses a publish-subscribe scheme to guide the vehicle to securely download the updates through the update stations and the software image inventories. Similar to ScalOTA, several works have recently addressed Uptane's bottleneck, but from a telecommunication perspective. For instance, the authors in [42] and [13] proposed architectures that predict the bandwidth usage, e.g., using historical (weekly) vehicle patterns, to schedule content sharing between edge/fog devices in the cellular backhaul network, with the help of vehicle-to-vehicle (V2V) update forwarding. Similarly, [24] proposed using fog nodes to push updates to vehicles with the help of _pivot_ cars, in a V2V manner.
The reason behind using V2V update forwarding is that the edge/fog devices in these works are still centralized in their respective zones, and lead to interference issues at peak hours, when the vehicles mainly pull updates [4]. In our work, we avoid V2V forwarding since it requires some notion of trust between vehicles--which was not addressed in the aforementioned works--and we argue that the industry is not ready for this approach. Other works like [10, 36] have proposed, at a high level, using V2V forwarding schemes through blockchains. However, these works have not thoroughly discussed the security aspects as we do in our work, and they have shown that the exchanged updates scale linearly with the number of vehicles.

## IV System and Threat Models

### _System Model_

#### IV-A1 Off-Vehicle model

A typical software update system model is composed of two main entities: a software provider/supplier and an end-user device. An automotive software update system model has two main differences. First, the vehicle manufacturer--commonly known as the _Original Equipment Manufacturer (OEM)_--employs a _Software Update Director_ (SUD) service that is in charge of directing and maintaining the vehicle software updates tailored to different vehicle models--for safety and liability reasons. In this case, (tens of) suppliers provide the software updates of the specific parts directly to the manufacturer or through direct _Image Repositories_ (IR). A supplier can yet delegate software image production or maintenance to (a chain of) third-party suppliers. The manufacturer's SUD and the suppliers' IRs are assumed to be connected via an unreliable network, e.g., the Internet, where packets can be dropped, reordered, or modified. However, we assume that the network eventually delivers packets to their destination. This is feasible since automotive software updates are not required to be real-time.

#### IV-A2 In-Vehicle model

The second main difference is that the end-user device (the vehicle in this case) is composed of several smaller devices known as Electronic Control Units (ECUs), most of which are constrained in computation, memory, communication, and features (e.g., security, power, etc.). We distinguish between two main classes of ECUs: a _Primary_ ECU that has decent capabilities [7, 15] and connectivity to the surrounding environment, which allows it to play the role of the vehicle update manager; and tens of _Secondary_, computationally constrained ECUs that are connected to the Primary via the in-vehicle network, through which they receive relevant updates. Some vehicle architectures can have more than one Primary or yet another layer of ECUs, e.g., tertiary ECUs, but we exclude these for simplicity. For the same reason, we assume the Primary and Secondaries are connected to the same _Automotive Ethernet_ network [28] without any gateways. We assume that the in-vehicle network eventually delivers packets to their destination despite transient issues. Without loss of generality, we assume that the vehicle has a single Primary ECU that connects it to the Internet.

#### IV-A3 Update stations model

We extend this system model with an _Update Distribution Broker_ (UDB): a distributed network of \(N\) vehicle update stations (_UStations_), e.g., associated with _Electric Vehicle_ (EV) charging stations. In particular, UStations in the same zone could be connected via a _fiber-optic_ LAN, to benefit from its high throughput and security; whereas distant UStations could use a WAN or the Internet.
There is no specific network topology that governs the UStations, but we assume a decent level of resiliency. The UDB plays the role of a marketplace for software updates, which can coordinate and deliver updates to vehicles in a fast, efficient, and secure way directly through a wired connection, e.g., while an EV is charging. A wired connection between the UStation and the vehicle has the advantage of high throughput and security. Again, this is convenient if the data cables are bundled with the power cables of the EV charging station, or by using Power-Line Communication (PLC) [12]. Finally, we assume a business model where OEMs have _service level agreements_ (SLAs) with a _software update operator_, e.g., the EV station operator, that implements an _Update Distribution Broker_ (UDB), thus allowing the UStations to host and deliver updates to the fleets.

### _Threat and Adversary Model_

#### IV-B1 CIA model

Our threat model considers the Availability and Integrity properties of the CIA security triad. The goal is to ensure the delivery of authentic and intact "packaged" software updates in a reasonable time to their destination vehicles. Given the large supply chain discussed in the system model, the main challenge is to scale out the distribution of software updates, by using UStations as edge devices, while maintaining a secure chain-of-trust across the entire system. Confidentiality is not considered in this paper because its lack does not directly impede our goal above. Nevertheless, when it is essential, e.g., to protect the software intellectual property and reduce malware injection via reverse engineering, one can simply encrypt the payload using off-the-shelf encryption techniques, like RSA or ECC. In the same vein, we do not address software security prior to packaging or during development. We also assume highly resilient and secure SUDs and IRs, meaning that these should be impervious to attackers. In the worst case, an attacker may compromise one but not both servers at the same time.

#### IV-B2 Threats

An adversary may attack the vehicle by (1) attempting to install forged malicious updates to control an ECU or the entire vehicle (functionality or performance) through _Man-in-the-Middle_ (MITM) or _Spoofing_ attacks [22]; (2) impeding the vehicle update processes by generating compatibility issues across software or ECUs, caused by partial updates (e.g., _Partial-bundle-installation_ or _Mix-and-match_ attacks [22]); or (3) preventing a new update by replaying the same update when the vehicle performs the update cycle (i.e., a _Replay_ attack [22]). The adversary may attack availability by dropping, delaying, or corrupting updates, e.g., by modifying their contents (e.g., _Endless-data_ and _Mix-and-match_ attacks) or timestamps (e.g., _Freeze_ or _Rollback_ attacks [22]). A detailed description of possible attacks is given in [22].

#### IV-B3 Adversary capabilities

To perform these attacks, we assume a strong adversary that is capable of compromising the system components and network to impede our goal. In particular, the adversary can:

* _Compromise the ECUs in the vehicle_. Compromising the Primary could compromise the updates on all Secondaries as well, if the former is trusted. Compromising a Secondary often has a local effect only, e.g., preventing updates locally, but it may also prevent other ECU updates from completing if there are dependencies (e.g., that require atomic all-or-nothing updates).
* _Compromise the keys_.
This includes compromising the keys of the manufacturer's SUD, the Image Repositories, or the UStations in the UDB. We however assume that the adversary cannot compromise the SUD and an IR at the same time.
* _Intercept the communication channels_. This includes intercepting the channels between all system actors, both off-vehicle and in-vehicle. The former is more likely in wireless communications, e.g., cellular LTE, 5G, or Wi-Fi. Intercepting the in-vehicle network is possible through accessing the vehicle via the OBD-II port, USB, or the Telecommunication Unit.
* _Compromise the cryptographic keys_. The adversary can compromise the cryptographic keys, e.g., through stealthy attacks or _elevation of privileges_, to perform spoofing attacks or modify the signed hash digests of updates. Nevertheless, it is assumed that the attacker cannot break the used cryptographic primitives, like RSA and ECC keys or hash functions, using brute force.

## V ScalOTA Architecture and Protocol

### _Abstractions and Symbols_

The message abstractions and symbols used throughout the paper are summarized in the next table for the reader's convenience.

### _Architecture overview_

Figure 1 depicts the architecture of ScalOTA. The architecture is composed of four main parts: the manufacturer's Software Update Director (SUD), the supplier chain of Image Repositories, the EV Station Manager (ESM), and the vehicles.

The **Software Update Director (SUD)** is an entity owned by the OEM to control the updates of the entire fleet. For this, the SUD retains an inventory database of the entire fleet information and vehicle specifications, versions, and associated suppliers. The SUD also calculates version dependencies and conflicts to generate safe update lists. The SUD contains roles that represent entities inside the SUD with different responsibilities; when a role writes metadata, it always signs it with its own key(s). The roles are: targets--stores the various updates' information; snapshot--creates metadata describing stable software bundles; timestamp--responsible for applying timestamps to targets' and snapshot's metadata, which is used to know whether there are new updates; and root--acts as a Certificate Authority (CA) for the other roles.

The **Image Repository (IR)** is a database where all OEM-associated software is stored, including packages developed in-house and by _N_-Tier suppliers.

The **Update Distribution Broker (UDB)** is a network of vehicle update stations (UStations), e.g., associated with EV charging stations, and operated by a new entity we call the _update operator_. The UDB acts as a CA for its stations, such that the vehicles can attest the owner of each UStation. Each UStation collects identifiers of the vehicle models in its corresponding zone, for which updates are downloaded and cached. The UStation pulls the updates corresponding to these identifiers from the UDB's Update Engine (UE), which in turn subscribes to the corresponding updates provided by the manufacturer's SUD through the MQTT pub/sub protocol. The UE then redirects the update downloads to the designated UStation. The UE can optimize the cache of software updates on a zonal basis. It can also predict update usage, i.e., if an update \(\Delta\) becomes available for a vehicle type that is frequently seen at its UStations, it can preemptively publish it.

A **Vehicle** is composed of a primary and several secondary ECUs, for which software updates are continuously needed.

The **Supplier** is the main software _update producer_.
It stores updates in a local or manufacturer-owned Image Repository (IR). As explained later, a direct supplier (called Tier 1) can securely delegate software to other third-party (Tier 2 or Tier 3) suppliers. Finally, the **Time Server** is a trusted source of time for all the elements considered.

Fig. 1: The ScalOTA architecture.

### _Update Initialization and Publishing_

This section presents the ScalOTA protocol steps for update initialization and publishing to the update stations.

#### V-C1 Initialization

Before dispatching the vehicle to the market, an OEM installs the most recent software versions for all of its parts (i.e., ECUs). The vehicle's model is known by its unique _Vehicle Identification Number (VIN)_, composed of 17 characters. In particular, the very first 11 characters, which we call the _Model Identification Number (MIN)_, define the corresponding features, specifications, and manufacturer. To keep the vehicles in circulation up-to-date, the OEM retains the updates' meta-data in a _Fleet Inventory_ database. The Fleet Inventory is essential for the OEM to be able to track the circulating fleet and its corresponding software. This is key to pulling any new software updates, through software producers, and pushing them to the vehicles in a timely manner, to reduce recalls or safety incidents. Therefore, the Fleet Inventory stores the following information: the _VIN_, the _MIN_, and a list of all software meta-data manifests: \(L_{e}=\{\mu_{s}=\langle\mathsf{Manifest},l,\theta,\tau_{s}\rangle_{\sigma};\ s\in S_{e}\}\), where \(S_{e}\) is the set of software for the ECU \(e\) corresponding to the MIN; \(l\) is the download link of \(\delta\); \(\theta=(\mathsf{Meta},h,e,s,d)\) retains the hash digest, the ECU, the software \(s\), and its dependencies, respectively; and \(\tau_{s}=(\mathsf{TS},t,v)\) corresponds to the timestamp and version of \(s\). All manifests are signed by the corresponding software producer \(prod\in\sigma\). Of particular interest, we identify the _last_ available update through its TS message \(\tau_{s}^{last}\).

#### V-C2 Publishing Updates

This section presents the protocol used to publish newly generated software updates to the update stations, thus making them available to be downloaded by the vehicles.

**Step 1.** The software update producer adds a new update \(\delta^{\prime}\) and a _signed_ meta-data manifest \(\mu_{\delta^{\prime}}\) to the Image Repository (IR).

Several _software producers_ may continuously supply the OEM with updates for the different software installed in the vehicle's ECUs (e.g., in response to a bug, vulnerability, or feature opened in an _Issue Tracking_ platform). The producer can be a supplier, a third party (e.g., _Tier 2_), or the OEM development team itself. When a new software update \(\delta^{\prime}\) is ready to be deployed, the software producer \(prod\) places in an Image Repository (IR) both the update \(\delta^{\prime}\) and its corresponding manifest \(\mu_{\delta^{\prime}}=\langle\mathsf{Manifest},l^{\prime},\theta^{\prime},\tau^{\prime}\rangle_{\sigma}\), _signed_ by \(prod\in\sigma\); where \(l^{\prime}\) represents the location to download \(\delta^{\prime}\) from, \(\theta^{\prime}\) collects the integrity data of \(\delta^{\prime}\), and \(\tau^{\prime}\) retains the current timestamp and version of the meta-data \(\mu_{\delta^{\prime}}\). Notice that different IRs--hosted by the producers or the OEM--can retain the same software images; but we only consider one IR in this protocol, for clarity.
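As a running illustration of these abstractions, the following minimal Python sketch models the Manifest, Meta, and TS messages. The field names mirror the symbols above, but the concrete types and the representation of signatures as a plain set of signer names are our own simplifying assumptions; the sketches accompanying the following steps extend this one:

```python
from dataclasses import dataclass, field

@dataclass
class TS:
    """Timestamp/version pair: tau = (TS, t, v)."""
    t: float  # wall-clock timestamp
    v: int    # monotonically increasing version

@dataclass
class Meta:
    """Integrity data: theta = (Meta, h, e, s, d)."""
    h: str                 # hash digest of the update image
    e: str                 # target ECU identifier
    s: str                 # software identifier
    d: list[str] = field(default_factory=list)  # local dependencies

@dataclass
class Manifest:
    """mu = <Manifest, l, theta, tau>_sigma."""
    l: str                 # download link for the image delta
    theta: Meta
    tau: TS
    sigma: set[str] = field(default_factory=set)  # signer identities
```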
**Step 2.** The producer \(prod\) sends to the OEM's update director (SUD) a copy of the meta-data manifest \(\mu_{\delta^{\prime}}\). The latter validates its authenticity, integrity, and freshness, signs it, and adds it to its Fleet Inventory, if valid.

After saving the software update \(\delta^{\prime}\) and its meta-data \(\mu_{\delta^{\prime}}=\langle\mathsf{Manifest},l^{\prime},\theta^{\prime},\tau^{\prime}\rangle_{\sigma}\) in the IR, the producer sends \(\mu_{\delta^{\prime}}\) to the OEM's SUD. (The producer may send \(\mu_{\delta^{\prime}}\) to several OEMs in case their fleets share the same software, or possibly ECUs.) When the SUD receives \(\mu_{\delta^{\prime}}\), which subsumes \(\theta^{\prime}=(\mathsf{Meta},h^{\prime},e^{\prime},s^{\prime},d^{\prime})\) and \(\tau^{\prime}=(\mathsf{TS},t^{\prime},v^{\prime})\), it validates it by running the following assertions against the last version it has, \(\tau^{last}=(\mathsf{TS},t^{last},v^{last})\). In detail, the authentication is validated by assertAuth, Eq. (1), which asserts that \(prod\) is one of the signers of the message, i.e., \(prod\in\sigma\). The freshness is asserted by assertFresh, Eq. (2), which ensures the time and version of the new update are strictly newer than the _last_ version the SUD has in \(\tau_{s}^{last}\). Finally, the integrity is verified by assertIntegrity, Eqs. (3-5), which ensures that the update's download succeeds via the link \(l^{\prime}\), that the hash of the downloaded image matches \(h^{\prime}\), and that the corresponding ECU \(e\) and software \(s\) match those in the meta-data manifest.

\[\mathsf{assertAuth}:=prod\in\sigma \tag{1}\]
\[\mathsf{assertFresh}:=t^{\prime}>t^{last}\wedge v^{\prime}>v^{last} \tag{2}\]
\[\mathsf{assertIntegrity}:=h^{\prime}=hash(download(l^{\prime})) \tag{3}\]
\[\qquad\wedge\ e=e^{\prime} \tag{4}\]
\[\qquad\wedge\ s=s^{\prime} \tag{5}\]

If these assertions succeed, the SUD signs \(\langle\mu_{\delta^{\prime}}\rangle_{\sigma}\) by appending the signatures of three roles, target, timestamp, and root, to \(\sigma=\{\sigma,target,timestamp,root\}\). The target role signature certifies the update data, the timestamp role certifies the approval time, and the root role certifies the entire set of roles. These roles can represent different entities or processes at the OEM, which requires separation of responsibilities, and thus different signatures. Then it adds \(\langle\mu_{\delta^{\prime}}\rangle_{\sigma}\) to the list of software \(L_{e}\) in the Fleet Inventory. Otherwise, the OEM notifies the producer and asks for the correct update manifests until it succeeds. It is noteworthy that the OEM may or may not perform the necessary quality assurance (e.g., unit testing or validation) in a suitable setting before adding \(\mu_{\delta^{\prime}}\). We argue that doing so is, however, important since the OEM--contrary to the producer--is the entity liable for any software failures.
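Continuing the running sketch, the assertions of Eqs. (1)-(5) translate almost literally into code. `download` and `hash_image` are hypothetical helpers standing in for an HTTP fetch and for whichever hash function a deployment uses (SHA-256 here, as an assumption):

```python
import hashlib
import urllib.request

def download(link: str) -> bytes:
    return urllib.request.urlopen(link).read()

def hash_image(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def assert_auth(manifest: Manifest, prod: str) -> bool:
    return prod in manifest.sigma                        # Eq. (1)

def assert_fresh(new: TS, last: TS) -> bool:
    return new.t > last.t and new.v > last.v             # Eq. (2)

def assert_integrity(manifest: Manifest, e: str, s: str) -> bool:
    image = download(manifest.l)                         # Eq. (3)
    return (hash_image(image) == manifest.theta.h
            and manifest.theta.e == e                    # Eq. (4)
            and manifest.theta.s == s)                   # Eq. (5)
```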
**Step 3.** The OEM resolves the possible dependencies of \(\delta^{\prime}\), creates a certified update _bundle_ \(\Delta^{\prime}\), and then updates its Fleet Inventory with the new _signed_ software bundle.

Although the software update \(\delta^{\prime}\) is validated in Step 2, it is still not ready to be pushed to the vehicle because of potential dependencies. For this reason, the OEM's SUD has to prepare a bundle \(\Delta^{\prime}\) that contains \(\delta^{\prime}\) and its resolved dependencies, i.e., possibly many updates \(\delta_{i}\), before shipping it to the vehicle. This means that an update \(\delta^{\prime}\) may not be pushed to the vehicle alone. Note that the OEM may need to create a bundle for each vehicle model having different features (e.g., based on its \(MIN\)). There are two types of dependencies. The first is a dependency on other ECUs, i.e., when different ECUs have to be updated at once for the new update \(\delta^{\prime}\) on \(e\) to function properly. The OEM can resolve these dependencies since--contrary to the software producer--it has visibility over the entire vehicle system, and can thus figure out the possible conflicts or dependencies across ECUs. The second type of dependency is local, representing the software modules or libraries that the same ECU \(e\) must have. These are already listed by the producer in \(d^{\prime}\), i.e., together with \(\theta^{\prime}=(\mathsf{Meta},h^{\prime},e^{\prime},s^{\prime},d^{\prime})\). The OEM, however, has to make sure the set of dependencies is installed on \(e\); otherwise, it bundles them together with \(\delta^{\prime}\) to be shipped to the vehicle. Finally, any dependency \(\delta_{i}\) is assumed to be generated and represented as a typical software update, possibly generated by multiple producers (including the OEM development team), following Step 1 and Step 2 defined above. After the list of dependencies \((\delta_{1},\delta_{2},...,\delta_{m})\) has been resolved, the SUD creates a bundle \(\Delta^{\prime}=\langle\mathsf{Bundle},D^{\prime},\tau^{\prime}_{\Delta^{\prime}}\rangle_{\sigma}\). In this bundle, \(D^{\prime}=\{\mu_{\delta^{\prime}},\mu_{\delta_{1}},\mu_{\delta_{2}},...,\mu_{\delta_{m}}\}\) collects all the manifests of the related dependencies of \(\delta^{\prime}\) (included), so that the vehicle can download them as needed. On the other hand, \(\tau^{\prime}_{\Delta^{\prime}}=(\mathsf{TS},t^{\prime},v^{\prime})\) assigns the new timestamp and version of the bundle to ensure freshness at the vehicle side (see details later). Finally, the \(snapshot\) role's signature is appended to the bundle, \(\sigma=\{\sigma,snapshot\}\), thus asserting to the vehicle that this bundle is certified by the corresponding OEM.
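A sketch of Step 3, still in the running example: resolving the dependency list and assembling the signed bundle. The depth-first resolution below is one plausible strategy; the paper does not prescribe a specific resolution algorithm, and `inventory` (a map from software identifier to its latest Manifest) is our assumption about how the Fleet Inventory is indexed:

```python
from dataclasses import dataclass, field

@dataclass
class Bundle:
    """Delta = <Bundle, D, tau>_sigma."""
    D: list                # manifests of delta' and its dependencies
    tau: TS
    sigma: set = field(default_factory=set)

def resolve(manifest: Manifest, inventory: dict, seen=None) -> list:
    """Collect `manifest` plus its transitive local dependencies."""
    seen = set() if seen is None else seen
    if manifest.theta.s in seen:
        return []                      # already resolved (or a cycle)
    seen.add(manifest.theta.s)
    deps = []
    for dep_id in manifest.theta.d:
        deps += resolve(inventory[dep_id], inventory, seen)
    return deps + [manifest]           # dependencies first

def make_bundle(manifest: Manifest, inventory: dict,
                t_now: float, v_next: int) -> Bundle:
    bundle = Bundle(D=resolve(manifest, inventory), tau=TS(t_now, v_next))
    bundle.sigma.add("snapshot")       # the snapshot role certifies it
    return bundle
```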
**Step 4.** The OEM's SUD publishes the manifest \(\tau^{\prime}_{\Delta^{\prime}}\) of the new bundle \(\Delta^{\prime}\) to all _subscribers_, including the Update Distribution Brokers (UDB) and the vehicles, after including and signing the public key of each.

At this stage, the SUD prepares to notify the subscribers about the new update bundle, via the publish-subscribe service run by the SUD. The natural subscribers for updates (of a software \(s\) corresponding to an ECU \(e\)) are the OEM's fleet vehicles. In addition, the Update Distribution Broker (UDB) we suggest in this work is a subscriber on behalf of the update stations it operates, as explained in Section V-D. However, the most relevant point to mention here is that the UDB subscribes to receive all the updates corresponding to any vehicle passing by the network of update stations, provided that the vehicle has access to the UDB service. Before publishing \(\Delta^{\prime}=\langle\mathsf{Bundle},D^{\prime},\tau^{\prime}_{\Delta^{\prime}}\rangle_{\sigma}\) to a subscriber \(sub\), the SUD's \(publish\) role adds the public key of the subscriber and its own public key to \(\sigma=\{\sigma,sub,publish\}\). Appending the subscriber's public key and signing it by the OEM's publish role is necessary for the former to certify its eligibility to download the updates listed in \(\Delta^{\prime}\) from the corresponding IRs. Again, having a separate publish role signature is helpful to stop the downloads when needed, i.e., by simply revoking the publish role key (without having to change the other roles if not needed). Finally, the SUD publishes the bundle update \(\Delta^{\prime}\) to its subscribers. In the particular case of the UDB, the notification is sent to the UDB's Update Engine, which can later distribute the updates to the relevant update stations internally. The vehicles may also receive the update bundle upon request, as described in Section V-D.

**Step 5.** The UDB validates the authenticity, integrity, and freshness of the update bundle \(\Delta^{\prime}\). If this succeeds, it downloads the update images from the respective Image Repositories (IR) and makes them available at the relevant update stations.

When the UDB's Update Engine receives the update bundle \(\Delta^{\prime}=\langle\mathsf{Bundle},D^{\prime},\tau^{\prime}_{\Delta^{\prime}}\rangle_{\sigma}\), it first tries to validate its authenticity, integrity, and freshness against the last version it has, \(\tau^{last}=(\mathsf{TS},t^{last},v^{last})\). Authenticity is verified, in Eq. (6), by ensuring that the \(publish\) and \(sub\) keys are included in \(\sigma\). Freshness is verified, in Eq. (7), by asserting that the version and timestamp of the received bundle are strictly in the future of the last one the UDB has. Integrity is verified by asserting that the hash of \(\Delta^{\prime}\) matches the received signed hash digest.

\[\mathsf{assertAuth}:=\{publish,sub\}\subseteq\sigma \tag{6}\]
\[\mathsf{assertFresh}:=t^{\prime}>t^{last}\wedge v^{\prime}>v^{last} \tag{7}\]

After this, the Update Engine iterates over all the manifests \(\mu_{\delta_{i}}\in D^{\prime}=\{\mu_{\delta^{\prime}},\mu_{\delta_{1}},\mu_{\delta_{2}},...,\mu_{\delta_{m}}\}\) to validate them, in a way similar to the steps in Eqs. (1-5). If validation fails, the Update Engine requests the correct manifests explicitly, until it can validate them. Then it downloads each \(\delta_{i}\) from the corresponding IR mentioned in \(\mu_{\delta_{i}}\), excluding the invalid ones or those that have been downloaded previously. This is possible after authenticating with the IR, by exchanging \(\mu_{\delta_{i}}\), which includes the public key of the UDB subscriber \(sub\), signed by the OEM. Afterwards, the Update Engine pushes the updates \(\delta_{i}\) and their corresponding manifests \(\mu_{\delta_{i}}\) to the update stations that are subscribed to these updates. The Update Engine can alternatively send only the manifests to the corresponding update stations, which then download the images by themselves.
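Continuing the sketch on the UDB side, Eqs. (6)-(7) and the subsequent downloads of Step 5 look as follows. Signature verification is abstracted to set membership, as in the notation above; a real deployment would verify asymmetric signatures instead:

```python
def udb_validate(bundle: Bundle, last: TS, sub: str) -> bool:
    auth = {"publish", sub} <= bundle.sigma                   # Eq. (6)
    fresh = bundle.tau.t > last.t and bundle.tau.v > last.v   # Eq. (7)
    return auth and fresh

def udb_fetch(bundle: Bundle, cache: dict) -> None:
    """Download every image in D' that is valid and not yet cached."""
    for m in bundle.D:
        if m.theta.h in cache:
            continue                        # downloaded previously
        image = download(m.l)               # authenticated via mu first
        if hash_image(image) == m.theta.h:  # integrity, as in Eq. (3)
            cache[m.theta.h] = image        # invalid images are excluded
```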
### _Update Protocol_

This section presents the ScalOTA protocol steps followed by the vehicles to download software updates.

**Step 6.** The vehicle's primary ECU sends a _report_ \(R\) describing the vehicle software, by periodically sending \(\gamma=\langle\mathsf{Status},R,\tau_{\gamma}\rangle_{\sigma}\) to the OEM's SUD. Timestamps and versions have to be signed by the _secondary_ ECUs if they do not trust the primary.

In this step, the vehicle attempts to check whether there are any software updates to download and install. Nevertheless, the vehicle cannot do this on its own because of the potential dependencies bundled by the OEM. This is common in automotive OTA updates, where the OEM's director is in charge of deciding the updates to be installed for each vehicle. To do this in ScalOTA, the primary ECU sends a periodic report \(R\) about the current status \(\gamma=\langle\mathsf{Status},R,\tau_{\gamma}\rangle_{\sigma}\) of the software versions installed on the vehicle to the OEM's SUD. The sending frequency can be once per week, per day, or even upon every vehicle ignition, but it must be defined for freshness reasons. The primary prepares \(\gamma\) as follows: it adds to a list \(R\) the timestamp meta-data \(\tau_{e}\) corresponding to all the vehicle ECUs' currently installed versions. If \(R\) is the same as the previously sent Status message, the primary sends the hash digest of \(R\) instead. The primary also prepares \(\tau_{\gamma}=(t_{\gamma},v_{\gamma})\) by updating its timestamp \(t_{\gamma}\) and using the version \(v_{\gamma}\) of the last Status message it has. Finally, the primary signs \(\gamma\) by adding its signature \(primary\) to \(\sigma\) and sends it to the SUD. In the case where a _secondary_ ECU does not trust the primary, which has the only communication interface to the outside world, the secondary has to sign its \(\tau_{e}\) by itself and _relay_ it through the primary while preparing \(\sigma\). This has the advantage that the primary cannot lie to the OEM about the last versions installed on the secondary. In addition, the secondary has to send \(\tau_{e}\) at a fixed frequency, e.g., once per week, per day, or even upon every vehicle ignition, so that it can raise an _alert flag_ to the driver when the primary delays or discards the OEM's replies to the Status message.

**Step 7.** Upon the receipt of \(\gamma=\langle\mathsf{Status},R,\tau_{\gamma}\rangle_{\sigma}\) from the vehicle, the SUD validates \(\gamma\), identifies the update bundles \(\Delta_{i}\) newer than those reported in \(R\), if any, and then encapsulates them in \(\gamma^{\prime}\) as a response to the vehicle.

When the SUD receives \(\gamma=\langle\mathsf{Status},R,\tau_{\gamma}\rangle_{\sigma}\) from the primary ECU of a vehicle \(V\), it first validates its authenticity and hash integrity, in a similar manner as before. The exception is freshness validation: if \(R\) has been previously seen by the SUD, freshness is ensured if \(t_{\gamma}^{last}<t_{\gamma}\) and \(v_{\gamma}^{last}\geq v_{\gamma}\); where \(\tau_{\gamma}=(t_{\gamma},v_{\gamma})\), and \(v_{\gamma}^{last}\) and \(t_{\gamma}^{last}\) correspond to the meta-data of the last Status message retained at the OEM's SUD. The validation of \(v_{\gamma}\) is important since the vehicle should never have a newer version than the OEM. If \(\gamma\) is invalid, the SUD will discard the message; consequently, the response to the Status message will be delayed, and the ECUs at the vehicle will raise an alert to the driver. Now, the SUD is ready to prepare the reply \(\gamma^{\prime}=\langle\mathsf{Status},R^{\prime},\tau_{\gamma^{\prime}}\rangle_{\sigma}\). This is done by figuring out all the bundles' meta-data \((\Delta_{1},\Delta_{2},..,\Delta_{k})\) in the Fleet Inventory that are in the future of those reported in \(R\). If none, i.e., \(R\) has been seen before, \(R^{\prime}\) is set to \(R\) again, however this time with a new timestamp \(t_{\gamma}^{\prime}\) and an incremented Status message version \(v_{\gamma}^{\prime}=v_{\gamma}+1\) in \(\tau_{\gamma^{\prime}}=(t_{\gamma}^{\prime},v_{\gamma}^{\prime})\), so that the ECUs can assert the freshness of the Status message. (Recall that each update \(\delta\) has another \(\tau_{\delta}\) that is used to validate the freshness of the update.) Finally, the SUD's \(timestamp\) role signature is added to \(\sigma\), and \(\gamma^{\prime}\) is sent back to the primary ECU. In the case where secondary ECUs do not trust the primary, the SUD has to verify each ECU's timestamp \(\tau_{e}\) in \(\gamma\), and then sign each bundle \(\Delta\) destined to \(e\). This allows the latter to verify the bundle instead of the primary ECU.
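On the SUD side, Step 7 can be sketched as below, still in the running example. `fleet_inventory`, mapping an ECU identifier to its latest bundle, is our assumption about how the Fleet Inventory is indexed:

```python
import time

def handle_status(R: dict, tau: TS, last_tau: TS,
                  fleet_inventory: dict):
    """R maps an ECU id to the TS of its installed version."""
    # Freshness (Step 7): t_last < t and v_last >= v -- the vehicle
    # must never report a newer Status version than the OEM issued.
    if not (tau.t > last_tau.t and tau.v <= last_tau.v):
        return None          # discard; the vehicle will raise an alert
    newer = [fleet_inventory[ecu]
             for ecu, installed in R.items()
             if fleet_inventory[ecu].tau.v > installed.v]
    reply_tau = TS(t=time.time(), v=tau.v + 1)  # incremented version
    return newer, reply_tau   # newer == []: R' is set to R again
```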
**Step 8.** Upon the receipt of \(\gamma^{\prime}=\langle\mathsf{Status},R^{\prime},\tau_{\gamma}^{\prime}\rangle_{\sigma}\) from the SUD, the vehicle's primary ECU validates \(\gamma^{\prime}\) and marks the corresponding bundles' images as _pending_ for download, i.e., when connected to an update station or directly through the IRs.

When the vehicle's primary ECU receives the Status reply \(\gamma^{\prime}=\langle\mathsf{Status},R^{\prime},\tau_{\gamma}^{\prime}\rangle_{\sigma}\) from the SUD, it validates its authenticity and hash integrity, as usual. Freshness validation is, however, done by asserting that \(t_{\gamma}^{last}<t_{\gamma}^{\prime}\) and \(v_{\gamma}^{last}\leq v_{\gamma}^{\prime}\), where \(\tau_{\gamma}^{\prime}=(t_{\gamma}^{\prime},v_{\gamma}^{\prime})\), and \(v_{\gamma}^{last}\) and \(t_{\gamma}^{last}\) correspond to the meta-data of the last Status message retained at the primary. Note that it is okay if the primary did not receive some \(\gamma\) versions between \(v_{\gamma}^{last}\) and \(v_{\gamma}^{\prime}\), since any recent \(\gamma\) version will include all the bundles with their dependencies (again, other bundles). Thus, it is sufficient for the primary to download the last bundles in \(\gamma^{\prime}\) that it does not have, and to discard any (delayed) \(\gamma^{\prime\prime}\) message whose \(t_{\gamma}^{\prime\prime}<t_{\gamma}^{last}\). Afterwards, if \(\gamma^{\prime}\) is valid, the primary retains all the meta-data manifest bundles reported in \(R^{\prime}\), in a _pending_ state, until it connects to an update station to download them. However, if \(v_{\gamma}^{last}=v_{\gamma}^{\prime}\), then \(R^{\prime}\) has been seen before and, therefore, no new updates are available. At this stage, vehicles that do not wish to download the updates through the update stations of the UDB can still download them via typical wireless communication, e.g., cellular. In the case where a secondary ECU \(e\) does not trust the primary, the latter must forward the relevant bundles of \(e\), together with \(\tau_{\gamma}^{\prime}\), so that \(e\) can verify them by itself. If this fails, is delayed in reaching \(e\), or the primary has not received a valid \(\gamma^{\prime}\) on time, any of these ECUs can raise an alert to the driver.
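The mirror-image check at the primary ECU (Step 8) differs only in the non-strict version comparison and in discarding delayed replies; a minimal sketch:

```python
def accept_reply(reply_tau: TS, last_tau: TS) -> bool:
    """Step 8 freshness at the primary: t_last < t' and v_last <= v'."""
    if reply_tau.t <= last_tau.t:
        return False                    # delayed gamma'': discard
    return reply_tau.v >= last_tau.v

def mark_pending(pending: list, R_prime: list,
                 reply_tau: TS, last_tau: TS) -> TS:
    if not accept_reply(reply_tau, last_tau):
        return last_tau
    if reply_tau.v > last_tau.v:
        pending.extend(R_prime)         # new bundles to fetch later
    return reply_tau                    # v_last == v': nothing new
```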
When the vehicle stops at an update station, e.g., while charging at an integrated EV station operated by the update broker UDB, it tries to connect, preferably, via a fast wired interface (e.g., an Ethernet cable that is bundled with the EV electric cable or a _Powerline_ cable [32]), or via other available communication media, like Wi-Fi. Then, the primary exchanges its public key, signed by its private key, with the update station to authenticate it--assuming that the vehicle already has a valid account with the UDB operator. If authentication succeeds, the primary sends all its pending meta-data bundles to the update station. Recall that each bundle \(\Delta^{\prime}=\langle\mathsf{Bundle},D^{\prime},\tau_{\Delta^{\prime}}^{ \prime}\rangle_{\sigma}\) may include many meta-data manifests like \(\mu_{\delta_{i}}^{i}=\langle\mathsf{Manifest},\delta_{i},\theta_{\delta_{i} },\tau_{\delta_{i}}\rangle_{\sigma}\) for multiple updates \(\delta_{i}\in D^{\prime}\), where the update integrity details are in \(\theta_{\delta_{i}}=(\mathsf{Meta},h_{\delta_{i}},e_{\delta_{i}},s_{\delta_{i}},d _{\delta_{i}})\). The update station uses these details to identify the corresponding image \(\delta_{i}\), and thus pushes it to the primary ECU if it is already cached. Otherwise, the update station has to download the update on the spot in the cases of a _cache miss_ or an _unknown_ vehicle model, as discussed in the following. Finally, the primary validates the hash digest of \(\delta_{i}\) against the manifest's digest \(h_{\delta_{i}}\), thus asserting its integrity. **Step 10.** The primary ECU pushes each update image \(\delta_{i}\) to its corresponding secondary ECU to install them. The secondary may do partial or full verification depending on whether it trusts the primary or not, respectively. Prior to pushing updates from the primary to secondary ECUs, two components are required. The first is an in-vehicle communication medium that supports the basic communication and security abstractions used in the protocol above. For simplicity, we consider an _Automotive Ethernet_[27] network between the primary and secondaries. Considering other networks is orthogonal to ScalOTA since we consider the communication channels as a black box. The second component is a _Flash Bootloader_ tool at the secondaries, which is used to handle the received updates and install them. The Flash Bootloader is assumed to have a running _daemon_ with a defined API, able to communicate with other processes over the network, e.g., with the primary ECU. To avoid pedantic details, we assume that these tools are immutable (i.e., are never updated). We differentiate between two cases for pushing the updates from the primary to a secondary ECU, considering whether the secondary trusts the primary or not. **Trusted Primary.** Assuming the primary is trusted, it can handle the entire software download process as specified in Steps 6-9 above, on behalf of the secondary. The secondary only needs to do _partial verification_: verify the key of the primary ECU, i.e., \(primary\in\sigma\), and the update image against the received hash \(h_{\delta_{i}}\) in the meta-data \(\theta_{\delta_{i}}=(\text{Meta},h_{\delta_{i}},e_{\delta_{i}},s_{\delta_{i}},d _{\delta_{i}})\) of the manifest \(\mu_{\delta_{i}}^{i}\). It may also perform basic sanity checks, e.g., error correction or erasure coding. If these are valid, the Flash Bootloader daemon installs the update locally and reboots the secondary ECU if needed.
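The partial verification just described essentially reduces to a digest comparison. A minimal sketch follows, assuming SHA-256 as the digest function (the paper does not fix a particular hash); all names are illustrative:

```python
import hashlib

def partial_verify(image: bytes, h_delta: str) -> bool:
    # Recompute the digest of the pushed image and compare it with the
    # digest h_delta carried in the manifest's meta-data theta.
    return hashlib.sha256(image).hexdigest() == h_delta

# Example: an image matching its manifest digest is accepted,
# a tampered one is rejected.
image = b"example-firmware-image"
h_delta = hashlib.sha256(image).hexdigest()
assert partial_verify(image, h_delta)
assert not partial_verify(b"tampered-image", h_delta)
```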
**Untrusted Primary.** An untrusted primary may lead to integrity or availability issues (and confidentiality if considered). It could violate integrity by forging the software update images or manifests. This would require the secondary to do a _full verification_, which means verifying all the encapsulated messages \(\Delta\), \(\mu_{\delta}\), \(\tau\), \(\theta\), and \(\sigma\) as above in Steps 6-9. On the other hand, the primary may simply delay or drop the updates destined to the secondary. To address this, the secondary's Flash daemon has to send the _Status_ message following a pre-defined frequency, as discussed in Steps 6 and 8, and then raise an alert if no valid responses are received on time. This ensures that the primary can neither delay nor discard the updates received from the OEM beyond the pre-defined period (e.g., defined on a safety basis), nor tamper with their content. ### _Caching Mechanism_ As described in Step 9 of the protocol, the vehicle can start downloading the update images once it connects to an update station. However, given the storage capacity of the cache at update stations as well as the mobility patterns of similar vehicle models, an update image may or may not be available at the update station for immediate download. We, thus, consider three interesting cases: _Cache Hit_, _Cache Miss_, and _Unknown Vehicle_, discussed below. _Cache Hit._ The update station has local caches for the required updates. This means that the UDB's Update Engine has already pushed the updates to this station, as a result of a previous access by a vehicle model requiring the same software. The primary can thus download the software images without any delays, i.e., via a wired connection. _Cache Miss._ The update station does not have the update images requested by the vehicle. In this case, the vehicle's software is assumed to be subscribed to the UDB's Pub/Sub system, but the station has lost the images, possibly because of the cache eviction scheme. This is possible because some vehicle model sharing the same software has already passed by an update station operated by the UDB. The station can thus pull the updates from the UDB's Update Engine. Although the network connecting the update station is assumed to be fast, e.g., using fiber optics, a cache miss will incur additional delays. Despite this, the download can be one order of magnitude faster than a cellular-based connection, as we show in Section VI. _Unknown Vehicle._ The current vehicle's model (\(MID\)) is unknown to the update station. In this case, the station sends a subscribe request to the UDB's Update Engine. If the latter has the \(MID\) subscription through another update station, it adds the current station to its list of destinations and sends the updates to it. On the other hand, if the Update Engine does not know about the \(MID\) at all (i.e., this model has never been seen at any update station), the UDB subscribes to the OEM's SUD and starts getting updates for that model. The protocol is then followed so that updates for this vehicle model become supported at the UDB and at this very update station. ## VI Evaluation ### _Experimental Setting_ We have implemented ScalOTA in _Python_ and \(C\). The core footprint is 1700 LOC, excluding dependencies on external libraries, mainly TUF [34].
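Before presenting the evaluation, the three cache events of Section V-E can be summarised in a small dispatch sketch. All function and variable names are illustrative, and `pull_from_update_engine` is a stand-in for the backbone download, which is where the miss and unknown delays arise:

```python
def pull_from_update_engine(image_id: str) -> bytes:
    # Stub for the pull from the UDB's Update Engine over the fast
    # backbone link; in reality this step adds the 'miss' delay.
    return b"image-bytes:" + image_id.encode()

def serve_update(cache: dict, subscriptions: set, mid: str, image_id: str):
    if image_id in cache:
        return "hit", cache[image_id]   # already pushed to this station
    if mid not in subscriptions:
        subscriptions.add(mid)          # unknown vehicle: subscribe first
        event = "unknown"
    else:
        event = "miss"                  # subscribed model, image evicted
    cache[image_id] = pull_from_update_engine(image_id)
    return event, cache[image_id]

# Example: first request by an unknown model, then a hit on re-request.
cache, subs = {}, set()
print(serve_update(cache, subs, "model-X", "fw-1.2")[0])  # -> unknown
print(serve_update(cache, subs, "model-X", "fw-1.2")[0])  # -> hit
```

The near-identical handling of the miss and unknown branches mirrors the evaluation finding below that their download times are very similar.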
To evaluate ScalOTA, we used Emulab [11], which allowed us to use tens of real machines (8-core, with tens of GB of memory) without interference from other applications, while emulating network bandwidth and link delays as needed (via extra delay machines). We ran our code on Ubuntu 20.04 VM/bare-metal machines depending on the experiment. We set up three types of links with different characteristics for the three typical network use cases suggested in ScalOTA: cellular from the vehicles to the SUD or IR (5Mbps bandwidth, 30ms latency); cable between UStations and IRs (10Mbps bandwidth, 10ms latency); and direct Ethernet between the UStation and the vehicles (100Mbps bandwidth, 2ms latency). All measurements include computation times, such as verifying signatures. ### _Update download time_ ScalOTA is attractive to vehicle owners since it uses a direct connection to a UStation, i.e., fast and free, while currently users pay for a costly cellular data plan whose speed and bandwidth are capped or limited depending on the region. In Figure 2 we show these advantages of ScalOTA for the user by comparing the time taken to download 100MB of updates through a cellular link and directly from the UStation. In this scenario, we consider that all update requests to the UStation are hits. We can see that as we serve a larger percentage of updates from the UStations, the time required for downloading the whole bundle decreases significantly, down to \(\frac{1}{5}\) of the cellular network time. We separate out the time used to fetch the manifests, as these are always downloaded via the cellular network. This figure demonstrates the benefit of using dedicated links for the convenience of the user. ### _Cache management on download time_ Figure 3 shows the time taken to download 100MB of updates over the different cache events considered in ScalOTA: hit, miss, and unknown (see Section V-E for details). Over the X-axis we present several scenarios, where a different percentage of the updates falls under (H)it, (M)iss, and (U)nknown events. We present a scenario where each event type is dominant, as well as more balanced scenarios. There are two takeaways from the figure: 1) the update distribution algorithm is paramount for the success of ScalOTA, since we want to prioritize hits for maximum efficiency; 2) the download times between miss and unknown are very similar--this is due to the simplicity of establishing a subscription for a new model, which is the only difference between the processes. These results reinforce the effectiveness of ScalOTA, since even in the worst-case scenario--100% misses--the download time is lower than using only the cellular network, while at the same time being free of charge. ### _Resilience of IR to increasing loads and UStations_ In Figure 4, we show the impact the number of clients has on simultaneously downloading a 100MB update bundle from the IR. The X-axis represents the percentage of the bundle that is served by UStations (the remainder is downloaded via the cellular network), while the various lines represent the number of clients simultaneously downloading the bundle. There is a clear degradation of service as the server gets overloaded with requests when there is little update coverage from the UStations. Using ScalOTA means drastically reducing the download times even during peak-usage events, since the load is distributed at the edge instead of centralized in the cloud.
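As a rough, idealized sanity check on these magnitudes, the pure transmission times over the emulated links can be computed as below; this ignores latency, protocol overhead, and the manifests that still travel over cellular, so it only bounds the reported gaps:

```python
def transfer_seconds(size_mb: float, bandwidth_mbps: float) -> float:
    # Pure transmission time: megabytes -> megabits, divided by link rate.
    return size_mb * 8 / bandwidth_mbps

print(transfer_seconds(100, 5))    # cellular, 5 Mbps          -> 160.0 s
print(transfer_seconds(100, 100))  # UStation Ethernet, 100 Mbps -> 8.0 s
```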
In fact, in a real-world scenario the measured download times are expected to be even worse due to network congestion and interference from multiple vehicles in close proximity degrading the link quality [4, 6]. ### _Bandwidth Cost_ We estimate the bandwidth cost theoretically, since we faced some issues with Emulab while running hundreds of real ScalOTA instances within hundreds of VMs. The total update cost is usually paid by owners. It can be modeled as follows: \(C_{T}=C_{up}+C_{bwdth}\), where \(C_{up}\) is the software update service cost, often offered by the OEM, and \(C_{bwdth}=r*(\Delta^{a}+\mu^{a})\) is the estimated bandwidth cost for software \(a\) offered by a cellular operator, where \(r\) is the telecommunication bandwidth rate. Notice that in state-of-the-art solutions, a software update downloads \(\Delta^{a}+\mu^{a}\) through the OEM's SUD (that hosts the Image Repository in this case). ScalOTA reduces the \(C_{bwdth}\) part, which dominates the update service cost, as the vehicle only downloads \(\mu^{a}\) through the SUD, and \(\Delta^{a}\) via the update station. In this case, it is reasonable to assume the new update service cost to be equivalent to \(C_{up}\), while the bandwidth cost from the update station is assumed to be free. Therefore the total bandwidth cost will be \(C_{bwdth}=r*\mu^{a}\), and thus the relative bandwidth cost will be \(\mu^{a}/(\Delta^{a}+\mu^{a})\). For instance, with a manifest of \(\mu^{a}=100\) KB and an update of \(\Delta^{a}=100\) MB, the relative bandwidth cost drops to roughly \(0.1\%\). In our implementation of ScalOTA, the manifests occupy from a couple of kilobytes to less than a hundred, thus confirming that \(\Delta^{a}\) dominates the download cost, and therefore the savings obtained.

Fig. 2: Download times considering the percentage of updates served by UStations.

Fig. 3: Download time comparisons considering various caching events.

Fig. 4: The download times based on the number of simultaneous clients.

## VII Conclusions We have introduced ScalOTA, a new architecture that makes use of update stations, possibly integrated with EV stations, to reduce the update latency and cellular bandwidth utilization. For this, we have presented a formal OTA update protocol whose integrity, authenticity, and availability properties we have also proved (see Appendix), defending against known OTA attacks. Our empirical evaluation on a real cluster confirms that the reduction in update latency and cellular bandwidth utilization is one order of magnitude compared with using cellular connectivity alone. Interestingly, ScalOTA introduces a new business model, by having separate software distribution operators and a marketplace. In the future, we aim to conduct a dedicated security verification, which is possible now given our formalization. We also aim to use real data traces to understand the mobility patterns over an area near update stations, and thus better assess the cache misses and hits.
2305.14029
The Complexity of Corporate Culture as a Potential Source of Firm Profit Differentials
This paper proposes an addition to the firm-based perspective on intra-industry profitability differentials by modelling a business organisation as a complex adaptive system. The presented agent-based model introduces an endogenous similarity-based social network and employees' reactions to dynamic management strategies informed by key company benchmarks. The value-based decision-making of employees shapes the behaviour of others through their perception of social norms from which a corporate culture emerges. These elements induce intertwined feedback mechanisms which lead to unforeseen profitability outcomes. The simulations reveal that variants of extreme adaptation of management style yield higher profitability in the long run than the more moderate alternatives. Furthermore, we observe convergence towards a dominant management strategy with low intensity in monitoring efforts as well as high monetary incentivisation of cooperative behaviour. The results suggest that measures increasing the connectedness of the workforce across all four value groups might be advisable to escape potential lock-in situations and thus raise profitability. A further positive impact on profitability can be achieved through knowledge about the distribution of personal values among a firm's employees. Choosing appropriate and enabling management strategies, and sticking to them in the long run, can support the realisation of the inherent self-organisational capacities of the workforce, ultimately leading to higher profitability through cultural stability.
Frederik Banning, Jessica Reale, Michael Roos
2023-05-23T13:01:15Z
http://arxiv.org/abs/2305.14029v2
# The Complexity of Corporate Culture as a Potential Source of Firm Profit Differentials ###### Abstract This paper proposes an addition to the firm-based perspective on intra-industry profitability differentials by modelling a business organisation as a complex adaptive system. The presented agent-based model introduces an endogenous similarity-based social network and employees' reactions to dynamic management strategies informed by key company benchmarks. The value-based decision-making of employees shapes the behaviour of others through their perception of social norms from which a corporate culture emerges. These elements induce intertwined feedback mechanisms which lead to unforeseen profitability outcomes. The simulations reveal that variants of extreme adaptation of management style yield higher profitability in the long run than the more moderate alternatives. Furthermore, we observe convergence towards a dominant management strategy with low intensity in monitoring efforts as well as high monetary incentivisation of cooperative behaviour. The results suggest that measures increasing the connectedness of the workforce across all four value groups might be advisable to escape potential lock-in situations and thus raise profitability. A further positive impact on profitability can be achieved through knowledge about the distribution of personal values among a firm's employees. Choosing appropriate and enabling management strategies, and sticking to them in the long run, can support the realisation of the inherent self-organisational capacities of the workforce, ultimately leading to higher profitability through cultural stability. **Keywords:** Complex Adaptive Systems, Self-Organization, Corporate Culture, Social Networks, Agent-Based Modeling **JEL Classification:** C63, D21, L25, M14, Z13 ## 1 Introduction Firms operating within the same industry have long experienced persistent profit margin differentials (see Mueller et al., 1986; Geroski and Jacquemin, 1988; Leiponen, 2000, among others). Intra-industry profitability differentials sparked a long-standing debate about which factors drive firms' performance. The dichotomy between industry- and firm-based factors has led to two competing theories (Bowman, 1990). The industrial organisation (IO) literature attributes profit differentials to structural characteristics of the firms' industry. Assuming homogeneous firms (Mauri and Michaels, 1998), IO-based empirical research has tied higher profits to (i) market power and concentration ratios (Bain, 1954), (ii) entry barriers (Mann, 1966), (iii) mobility barriers (Semmler, 1984)1, and (iv) market shares (Ravenscraft, 1983). The resource-based view (RBV) of the firm challenges the implicit assumption of firms' homogeneity behind the industry-driven empirical analyses. The RBV emphasises firm-specific factors as the drivers of performance variance between firms within the same industry (Hansen and Wernerfelt, 1989; Cefis and Ciccarelli, 2005; Becker-Blease et al., 2010). In this vein, each organisation has a _unique_ set of resources and capabilities, which include management skills (Bowman and Helfat, 2001), routines (Barney et al., 2001) and corporate culture (Barney, 1986). Existing literature has provided stronger empirical support for firm effects as drivers of performance and profit differentials (see Hawawini et al., 2004; Arend, 2009; Bamiatzi and Hall, 2009; Hambrick and Quigley, 2014; Bamiatzi et al., 2016, among others).
Firms' idiosyncratic resources thus have greater explanatory power for competitive advantage than industry-specific effects (Galbreath and Galvin, 2008). Moreover, the prevalence of industry effects over firm factors (Schmalensee, 1985) has been observed less frequently and is valid for medium-size firms only (Fernandez et al., 2019). The economic literature often treats firms as a "black box". As such, organisations are conceived as indivisible units with homogeneous resources and capabilities where firm-specific factors have no relevance. In order to better understand the sources of competitive advantage and profit differentials, opening up the black box is necessary to avoid neglecting the complexity of firms' internal structure (Foss, 1994). Moreover, existing literature has mainly focused on the response of firms' resources to "shifts in the business environment" (Teece et al., 1997, page 515). However, it is still unclear how company-specific factors, i.e. corporate culture and management strategies, co-evolve and impact corporate performance when firms' inner conditions change. Footnote 1: Mobility barriers comprise factors hindering a firm's entry into and exit from an industry. This paper contributes to the understanding of the inner workings of a firm (i) by combining concepts and methods from economics, (social) psychology, and complexity science, and (ii) by adopting an agent-based bottom-up approach. We regard formal and informal institutions within a firm as a key feature of independent and heterogeneous human resources embedded in a social context, interacting actively and reactively with other employees and responding to corporate strategy changes. To capture these mechanisms, we see a company as an example of a complex adaptive system (CAS) (Fuller and Moran, 2001), inherently difficult to control and manage by nature (Holland, 1992; Fredin and Liden, 2020). This paper extends an earlier agent-based model (Roos et al., 2022). It builds on top of its framework, within which agents have heterogeneous value hierarchies (Schwartz et al., 2012), shaping corporate culture - captured via descriptive social norms (Deutsch and Gerard, 1955)2 - and mediating the impact of management instruments on corporate performance.3 As a first step towards modelling an organisation as a CAS, we propose three main extensions. First, agents take part in an endogenous and dynamic social network, whose peering mechanism determines how social norms and corporate culture spread within the organisation. Second, employees heterogeneously adapt their behaviour to corporate strategies, depending on how management instruments - i.e., monitoring and monetary incentives - affect their value-based satisfaction. Third, the management can more or less frequently update the degree of its monitoring activities and the implementation of pay-for-performance (PFP) schemes, which are influenced by, and feed back into, the development of corporate culture. By combining a dynamic corporate culture, employees' adaptive behaviour, and endogenous management strategies, we aim to answer the following research questions: What are potential effects of corporate culture on the profit differentials of otherwise similar firms? In which ways are corporate outcome and profitability affected by the frequency of changes in management strategy? Under which conditions - if any - will the management's attempts to steer the organisation boost profitability?
In the Coase tradition (Casson, 2005), contract-based theories of the firm focused on employers' and employees' decision-making as driven by internal and external transaction costs. However, notions of profit maximisation and market equilibrium - at the core of this transaction-cost framework - make behavioural changes merely dependent on extrinsic causes. By identifying agents' competencies as the cornerstone of the theory of the firm, the evolutionary (competence) approach emphasises the crucial role of the social component within organisations (Foss, 1993). While acting as a foundation for the monolith of literature in economics and business that deals with firms as such ultra-optimised institutions, the underlying assumptions of the competence-based approach - highly rational and fully informed agents who maximise their expected utility - still advance an inadequate representation of firms' behaviour as a "series of rational and dispassionate activities" (Hodgkinson and Healey, 2011, page 1501). As a consequence, standard microfoundations - also in the strategic management tradition - appear considerably incompatible with the findings of experimental economists, psychologists, sociologists, neuroscientists, and others (e.g. Kahneman and Tversky, 1979; Shafir et al., 2002; Sarnecki, 2007). Moreover, the two predominant theories of the firm - contract-based and competence-based - neglect social behaviour. Contrary to empirical findings, they thus overlook (i) the _glue role_ of corporate culture (Freiling and Fichtner, 2010; Heine, 2013) in binding tangible, intangible, and personnel-based resources - central to the RBV (Grant, 1991) - and (ii) the impact of firm-specific factors on profitability. Even though building models of complex systems is necessarily a reductionist approach towards real-world practices and processes, it allows us to study firms' dynamics within predefined limits - inherent to the CAS approach - and to assume more flexible behavioural rules suitable for several modelling scenarios. Moreover, conceptualising a firm as a CAS within confined boundaries and spheres of influence facilitates the study of its internal workings and allows us to use it as a laboratory for systems with higher complexity (Guiso et al., 2015). There is a relatively young tradition of scholars working on firms as CAS. The foci of these previous studies are widespread and range from innovation (Chiva-Gomez, 2004; Akgun et al., 2014; Inigo and Albareda, 2016), to entrepreneurial ecosystems (Roundy et al., 2018; Fredin and Liden, 2020), knowledge diffusion (Magnanini et al., 2021), and learning (Marsick et al., 2017; Lizier, 2022). This paper contributes to this line of thought both in terms of the object of study - i.e. business organisations - and the employed method - a CAS-based approach - but, distinct from the existing literature, focuses on corporate culture and its influence on firm performance and profitability. The paper is structured as follows. Section 2 describes the three extensions of the model, i.e. network formation and the emergence of corporate culture, employees' adaptive behavioural rules, and endogenous management strategies. Section 3 explains the simulations and the main results of the model, and section 4 discusses the relevance of our findings. The last section concludes. ## 2 Model There are \(n\) employees in the company.
Every employee \(i\in N\) where \(N=[1,n]\) has the same daily time budget \(\tau\), which has to be allocated among three activities: cooperation (\(c_{i,t}\)), shirking (\(s_{i,t}\)) and, residually, individual tasks (\(p_{i,t}\)). Employees' behaviour depends on personal values (Schwartz et al., 2012). Each agent belongs to one of the four higher-order value types: Self-transcendent (ST-type) employees are motivated by benevolence and universalism, and self-enhancing (SE-type) agents by power and achievement. Conservative (C-type) employees value security and conformity above all else, whereas open-to-change individuals (O-type) especially value self-direction and stimulation. Agents' decisions depend on social norms, from which they can deviate positively or negatively. Time allocations among the three activities are assumed to be triangularly distributed and are modelled in terms of stochastic deviations from the cooperative (\(c_{i,t}^{*}\)) and shirking (\(s_{i,t}^{*}\)) norms, defined in section 2.1. The main behavioural equations which constitute the backbone of this model follow Roos et al. (2022). For the sake of clarity, Table A2 in Appendix A provides a brief comparison between the original equations and the changes the three additional extensions presented in this paper entail. The management can implement monitoring strategies and/or financial rewards. The adoption of these instruments can lead to a certain degree of trusting (or controlling) management style and/or to a competitive or cooperative rewards setting (PFP schemes).5 Footnote 5: A full list of starting values for the model parameters can be found in Table A1 of Appendix A. In the following subsections, we explain in detail (i) the network formation and its influence on social norms and corporate culture (2.1), (ii) employees' adaptive behaviour based on job satisfaction concerns (2.2), and (iii) how management strategies endogenously react to key company benchmarks (2.3). For the sake of clarity, Figure 1 presents the model overview, highlights the main feedback mechanisms our CAS-based firm is based on (bold lines), and indicates for each extension its corresponding section.

Figure 1: Model overview. Source: Authors' own illustration.

### Spread of norms in a social network In Roos et al. (2022), employees' perceived social norms for each task are assumed to be equal to the actual descriptive norm inside the firm, modelled as the mean behaviour among all agents. In this paper, we relax this artificial assumption of static environments by modelling the spread of information about social norms via the construction of an endogenous network where informal connections in the firm (e.g. based on value homophily or cooperation and shirking intensity) continuously evolve over time. These connections form a personal network that captures an employee's relevant peers and, thus, also their perceived social norms. Since all the intricate details of personal interactions within a firm - no matter if directly related to work or non-work processes - are intractable, we need to provide a formalised simplification able to capture the inherent dynamics of the endogenous formation and evolution of such a social network. For this purpose, we exploit the tenets of the Social Referent Literature (SRL). The SRL distinguishes two types of social referents at the workplace (Brass, 1984).
On the one hand, _cohesive_ referents are the ones with whom employees engage more frequently, potentially allowing for the formation of close interpersonal ties and friendships (Galaskiewicz and Wasserman, 1989). On the other hand, _structurally equivalent_ relationships are formed among employees who perform the same role, occupy the same position in a network, or share a similar pattern of relationships (Burt, 1987). The long-standing debate about what kind of information the two kinds of actors (referents) are likely to share with co-workers has revealed interesting findings. Cohesive referents share more general organisational information related to employees' social integration within the firm's corporate culture, while employees turn to structural referents for information strictly related to their work - such as tasks, roles and responsibilities - for performance improvements (Shah, 2000). While not neglecting the influence of structural relationships on employees' social networks, we currently leave aside considerations about formal connections. Indeed, in the network we aim to construct, co-workers' interactions inform employees about the prevailing social norm within the firm, information less likely shared by structural referents (Shah, 1998). Therefore, behavioural norms - i.e. normative information6 - are best acquired via cohesive social relationships, whose formation depends on the _frequency_ and the _intensity_ of interactions within organisations. Footnote 6: Shah (1998) classifies normative information under the category of ”general organizational information”, relevant for social adaptation to a company’s culture and social integration within a group. While the author specifically refers to ”norms of expected behaviour” – i.e. injunctive norms, which are outside the scope of this paper – we deem descriptive norms as also falling in the above-mentioned category, as these might encourage behavioural conformity (Cialdini and Trost, 1998) for the sake of belonging to a social group and being integrated within an organisation’s social system, even independently of self-identification concerns (Pryor et al., 2019). By exploiting the concept of cohesive relationships as dependent on the frequency of contacts, we propose a _similarity-based_ approach that relies on the agents' decisions regarding their time allocation during each workday. In other words, employees' chances of connection are determined by how similarly they spend their time on the three activities: cooperation (\(c_{i,t}\)), shirking (\(s_{i,t}\)) and individual tasks (\(p_{i,t}\)). The rationale behind this is that employees have a higher chance of meeting others who spend their time in similar ways, resulting in a higher probability of connecting with one another while performing these activities: the higher the activity similarity, the greater the probability of contact between two agents and the higher the amount of transmitted cues about socially normative information. Assume that the initial connection strength (i.e. edge weight) - how well agents know each other - between two employees \(\{i,j\}\in N\) is zero, such that \(e_{i,j,t-1}=e_{j,i,t-1}=0\). This means that the simulation starts with a "blank slate" social network without any connections and fully isolated agents, i.e. with \(n\) vertices and \(0\) edges. The set of an agent's peers \(P_{i,t}=\{j\mid e_{i,j,t-1}\neq 0\}\) is therefore empty.
As a consequence, there is no influence of individual behaviour on other agents' perceived social norms during the first time period. During each subsequent workday, agents make their regular decisions on how to spend their time. Based on employees' decisions, we calculate the activity differences (\(AD_{i,j,t}\)) between each pair of agents regarding the time they spent on the three activity types \(c_{i,t}\), \(s_{i,t}\) and \(p_{i,t}\). This calculation is performed from the perspective of each agent \(i\) regarding all other agents and vice versa. To model this, we exploit and adapt the weighted absolute-deviation index (WADI) proposed by Stewart (2006), which facilitates interpretation and guarantees a higher degree of robustness compared to other dissimilarity indices. Translated into a case-independent equation that is able to deal with any number \(A\) of generic activities \(a\), this leads us to formalise \(AD_{i,j,t}\) as in Equation 1.7 Footnote 7: In this model, performing individual activities (\(p_{i,t}\)) also enters the calculation of the activity similarity measure. Doing so allows modelling of the fact that employees may gain behavioural information from co-workers sharing the same office or working space, even while devoting time to individual tasks. \[AD_{i,j,t}=\frac{\sum_{k=1}^{A}\lvert a_{k,i,t}-a_{k,j,t}\rvert}{\tau} \tag{1}\] Equation 1 describes the weighted absolute difference in activities between two agents with equal daily working time \(\tau_{i,t}=\tau_{j,t}=\tau\). The weights are thus equal to the fraction of the total time spent in each activity8 and it follows that the activity differences between two employees would be symmetrical, that is \(AD_{i,j,t}=AD_{j,i,t}\).9 Footnote 8: The weights in Equation 1 are implicit as they sum to \(\sum_{k=1}^{A}\frac{a_{k,i,t}+a_{k,j,t}}{\sum_{l=1}^{A}(a_{l,i,t}+a_{l,j,t})}=1\), if employees \(i\) and \(j\) are endowed with the same amount of available maximum working time. Footnote 9: However, interacting pairs of employees may also experience heterogeneous degrees of relational _intensity_. When agents have different time budgets (\(\tau_{i,t}\neq\tau_{j,t}\)), the spread of social norms might be asymmetrical – i.e. \(e_{i,j,t}\neq e_{j,i,t}\) – reflecting the potential asymmetrical reciprocity of relational ties which exerts a great impact on firms’ social dynamics and corporate performance (Lopez-Kidwell et al., 2018). Therefore, each employee might value the three activities differently. In the context of this model, the importance each agent assigns to an activity can be deduced by the relative time spent on that task with respect to its individual time budget. To account for differences in _relative importance_ of a given activity for any agent, we could extend the calculation of the WADI by introducing a simple weight \(w_{i,t}=\frac{\tau_{i,t}}{\tau_{j,t}}\). However, we leave heterogeneous time budgets to future works dealing with flexible working time arrangements. To calculate employees' activity similarity \(AS_{i,j,t}\), we subtract the previously computed activity difference from 1, to reflect the positive impact of perceived activity similarity on agents' connection strength. The higher the activity similarity, the stronger the employees' connection during this working day.
\[AS_{i,j,t}=1-AD_{i,j,t} \tag{2}\] This similarity measure \(AS_{i,j,t}\) is used to represent (i) the chance that the two agents have met during a workday \(t\), and (ii) the importance agent \(i\) assigns to the interactions that occurred with employee \(j\). To model the chance of agents' interactions, we make a random draw \(d_{i,j,t}\) from a uniform distribution between 0 and 1 for each agent pairing. Let \(d_{t}\) denote the set of these draws:10 \[d_{t}=\{d_{i,j,t}\sim U(0,1),\;\forall\;(i,j)\in N\}\] Footnote 10: Note that the fixed set \(d_{t}\) necessarily means that interactions are always symmetrical between two agents, implicitly assuming that \(i\) cannot interact with \(j\) without \(j\) also interacting with \(i\). If the value of \(d_{i,j,t}\) is less than their activity similarity \(AS_{i,j,t}\) (\(=AS_{j,i,t}\)), they interact during the current workday. If and how well employees know each other (\(e_{i,j,t}\)) determines the order in which agent \(i\) checks for potential interactions (\(I_{i,j}^{pot}\)). Agents will always first check for potential interactions with their existing peers \(P_{i,t-1}\), starting from those with whom they have the strongest connection (i.e. \(\max(e_{i,j,t-1}),\ \forall\ j\in P_{i,t-1}\)) and going through this sequence in descending order. Only after that has been done do agents also check for potential interactions with other randomly chosen employees who are yet unknown to them. Each agent \(j\) can only be checked once for possible interaction with \(i\) and can also only be interacted with once. \[I_{i,t}^{pot}=\{j\mid e_{i,j,t-1}>e_{i,k,t-1},\forall\ j,k\in P_{i,t-1},\ j\neq k\} \cup\{j\mid j\in_{R}N\setminus P_{i,t-1}\} \tag{3}\] Therefore, the set of agents with whom employee \(i\) interacts (\(I_{i,t}\)) can be defined following Equation 4. \[I_{i,t}=\{j\mid d_{i,j,t}\ <\ AS_{i,j,t},\ \forall j\in I_{i,t}^{pot}\} \tag{4}\] Equation 4 stochastically determines whether or not two agents interact, and Equation 3 captures the order in which potential interactions are checked. Naturally, this leads to relatively dense networks over time, which is especially evident in the long run if agents' behaviours converge. Such high amounts of daily interactions stand in stark contrast to empirical findings from epidemiology, which show that on an average daily basis, people have 8 (Leung et al., 2017) or 13.4 contacts (Mossong et al., 2008). To avoid the peculiarity of extremely high interactivity in our theoretical model of a firm, a new agent variable is added, which limits the amount of interactions agents can have over the course of one day. At each step, agents pick their maximum amount of interactions (\(\iota_{i,t}\)) from a theoretical distribution (\(ID\)) such that \(\iota_{i,t}\sim ID\).11 This distribution can either be informed by empirical literature or created freely to explore its effects on the modelling results. For the analysis conducted throughout this paper, we have chosen a uniform distribution \(ID=U(0,7.14)\) loosely based on the contact numbers of the above-mentioned studies.12 Therefore, \(I_{i,t}\) can never contain more than \(\iota_{i,t}\) elements, and, as such, the set is truncated after \(\iota_{i,t}\) elements. Footnote 11: To account for the fact that there are no fractional interactions in our model, \(\iota_{i,t}\) is rounded to its nearest integer value. Footnote 12: Attributing the same weight to their results, we assumed that people have \((8+13.4)/2=10.7\) contacts per day on average.
Further assuming an equal distribution of contacts across the day, we estimate the average amount of work contacts on a normal work day with \(\tau=8\) to be \(\frac{8}{24}*10.7\approx 3.57\). Whenever agents \(i\) and \(j\) interact, their connection intensifies by \(AS_{i,j,t}\). Otherwise, their connection strength does not change during this workday. We introduce \(\Delta e_{i,j,t}\) to reflect the weight change for each agent \(i\)'s directed edge toward agent \(j\). \[\Delta e_{i,j,t}=\left\{\begin{array}{ll}AS_{i,j,t}&\mbox{if $j\in I_{i,t}$ }\\ 0&\mbox{otherwise}\end{array}\right. \tag{5}\] It describes how strong the interaction between agents \(i\) and \(j\) is during that day, and by that also how important agent \(j\)'s behaviour is for agent \(i\)'s updating of descriptive social norms. At the end of the workday \(t\), the new connection between agents \(i\) and \(j\) can be formulated as in Equation 6. \[e_{i,j,t}=\left\{\begin{array}{ll}\frac{(t-1)\cdot e_{i,j,t-1}+\Delta e_{i,j,t}}{t}&\mbox{if $t\geq 1$}\\ 0&\mbox{if $t=0$}\end{array}\right. \tag{6}\] The edge weights \(e_{i,j,t}\) reflect the long-term interaction history between two agents while also accounting for the fact that the connection between them deteriorates over time if no interaction takes place. They are then used to update the _descriptive_ social norms perceived by agent \(i\), describing the relative influence of peers with whom \(i\) has interacted during this workday. This leads to the following adaptations to equations (3) and (4) from Roos et al. (2022): \[s_{i,t}^{*} =\left\{\begin{array}{ll}(1-h)\;s_{i,t-1}^{*}+h\;\frac{\sum_{j \in I_{i,t}}\Delta e_{i,j,t-1}\;s_{j,t-1}}{\sum_{j\in I_{i,t}}\Delta e_{i,j,t-1 }}&\text{if }I_{i,t}\neq\emptyset\\ s_{i,t-1}^{*}&\text{otherwise}\end{array}\right. \tag{7}\] \[c_{i,t}^{*} =\left\{\begin{array}{ll}(1-h)\;c_{i,t-1}^{*}+h\;\frac{\sum_{j \in I_{i,t}}\Delta e_{i,j,t-1}\;c_{j,t-1}}{\sum_{j\in I_{i,t}}\Delta e_{i,j,t-1 }}&\text{if }I_{i,t}\neq\emptyset\\ c_{i,t-1}^{*}&\text{otherwise}\end{array}\right. \tag{8}\] Rather than modelling employees' motivation to maintain specific ties at the workplace (see e.g. Randel and Ranft, 2007), we assume that agents remember all past interactions with others, no matter how weak the ties between them. Because the strength of each connection can only grow by a value between \([0,1]\) per simulation step, we can observe the _relative_ strength (weakness) of emergent connections between agents who have historically interacted more (less) frequently. ### Adaptive behaviour and job satisfaction A moderate but robust correlation regarding the effects of job satisfaction on job performance has been found by meta-studies (Judge et al., 2001; Fisher, 2003).13 To incorporate this in the model, each employee has a level of job satisfaction \(S_{i,t}\in[0,1]\) that directly influences job performance through short-run productivity effects \(\pi_{i,t}\). Footnote 13: It is noteworthy that while we modelled a direct connection between productivity effects and job satisfaction, there are other plausible relationships dealing with the broad spectrum of employee happiness (Thompson and Bruk-Lee, 2021), organisational citizenship behaviour (Spector, 2022), and counterproductive work behaviour (Nemteanu and Dabija, 2021). Footnote 14: This simplified approach is chosen on a forward-looking basis to facilitate model calibration and later integration of other productivity factors. 
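Before continuing with the satisfaction mechanism, the network machinery of Section 2.1 (Equations 1-8) can be condensed into a minimal, self-contained sketch. The `Employee` container, the value of `H`, and all numeric defaults are illustrative assumptions, the pairwise symmetry of the draws \(d_t\) is simplified away, and the timing indices of Equations 7-8 are compressed to current-day values:

```python
import random
from dataclasses import dataclass, field

TAU = 8.0  # daily time budget tau (hours), following footnote 12
H = 0.1    # norm adaptation speed h (illustrative value)

@dataclass
class Employee:
    c: float                    # time allocated to cooperation
    s: float                    # time allocated to shirking
    s_norm: float = 0.8         # perceived shirking norm s*_{i,t} (illustrative)
    c_norm: float = 1.6         # perceived cooperation norm c*_{i,t} (illustrative)
    peers: dict = field(default_factory=dict)  # edge weights e_{i,j,t}

    @property
    def p(self) -> float:
        return TAU - self.c - self.s  # individual-task time is the residual

def similarity(a: Employee, b: Employee) -> float:
    # Eqs. 1-2: AS = 1 - WADI over the three activities.
    return 1.0 - (abs(a.c - b.c) + abs(a.s - b.s) + abs(a.p - b.p)) / TAU

def workday(agents: list, t: int) -> None:
    for i, a in enumerate(agents):
        others = [j for j in range(len(agents)) if j != i]
        random.shuffle(others)  # unknown agents come in random order ...
        others.sort(key=lambda j: a.peers.get(j, 0.0), reverse=True)  # Eq. 3
        iota = round(random.uniform(0.0, 7.14))  # daily interaction cap
        delta_e = {}
        for j in others:
            if len(delta_e) >= iota:
                break
            sim = similarity(a, agents[j])
            if random.random() < sim:  # Eq. 4 (symmetry simplified away)
                delta_e[j] = sim       # Eq. 5
        for j in others:               # Eq. 6: long-run edge averaging
            a.peers[j] = ((t - 1) * a.peers.get(j, 0.0)
                          + delta_e.get(j, 0.0)) / t
        if delta_e:                    # Eqs. 7-8: norm updates, weighted
            w = sum(delta_e.values())  # by today's interaction strengths
            a.s_norm = (1 - H) * a.s_norm + H * sum(
                delta_e[j] * agents[j].s for j in delta_e) / w
            a.c_norm = (1 - H) * a.c_norm + H * sum(
                delta_e[j] * agents[j].c for j in delta_e) / w

# Example: ten agents over ten workdays (t starts at 1, as Eq. 6 requires).
random.seed(1)
staff = [Employee(c=random.uniform(0, 2), s=random.uniform(0, 1))
         for _ in range(10)]
for day in range(1, 11):
    workday(staff, day)
```

We now return to how satisfaction feeds into productivity.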
We assume that \(\pi_{i,t}=(1-S^{eff})+2\cdot S^{eff}\cdot S_{i,t}\) where \(S^{eff}\in[0,1]\) is an exogenous model parameter mediating the effect of satisfaction on productivity.15 For the simulations conducted in this paper, we have chosen \(S^{eff}=0.5\), which results in \(\pi_{i,t}\in[0.5,1.5]\). Under these conditions, dissatisfaction (low \(S_{i,t}\)) directly leads to a reduction in productivity by impacting the intensity with which working time is used and thus individual output (\(O_{i,t}\)). Footnote 15: In a controlling environment, C-type employees are happiest and shirk much less than the social norm, and the opposite occurs with O-type employees. Vice versa under a trusting management attitude. SE and ST employees are assumed to be indifferent to monitoring but responsive to financial rewards. \[O_{i,t}=\pi_{i,t}(p_{i,t}^{(1-\kappa)}\cdot\bar{c}_{i,t}^{\;\kappa}) \tag{9}\] Individual output thus depends on (i) the time devoted to individual tasks (\(p_{i,t}\)), (ii) the average cooperative time (\(\bar{c}_{i,t}=\nicefrac{{1}}{{(n-1)}}\sum_{j\neq i}c_{j,t}\)), (iii) the extent to which employee \(i\)'s performance depends on the support of co-workers (\(\kappa\)), and (iv) the productivity factor \(\pi_{i,t}\). The firm's management employs a certain degree of monitoring \(\Sigma\) which can range between a fully trusting (\(\Sigma=0\)) and a fully controlling (\(\Sigma=1\)) management style.16 The bonus each employee \(i\) receives is defined in Equation 10 and depends on the type of PFP scheme (\(\lambda\)) implemented and individual output \(O_{i,t}\). Pure bonus systems can incentivise only one type of behaviour, i.e. by linking bonus payments to individual (\(\lambda=0\)) or joint (\(\lambda=1\)) output. Mixed PFP schemes (\(\lambda\in(0,1)\)) also cover intermediate cases where a proportional combination of output assessment is used. \[B_{i,t}=(1-\lambda)O_{i,t}+\lambda(\frac{1}{n})\sum_{j=1}^{n}O_{j,t} \tag{10}\] The firm pays all employees a homogeneous base wage \(\omega_{b}\), plus individual bonuses (\(B_{i,t}\)), which are weighted by a parameter \(\mu=\{0,1\}\) that reflects the intensity of rewards the management is willing to offer.16 Footnote 16: The intensity of PFP schemes \(\mu\) is such that \(\mu=0\) if no rewards are implemented, and \(\mu=1\) if bonuses are granted, whatever the type. \[R_{i,t}=\omega_{b}+\mu B_{i,t} \tag{11}\] An employee's base satisfaction level (\(S_{i}^{0}\)) can take a value in \([0,1]\) where \(0\) is completely dissatisfied and \(1\) means completely satisfied, therefore defining a neutral level of job satisfaction to be at \(0.5\). Equation 12 shows that it is dependent on the employees' value types, management's monitoring efforts (\(\Sigma\)), the type of implemented PFP schemes (\(\lambda\)), and their intensity (\(\mu\)). The initial satisfaction of each agent at the beginning of the simulation is equal to \(S_{i,t=0}=S_{i}^{0}\). \[S_{i}^{0}=\left\{\begin{array}{ll}\Sigma&\text{if }i\in\text{C-type}\\ 1-\Sigma&\text{if }i\in\text{O-type}\\ 0.5+\mu(0.5-\lambda)&\text{if }i\in\text{SE-type}\\ 0.5+\mu(\lambda-0.5)&\text{if }i\in\text{ST-type}\end{array}\right. \tag{12}\] Since satisfaction carries over from one day to another, we can state that \(S_{i,t}=S_{i,t-1}\) at the beginning of a new time step in the simulation.
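The payoff side of the model (Equations 9-12) can be sketched as follows; \(S^{eff}=0.5\) follows the text, while \(\kappa\), the base wage, and the example numbers are illustrative assumptions:

```python
S_EFF = 0.5    # mediating effect of satisfaction on productivity
KAPPA = 0.3    # task interdependence kappa (illustrative)
OMEGA_B = 1.0  # base wage omega_b (illustrative)

def output(p_i: float, c_bar: float, S_i: float) -> float:
    pi = (1 - S_EFF) + 2 * S_EFF * S_i               # productivity in [0.5, 1.5]
    return pi * p_i ** (1 - KAPPA) * c_bar ** KAPPA  # Eq. 9

def reward(O_i: float, O_all: list, mu: float, lam: float) -> float:
    bonus = (1 - lam) * O_i + lam * sum(O_all) / len(O_all)  # Eq. 10
    return OMEGA_B + mu * bonus                              # Eq. 11

def base_satisfaction(vtype: str, sigma: float, mu: float, lam: float) -> float:
    # Eq. 12: value-type-specific response to the management style.
    if vtype == "C":
        return sigma
    if vtype == "O":
        return 1 - sigma
    if vtype == "SE":
        return 0.5 + mu * (0.5 - lam)
    return 0.5 + mu * (lam - 0.5)                            # ST-type

# Example: an ST-type worker under fully paid-out, cooperative rewards.
O_all = [output(5.0, 1.5, 0.7) for _ in range(5)]
print(reward(O_all[0], O_all, mu=1.0, lam=1.0))
print(base_satisfaction("ST", sigma=0.2, mu=1.0, lam=1.0))  # -> 1.0
```

With these payoffs in place, the daily satisfaction dynamics unfold as follows.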
Should \(S_{i,t-1}\) deviate positively (negatively) from \(S_{i}^{0}\), it is reduced (increased) by \(1\%\) of its value, as formulated in Equation 13.17 Footnote 17: It is also conceivable to model satisfaction recovery in a non-linear fashion such that only the _offset_ from base satisfaction would be reduced by \(1\%\). A possible formalisation, in which greater deviations from base satisfaction are reduced faster and a total recovery back to \(S_{i}^{0}\) is made impossible, is \(S_{i,t}=S_{i,t-1}-\frac{S_{i,t-1}-S_{i}^{0}}{100}\). Regardless of the chosen implementation, the restriction of \(S_{i,t}\in[0,1]\) shall always hold. \[S_{i,t}=\left\{\begin{array}{ll}0.99\ S_{i,t-1}&\text{if }S_{i,t-1}>S_{i}^{0}\\ 1.01\ S_{i,t-1}&\text{if }S_{i,t-1}<S_{i}^{0}\end{array}\right. \tag{13}\] During each period, the management observes a random subset of workers and controls for excessive shirking levels. The management checks a subset of randomly drawn employees \(ETC\subset N\) with cardinality \(|ETC|=\Sigma\cdot n\). We assume that the management willingly accepts a certain amount of shirking activity (\(s^{max}\)) because it is inevitable to some degree and might even be beneficial (Vermeulen et al., 2000; Campbell and Zipay, 2019). This threshold might be subject to various considerations such as the firm's desired revenue or profit margin, management values, or just the observed behaviour of employees. For the time being, the simulations conducted with this second model extension will use a constant value of one-tenth of the available working time of all agents, i.e. \(s^{max}=\tau/10\). Thus, when checking on employees, management deems their shirking levels to be reasonable as long as \(s_{i,t}\leq s^{max}\). Since receiving a warning from superiors is generally a negative experience, employees become more dissatisfied after getting caught shirking too much, thus lowering their productivity. The extent to which getting caught impacts an agent's degree of satisfaction \(S_{i,t}\) might depend on their value type (Chatman, 1991); however, we model the impact on satisfaction after receiving a verbal warning in the same manner for all agent types. Thus, a verbal warning will reduce employee satisfaction by an arbitrary _shock of being caught_ (\(\eta\in[0,1]\)) such that \(S_{i,t}=S_{i,t}(1-\eta)\). The simulation results discussed in this paper have used a constant \(\eta=0.05\). If workers get caught _for the third time_ shirking more than accepted, the management will issue a written warning (\(ww_{i,t}\)) signalling that repeating such behaviour might result in some form of punishment.18 The warnings have two effects: (i) the worker might shirk less in the future for fear of bad consequences; hence _individual_ deviations from the shirking norm are modelled along with the type-specific ones; (ii) workers get _dissatisfied_ with their work. This implies the existence of an optimum degree of monitoring for which the positive deviation from the shirking norm is minimised while keeping a high employee satisfaction (and thus productivity). Footnote 18: That being said, there is no form of consequence or punishment implemented in the presented model. Therefore, this provides an intriguing venue for future research, as for example in a model dealing with hiring-firing mechanisms and their impact on the labour market.
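A minimal sketch of the daily recovery and monitoring routine just described is given below; \(\eta=0.05\) and \(s^{max}=\tau/10\) follow the text, the worker dictionaries are illustrative, and the "written warning on every third catch" rule is a simplifying reading of the text:

```python
import random

ETA = 0.05        # shock of being caught, as in the text
TAU = 8.0
S_MAX = TAU / 10  # accepted shirking level in this extension

def recover(S: float, S0: float) -> float:
    # Eq. 13: satisfaction drifts back toward its base level by 1% per day;
    # the result is kept inside [0, 1].
    S = 0.99 * S if S > S0 else 1.01 * S if S < S0 else S
    return max(0.0, min(1.0, S))

def monitor(workers: list, sigma: float) -> None:
    # The management checks a random subset of size sigma * n.
    for w in random.sample(workers, round(sigma * len(workers))):
        if w["s"] > S_MAX:
            w["caught"] += 1
            if w["caught"] % 3 == 0:   # every third catch: written warning
                w["ww"] += 1
                w["S"] *= 1 - 3 * ETA  # stronger satisfaction shock
            else:                      # otherwise: verbal warning
                w["S"] *= 1 - ETA

# Example: one day of recovery and monitoring for a small workforce.
random.seed(2)
staff = [{"s": random.uniform(0, 2), "S": 0.7, "caught": 0, "ww": 0}
         for _ in range(10)]
for w in staff:
    w["S"] = recover(w["S"], 0.5)
monitor(staff, sigma=0.5)
```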
Letters of reprimand are written in reaction to management's actual observations of shirking behaviour; hence, receiving one reduces workers' future positive deviations from the shirking norm. Formally we model this with an individual-specific scaling factor \(\beta_{i,t}\), with \(\beta_{i,t}=1\) as its default state, which alters the upper bound of the triangular distribution used for individual decision making19, see Equation 14. Figure 2 provides an example of what would happen for a \(\beta_{i,t}\) of \(\nicefrac{{2}}{{3}}\) (red line) versus the baseline case (black line) assumed in Roos et al. (2022). Footnote 19: The original upper bound was \(b_{i,t}=s_{i,t}^{*}(1+\delta_{i})\), as can be seen on row number 10 of Table A2 in Appendix A. \[b_{i,t}=s_{i,t}^{*}(1+\beta_{i,t}\ \delta_{i}) \tag{14}\] Figure 2: Density function of a triangular distribution for shirking behaviour with \(\beta_{i,t}\ =\ \nicefrac{{2}}{{3}}\). Adaptation of Roos et al. (2022)’s triangular distributions of agents’ stochastic behaviour. The equations defining the parameters of the triangular distribution can be found in Table A2. Source: Authors’ own illustration. Changes to \(\beta_{i,t}\) are assumed to have a persistent effect which gradually decreases over time. Thus, the more time has passed since the last written warning was received, the less an employee's value-based behaviour is modified. To capture this, written warnings are modelled as a finite set that takes record of the steps at which the agent has been caught shirking more than acceptable for the third time: \(WW_{i,t}=\{ww_{1},ww_{2},\ldots,ww_{n}\}\). If the set is non-empty, employees will permanently alter their shirking behaviour according to _how long ago_ the last warning (\(ww_{n}:ww_{n}\ \in\ WW_{i,t}\neq\emptyset\)) was received and _how many_ warnings (\(|WW_{i,t}|\)) were recorded overall, see Equation 15. \[\beta_{i,t}=\left\{\begin{array}{ll}1-\frac{|WW_{i,t}|}{3}+\frac{|WW_{i,t}|} {3}\ \frac{t-ww_{n}}{t}&\text{if }0\leq|WW_{i,t}|<3\\ 0.0+1.0\ \frac{t-ww_{n}}{t}&\text{otherwise}\end{array}\right. \tag{15}\] Instead of just being verbally admonished to shirk less, receiving a written warning is an important formal signal of management control which results in a bigger impact on employee satisfaction. We chose a factor of three for the simulations in the work at hand. Hence, after receiving a written warning, the affected agent's satisfaction is reduced to a fraction of its former value, such that \(S_{i,t}=S_{i,t}(1-3\eta)\). ### Endogenous management strategies Differently from the static and exogenous management assumptions in Roos et al. (2022), both monitoring and incentives are dynamic and endogenous here. The management tracks key company benchmarks, namely average company output (\(\bar{O}_{t}\)) as well as the average shirking (\(\bar{s}_{t}\)) and cooperative times (\(\bar{c}_{t}\)) of observed workers over the past \(x\) periods. The average observed shirking and cooperative times are defined respectively as \[\bar{s}_{t}=\frac{1}{n}\sum_{i\in ETC_{t}}s_{i,t} \tag{16}\] and \[\bar{c}_{t}=\frac{1}{n}\sum_{i\in ETC_{t}}c_{i,t} \tag{17}\] where \(ETC_{t}\) is the now endogenous subset of observed workers with cardinality \(|ETC_{t}|=\Sigma_{t-1}\cdot n\).20 Management judges recent developments based on the preset goals of expected group output (Equation 19), the exogenous degree of task interdependence \(\kappa\) (Equation 9), and maximum acceptable shirking time (Equation 18).
Footnote 20: Please note that in the previous extension (Section 2.2) the cardinality of the subset of observed workers was exogenous, as \(\Sigma\) was not responding to any company benchmarks. In contrast to the fixed value chosen in the previous extension, the maximum acceptable shirking time is now endogenised as \(s_{t}^{max}\), which reflects the management's adaptive expectations regarding the usual and necessary work efforts of the firm employees. Considering the deliberate absence of any external (e.g. market-related) factors in our model that might influence management behaviour, we propose that the maximum accepted shirking level is modelled in a similar fashion to how it has been done for other social norms. Thus, \(s_{t}^{max}\) can be understood as the shirking norm perceived by the management and depends on both its previous value (\(s_{t-1}^{max}\)) and the mean shirking behaviour of the observed agents on the previous day (\(\bar{s}_{t-1}\)). How quickly the management adapts \(s_{t}^{max}\) again depends on the exogenous parameter \(h\) previously used in Equations 7 and 8.21 Instead of peer influence as in the case of agents, changes to \(s_{t}^{max}\) depend on those agents that the management has controlled on the previous day. Note that the reference point is shifted from an individual to a top-down aggregate view (see the second column of Table A2 in Appendix A). Footnote 21: In a model including value-based management, these changes in maximum acceptable shirking time could be further endogenised with theory-driven behavioural rules. \[s_{t}^{max}=(1-h)\;s_{t-1}^{max}+h\;\bar{s}_{t-1} \tag{18}\] The management can now infer an expected group output \(EGO_{t}\), which is the maximum of the Cobb-Douglas type production function under the constraint \(s_{t}=s_{t}^{max}\). Let \(\alpha_{t}=\tau-s_{t}^{max}\) denote the working time that remains once the maximum acceptable shirking threshold is deducted; \(EGO_{t}\) is then defined as follows. \[EGO_{t}=\left[\alpha_{t}(1-\kappa)\right]^{(1-\kappa)}\cdot(\alpha_{t}\kappa)^ {\kappa} \tag{19}\] Monitoring and incentive strategies are updated in a pre-determined interval according to a _strategy update frequency_ parameter (\(suf\in\mathbb{N}\)). The future degree of corporate monitoring (\(\Sigma_{t}\in[0,1]\)) is determined as in Equation 20, where \(sui\in\mathbb{R}^{+}\) is an exogenous _strategy update intensity_ parameter. We have chosen \(suf=\{1,30,180,365\}\), \(sui=\{\nicefrac{{1}}{{600}},\nicefrac{{1}}{{20}},\nicefrac{{3}}{{10}}, \nicefrac{{73}}{{120}}\}\), and \(x=suf\) for the results discussed in Section 4. \[\Sigma_{t}=\left\{\begin{array}{ll}(1+sui)\Sigma_{t-1}&\text{if }\frac{1}{x}\sum\limits_{t-x}^{t-1}\bar{s}_{t}>s_{t}^{max}\\ (1-sui)\Sigma_{t-1}&\text{if }\frac{1}{x}\sum\limits_{t-x}^{t-1}\bar{s}_{t} \leq s_{t}^{max}\\ (1-\frac{\bar{O}_{t}}{EGO_{t}})sui&\text{if }\Sigma_{t-1}=0\;\wedge\;\frac{1}{x} \sum\limits_{t-x}^{t-1}\bar{O}_{t}<EGO_{t}\end{array}\right. \tag{20}\] Therefore, the management becomes more (less) controlling when the average observed shirking (\(\bar{s}_{t}\)) of the current set of monitored employees (\(\forall\;i\in ETC_{t}\)) exceeds (stays within) the maximum predefined threshold \(s_{t}^{max}\). As can be noted from Equation 20, we also account for a special case which takes place when the management has adopted a fully trusting strategy in the previous period, i.e. when \(\Sigma_{t-1}=0\).
When this event occurs, it is reasonable to conceive the management as indifferent to employees' shirking attitudes. In this case, the firm would instead anchor any monitoring decisions to the average company output (\(\bar{O}_{t}\)) such that \(\Sigma_{t}\) would increase proportionally to how far away \(\bar{O}_{t}\) was from expected group output (\(EGO_{t}\)). While monetary incentives are assumed to have positive steering effects on employee motivation (Gerhart, 2017), the management should keep the amount of financial rewards as low as feasible, as they contribute to the overall costs of the firm. Wages are assumed to be sticky to some extent, as reflected in Equation 21. The management increases (decreases) the amount of financial rewards when the average company output (\(\bar{O}_{t}\)) is below (above) the management's expected group output. If the company benchmark - namely \(EGO_{t}\) - is reached, we assume that the management has no incentive to alter the amount of rewards. \[\mu_{t}=\left\{\begin{array}{ll}(1+sui)\mu_{t-1}&\text{if }\frac{1}{x}\sum \limits_{t-x}^{t-1}\bar{O}_{t}<EGO_{t}\\ (1-sui)\mu_{t-1}&\text{if }\frac{1}{x}\sum\limits_{t-x}^{t-1}\bar{O}_{t}>EGO_{t} \\ \mu_{t-1}&\text{otherwise}\end{array}\right. \tag{21}\] When the management observes a subset of employees (\(ETC_{t}\)), it also gathers information about the amount of time they have devoted to cooperative activities (\(\bar{c}_{t}\)). The management shifts to a higher (lower) degree of competitive rewards when the desired amount of time allocation to cooperation (\(\kappa\cdot\alpha_{t}\)) is (not) achieved. By doing so, we also account for mixed PFP schemes (\(\lambda_{t}\in[0,1]\)), i.e. schemes that combine collective and individual rewards, which represent the most common type of rewards used in real-world scenarios (Nyberg et al., 2018). \[\lambda_{t}=\left\{\begin{array}{ll}(1+sui)\lambda_{t-1}&\text{if }\frac{1}{x}\sum \limits_{t-x}^{t-1}\bar{c}_{t}<\kappa\cdot\alpha_{t}\\ (1-sui)\lambda_{t-1}&\text{if }\frac{1}{x}\sum\limits_{t-x}^{t-1}\bar{c}_{t}> \kappa\cdot\alpha_{t}\\ \lambda_{t-1}&\text{otherwise}\end{array}\right. \tag{22}\] Equations 20, 21, and 22 could result in values below or above the parameter boundaries of \([0,1]\). In these cases, \(\Sigma_{t}\), \(\mu_{t}\) or \(\lambda_{t}\) will be rounded to the nearest possible value inside the interval. Further, any changes to the management style (\(\Sigma_{t}\), \(\mu_{t}\) or \(\lambda_{t}\)) also induce an update of the base satisfaction of agents \(S_{i}^{0}\) in the same way as described in Equation 12. ## 3 Simulations and results Our main research question is about the potential effect of corporate culture on the profit differentials of otherwise similar firms. To shed light on this, the current section presents our findings from the conducted agent-based simulations by focusing on three aspects identified to impact profitability: (i) the frequency of changes in management decisions, (ii) the influence of employees' homophily in interactions, and (iii) the role of job satisfaction. A detailed description of the agent-based simulation algorithm can be found in Figure B1 in Appendix B. The simulations have been run for 3650 time steps (i.e. 10 years) with a stable workforce. The presented results are mean aggregates over 100 uniquely seeded replicate runs for each parameter constellation (= scenario). All initial model parameters are summarised in Table A1 in Appendix A.22 The nine scenarios previously used by Roos et al.
## 3 Simulations and results

Our main research question concerns the potential effect of corporate culture on the profit differentials of otherwise similar firms. To shed light on this, the current section presents our findings from the conducted agent-based simulations by focusing on three aspects identified to impact profitability: (i) the frequency of changes in management decisions, (ii) the influence of employees' homophily in interactions, and (iii) the role of job satisfaction. A detailed description of the agent-based simulation algorithm can be found in Figure B1 in Appendix B. The simulations have been run for 3650 time steps (i.e. 10 years) with a stable workforce. The presented results are mean aggregates over 100 uniquely seeded replicate runs for each parameter constellation (= scenario). All initial model parameters are summarised in Table A1 in Appendix A.22

The nine scenarios previously used by Roos et al. (2022) have become meaningless here with the endogenisation of the management strategy. As such, all further simulations start from the same neutral management strategy, which corresponds to the previously used Base scenario. As outlined in the model description (see Section 2), the management strategy regarding monitoring efforts and the implemented pay-for-performance scheme now changes over time and depends on the chosen strategy update intensity (\(sui\)) and frequency (\(suf\)). Hence, we introduce four new scenarios in Table 1 that modulate these parameters, with which the management deterministically reacts to changes in the observed firm variables.23

Footnote 23: While both \(suf\) and \(sui\) are modulated, the ratio \(\nicefrac{{sui}}{{suf}}\) is held constant to keep the number of scenarios low. This assumption can be relaxed for a more in-depth analysis of these model parameters.

Figure 3 displays the firm's profitability over time across four scenarios with varying degrees of strategy update frequency and intensity (see Table 1 for details). Profitability has been formalised as the ratio of the sum of output to the sum of rewards.24 The main plot in the top left subfigure shows this from an aggregate perspective, taking into account the output and rewards of all firm employees. The top right contains four subfigures providing a more fine-grained view of the profitability of each of the four value groups. The evolution of the management strategy according to the three parameters monitoring (\(\Sigma\)), intensity (\(\mu\)), and type (\(\lambda\)) of the implemented PFP scheme can be tracked in the bottom row of Figure 3.

Footnote 24: Note that this ratio is of a purely theoretical nature, as the model at hand does not have a market to convert output into money. As such, the relative differences in profitability allow for a discussion of corporate culture as a potential source of profit differentials between otherwise equal firms. Therefore, this abstraction, relating units of output to unspecified monetary units of reward payments, seems sufficient for the aims of this paper.

The top left plot shows that changes in profitability become more erratic with decreasing strategy update frequency, whereas more incremental updates lead to a smoother transition over time. This allows the firm's management to react faster to new business insights and closely adapt its currently employed strategy in accordance with the underlying heuristics. After one year the Yearly scenario brings the best results in terms of profitability, which points towards the positive impact of a stable environment that allows social norms to manifest and spread among the employees. However, this changes rapidly in the following year, where drastic modifications of the management style (increased monitoring due to higher than acceptable observed shirking) lead to severe drops in profitability. Although Conservative agents react positively to this change, even leading to a short-lived rise in profitability for these employees, the decline in aggregate profitability can be observed in Figure 3 across all value types from the second year onward, where only Open-to-change agents' profitability shows a U-shaped recovery.
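Since profitability is the yardstick for all comparisons that follow, here is a minimal sketch of the footnote-24 metric, assuming per-employee output and reward arrays for a given period and an optional vector of value-group labels; all names are ours.

```python
import numpy as np

def profitability(output, rewards, groups=None):
    """Ratio of the sum of output to the sum of rewards (footnote 24).

    With `groups` given, returns the per-value-group breakdown instead;
    assumes the reward sums are strictly positive.
    """
    output, rewards = np.asarray(output, float), np.asarray(rewards, float)
    if groups is None:
        return output.sum() / rewards.sum()
    groups = np.asarray(groups)
    return {g: output[groups == g].sum() / rewards[groups == g].sum()
            for g in np.unique(groups)}
```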
One noteworthy outlier is the profitability of Conservative and Open-to-change agents in the Daily scenario, implying that small and frequent updates lead to (un)favourable outcomes for employees in these higher-order value groups. Here, the management can constantly observe the shirking of a subset of employees and adapt its expectations (i.e. \(s_{t}^{max}\) and therefore also \(EGO_{t}\)) to what has actually happened over the past day. The resulting early drops in monitoring efforts \(\Sigma\) significantly increase the satisfaction levels (i.e. productivity) of Open-to-change agents.25

\begin{table}
\begin{tabular}{l c c} \hline \hline Name & \(suf\) & \(sui\) \\ \hline \hline Daily & 1 & 1/600 \\ Monthly & 30 & 1/20 \\ Biannually & 180 & 3/10 \\ Yearly & 365 & 73/120 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Varying strategy update intensity (\(sui\)) and frequency (\(suf\)) combinations in four scenarios for use with the endogenous management extension.

Even though the model has been built under the assumption of behavioural symmetry between employees of opposing higher-order values, Conservative agents do not completely mirror the reactions of Open-to-change agents, because the increasingly productive behaviour of the latter also manifests itself in the social norms perceived by others. Furthermore, any reduction in monitoring also lowers the number of observed employees each day, thus leading to fewer verbal and written warnings, and ultimately resulting in higher satisfaction across the whole population. In the Daily scenario, this effect counters the negative impact on the satisfaction of Conservative employees throughout years one to three and even leads to small increases in their profitability. Yet the long-run trend towards very low degrees of monitoring throughout all four scenarios eventually overshadows these gains and leads to a convergence of Conservative and Open-to-change employees' profitability around 0.19 and 0.56, respectively.

Although also positively affected by the reduction in warnings issued by the management, the evolution of Self-enhancing and Self-transcendent agents is driven by different influential factors. With respect to the intensity of implemented incentive schemes, the four scenarios paint a very similar, albeit slightly time-lagged, picture. After two years at most, they all lead to the maximum \(\mu\), leaving this parameter around this level until the end, with only a short-lived dip in PFP intensity in years four to six of the Yearly scenario. As laid out in Equation 21, changes to the amount of incentives paid depend on the management's expectations regarding expected group output, which is starkly influenced by what the management observes over time as normal shirking behaviour.

Figure 3: Firm profitability (top row) and management strategy (bottom row) over time across four scenarios. Top left plot shows aggregate profitability across the whole work force. Top right plots show profitability broken down into the four higher-order value groups. Three plots in bottom row show the development of the management strategy parameters. Source: Authors' own illustration.

Indeed, the results hint at those expectations being practically unachievable under most simulation settings. The only exception is a period of two years in the Yearly scenario where the sustained productivity of Self-enhancing employees and the rapidly increasing productivity of Open-to-change employees contribute to levels of output that are high enough to warrant a reduction in monetary incentives.
Still, it is important to note that the amount of paid incentives has a strong impact on the calculation of profitability, ultimately adding to the explanation of the ongoing sideways/downward trend in aggregate profitability after year six (and earlier when looking at the separate higher-order value groups). Changes to the type of implemented PFP scheme occur in the first half of the simulation period and cause positive/negative reactions from Self-enhancing/Self-transcendent agents. However, the positive impact on the former group's behaviour is diminished by their peers from other value groups, thus countering the development of potentially more profitable social norms. This kind of mitigation cannot be observed for Self-transcendent employees, whose social networks exhibit high degrees of homophily26 at approximately twice the levels of all other value groups in the long run. As such, this management decision is overshadowed by the declining influence of Self-transcendent employees on the social norms across the whole firm. Subsequently, the decision to lower \(\lambda\) is reverted from year 2 (Daily) to year 4 (Yearly) onward, eventually remaining at high levels above 0.9 again.

Footnote 26: Homophily is defined as the weighted share of agent \(i\)'s peers who belong to the same value group as \(i\). The weights are determined by the current connection strength at time \(t\) between agents \(i\) and \(j\;\forall\;j\in P_{i,t}\).

Figure 4 provides insights into how homophily in interactions between the employees changes over time.

Figure 4: Interactions homophily in the endogenous social network across four scenarios. Left plot shows aggregate interactions homophily across the whole work force. Right plots show interactions homophily broken down into the four higher-order value groups. Source: Authors' own illustration.

The main plot on the left side depicts the average homophily of all agents' interactions in the endogenous social network across the four scenarios and also provides a dashed horizontal line as a reference case with an unweighted complete graph. The four subplots on the right side are again divided by the higher-order value types and display their mean interactions homophily for inter-group comparisons. While it is evident that differences exist between all four employee types, Self-transcendent agents reach severely higher degrees of homophily (\(0.80-0.85\)) than the three other groups. Since the probability for two agents to interact depends on their activity similarity, the stronger deviations from social norms allow Open-to-change agents to consistently achieve the most inter-group interactions at the end of the four simulated scenarios (\(0.37-0.38\)). Conservative (\(0.39-0.41\)) and Self-enhancing (\(0.41-0.46\)) employees find themselves in the middle. The explanation for the wider spread of the latter group can be found in the different intensity of competitive incentives across the four scenarios (cf. bottom right plot in Figure 3), which leads to temporary boosts to activity similarity in this value group. These findings suggest that even short-lived changes in management strategy can have a lasting effect on the firm's network and thereby consequently also affect profitability.
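Following the footnote-26 definition, a minimal numpy sketch of per-agent interaction homophily; the connection-strength matrix and the group labels are assumed inputs, and all names are ours.

```python
import numpy as np

def interaction_homophily(weights, groups):
    """Weighted share of same-group peers (footnote 26).

    weights: (N, N) connection-strength matrix at time t, zero diagonal;
    groups:  (N,) value-group label per agent.
    Returns an (N,) vector; assumes every agent has some positive
    connection strength, otherwise the division is undefined.
    """
    weights = np.asarray(weights, dtype=float)
    groups = np.asarray(groups)
    same = (groups[:, None] == groups[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)  # an agent is not its own peer
    return (weights * same).sum(axis=1) / weights.sum(axis=1)
```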
Comparing the satisfaction (see Figure 5) and profitability curves of the four agent groups reveals high similarities in their evolution for almost all employees except those belonging to the Self-transcendent group. The correlation values reported in Table 2 show that the bivariate correlations between satisfaction and profitability are strong for Conservative, Open-to-change, and Self-enhancing agents (\(SP\geq 0.879\)), which suggests that job satisfaction is indeed a positive influential factor for these employees. The low correlation of Self-transcendent agents is an indicator that there are cases in which job satisfaction is high with no opportunity to translate this into high levels of output and/or profitability.

\begin{table}
\begin{tabular}{l l r r r} \hline \hline Value group & Scenario & \(SP\) & \(HP\) & \(SH\) \\ \hline \hline \multirow{4}{*}{Conservative} & Daily & 0.879 & -0.703 & -0.817 \\ & Monthly & 0.977 & -0.236 & -0.291 \\ & Biannually & 0.978 & -0.256 & -0.238 \\ & Yearly & 0.970 & -0.274 & -0.225 \\ \hline \multirow{4}{*}{Open-to-change} & Daily & 0.943 & 0.948 & 0.899 \\ & Monthly & 0.998 & 0.831 & 0.818 \\ & Biannually & 0.998 & 0.843 & 0.842 \\ & Yearly & 0.994 & 0.860 & 0.879 \\ \hline \multirow{4}{*}{Self-enhancing} & Daily & 0.982 & -0.126 & -0.156 \\ & Monthly & 0.927 & -0.350 & -0.494 \\ & Biannually & 0.973 & 0.013 & 0.053 \\ & Yearly & 0.954 & 0.071 & 0.158 \\ \hline \multirow{4}{*}{Self-transcendent} & Daily & -0.855 & -0.977 & 0.883 \\ & Monthly & -0.838 & -0.966 & 0.902 \\ & Biannually & -0.651 & -0.962 & 0.761 \\ & Yearly & -0.268 & -0.931 & 0.519 \\ \hline \multirow{4}{*}{All employees} & Daily & -0.141 & -0.220 & 0.937 \\ & Monthly & 0.602 & 0.138 & 0.760 \\ & Biannually & 0.006 & -0.090 & 0.843 \\ & Yearly & 0.023 & -0.110 & 0.793 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Bivariate Pearson correlations between satisfaction and profitability (\(SP\)), interactions homophily and profitability (\(HP\)), and satisfaction and interactions homophily (\(SH\)). Results have been truncated to three digits.

Figure 5: Job satisfaction across four scenarios. Left plot shows aggregate satisfaction across the whole work force. Right plots show satisfaction broken down into the four higher-order value groups. Source: Authors' own illustration.

This is due to their generally unproductive allocation of time, with too much emphasis on cooperative activities, and the accompanying social separation from the rest of the firm's employees, which in combination lead to the low profitability of Self-transcendent agents. However, for all higher-order value groups, a lower strategy update frequency generally implies a more positive correlation between satisfaction and profitability. Homophily has weak explanatory value for the profitability of Self-enhancing agents and shows relatively low, although firmly negative, correlations for Conservative agents. One exception is the Daily scenario, where the early reduction of monitoring leads to a vastly divergent result and thereby pushes the correlation coefficient of interactions homophily and profitability for Conservative agents further into negative territory. The interaction homophily of Open-to-change (Self-transcendent) agents follows their profitability in more pronounced ways, exhibiting strong positive (negative) correlations that decrease in intensity with lower strategy update frequency. Satisfaction and interaction homophily evolve in similar ways for both Open-to-change and Self-transcendent agents and show only low signs of correlation for Self-enhancing agents. However, there is a negative correlation for Conservative agents, implying that higher satisfaction levels are accompanied by lower homophily in their interactions.
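The coefficients in Table 2 are plain bivariate Pearson correlations over the simulated time series; a minimal sketch (names are ours), applied for instance to the mean satisfaction, homophily and profitability series of one value group in one scenario:

```python
import numpy as np

def culture_correlations(satisfaction, homophily, profitability):
    """Pearson SP, HP and SH coefficients from three aligned time series."""
    s, h, p = map(np.asarray, (satisfaction, homophily, profitability))
    return {
        "SP": np.corrcoef(s, p)[0, 1],
        "HP": np.corrcoef(h, p)[0, 1],
        "SH": np.corrcoef(s, h)[0, 1],
    }
```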
These observations suggest that a management might want to implement measures that increase the embeddedness of Self-transcendent agents in the broader firm population (thus lowering their homophily levels), while at the same time such measures are likely to be relatively ineffective for Self-enhancing agents. For Conservative, Open-to-change, and Self-enhancing employees, the effects of high job satisfaction on profitability are likely stronger than the impact of their personal social network, suggesting that a sensible management would instead cater to this by implementing measures that raise their general job satisfaction.

The last four rows of Table 2 show how the variables correlate with each other when using average values computed from all firm employees. In the Biannually and Yearly scenarios, both satisfaction and interactions homophily show only very slight correlations with firm profitability, which suggests that these aspects of corporate culture play a relatively minor role in scenarios with slow-moving management strategies. This effect is more pronounced in the Monthly scenario (\(SP=0.602\)), which is also the only case where interactions homophily has a positive correlation coefficient with profitability (\(HP=0.138\)). Quite contrarily, daily management strategy updates lead to a situation in which satisfaction and interactions homophily are strongly positively correlated (\(SH=0.937\)) and higher values are accompanied by lower profitability.27 To summarise, three out of four scenarios show a more or less moderate positive correlation between satisfaction and profitability, which qualitatively fits the findings in the empirical literature (Judge et al., 2001; Fisher, 2003).

Footnote 27: While these results are somewhat unexpected and might even warrant a more in-depth examination and discussion, it has to be duly noted that the Daily scenario is an extreme edge case. It implies perfect willingness and ability of both employees and management to adapt to an ever-changing work environment and does not incorporate any cost of changing strategies (which arguably becomes more important with higher strategy update frequencies).

Going back to Figure 3, the four scenarios reach states of slowly declining profitability after about six simulated years, which persist until the end of the simulation. Yearly strategy updates yield the highest profitability at approximately 0.2751 after ten years. The other three scenarios reach profitability levels closer to each other, with Biannually at \(\approx 0.2675\), Monthly at \(\approx 0.2653\), and Daily at \(\approx 0.2647\). Thus, it becomes apparent that (i) fluctuations in profitability increase with less frequent changes in management style, (ii) achieved levels of profitability are higher under a less adaptive management, and (iii) management expectations regarding output and normal degrees of shirking play a crucial role in the long-term profitability of firms. Although the differences between the scenarios are clearly visible, their endpoints are relatively close to each other, while also only providing a snapshot of the current profitability at any given time. Cumulated profitability over time paints another picture, however, as it takes into account the accumulated profits along each scenario's pathway, which can then be compared to a neutral baseline scenario28 without any changes in management style. We can see in Figure 6 that the four lines at first follow similar trajectories but indeed deviate from each other's paths over the long run.

Footnote 28: The model parameters shaping the management style of this scenario are \(\Sigma=0.5\), \(\mu=0.0\), and \(\lambda=1.0\), which is identical to the Base scenario used in Roos et al. (2022).
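Figure 6's relative measure can be sketched as follows, assuming per-step profitability series for a scenario and for the neutral baseline; names are ours.

```python
import numpy as np

def relative_cumulated_profitability(scenario_profit, baseline_profit):
    """Figure-6 style curve: cumulated scenario profitability divided by
    the cumulated profitability of the neutral baseline at each step."""
    scenario = np.cumsum(np.asarray(scenario_profit, dtype=float))
    baseline = np.cumsum(np.asarray(baseline_profit, dtype=float))
    return scenario / baseline
```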
Considering only the end of the simulations, Table 3 provides some insight into the fact that the differences in cumulated profits are deeply ingrained in the emerging corporate culture, which is shaped by social norms and the frequency of strategic management decisions. The Yearly scenario yields the highest cumulated profitability in the long run, even though slowly changing strategies may lead to managerial overreaction due to a lower ability to adapt to new insights in the short term. Daily and biannual changes in management strategies lead to medium levels of cumulated profitability, whereas pursuing monthly strategy changes consistently leads to the worst performance among the four scenarios. However, it has to be noted that managerial interventions only produce higher levels of relative profitability in the first half year and continue to perform below the reference level for the rest of the simulation. The neutral baseline scenario, which neither changes monitoring nor the type or intensity of implemented incentive schemes, continues to outperform the adaptive scenarios by more than 59%.

\begin{table}
\begin{tabular}{r|c c c c c} Scenario & Base & Yearly & Daily & Biannually & Monthly \\ Relative profitability & 100.00 & 62.85 & 61.75 & 59.81 & 57.69 \\ \end{tabular}
\end{table}
Table 3: Relative profitability of the four scenarios at the end of the simulation in percent, sorted in descending order from left to right. This is measured as the relation of their cumulated firm profitability values to that of the baseline scenario without endogenised management decision making.

Figure 6: Cumulated firm profitability over time for each of the four scenarios divided by the cumulated firm profitability of a baseline scenario with a neutral and constant management style (black dashed horizontal line). Source: Authors' own illustration.

The equal distribution of higher-order values in the simulated workforce is reflected in the employed neutral scenario, which does not favour or adversely affect the behaviour of any particular group. This finding suggests that it might be better not to change the implemented management strategy at all and instead rely on the realisation of self-organisational capacity in the social network of the firm's employees. Given knowledge of the distribution of higher-order values among the employees, the firm's management could anticipate which strategy would provide the best long-term fit to the emerging corporate culture that manifests itself in the employees' behaviour, guided by personal values and social norms.

## 4 Discussion

We view the firm as a complex adaptive system, i.e. a system of a large number of agents that interact and adapt (Holland, 2006). In our model of the firm, several kinds of interactions and adaptations occur. Employees interact by forming a network and by cooperating. Through their actions, social norms of behaviour emerge, to which each worker adapts in line with his or her own values. The evolving social norms anchor employees' behaviours, but do not determine them completely. The management also interacts with the employees. The management tries to keep shirking under control, to achieve high output and to promote cooperation among employees.
It uses direct control instruments and the parameters of a monetary reward scheme to influence employees' actions. Employees indeed adapt to the management's policies. They change their shirking and cooperation behaviour, and their satisfaction, which influences their productivity, adapts too. Finally, the management adapts its strategy to the observed outcomes of the use of the management instruments. Hence, corporate culture in the form of employees' social norms that guide their behaviour and management strategies (i.e. the firm's formal institutions) co-evolve. Due to the adaptation of the firm's corporate culture, it is difficult for the management to influence the behaviour of employees in the desired way. Hence, there are strong constraints on the ability of the management to control the system with the given management tools.

Our analysis focused on what we call management scenarios. The four scenarios considered how often and how strongly the management updates the intensity and the frequency of its instrument use (or strategy). Our first result is that in the long run (i.e. after about six years), all scenarios converge to similar levels of profitability and almost identical strategies. In the long run, the management does not monitor employees' shirking anymore and uses group performance rewards as an incentive scheme. Over time there is an implicit learning effect that stems from the management's observations of actual employee behaviour, gradually leading to an adaptation of expectations regarding shirking behaviour and produced output. The finding that the management completely abandons monitoring efforts in the long run can therefore be interpreted as the endogenous emergence of trust. Despite this convergence, there are enormous differences in profitability across the scenarios during the adaptation process. Especially the two extreme adaptation styles - daily updating and yearly updating - lead to stark temporary differences in management strategies. While daily updating leads to a gradual reduction of monitoring and a gradual and relatively early increase in group rewards, yearly updating for some years generates almost the opposite strategy, i.e. high monitoring and low group rewards. Nevertheless, the cumulated profitability of these extreme adaptation styles is close together, indicating that both management strategies are relatively successful. The cumulative profitability of biannual adaptation, but especially of monthly adaptation, is clearly lower.

The scenarios differ with regard to their impact on the different types of employees. Daily adaptation29 leads to an early reduction of monitoring and is strongly appreciated by O-agents, who in turn experience high levels of job satisfaction. As shown in Roos et al. (2022), O-agents are important for the evolution of corporate culture because their motivation or demotivation can impact others through their wide-reaching influence on social norms. In contrast to that, the yearly updating scenario produces temporarily high individual monetary rewards, which have a motivating effect on SE-agents. In the long run, both C- and SE-agents end up with very low levels of job satisfaction, because the instruments they value (i.e. monitoring for C-agents and individual rewards for SE-agents) are not used. The mirror image of this result is that O-agents and ST-agents converge to very high levels of job satisfaction.
Because of the link between job satisfaction and labour productivity, the long-run demotivation of some employees is a crucial issue. In our model calibration, a long-run job satisfaction close to zero for C-agents and SE-agents implies that both groups only work with the minimum productivity of 0.5, resulting in a substantial output loss for the firm.

Footnote 29: Daily updating is more a theoretical limiting case than a realistic description of management behaviour. In the presence of decision-making costs, the management cannot be expected to change its strategy on a daily basis.

The findings regarding job satisfaction require some qualifications. It seems plausible that employees with very low job satisfaction will quit their job at some point in time, whereas there is no turnover of the firm's workforce in our current model. Both the dynamics and the long-run outcomes might be quite different if employees were allowed to leave the firm when their job satisfaction falls below a threshold for a certain time. The different adaptation styles of the management strategy might then lead to a selection of particular types of employees in a firm. A firm with daily updating might quickly lose all C-agents and over time also most SE-agents. Vice versa, any more long-term updating would probably drive away O-agents rather soon. With a change in the workforce composition, we might expect that the four adaptation scenarios will not converge to the same management strategies in the long run. We leave a detailed analysis of workforce turnover to future work.

Another interesting finding concerns the dynamics of the interaction homophily in the endogenous social network. There are practically no differences across the adaptation scenarios, suggesting that the network dynamics are unaffected by the management style. This finding can be interpreted as a form of self-organisation that constrains the management's ability to control the system. While the interaction homophily of C-, SE- and O-agents converges to the same level, the long-run value of ST-agents' interaction homophily is twice as high. This means that this group has a much stronger in-group connectivity than the others. ST-agents hence strongly interact among themselves, implying that they develop their own subculture within the firm over time. This result might be relevant if the management tried to influence corporate culture directly, e.g. by communication, which is not represented in our model. An analysis of direct efforts of the management to affect corporate culture is another interesting topic for future research.

A final remarkable result is that all adaptation scenarios with changing management styles lead to significantly lower cumulated profitability than a baseline scenario in which the management initially chooses a neutral management style and sticks with it forever. A neutral management style means that the management chooses an intermediate monitoring strategy and abstains from using pay-for-performance schemes. The key point is that while it is possible to influence employees' behaviour with monetary incentives, using rewards is also costly. Furthermore, a constant strategy makes it easier for social norms to converge quickly on a dominant path (see Roos et al., 2022). Hence, letting the self-organisation forces within a firm work might be preferable to active management efforts to achieve a certain behaviour.
However, we consider this conclusion tentative and preliminary, and stress that more research is necessary to check its robustness to variations of the model. In future work, our model could be modified in several dimensions. First, employees only differ in terms of their values, but not in terms of other characteristics such as skills or knowledge. As a consequence, the task interdependence, and hence the necessary cooperation, is rather abstract. In the present model, it is not necessary that certain employees cooperate due to skill or knowledge complementarities, which is a limitation. Relatedly, there is no hierarchy and no formal working structure in the firm, whereas both might have an impact on the formation of norms and the evolution of intra-firm subcultures. Second, there is no labour turnover. Employees do not quit if they are dissatisfied, and the management does not fire underperformers or employees who received several written warnings as a result of being caught shirking more than deemed acceptable. When employees are allowed to leave the firm, a hiring process for new employees must also be modelled. By selecting employees according to their value types, the management would have another management instrument that might be crucial for performance. Third, the management is modelled as an abstract entity, but in reality it consists of individuals with values and behaviours as well. Modelling managers as individuals could also have a direct impact on corporate culture, either through their function as role models or through direct communication efforts. Finally, in the current model, both monitoring and the updating of the management strategy are costless. However, monitoring efforts would cause a direct resource cost impacting profitability, while changing the management strategy would require efforts from the managers and entail learning and implementation costs. These might also depend on personal characteristics of the managers and on power relations within the firm.

## 5 Conclusion

Our paper shows that a firm can be viewed and modelled as a complex adaptive system. Due to the adaptation of employees to the management strategy, the emergence of social norms and self-organisation within the workforce, the management's ability to control the firm is limited. We define corporate culture as the endogenous social norms that regulate employees' shirking and cooperation behaviour, which in turn has a direct impact on the firm's output. Employees' responses to social norms are driven by their values. The management tries to influence employees' behaviour directly through monitoring and monetary incentives, but does not consider the indirect effects on corporate culture. This implies that management policies can have unintended side effects which counteract the direct ones. The presented model provides plenty of opportunities for future extensions, e.g. by adding personal skills and knowledge, a formal hierarchy and dependencies in the organisation, costs of monitoring and strategy changes, or fluctuations in the workforce. We show that firms whose management adopts extreme adaptation styles for its management strategy have higher profitability than firms that choose intermediate or moderate adaptation styles. Firms in which adaptation occurs either very frequently (i.e. daily) or very infrequently (i.e. yearly) have higher cumulated profitability at the end of the simulations.
The different adaptation styles have diverse effects on employees with different values and hence on the endogenous corporate culture. We find that adaptation of the management style leads to a long-run decrease in monitoring and an increased use of group performance rewards. The decrease in monitoring can be interpreted as the endogenous emergence of trust of the management in its employees. Frequent adaptation with a fast decrease of monitoring has a strong positive effect on the satisfaction and the performance of employees who are self-directed and open to change. Due to their connectedness inside the firm's social network, they are drivers of corporate culture. However, we also find that active adaptation of the management's strategy is always inferior, with regard to a firm's profitability, to a non-adapting management style that is tailored to fit the value composition of the workforce. In firms with non-adapting management, the self-organisation of corporate culture and its effect on employee behaviour are not disturbed.
2305.03555
Contrastive Graph Clustering in Curvature Spaces
Graph clustering is a longstanding research topic, and has achieved remarkable success with the deep learning methods in recent years. Nevertheless, we observe that several important issues largely remain open. On the one hand, graph clustering from the geometric perspective is appealing but has rarely been touched before, as it lacks a promising space for geometric clustering. On the other hand, contrastive learning boosts the deep graph clustering but usually struggles in either graph augmentation or hard sample mining. To bridge this gap, we rethink the problem of graph clustering from geometric perspective and, to the best of our knowledge, make the first attempt to introduce a heterogeneous curvature space to graph clustering problem. Correspondingly, we present a novel end-to-end contrastive graph clustering model named CONGREGATE, addressing geometric graph clustering with Ricci curvatures. To support geometric clustering, we construct a theoretically grounded Heterogeneous Curvature Space where deep representations are generated via the product of the proposed fully Riemannian graph convolutional nets. Thereafter, we train the graph clusters by an augmentation-free reweighted contrastive approach where we pay more attention to both hard negatives and hard positives in our curvature space. Empirical results on real-world graphs show that our model outperforms the state-of-the-art competitors.
Li Sun, Feiyang Wang, Junda Ye, Hao Peng, Philip S. Yu
2023-05-05T14:04:52Z
http://arxiv.org/abs/2305.03555v1
# Congregate: Contrastive Graph Clustering in Curvature Spaces

###### Abstract

Graph clustering is a longstanding research topic, and has achieved remarkable success with the deep learning methods in recent years. Nevertheless, we observe that several important issues largely remain open. On the one hand, graph clustering from the geometric perspective is appealing but has rarely been touched before, as it lacks a promising space for geometric clustering. On the other hand, contrastive learning boosts the deep graph clustering but usually struggles in either graph augmentation or hard sample mining. To bridge this gap, we rethink the problem of graph clustering from the geometric perspective and, to the best of our knowledge, make the first attempt to _introduce a heterogeneous curvature space to the graph clustering problem_. Correspondingly, we present a novel end-to-end contrastive graph clustering model named Congregate, addressing geometric graph clustering with Ricci curvatures. To support geometric clustering, we construct a theoretically grounded Heterogeneous Curvature Space where deep representations are generated via the product of the proposed _fully Riemannian_ graph convolutional nets. Thereafter, we train the graph clusters by an _augmentation-free_ reweighted contrastive approach where we pay more attention to _both hard negatives and hard positives_ in our curvature space. Empirical results on real-world graphs show that our model outperforms the state-of-the-art competitors.

## 1 Introduction

Graph clustering aims to group nodes into different clusters so that the intra-cluster nodes share higher similarity than the inter-cluster ones, and it receives continuous research attention [20]. The state-of-the-art clustering performance on graphs has been achieved by deep clustering methods in recent years [13, 14, 15]. Meanwhile, we find that several important issues in deep graph clustering still largely remain open.

_The first issue concerns geometric graph clustering._ In the literature, classic concepts such as modularity [14], conductance [13] and motifs [15] are frequently revisited. Little effort has been devoted to clustering from a geometric perspective. In Riemannian geometry, **Ricci curvatures** on the edges can help determine the cluster boundary [21], thereby revealing the density and clustering behavior among the nodes. However, graph clustering has rarely been touched in Riemannian geometry, since _it lacks a promising Riemannian space for graph clustering_. Most existing graph representation spaces present a single curvature radius, independent of nodes/edges [23, 1, 16], and do not allow a closer look at the various curvatures needed for graph clustering. Also, typical clustering algorithms in Euclidean space (e.g., \(K\)-means) cannot be directly applied as an alternative, due to the inherent difference in geometry. Consequently, this calls for a new Riemannian curvature space _supporting fine-grained curvature modeling for geometric clustering_.

_The second issue concerns unsupervised learning._ Deep models are typically trained with supervision, while graph clustering is unsupervised by nature. Recently, contrastive clustering without external supervision has drawn considerable attention [17, 18, 19]. _In the line of contrastive graph clustering, the issues of augmentation and hard samples remain unclear in general._ Unlike the easily obtained augmentations on images, graph augmentation is nontrivial [12].
In addition, the noise injected in this process usually requires careful treatment to avoid misleading the graph clustering [10]. Robinson _et al._ [22] point out the hardness unawareness of typical loss functions such as InfoNCE. Hard negative samples have been shown to be effective for graph contrastive learning [20], but little effort has been made on their counterpart, _hard positive samples_. In fact, the hard positives in our context are the border nodes of a cluster, and they play a crucial role in clustering performance. Unfortunately, hard sample mining in curvature spaces largely remains open.

Motivated by the observations above, we rethink the problem of graph clustering from the geometric perspective, and make the first attempt to address graph clustering in a novel **Curvature Space**, _rather than traditional single curvature ones_, with an advanced contrastive loss.

**Our Work.** To this end, we propose a novel end-to-end contrastive graph clustering model in curvature spaces (Congregate), where we approach graph clustering via geometric clustering with Ricci curvatures, so that positive Ricci curvature groups the nodes while negative Ricci curvature departs them, in the spirit of the famous Ricci flow. To address the fine-grained curvature modeling for graph clustering (_the first issue_), we introduce a novel _Heterogeneous Curvature Space_, which is a key innovation of our work. It is designed as the product of learnable factor manifolds and multiple free coordinates. We prove that the proposed space allows for different curvatures on different regions, and that the fine-grained node curvatures can be inferred to accomplish curvature modeling. Accordingly, we generate deep representations via the product of Graph Convolutional Nets (GCNs), where a _fully Riemannian_ GCN is designed to address the inferior performance caused by tangent spaces. To address the unsupervised learning (_the second issue_), we propose a weighted geometric contrastive approach in our curvature space. On the one hand, our approach is _free of augmentation_, as we contrast across the geometric views generated from the proposed heterogeneous curvature space itself. On the other hand, we equip the Node-to-Node and Node-to-Cluster contrastive losses with a novel dual reweighting to train the clusters. In this way, we pay more attention to _both hard negatives and hard positives_ when maximizing intra-cluster similarity and minimizing inter-cluster similarity. To sum up, the noteworthy contributions are listed below:

* _Problem_. We rethink graph clustering from a geometric perspective. To the best of our knowledge, we are the first to introduce the heterogeneous curvature space, supporting fine-grained curvature modeling, to the problem of graph clustering.
* _Methodology_. We propose an end-to-end Congregate free of graph augmentation, in which we approach geometric graph clustering with a reweighted contrastive loss in the proposed _heterogeneous curvature space_, paying attention to hard positives and hard negatives.
* _Experiments_. We evaluate the superiority of our model against \(19\) strong competitors, examine the proposed components by an ablation study, and further discuss why Ricci curvature works.

## 2 Preliminaries

In this section, we first introduce the necessary fundamentals of Riemannian geometry for better understanding our work, and then formulate the problem studied in this paper. In short, _we are interested in end-to-end graph clustering in a novel curvature space_.
### Riemannian Geometry

**Manifold.** A Riemannian manifold \((\mathcal{M},g)\) is a smooth manifold \(\mathcal{M}\) endowed with a Riemannian metric \(g\). Every point \(x\in\mathcal{M}\) is associated with a Euclidean-like _tangent space_ \(\mathcal{T}_{x}\mathcal{M}\) on which the metric \(g\) is defined. The _exponential map_ projects from the tangent space onto the manifold, and the _logarithmic map_ does the inverse [11].

**Curvature.** Each point \(x\) in the manifold is coupled with a curvature \(c_{x}\), describing how the space around \(x\) deviates from being flat, and a corresponding curvature radius \(\frac{1}{|c_{x}|}\). When \(c_{x}\) is equal everywhere in the manifold, it induces a **homogeneous** curvature space (a.k.a. constant curvature space) with the simplified notation of a scalar curvature \(c\). Concretely, it is said to be _hyperbolic_ \(\mathbb{H}\) if \(c<0\), and _hyperspherical_ \(\mathbb{S}\) if \(c>0\); _Euclidean_ space \(\mathbb{R}\) is the special case with \(c=0\). On the contrary, a **heterogeneous** curvature space refers to a manifold whose curvatures on different regions are not the same, which is a more practical yet challenging case.

### Problem Formulation

In this paper, we consider node clustering on attributed graphs. An attributed graph is described as a triplet \(G=(\mathcal{V},\mathcal{E},\mathbf{X})\), where \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}\) is the set of \(N\) nodes, \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) is the edge set, and \(\mathbf{X}\in\mathbb{R}^{N\times F}\) is the attribute matrix. Let \(K\) denote the number of node clusters. The node-to-cluster assignment is described by the _cluster membership_ vector \(\boldsymbol{\pi}_{i}\in\mathbb{R}^{K}\) attached to node \(v_{i}\). \(\boldsymbol{\pi}_{i}\) is a stochastic vector adding up to \(1\), whose \(k\)-th element \(\boldsymbol{\pi}_{ik}\) is the probability of \(v_{i}\) belonging to cluster \(k\). Now, we formulate the problem of Geometric Graph Clustering in a Generic Curvature Space.

**Problem Definition.** _Given \(G=(\mathcal{V},\mathcal{E},\mathbf{X})\), the goal is to learn an encoder \(f:v_{i}\rightarrow[\boldsymbol{z}_{i},\boldsymbol{\pi}_{i}],\forall v\in\mathcal{V}\) that 1) directly outputs the cluster membership \(\boldsymbol{\pi}_{i}\) (end-to-end), so that nodes are more similar to those grouped in the same cluster than to nodes in different clusters, and 2) places the node encodings in the generic curvature space, \(\boldsymbol{z}_{i}\in\mathcal{M}\), supporting geometric graph clustering._

In contrast to prior works, we rethink the problem of graph clustering from the geometric perspective, and make the first attempt to study graph clustering in a novel _Curvature Space_, rather than traditional single curvature ones.

**Notations.** The lowercase \(x\), boldfaced \(\boldsymbol{x}\) and uppercase \(\mathbf{X}\) denote a scalar, vector and matrix, respectively.

## 3 Methodology: Congregate

We propose an end-to-end contrastive graph clustering model (Congregate) where _we introduce the first curvature space for graph clustering_, a key innovation of our work. In brief, we directly learn the node clusters by training randomly initialized centroids \(\{\boldsymbol{\phi}_{k}\}_{k=1,\cdots,K}\) in a novel curvature space, where \(\boldsymbol{\phi}_{k}\) is the centroid of cluster \(k\).
The soft assignment of node \(v_{i}\) to cluster \(k\) is given as \(\pi_{ik}=Normalize(\delta(\boldsymbol{z}_{i},\boldsymbol{\phi}_{k}))\), where the similarity is \(\delta(\boldsymbol{z}_{i},\boldsymbol{\phi}_{k})=\exp(-d_{\mathcal{P}}(\boldsymbol{z}_{i},\boldsymbol{\phi}_{k}))\) and \(d_{\mathcal{P}}\) is the distance metric in our curvature space. Softmax normalization is applied so that \(\boldsymbol{\pi}_{i}\) adds up to \(1\). We illustrate our model in Figure 1. Concretely, we present a geometric clustering approach with Ricci curvatures (**Sec 3.1**), introduce the novel heterogeneous curvature space (**Sec 3.2**), and train the cluster centroids by the proposed reweighted contrastive loss in our curvature space (**Sec 3.3**).

### Geometric Clustering with Ricci Curvature

In Congregate, we address graph clustering from a geometric perspective, more concretely, through _the notion of Ricci curvature_, and formulate a novel geometric clustering loss. We first discuss why Ricci curvature clusters nodes, beginning with its definition [15, 16, 17]: given a graph with mass distribution \(m_{i}^{\lambda}(\cdot)\) on \(v_{i}\)'s neighbor nodes, the Ricci curvature \(Ric(i,j)\) of edge \((v_{i},v_{j})\) is defined as

\[Ric(i,j)=1-\frac{W(m_{i}^{\lambda},m_{j}^{\lambda})}{d_{G}(v_{i},v_{j})}, \tag{1}\]

where \(W(m_{i}^{\lambda},m_{j}^{\lambda})\) is the Wasserstein distance between the mass distributions on the nodes, with \(m_{i}^{\lambda}(\cdot)\) defined as

\[m_{i}^{\lambda}\left(v_{j}\right)=\begin{cases}\lambda&\text{if }v_{j}=v_{i}\\ \frac{1-\lambda}{degree_{i}}&\text{if }v_{j}\in\mathcal{N}_{i},\end{cases} \tag{2}\]

where \(d_{G}\) is the length of the shortest path on the graph, and \(\lambda\) is a control parameter. The intuition is that _the Ricci curvature of an edge describes the overlap extent between the neighborhoods of its two end nodes, and thus signifies the density among nodes._ Specifically, if \(v_{i}\) and \(v_{j}\) belong to different clusters, it is costly to move the distribution \(m_{i}^{\lambda}\) to \(m_{j}^{\lambda}\) due to the few common neighbors. The less overlapped neighborhoods present a large \(W(m_{i}^{\lambda},m_{j}^{\lambda})\) and a negative \(Ric(i,j)\). On the contrary, intra-cluster edges are mostly positively curved, and the nodes within the cluster are densely connected.

With the observation above, we connect the Ricci curvature on edges to the density among the nodes. The intra-cluster density is formulated by summing the \(Ric(i,j)\) whose end nodes belong to the same cluster,

\[D_{intra}=\frac{1}{|\mathcal{E}|}\sum\nolimits_{i,j}\sum\nolimits_{k=1}^{K}Ric(i,j)\pi_{ik}\pi_{jk}. \tag{3}\]

Similarly, the inter-cluster density is given as

\[D_{inter}=\frac{1}{|\mathcal{E}|K}\sum\nolimits_{i,j}\sum\nolimits_{k_{1}\neq k_{2}}Ric(i,j)\pi_{ik_{1}}\pi_{jk_{2}}. \tag{4}\]

Consequently, the Ricci loss is defined as follows,

\[\mathcal{L}_{Ric}=\alpha_{0}D_{inter}-D_{intra}, \tag{5}\]

where \(\alpha_{0}\) is a weighting coefficient. The rationale of our formulation is that we maximize the node density within each cluster while minimizing the density across different clusters.

**Connection to the Famous Ricci Flow.** In differential geometry, the Ricci flow approach divides a smooth manifold into different regions based on the Ricci curvature. The regions of large positive curvature shrink, whereas regions of very negative curvature spread out [3]. In analogy to the smooth manifold, we divide a graph into different node clusters where _positive Ricci curvature groups the nodes and negative Ricci curvature departs them._
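To ground Eqs. (1)-(2), the following is a minimal, didactic Python sketch of the Ollivier-Ricci curvature of a single edge, assuming `networkx` and `scipy` are available; the function and variable names are ours, and optimized implementations (e.g., the GraphRicciCurvature package) should be preferred in practice.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, i, j, lam=0.5):
    """Eq. (1): Ric(i, j) = 1 - W(m_i, m_j) / d_G(i, j), with the lazy
    random-walk masses of Eq. (2). Assumes both nodes have degree >= 1."""
    def mass(v):
        nbrs = list(G.neighbors(v))
        return [v] + nbrs, [lam] + [(1 - lam) / len(nbrs)] * len(nbrs)

    src, p = mass(i)
    dst, q = mass(j)
    # ground costs: shortest-path distances between the two supports
    cost = np.array([[nx.shortest_path_length(G, u, v) for v in dst] for u in src])
    # Wasserstein-1 distance as a transport linear program
    n, m = len(src), len(dst)
    A_eq = []
    for r in range(n):                      # row marginals equal p
        row = np.zeros(n * m); row[r * m:(r + 1) * m] = 1; A_eq.append(row)
    for c in range(m):                      # column marginals equal q
        col = np.zeros(n * m); col[c::m] = 1; A_eq.append(col)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(p + q),
                  bounds=(0, None), method="highs")
    return 1 - res.fun / nx.shortest_path_length(G, i, j)

# e.g., on the karate-club graph intra-community edges tend to be positive
print(ollivier_ricci(nx.karate_club_graph(), 0, 1))
```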
Ni _et al._ [2019] and Sia _et al._ [2019] leverage Ricci curvatures to group nodes, but they do not consider end-to-end clustering in a curvature space, which is essentially different from our setting. We are the first to introduce a curvature space to the problem of graph clustering, to the best of our knowledge.

### Constructing Heterogeneous Curvature Space

We are facing a challenging task: constructing a new curvature space for geometric graph clustering. Most existing graph curvature spaces _present a single curvature radius_ (either the typical hyperbolic, spherical and Euclidean spaces or the recent ultrahyperbolic and quotient manifolds [11, 12]). However, rather than a single curvature, geometric clustering requires a closer look at the various fine-grained curvatures on the graph. A core contribution of our work is that we introduce a novel _heterogeneous curvature space_, bridging this gap. In a nutshell, it is a product space of learnable factor manifolds and multiple free coordinates, as shown in Fig 1 (b).

**A Novel Product Manifold** We introduce the intuition of our idea before the formal theory. The graph curvature spaces above are restricted by a fixed norm, thus yielding a single curvature radius. We enrich the curvatures by endowing a single radius space with **multiple free coordinates** that are not subject to any norm restriction. (A more theoretical rationale based on rotational symmetry [11] is given in the Appendix.) Our heterogeneous curvature space \(\mathcal{P}_{H}\) is constructed as follows,

\[\mathcal{P}_{H}=\otimes_{m=0}^{M}\mathcal{M}_{m}^{c_{m},d_{m}},\ \mathcal{M}_{0}^{c_{0},d_{0}}:=\mathbb{R}^{d_{0}},c_{0}=0, \tag{6}\]

where \(\otimes\) denotes the Cartesian product. It is a product of \(M\) _restricted factors_ and _a free factor of \(d_{0}\) free coordinates_. In the product space, a point \(\mathbf{z}\in\mathcal{P}_{H}\) is thus expressed as the concatenation of its factors \(\mathbf{z}^{m}\in\mathcal{M}_{m}^{c_{m},d_{m}}\), with the combinational distance metric \(d_{\mathcal{P}}^{2}(\mathbf{x},\mathbf{y})=\sum_{m}d_{c_{m}}^{2}(\mathbf{x}^{m},\mathbf{y}^{m})\).

_A restricted factor \(\mathcal{M}_{m}^{c_{m},d_{m}}\) is defined on the manifold_

\[\left\{\mathbf{z}=\left[\begin{array}{c}z_{t}\\ \mathbf{z}_{s}\end{array}\right]\left|\left\langle\mathbf{z},\mathbf{z}\right\rangle_{c_{m}}=\frac{1}{c_{m}},\ z_{t}\in\mathbb{R},\mathbf{z}_{s}\in\mathbb{R}^{d_{m}}\right.\right\}, \tag{7}\]

_with the metric inner product \(\left\langle\mathbf{z},\mathbf{z}\right\rangle_{c_{m}}=sgn(c_{m})z_{t}^{2}+\mathbf{z}_{s}^{\top}\mathbf{z}_{s}\), where \(sgn\) is the sign function._ \(c_{m}\) and \(d_{m}\) denote the curvature and dimension, respectively. The induced norm restriction is given as \(\left\|\mathbf{z}\right\|_{c_{m}}^{2}=\left\langle\mathbf{z},\mathbf{z}\right\rangle_{c_{m}}\). \(z_{t}\) is the \(1\)st dimension, and is usually termed the \(t\)-dimension. The north pole is \(\mathbf{0}=(|c_{m}|^{-\frac{1}{2}},0,\cdots,0)\). The closed-form distance \(d_{c_{m}}\), logarithmic map \(log_{\mathbf{z}}^{c_{m}}\) and exponential map \(exp_{\mathbf{z}}^{c_{m}}\) are derived in Skopek _et al._ [2020]. _The free factor \(\mathbb{R}^{d_{0}}\)_ looks Euclidean-like, but in fact we inject the rotational symmetry into it.

Figure 1: Illustration of Congregate.
(a) We address graph clustering from a geometric perspective with Ricci curvatures. (b) We construct a novel curvature space where we generate deep representations via the product of the proposed _fRGCN_s. (c) Our model is trained by a reweighted contrastive loss across geometric views (red/magenta/blue), free of augmentation. (d) We obtain clustering results in an end-to-end fashion.

The closed-form distance \(d_{0}\) is given in [14]. We do not use its logarithmic/exponential maps in our model.

We prove that the proposed \(\mathcal{P}_{H}\) has heterogeneous curvatures, i.e., it allows for different curvatures on different regions. Supporting curvature heterogeneity is the foundation of geometric clustering. We start with the concept below.

**Definition (Diffeomorphism [13]).** _Given two manifolds \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), a smooth map \(\varphi:\mathcal{M}_{1}\rightarrow\mathcal{M}_{2}\) is referred to as a diffeomorphism if \(\varphi\) is bijective and its inverse \(\varphi^{-1}\) is also smooth. \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) are said to be diffeomorphic, denoted \(\mathcal{M}_{1}\simeq\mathcal{M}_{2}\), if there exists a \(\varphi\) connecting them._

**Proposition 1** (Curvature Heterogeneity).: \(\forall d_{0}>1,\forall c_{m}\)_, there exists a diffeomorphism \(\mathcal{P}_{H}\simeq(\otimes_{m=1}^{M}\mathcal{M}_{m}^{c,d_{m}}\otimes\mathcal{M}^{0,d})\otimes\mathbb{R}_{S}\) where a point \(\mathbf{z}_{i}\)'s curvature is a map \(\psi((\mathbf{z}_{i})_{|S|},c_{1},\cdots,c_{M})\) w.r.t. its location, with the differential operator_

\[\frac{-2\partial_{SS}^{2}\rho}{\rho}+\frac{1-(\partial_{S}\rho)^{2}}{\rho^{2}}, \tag{8}\]

_for some smooth \(\rho\), where \((\mathbf{z}_{i})_{|S|}\) is the coordinate of \(\mathbb{R}_{S}\), \(\mathcal{M}^{0,d}\otimes\mathbb{R}_{S}=\mathbb{R}^{d_{0}}\) and \(\mathbb{R}_{S}\) is the axis for rotational symmetry._

Proof.: Please refer to the Appendix.

**Fine-grained Curvature Modeling for Graph Clustering** Here, we derive the fine-grained node-level curvature in our product space. With the definition of _Diffeomorphism_ above and _Proposition 1_, the curvature \(c_{i}\) of \(\mathbf{z}_{i}\in\mathcal{P}_{H}\) can be derived from the map \((\varphi\circ\psi)((\mathbf{z}_{i})_{|S|},c_{1},\cdots,c_{M})\) and the differential operator on \(\rho\). That is, \(\mathbf{z}_{i}\)'s curvature is inferred via a map of the curvatures of the factor manifolds \(c_{1},\cdots,c_{M}\) and its coordinate of rotational symmetry \((\mathbf{z}_{i})_{|S|}\). In our construction, \((\mathbf{z}_{i})_{|S|}\) is given by the \(1\)st dimension of \(\mathbf{z}_{i}\)'s free factor, \((\mathbf{z}_{i}^{0})_{[1]}\). We employ a multilayer perceptron (MLP) to approximate the map. The estimated curvature \(\bar{c}_{i}\) is given as

\[\bar{c}_{i}=MLP([(\mathbf{z}_{i}^{0})_{[1]},c_{1},\cdots,c_{M}]^{\top}). \tag{9}\]

In the graph domain, the node curvature \(Ric(i)\) is defined by averaging the Ricci curvatures in its neighborhood, in analogy to tracing around the tangent space of the manifold. That is, the node-level curvature on the graph is formulated as \(Ric(i)=\frac{1}{degree_{i}}\sum_{j\in\mathcal{N}_{i}}Ric(i,j)\), where \(degree_{i}\) is the degree of \(v_{i}\) and \(\mathcal{N}_{i}\) denotes the \(1\)-hop neighborhood of node \(i\).
Then, we propose a node-level curvature consistency loss,

\[\mathcal{L}_{Curv}=\frac{1}{N}\sum\nolimits_{i}|Ric(i)-\bar{c}_{i}|^{2}, \tag{10}\]

so that _the curvatures of the factor manifolds are jointly learnt with the model via the fine-grained curvature modeling._ Thus far, we have constructed the heterogeneous curvature space modeling the fine-grained curvatures of the graph. Thereby, _the constructed curvature space supports geometric graph clustering with the Ricci loss, which requires a closer look at the various Ricci curvatures on the graph_ (Eqs. 3-5).

**Remarks.** The advantages of our design are: 1) \(\mathcal{P}_{H}\) supports node-level curvature modeling for geometric clustering, and its factors have learnable curvatures, different from the product manifolds in Gu _et al._ [19] and Wang _et al._ [2021]. 2) \(\mathcal{P}_{H}\) as a whole owns closed-form expressions of the geometric metrics, inherited from its factor manifolds. 3) \(\mathcal{P}_{H}\) decomposes itself into \((M+1)\) _different geometric views_ corresponding to each factor (i.e., \(M\) _restricted views_ and \(1\) _free view_).

**Generate Deep Representations in the Product Manifold** Thanks to the product construction, encoding in the heterogeneous curvature space is transformed into encoding in each factor manifold. Most Riemannian GCNs involve the tangent space outside the original manifold, and recent studies observe the inferior performance of such tangential methods [15]. To bridge this gap, we design a _fully Riemannian_ GCN (_fRGCN_) for the restricted factor \(\mathcal{M}^{c,d}\), whose novelty lies in that _all the operations are fully Riemannian for any \(c\)_, i.e., no tangent space is involved. We design the manifold-preserving operators of _fRGCN_ as follows.

**Feature Transformation.** First, we formulate a generalized Lorentz Transformation (\(gLT\)) for dimension transformation, inspired by the classic LT. The transform \(\mathcal{M}^{c,d_{m}}\rightarrow\mathcal{M}^{c,d_{n}}\) is done via matrix left-multiplication with the transform matrix

\[gLT_{\mathbf{z}}^{c,d_{m}\to d_{n}}(\mathbf{W})=\left[\begin{array}{cc}w_{t}&\mathbf{0}^{\top}\\ \mathbf{0}&\mathbf{W}\end{array}\right]. \tag{11}\]

Recall that \(\mathbf{z}=[z_{t}\ \mathbf{z}_{s}]^{\top}\in\mathcal{M}^{c,d_{m}}\). In \(gLT\), \(w_{t}\) is responsible for scaling \(z_{t}\) while \(\mathbf{W}\) transforms \(\mathbf{z}_{s}\). We derive the closed-form \(t\)-scaling as \(w_{t}=\frac{1}{z_{t}}\sqrt{sgn(c)\left(\frac{1}{c}-\ell(\mathbf{W},\mathbf{z}_{s})\right)}\) with \(\ell(\mathbf{W},\mathbf{z}_{s})=\left\|\mathbf{W}\mathbf{z}_{s}\right\|^{2}\). Now, we prove that the feature transformed with \(gLT\) resides in the target manifold.

**Proposition 2** (Manifold Preserving).: \(\forall\mathbf{z}\in\mathcal{M}^{c,d_{m}},\forall c\), \(gLT_{\mathbf{z}}^{c,d_{m}\to d_{n}}(\mathbf{W})\mathbf{z}\in\mathcal{M}^{c,d_{n}}\) holds for any \(\mathbf{W}\in\mathbb{R}^{d_{n}\times d_{m}}\).

Proof.: Please refer to the Appendix.

Note that the classic LT works with a fixed dimension. Recently, Dai _et al._ [2021] optimize with an orthogonal constraint, which is unfriendly to deep learning, and Chen _et al._ [2022] restrict themselves to negative curvature. That is, none of them satisfies our need. Second, we add a bias to \(gLT\) and obtain the linear layer in the manifold of any curvature \(c\) as follows,

\[LL^{c}(\mathbf{W},\mathbf{z},\mathbf{b})=\left[\begin{array}{c}w_{t}z_{t}\\ \mathbf{W}\mathbf{z}_{s}+\mathbf{b}\end{array}\right], \tag{12}\]

where \(\mathbf{b}\) is the bias and \(\ell(\mathbf{W},\mathbf{z}_{s})=\left\|\mathbf{W}\mathbf{z}_{s}+\mathbf{b}\right\|^{2}\). It is easy to check that \(LL^{c}\) is _manifold preserving_.
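As a sanity check on Proposition 2, here is a small numpy sketch of the \(gLT\)-based linear layer (Eqs. 11-12); the names are ours, and the radicand under the \(t\)-scaling is guaranteed non-negative only for \(c<0\), so the demo uses a hyperbolic factor.

```python
import numpy as np

def metric_inner(u, v, c):
    """<u, v>_c = sgn(c) * u_t * v_t + u_s^T v_s."""
    return np.sign(c) * u[0] * v[0] + np.dot(u[1:], v[1:])

def linear_layer(z, W, b, c):
    """Eqs. (11)-(12): map z in M^{c,d_m} to M^{c,d_n}; the t-dimension is
    rescaled by the closed-form w_t so the norm constraint is preserved."""
    z_t, z_s = z[0], z[1:]
    out_s = W @ z_s + b                  # spatial part: W z_s + b
    ell = np.dot(out_s, out_s)           # l(W, z_s) = ||W z_s + b||^2
    w_t = np.sqrt(np.sign(c) * (1.0 / c - ell)) / z_t
    return np.concatenate([[w_t * z_t], out_s])

# demo on a random point of the c = -1 factor (z_t is fixed by the constraint)
rng = np.random.default_rng(0)
c, d_m, d_n = -1.0, 4, 3
z_s = rng.normal(size=d_m)
z = np.concatenate([[np.sqrt(1.0 / abs(c) + z_s @ z_s)], z_s])
W, b = rng.normal(size=(d_n, d_m)), rng.normal(size=d_n)
z_new = linear_layer(z, W, b, c)
assert np.isclose(metric_inner(z_new, z_new, c), 1.0 / c)  # Proposition 2
```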
Second, we add the bias for \(gLT\) and obtain the linear layer in the manifold of any curvature \(c\) as follows, \[LL^{c}(\mathbf{W},\mathbf{z},\mathbf{b})=\left[\begin{array}{c}w_{t}z_{t}\\ \mathbf{W}\mathbf{z}_{s}+\mathbf{b}\end{array}\right], \tag{12}\] where \(\mathbf{b}\) is the bias and the \(t\)-scaling \(w_{t}\) is computed as before with \(\ell(\mathbf{W},\mathbf{z}_{s})=\left\|\mathbf{W}\mathbf{z}_{s}+\mathbf{b}\right\|^{2}\). It is easy to check that \(LL^{c}\) is _manifold preserving_.

**Attentive Aggregation.** The encoding of node \(i\) is updated as the weighted geometric centroid over the set \(\bar{\mathcal{N}}_{i}\) of the neighbors of \(i\) and itself, i.e., \(\mathbf{h}_{i}=\arg\min_{\mathbf{h}\in\mathcal{M}}\sum_{j\in\bar{\mathcal{N}}_{i}}\nu_{ij}d_{c}^{2}(\mathbf{h},\mathbf{h}_{j})\) for any \(c\), where \(\nu_{ij}\) denotes the attentive weight. For any \(c\), we derive the closed-form solution \(\mathbf{h}_{i}=AGG^{c}(\{\mathbf{h}_{j},\nu_{ij}\}|j\in\bar{\mathcal{N}}_{i})\), \[AGG^{c}(\{\mathbf{h}_{j},\nu_{ij}\}|j\in\bar{\mathcal{N}}_{i})=\frac{1}{\sqrt{|c|}}\frac{\sum_{j\in\bar{\mathcal{N}}_{i}}\nu_{ij}\mathbf{h}_{j}}{\left\|\sum_{j\in\bar{\mathcal{N}}_{i}}\nu_{ij}\mathbf{h}_{j}\right\|}. \tag{13}\] The attentive weight \(\nu_{ij}\) is the importance of \(j\) in the aggregation over \(\bar{\mathcal{N}}_{i}\). We define the attentive weights based on the distance in the manifold, \(\nu_{ij}=Softmax(-\tau d_{c}(\mathbf{h}_{j},\mathbf{h}_{i})-\gamma)\), where \(\tau\) is an inverse temperature and \(\gamma\) is an added bias. It is easy to check that the centroid in Eq. (13) lives in the manifold for any \(c\), and thus \(AGG^{c}\) is _manifold preserving_. Note that the Einstein midpoint formulates an arithmetic mean in the manifold but lacks a geometric interpretation, while the Fréchet mean elegantly generalizes the Einstein midpoint but does not offer any closed-form solution [1]. Our closed-form solution in Eq. (13), generalizing to any curvature, is the geometric centroid w.r.t. squared distance.

_The Free Factor._ The linear layer \(LL^{0}\) is obtained from \(LL^{c}\) by replacing the \(t\)-scaling with a free \(w_{t}\in\mathbb{R}\). Attentive aggregation is defined as \(AGG^{0}(\{\mathbf{h}_{j},\nu_{ij}\}|j\in\bar{\mathcal{N}}_{i})=\sum_{j\in\bar{\mathcal{N}}_{i}}\nu_{ij}\mathbf{h}_{j}\), where the attentive weights \(\nu_{ij}\) are computed based on the distance \(d_{0}\). Both are manifold preserving as there is no norm restriction in \(\mathbb{R}^{d_{0}}\).

### Learning by Reweighted Geometric Contrasting

In this subsection, we train the graph clusters with a contrastive loss in the proposed curvature space. Specifically, we propose a Reweighted Geometric Contrasting (RGC) approach, in which we contrast across different geometric views with a novel dual reweighting, as shown in Fig. 1(c).

**Augmentation-Free Geometric Contrast** Augmentation is nontrivial for graph contrastive learning and requires special design for clustering [14]. Instead, our Congregate is free of augmentation: we take advantage of the carefully designed \(\mathcal{P}_{H}\) for contrastive learning. Thanks to the product construction, \(\mathcal{P}_{H}\) itself owns different _geometric views_, as remarked in Sec. 3.2. The contrast strategy is that we contrast each restricted view in \(\mathcal{M}_{m}^{c_{m},d_{m}}\) with the free view in \(\mathbb{R}^{d_{0}}\), and vice versa. The remaining challenge is how to contrast between different manifolds, i.e., \(\mathcal{M}_{m}^{c_{m},d_{m}}\) and \(\mathbb{R}^{d_{0}}\).
The difference in both curvature and dimension blocks the application of typical similarity functions. We propose to bridge this gap by the \(gLT\) and the bijection \(\psi_{\mathcal{M}\rightarrow\mathbb{R}}\) of the _Diffeomorphism_. (Recall that \(gLT\) already provides an effective mathematical tool for dimension transformation.) Specifically, we introduce an image of the restricted view, \(\tilde{\mathbf{z}}^{m}\), that is comparable with the free view. First, we employ \(gLT\) to transform \(\mathbf{z}^{m}\) into \(\mathcal{M}^{c_{m},d_{0}-1}\), whose ambient space is \(\mathbb{R}^{d_{0}}\). Second, we apply the diffeomorphism bijection, and thus the image is given as follows, \[\tilde{\mathbf{z}}^{m}=\psi_{\mathcal{M}\rightarrow\mathbb{R}}(gLT_{\mathbf{z}^{m}}^{c_{m},d_{m}\rightarrow(d_{0}-1)}(\mathbf{W})\mathbf{z}^{m}), \tag{14}\] where the parameter \(\mathbf{W}\) characterizes \(gLT\), and \(log_{\mathbf{0}}^{c_{m}}(\cdot)\) is utilized as the bijection since its differentiable inverse exists, i.e., \(log_{\mathbf{0}}^{c_{m}}(exp_{\mathbf{0}}^{c_{m}}(\mathbf{z}))=\mathbf{z}\). Note that \(\tilde{\mathbf{z}}^{m}\in\mathbb{R}^{d_{0}}\). Then, we define the similarity as a bilinear critic with parameter \(\mathbf{S}\), \[Sim(\mathbf{z}^{m},\mathbf{z}^{0})=(\tilde{\mathbf{z}}^{m})^{\top}\mathbf{S}\mathbf{z}^{0}. \tag{15}\] Our formulation of Eq. (15) does not introduce an additional tangent space, and its advantage is examined in Sec. 4.2.

**Dual Reweighting in Curvature Space** A drawback of the popular InfoNCE loss is hardness unawareness (it treats hard and easy sample pairs equally), which limits the discriminative ability [12]. To address this issue, we propose a _dual reweighting_, paying more attention to both hard negatives and hard positives for contrastive learning in curvature space. First, we specify the hard samples in the context of graph clustering, where the cluster assignment offers pseudo labels. Intuitively, the nodes assigned to different clusters but sharing high similarity are referred to as _hard negatives_, while _the border nodes sharing low similarity to their cluster centroid are hard positives_. Second, we model the hardness by comparing the cluster assignment (pseudo label) and the representation similarity, and formulate the dual reweighting as follows, \[\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{z}_{j}^{0})=|\mathbf{\pi}_{i}^{\top}\mathbf{\pi}_{j}-Sim(\mathbf{z}_{i}^{m},\mathbf{z}_{j}^{0})|^{\beta}, \tag{16}\] where the control coefficient \(\beta\) is a positive integer, and \(\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{z}_{j}^{0})\) up-weights both hard positives and hard negatives while down-weighting the easy ones. Recently, Sun _et al._ [20] design a Riemannian reweighting for node embedding only and thus fail to consider clusters, while Liu _et al._ [2023] select hard positives in Euclidean space, whereas we need to handle different manifolds. Neither of them meets our need, which motivates our design of Eq. (16).

**Node-to-Node & Node-to-Cluster Contrasting** The RGC loss consists of Node-to-Node and Node-to-Cluster contrasting, where we contrast different geometric views with the dual reweighting and the \(Sim\) function in the generic curvature space.
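Before the losses are assembled, here is a minimal sketch of these two ingredients (PyTorch assumed; tensor names and shapes are ours): `bilinear_sim` computes Eq. (15) for a batch of pairs given the images \(\tilde{\mathbf{z}}^{m}\), and `dual_reweight` computes Eq. (16) from the soft cluster assignments \(\mathbf{\pi}\).

```python
import torch

def bilinear_sim(z_tilde_m, z0, S):
    """Eq. (15): Sim(z^m, z^0) = (z~^m)^T S z^0, row-wise over a batch.
    z_tilde_m, z0: (N, d0); S: (d0, d0) bilinear critic parameter."""
    return ((z_tilde_m @ S) * z0).sum(-1)

def dual_reweight(pi_i, pi_j, sim_ij, beta=2):
    """Eq. (16): |pi_i^T pi_j - Sim|^beta, up-weighting hard positives and
    hard negatives. pi_i, pi_j: (N, K) soft assignments; sim_ij: (N,)."""
    return ((pi_i * pi_j).sum(-1) - sim_ij).abs() ** beta
```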
First, we define the Node-to-Node contrast loss as follows, \[I(\mathbf{Z}^{m},\mathbf{Z}^{0})=-\sum\nolimits_{i=1}^{N}\log\frac{e^{\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{z}_{i}^{0})Sim(\mathbf{z}_{i}^{m},\mathbf{z}_{i}^{0})}}{\sum\nolimits_{j=1}^{N}e^{\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{z}_{j}^{0})Sim(\mathbf{z}_{i}^{m},\mathbf{z}_{j}^{0})}}. \tag{17}\] Second, we contrast the node encoding of one view with the cluster centroids of another view, and formulate the Node-to-Cluster contrast loss as follows, \[I(\mathbf{Z}^{m},\mathbf{\Phi}^{0})=-\sum\nolimits_{i=1}^{N}\log\frac{e^{\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{\theta}_{k_{i}}^{0})Sim(\mathbf{z}_{i}^{m},\mathbf{\theta}_{k_{i}}^{0})}}{\sum\nolimits_{k=1}^{K}e^{\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{\theta}_{k}^{0})Sim(\mathbf{z}_{i}^{m},\mathbf{\theta}_{k}^{0})}}, \tag{18}\] where node \(v_{i}\) is assigned to cluster \(k_{i}\). Here, in \(\mathcal{W}(\mathbf{z}_{i}^{m},\mathbf{\theta}_{k_{i}}^{0})\), the inner product term is simplified as \([\mathbf{\pi}_{i}]_{k_{i}}\), the probability of \(v_{i}\) being assigned to cluster \(k_{i}\). Thus, we have the RGC loss as follows, \[\mathcal{L}_{RGC}=\sum\nolimits_{m=1}^{M}\sum\nolimits_{\mathbf{X}\in\{\mathbf{Z}^{0},\mathbf{\Phi}^{0}\}}(I(\mathbf{Z}^{m},\mathbf{X})+I(\mathbf{X},\mathbf{Z}^{m})). \tag{19}\] In our curvature space, intra-cluster node similarity is maximized as such nodes positively contrast to the same centroid, while inter-cluster nodes are separated by negative contrast. Meanwhile, more attention is paid to the similar cluster centroids (_hard negatives_) and the nodes residing on the cluster border (_hard positives_), thanks to the dual reweighting of Eq. (16).

**The Overall Loss** of our model is finally defined as follows, \[\mathcal{J}=\mathcal{L}_{Ric}+\alpha_{1}\mathcal{L}_{Curv}+\alpha_{2}\mathcal{L}_{RGC}, \tag{20}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are weighting coefficients. We summarize the training process in Algo. 1. In this way, we train the cluster centroids end to end in the proposed curvature space.

**Complexity Analysis.** Eq. (19) is the most costly, yielding a computational complexity of \(O(2M|\mathcal{V}|^{2}+2MK|\mathcal{V}|)\). It is similar to typical contrastive methods [21, 22]. The Ricci curvatures only need to be computed once as a pre-processing step, and can be efficiently obtained similarly to Ni _et al._ [20]; Ye _et al._ [2020].

## 4 Experiment

In this section, we evaluate our model with 20 baselines on 4 public datasets, aiming to answer the following research questions (_RQs_):

* _RQ1_: How does the proposed Congregate perform?
* _RQ2_: What are the effects of the proposed components?
* _RQ3_: Why does _Ricci Curvature_ work?

### Experimental Setups

**Datasets & Baselines.** We choose \(4\) datasets, i.e., Cora and Citeseer [13], and the larger MAG-CS [15] and Amazon-Photo [11]. We focus on deep graph clustering with no labels available. Thus, both strong deep clustering methods (_DC_) and self-supervised learning methods (_SS_) are included as _Euclidean baselines_ for a comprehensive evaluation. There are \(13\) strong DC methods and \(5\) SS methods, summarized in Table 1. There exist few _Riemannian baselines_ (_R_). Note that recent Riemannian GNNs do not have clustering ability, as typical clustering algorithms cannot be directly applied/incorporated owing to the inherent difference in geometry. Instead, we choose a recent shallow model, _RicciCom_. We are the first to bridge Riemannian space and graph clustering to our knowledge.
**Evaluation Protocol.** We employ \(3\) popular evaluation metrics, i.e., Normalized Mutual Information (NMI), Adjusted Rand Index (ARI) and Accuracy (ACC) [13, 14, 15]. The number of clusters \(K\) is set as the number of real classes on each dataset. We perform \(10\) independent runs, and report the mean value with standard deviation for fair comparisons. For the encoding-clustering baselines, we apply \(K\)-means to obtain the results.

**Reproducibility.** Further details and code are provided at [https://github.com/CurvCluster/Congregate](https://github.com/CurvCluster/Congregate). If input features live in the Euclidean space, we use the inverse bijection \(\psi^{-1}_{\mathcal{M}\to\mathbb{R}}\) of Eq. (14) to map the Euclidean input to a factor manifold. In _fRGCN_, the convolutional layer is stacked twice. Parameters living in the factor manifolds are optimized via Riemannian Adam [12]. We utilize a \(2\)-layer MLP to approximate the fine-grained curvature. In the RGC loss, the hyperparameter \(\beta\) of the reweighting is set to \(2\) by default.

### Empirical Results

_RQ1: Main Results._ The clustering results on all the datasets in terms of NMI, ARI and ACC are reported in Table 1. Our Congregate is instantiated with \(4\) factor manifolds whose dimensionalities are \(\{32,32,16,16\}\), and it consistently achieves the best results among the \(19\) competitors. The reasons are that 1) we take advantage of the proposed curvature space and the consensus clustering from different geometric views, and 2) we jointly learn highly discriminative node encodings and cluster centroids with the proposed reweighted loss.

Table 1: Clustering results of all methods in terms of ACC, NMI and ARI (mean with standard deviation) on the Cora, Citeseer, MAG-CS and Amazon-Photo datasets.

_RQ2: Ablation Study._ We investigate how each proposed component contributes to the success of our Congregate: i) _fRGCN_ for modeling the graph fully Riemannianly, ii) \(\varphi\circ gLT\) for contrasting between different manifolds, and iii) the dual reweighting \(\mathcal{W}\) for paying attention to hard samples. To evaluate the effectiveness of _fRGCN_, we introduce a variant which replaces _fRGCN_ with a \(GAT^{c}\). Concretely, \(GAT^{c}\) generalizes GAT [21] to a manifold of curvature \(c\) with tangent spaces. We utilize the tangential methods for any \(c\) formulated in Skopek _et al._ [20]. To evaluate the effectiveness of \(\varphi\circ gLT\), we introduce a variant using \(\mathbf{T}log_{\mathbf{0}}^{c_{m}}\) instead, where the matrix \(\mathbf{T}\) is given for dimension transformation. It introduces an additional tangent space compared to the design in our model. To evaluate the effectiveness of \(\mathcal{W}\), we introduce two kinds of variants. The first variant (denoted as \(-pAware\)) removes the \(\mathcal{W}\) on the numerators of our RGC loss, thus keeping the attention to hard negatives only.
The second variant (denoted as \(-hard\)) eliminates all the \(\mathcal{W}\), resulting in an InfoNCE loss in Riemannian space without hardness awareness. In addition, we examine the effect of the number of factor manifolds. To this end, the variants above are instantiated in product spaces of \(4\) factors and \(5\) factors, respectively. We report the NMI and ACC of the clustering results on the Cora and Citeseer datasets in Table 2, and find that: **i)** Our Congregate beats \(-\)_fRGCN_ and \(-\varphi\circ gLT\). It shows that introducing additional tangent spaces tends to result in inferior clustering, and thus _testifies to the effectiveness of the fully Riemannian model_. **ii)** The product space of \(5\) factors outperforms that of \(4\) factors. It suggests that more factor manifolds may benefit the performance; the reason is that more factors give further flexibility for the fine-grained curvature modeling.
**iii)** The \(-pAware\) variant performs better than \(-hard\), and the proposed RGC loss is the best. It shows the importance of hard samples, and _more attention to hard positives (the border nodes) further helps the performance_, which is why our design pays more attention to both hard positives and hard negatives.

_RQ3: Ricci Curvature & Clustering._ We discuss why Ricci curvature works. Empirically, we further study the clustering capability of Ricci curvature compared with classic concepts (\(gCooL\) with refined modularity, _HostPool_ with motif conductance and _RicciCom_ with Ricci curvature). We examine the resulting clusters from a microscopic perspective via cluster density and entropy [11]. The density is \(\mathbb{E}_{k}[\frac{E_{k}}{V_{k}(V_{k}-1)}]\), where \(E_{k}\) and \(V_{k}\) are the number of edges and nodes in cluster \(k\). The entropy is \(-\mathbb{E}_{k}[\sum_{c}p_{k}(c)\log p_{k}(c)]\), where \(p_{k}(c)\) is the frequency of class (label) \(c\) occurring in cluster \(k\). A lower entropy means a better result, i.e., the cluster contains one major class. The results are visualized in Fig. 2. After a few hundred epochs, the Ricci methods achieve even better density than the modularity/conductance methods. _It shows the clustering capability of Ricci curvature, verifying our motivation_. Also, we have lower entropy than RicciCom. It is because we further introduce the novel curvature space, supporting fine-grained curvature modeling for graph clustering.

## 5 Related Work

### Deep Graph Clustering

In the literature, deep graph clustering methods are roughly divided into \(3\) categories regarding the learning paradigm. 1) Reconstructive methods provide a supervision signal by recovering graph information, and generate node clusters by applying or incorporating clustering methods [16, 17]. 2) Adversarial methods regulate the generative process by a discriminator in a min-max game [18, 19]. 3) Contrastive methods acquire discriminative representations without labels by pulling positive samples together while pushing negative samples apart [14, 15]. Meanwhile, deep methods are introduced to bipartite graphs [15], signed graphs [10, 11], temporal graphs [20], heterogeneous graphs [13], etc. Recently, He _et al._ [16] present a novel generative model with an EM algorithm; Fettal _et al._ [16] introduce a strong matrix optimization framework. Distinguishing from the prior studies, we rethink the problem of graph clustering from the geometric perspective.

### Riemannian Graph Learning

Recent years have witnessed the remarkable success achieved by Riemannian graph learning. As hyperbolic space is well aligned with hierarchical or power-law graphs [13], shallow models were first introduced [10], and hyperbolic GCNs with different formulations were then proposed [1, 18, 19, 15]. Beyond hyperbolic space, \(\kappa\)-GCN [1] extends GCN to constant-curvature spaces with the \(\kappa\)-stereographic model. Yang _et al._ [16] model the graph in the dual space of Euclidean and hyperbolic ones. Xiong _et al._ [16, 17] study graph learning on a kind of pseudo-Riemannian manifold, the ultrahyperbolic space. Law [16] introduces a quotient manifold for graph learning. Cruceru _et al._ [16] study the matrix manifolds of Riemannian spaces. Gu _et al._ [17]; Wang _et al._ [18] explore node embedding in product spaces.
Very recently, Giovanni _et al._ [16] investigate the rotational symmetry of the manifold, but do not consider fine-grained curvature modeling and learnable factors, different from our study. However, to the best of our knowledge, none of the existing studies focuses on graph clustering in Riemannian manifolds.

Table 2: Ablation study on the Cora and Citeseer datasets (mean with standard deviation).

| Factors | Variant | Cora ACC | Cora NMI | Citeseer ACC | Citeseer NMI |
|---|---|---|---|---|---|
| \(4\) | **Congregate** | **78.5** (±0.0) | **63.2** (±0.5) | **72.7** (±0.0) | **50.9** (±3.0) |
| \(4\) | \(-\)_fRGCN_ | 75.9 (±0.5) | 60.4 (±0.2) | 72.0 (±0.9) | 48.3 (±0.6) |
| \(4\) | \(-\varphi\circ gLT\) | 77.5 (±0.8) | 61.7 (±0.6) | 71.9 (±1.3) | 47.7 (±0.7) |
| \(4\) | \(-pAware\) | 77.2 (±0.2) | 62.1 (±0.8) | 70.3 (±0.5) | 49.0 (±4.9) |
| \(4\) | \(-hard\) | 76.8 (±0.1) | 61.5 (±0.3) | 69.8 (±0.2) | 48.9 (±0.5) |
| \(5\) | **Congregate** | **78.1** (±0.9) | **63.8** (±0.4) | **73.1** (±0.6) | **52.4** (±0.7) |
| \(5\) | \(-\)_fRGCN_ | 76.3 (±2.1) | 61.9 (±0.5) | 72.3 (±0.6) | 49.8 (±0.9) |
| \(5\) | \(-\varphi\circ gLT\) | 77.8 (±0.6) | 62.5 (±0.9) | 72.8 (±0.7) | 51.6 (±0.4) |
| \(5\) | \(-pAware\) | 77.3 (±2.3) | 63.0 (±0.3) | 71.2 (±1.1) | 51.2 (±1.6) |
| \(5\) | \(-hard\) | 76.5 (±1.3) | 62.2 (±0.7) | 70.6 (±0.5) | 49.5 (±0.6) |

Figure 2: Visualization of density and entropy of the clusters.

## 6 Conclusion

In this paper, we formulate the problem of geometric graph clustering, which is _the first to introduce a curvature space allowing for fine-grained curvature modeling to graph clustering_. We present an end-to-end Congregate built upon a novel heterogeneous curvature space that we construct for geometric graph clustering with Ricci curvatures. Accordingly, graph clusters are trained by an augmentation-free contrastive loss, where we pay more attention to both hard positives and hard negatives in our curvature space. The empirical results show the superiority of our model.

## 7 Acknowledgments

Thanks to the anonymous reviewers. The authors of this paper were supported in part by the National Natural Science Foundation of China under Grant 62202164, the National Key R&D Program of China through grant 2021YFB1714800, the S&T Program of Hebei through grant 21340301D and the Fundamental Research Funds for the Central Universities 2022MS018. Prof. Philip S. Yu is supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Correspondence to Li Sun and Hao Peng.
2304.14300
Learning Absorption Rates in Glucose-Insulin Dynamics from Meal Covariates
Traditional models of glucose-insulin dynamics rely on heuristic parameterizations chosen to fit observations within a laboratory setting. However, these models cannot describe glucose dynamics in daily life. One source of failure is in their descriptions of glucose absorption rates after meal events. A meal's macronutritional content has nuanced effects on the absorption profile, which is difficult to model mechanistically. In this paper, we propose to learn the effects of macronutrition content from glucose-insulin data and meal covariates. Given macronutrition information and meal times, we use a neural network to predict an individual's glucose absorption rate. We use this neural rate function as the control function in a differential equation of glucose dynamics, enabling end-to-end training. On simulated data, our approach is able to closely approximate true absorption rates, resulting in better forecasts than heuristic parameterizations, despite only observing glucose, insulin, and macronutritional information. Our work readily generalizes to meal events with higher-dimensional covariates, such as images, setting the stage for glucose dynamics models that are personalized to each individual's daily life.
Ke Alexander Wang, Matthew E. Levine, Jiaxin Shi, Emily B. Fox
2023-04-27T16:03:41Z
http://arxiv.org/abs/2304.14300v1
# Learning Absorption Rates in Glucose-Insulin Dynamics from Meal Covariates

###### Abstract

Traditional models of glucose-insulin dynamics rely on heuristic parameterizations chosen to fit observations within a laboratory setting. However, these models cannot describe glucose dynamics in daily life. One source of failure is in their descriptions of glucose absorption rates after meal events. A meal's macronutritional content has nuanced effects on the absorption profile, which is difficult to model mechanistically. In this paper, we propose to learn the effects of macronutrition content from glucose-insulin data and meal covariates. Given macronutrition information and meal times, we use a neural network to predict an individual's glucose absorption rate. We use this neural rate function as the control function in a differential equation of glucose dynamics, enabling end-to-end training. On simulated data, our approach is able to closely approximate true absorption rates, resulting in better forecasts than heuristic parameterizations, despite only observing glucose, insulin, and macronutritional information. Our work readily generalizes to meal events with higher-dimensional covariates, such as images, setting the stage for glucose dynamics models that are personalized to each individual's daily life.

## 1 Introduction

Type-1 diabetes is a chronic condition of glucose dysregulation that affects 9 million people around the world. Decades of research have produced dozens of glucose-insulin dynamics models in order to understand the condition and help diabetics manage their daily lives. These models are typically developed using physiological knowledge and validated in laboratory settings. However, these mechanistic models are incomplete; they are not flexible enough to fit observations outside of controlled settings, due to unmodelled variables, unmodelled dynamics, and external influences. As a result, these mechanistic models fail to fully describe an individual's glycemic response to external inputs like nutrition. Standard models, such as Dalla Man et al. [10], focus on the glycemic impact of carbohydrates in a meal--carbohydrates are broken down into glucose molecules, then absorbed into blood. However, these models typically ignore other macronutrients, such as fat, fiber, and protein, which are known to contribute substantially to the amount and timing of glucose absorption into the blood. Indeed, this phenomenon is the basis for the glycemic index of various foods. In reality, individual glycemic responses to nutrition go beyond such a simple characterization. For example, Zeevi et al. [28] identified multiple patient sub-groups with different glycemic responses to complex foods. In our paper, we propose a method that can leverage real-world nutrition and glucose-insulin measurements to improve the fidelity of existing mechanistic models. While we tailor this approach to the specific application of type-1 diabetes, we note that our methodology fits within a broad paradigm of hybrid modeling of dynamical systems [19; 22; 24; 27]. These approaches can improve mechanistic ODEs using flexible components that learn from observations of the system and its external controls.

## 2 Background on modelling glucose-insulin dynamics

Our paper builds on the tradition of modelling physiological dynamics via ordinary differential equations (ODEs) [5; 10; 16; 21; 26].
Traditional models consider ODEs of the form \(\dot{x}(t)=f(t,x(t))+u(t)\), where \(x\in\mathbb{R}^{n}\) denotes physiologic states, \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) encodes mechanistic knowledge of their interactions, and \(u:\mathbb{R}\to\mathbb{R}^{n}\) represents external time-varying inputs into the system. Significant effort has gone towards identifying \(u\) from insulin, exercise, and meal data, but \(u\) is typically represented via a gastrointestinal ODE model [9; 11] or via hand-chosen functional forms [14; 15; 20]. Both approaches for representing meals depend only on carbohydrate consumption and do not consider other macronutrient quantities. Our paper considers the minimal model of glucose-insulin dynamics by Bergman et al. [5]: \[\dot{G}(t)=-c_{1}[G(t)-G_{b}]-G(t)X(t)+u_{G}(t) \tag{1a}\] \[\dot{X}(t)=-c_{2}X(t)+c_{3}[I(t)-I_{b}] \tag{1b}\] \[\dot{I}(t)=-c_{4}[I(t)-I_{b}]+u_{I}(t) \tag{1c}\] where \(x=(G,X,I)\) and \(u=(u_{G},u_{I})\). Here, \(G:\mathbb{R}\to\mathbb{R}\) represents plasma glucose concentration, \(I:\mathbb{R}\to\mathbb{R}\) represents plasma insulin concentration, \(X:\mathbb{R}\to\mathbb{R}\) represents the effect of insulin on glucose, \(G_{b},I_{b}\in\mathbb{R}\) represent basal glucose and insulin levels, respectively, and \(c_{1},c_{2},c_{3},c_{4}\in\mathbb{R}\) represent rate constants for the interactions. Importantly, \(u_{G}:\mathbb{R}\to\mathbb{R}\) represents the appearance of glucose in the blood (e.g. absorbed from nutrition in the gut) and \(u_{I}:\mathbb{R}\to\mathbb{R}\) represents the appearance of insulin in the blood (e.g. absorbed from subcutaneous injection or drip). See Gallardo-Hernandez et al. [13] for a modern exposition and the units of each quantity.

**Modelling nutrition absorption from discrete meal events.** When simulating the daily management of diabetes, the _continuous_ functions \(u_{G},u_{I}\) are typically derived from observed _discrete_ events (e.g. meals and insulin injections). Each discrete-time event \(e_{i}=(t_{i},m_{i})\) consists of a timestamp \(t_{i}\) and a covariate \(m_{i}\). If \(e_{i}\) is a meal event, \(m_{i}\) may consist of macronutritional information, an image of the food, or both. Pharmacodynamic models are often used to map the insulin dose to a continuous absorption profile \(u_{I}\) that is compatible with the above model. However, the dependence of glucose absorption \(u_{G}\) on the full macronutritional content of a meal event is less well understood; thus _we focus on modelling \(u_{G}\) in this paper_. Mechanistic \(u_{G}\) models often derive \(u_{G}\) as the solution to another set of heuristic ODEs [10]. However, this approach introduces additional handcrafted parameterizations to explain quantities that are unobservable outside of the lab setting, such as the glucose concentration in the stomach over time after a meal. A simpler yet effective approach is to directly model \(u_{G}\) phenomenologically, and estimate it from data [14; 20]. Instead of deriving \(u_{G}\) from an intricate model of the human body, this approach represents \(u_{G}\) directly using a parametric function adapted from data.

## 3 Phenomenologically modelling the absorption rate

Let each meal event \(i\) be \(e_{i}=(t_{i},m_{i})\) where \(t_{i}\in\mathbb{R}\) is the meal time and \(m_{i}\in\mathbb{R}^{M}\) is a vector of meal covariates, such as its macronutrition content or even a photo of the food. We assume we have data on a set \(E\) of these meal events.
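For reference, the right-hand side of Eq. (1) can be written compactly as below; this is a minimal NumPy sketch with our own helper names, where the rate constants, basal levels, and input functions are left as arguments.

```python
import numpy as np

def minimal_model_rhs(x, t, u_G, u_I, c1, c2, c3, c4, G_b, I_b):
    """Right-hand side of the Bergman minimal model, Eq. (1).
    u_G and u_I are callables giving glucose/insulin appearance at time t."""
    G, X, I = x
    dG = -c1 * (G - G_b) - G * X + u_G(t)  # Eq. (1a)
    dX = -c2 * X + c3 * (I - I_b)          # Eq. (1b)
    dI = -c4 * (I - I_b) + u_I(t)          # Eq. (1c)
    return np.array([dG, dX, dI])

def euler_step(x, t, dt, **kwargs):
    """One forward-Euler step, matching the integration scheme of Sec. 4."""
    return x + dt * minimal_model_rhs(x, t, **kwargs)
```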
For each meal \(i\), we associate a parametric function \(a_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\), such that \(a_{i}(t)\) is the absorption rate of the meal at time \(t\). The overall control function \(u_{G}\) is then a sum over the events: \[u_{G}(t)=\sum_{i=1}^{|E|}a_{i}(t). \tag{2}\] The function \(a_{i}\) is usually compactly supported, since a meal only affects glucose locally in time. Decomposing \(u_{G}\) into a sum allows us to model the effect of each meal individually, instead of all at once. A simple heuristic choice is a square function \(a_{i}(t)=g_{i}\mathbb{1}_{[0,w)}(t-t_{i})/w\) where \(w\) is the width of the square as a free parameter and \(g_{i}\in\mathbb{R}\) is the amount of glucose produced from the meal. Another choice is the bump function \(a_{i}(t)=g_{i}\mathbb{1}_{[0,\infty)}(t-t_{i})(e^{-b_{1}(t-t_{i})}-e^{-b_{2}(t-t_{i})})/b_{3}\) where \(b_{1}\) and \(b_{2}\) are free parameters and \(b_{3}\) is a normalization constant [1, 2]. For both choices, \(g_{i}\) must be estimated by the patient or by a nutritionist (e.g. when \(m_{i}\) is a food image), which can be highly inaccurate. More importantly, the _shape_ of these parameterizations does not depend on \(m_{i}\), even though foods vary in absorption profiles.

**A neural phenomenological model.** The form of Equation (2) suggests a natural extension that takes advantage of the flexibility of neural networks. Given a meal event \(e_{i}=(t_{i},m_{i})\), we model its absorption rate using a neural network \(a_{\theta}\) such that \[a_{i}(t)=g_{i}\cdot a_{\theta}(t-t_{i},m_{i})\mathbb{1}_{[0,\infty)}(t-t_{i}). \tag{3}\] We make use of the estimated glucose content \(g_{i}\) following prior approaches since it is often already available in the meals dataset, and it gives an expert-informed glucose absorption scale factor. Alternatively, \(g_{i}\) can be included as another input to \(a_{\theta}\) instead of being a multiplicative constant. Even if the estimated \(g_{i}\) is inaccurate, \(a_{\theta}\) has the flexibility to rescale \(g_{i}\) based on the observed \(m_{i}\). Most importantly, our parameterization differs in that its _shape_ can adapt to the meal covariates \(m_{i}\). We share one neural network \(a_{\theta}\) across all meal events, allowing it to generalize to macronutritional information similar to, but not exactly the same as, meals from the training set. Altogether, Equations (1), (2), (3) define our neural differential equation model.

**End-to-end training on partial observations.** Having defined our parametric function, we now discuss how to learn the parameters \(\theta\) in a setting that is realistic outside of the laboratory. Recent technologies like continuous glucose monitors and artificial pancreases enable real-time measurements of glucose levels and insulin dosage. However, most of a patient's physiological state is unobserved. Within Equation (1), we do not observe insulin \(I\) and its effect \(X\). Let \(x\) be the state of our differential equation from Equation (1). We assume our temporal data consists of noisy partial observations over time \(\{(t_{k},y_{k})\}_{k=1}^{T}\), where \(y_{k}=Hx(t_{k})+\varepsilon\). We assume the projection operator \(H:(G,X,I)\mapsto(G,0,0)\) and \(\varepsilon\) is a zero-mean i.i.d. noise process.
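As a concrete illustration of Eqs. (2)-(3), the following minimal PyTorch sketch mirrors the architecture reported in Section 4 (2 hidden layers of 64 units with GELU activations); the Softplus output keeping rates nonnegative and all helper names are our own assumptions.

```python
import torch
import torch.nn as nn

class AbsorptionNet(nn.Module):
    """Neural absorption rate a_theta(t - t_i, m_i) of Eq. (3)."""
    def __init__(self, covariate_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + covariate_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1), nn.Softplus())  # nonnegative rates (ours)

    def forward(self, dt, m):
        # dt: (N, 1) time since the meal; m: (N, covariate_dim) covariates
        rate = self.net(torch.cat([dt, m], dim=-1)).squeeze(-1)
        return rate * (dt.squeeze(-1) >= 0)  # indicator 1_{[0,inf)}(t - t_i)

def u_G(t, meal_times, meal_covs, g, a_theta):
    """Eq. (2): total absorption as the sum of per-meal rates g_i * a_i(t)."""
    dt = (t - meal_times).unsqueeze(-1)      # (num_meals, 1)
    return (g * a_theta(dt, meal_covs)).sum()
```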
Given the initial condition \(x(t_{0})=x_{0}\), we can numerically integrate Equation (1) with a given \(u_{I}\) and our parameterized \(u_{G}(\cdot\,;\theta)\) to obtain an estimate \(\hat{y}(t_{k})=H\hat{x}(t_{k})\) where \(\hat{x}(t_{k})=\text{Integrate}(f,u,x_{0},t_{0},t_{k})\). We then minimize the mean squared error objective \(L(\theta)=\sum_{k=1}^{T}\|\hat{y}(t_{k})-y_{k}\|_{2}^{2}/T\) with respect to \(\theta\) to fit our parametric model [12]. However, this procedure requires us to know \(x_{0}\), which is not fully observed in practice. Many methods exist for performing such under-determined state and parameter estimation; often, the state-estimation component is performed using filtering or smoothing [6, 7, 8, 19, 25], but it can also be learnt through other data-driven [4, 17] or gradient-descent [23] methods. In our experiments, we estimate an initial state \(x_{0}\) by using a sequence of \(F\) observations \((G(t_{-F+1}),G(t_{-F+2}),\ldots,G(t_{0}))\) as a forcing function when forward-integrating Equation (1), as described in Section 4.3 of Levine and Stuart [19]. This simple procedure was sufficient for our model to learn a good \(\theta\), likely due to the rapidly decaying autocorrelation of (1).

## 4 Experiments

We evaluate our proposed method on simulated data. We simulate 28 days worth of glucose, insulin, and meal data for one virtual patient using Equation (1). We evaluate our method against baseline methods with and without glucose observation noise. We also evaluate each method in the realistic setting where the _time_ of each meal is noisily reported, since in daily life the recorded meal time is often only approximately correct.

**Data generation.** For each day, we generate four meals: breakfast, lunch, dinner, and a late snack. Meals occur uniformly at random within 6-9AM, 11AM-2:30PM, 5-8PM, and 10-11PM, respectively. Each meal contains a glucose amount uniformly random within 5-65g, 20-70g, 40-100g, and 5-15g, respectively. For each meal event \(i\), we convert grams of glucose to plasma glucose concentration, assuming the individual has 50dl of blood, and use the result as \(g_{i}\). To simulate different absorption profiles, each meal is a convex mixture of three "absorption templates". Each template \(j\) is given by a delayed bump function \(a^{j}(t)\propto g_{i}\,\mathbb{1}_{[0,\infty)}(t-t_{i}-d)(e^{-b_{1}(t-t_{i}-d)}-e^{-b_{2}(t-t_{i}-d)})\), each with its own set of parameters \((b_{1},b_{2},d)\in\{(0.04,0.09,5\text{min}),(0.08,0.13,5\text{min}),(0.03,0.04,30\text{min})\}\), visualized in Figure 1. The templates represent regular absorption, fast absorption, and slow absorption, respectively. The macronutrition of meal \(i\) is then the vector of mixture coefficients \(m_{i}\in\mathbb{R}^{3}\) such that meal \(i\) has absorption profile \(a_{i}(t)=\sum_{j=1}^{3}m_{ij}a^{j}(t)\). To ensure \(a_{i}\) is smooth, we average each value \(a_{i}(t)\) with a grid of 50 points from the past 5 minutes. For each meal time \(t_{i}\), we simulate an insulin bolus dose at a time sampled from \(\mathcal{N}(t_{i},(10\text{min})^{2})\). We sample a glucose-to-insulin conversion for each meal from \(\mathcal{N}(7\text{g/U},(1\text{g/U})^{2})\). To simulate imperfect measurements, we add relative \(\mathcal{N}(0,0.05^{2})\) observation noise. To simulate imperfect meal time recordings, we add \(\mathcal{N}(5\text{min},(2.5\text{min})^{2})\) noise to meal times.
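A small NumPy sketch of these ground-truth templates follows (function names are ours; the per-template normalization that makes a meal deliver \(g_{i}\) in total and the 5-minute smoothing step are omitted for brevity).

```python
import numpy as np

# (b1, b2, delay in minutes) for regular, fast, and slow absorption
TEMPLATES = [(0.04, 0.09, 5.0), (0.08, 0.13, 5.0), (0.03, 0.04, 30.0)]

def bump(t, t_i, b1, b2, d):
    """Delayed bump 1_{[0,inf)}(s) * (e^{-b1 s} - e^{-b2 s}), s = t - t_i - d."""
    s = np.clip(t - t_i - d, 0.0, None)
    return (t - t_i - d >= 0) * (np.exp(-b1 * s) - np.exp(-b2 * s))

def true_absorption(t, t_i, g_i, m_i):
    """Ground-truth a_i(t): convex mixture of the three templates with
    mixture coefficients m_i (the meal's simulated macronutrition)."""
    return g_i * sum(w * bump(t, t_i, b1, b2, d)
                     for w, (b1, b2, d) in zip(m_i, TEMPLATES))
```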
We use a square function \(u_{I}\), corresponding to a constant insulin absorption rate over \(30\) minutes, which we assume to be known to every model. We use parameters from Andersen and Hojbjerre [3] for Equation (1), and we use Euler integration with a step size of \(0.1\) minutes to produce an observation every \(5\) minutes.

**Experimental setup.** We split our generated data temporally into 3 disjoint training, validation, and testing trajectories. We optimize using Adam [18] for 1000 iterations, with a half-period cosine learning rate schedule following a linear ramp up to \(0.2\) over the first 30 iterations. We use minibatches of 512 sequences of 4-hour windows (48 observations) and use 10 observations for estimating the initial condition. We minimize the mean squared error on the observed glucose values with respect to the parameters \(\theta\) of \(a_{\theta}\), keeping the other parameters of Equation (1) fixed. We parameterize our neural \(a_{\theta}\) using a feedforward network with 2 hidden layers of 64 units and GELU activations. We found that appropriately scaling the inputs and outputs of \(a_{\theta}\) is crucial for stable optimization.

**Evaluations.** We compare our neural absorption function against the two common parameterizations of \(u_{G}\) from Section 3, fit via gradient-based optimization. We approximate the piece-wise constant square function using a difference of sigmoids; otherwise the width cannot be learned. Our neural model is able to closely approximate the ground truth \(u_{G}\), especially in the tails, as shown in Figure 1 (left). This results in significantly better forecasts, and our neural model closely tracks the ground truth glucose values and absorption rates, _even extrapolating to durations much longer than what was seen in training_. We visualize such long-term forecasts in Figure 1 (right). We also report the forecast RMSE on the test set in Table 1. Our neural model attains lower forecast errors across all settings. In the noiseless case, our neural model is 10x more accurate than the heuristic parameterizations. The RMSEs generally increase as we add noise, though the bump and square functions are already such poor forecasters that noise does not worsen their errors significantly.

Table 1: Forecast RMSE (mg/dl) computed over all possible 4-hour windows of the test set trajectory, reflecting the window size used for training.

| \(a_{i}\) | Exact timestamps, exact observations | Exact timestamps, noisy observations | Noisy timestamps, exact observations | Noisy timestamps, noisy observations |
|---|---|---|---|---|
| Neural | 0.95 | 3.66 | 1.48 | 3.63 |
| Bump | 9.52 | 10.11 | 9.53 | 10.24 |
| Square | 11.60 | 11.53 | 11.65 | 11.56 |

Figure 1: _Left_: (Top) "Absorption templates" used to generate \(u_{G}\). (Bottom) 5 samples of ground truth and learned \(u_{G}\) for meals from the test set. _Right_: Glucose forecast and predicted absorption rates of each model over a 2-day window from the test trajectory.

## 5 Discussion

Our experiments show that our proposed method is a promising way to learn absorption profiles that depend on macronutritional information. Our approach readily generalizes to handle arbitrary meal covariates beyond macronutritional information, such as food images or descriptions.
Although this paper only uses synthetic data, our method can complement any glucose dynamics model of real-world data. Learning accurate dynamics from data, however, remains a challenging problem. We see our method as a vital component in future data-driven hybrid models of glucose-insulin dynamics. ## Acknowledgments and Disclosure of Funding This work was supported in part by AFOSR Grant FA9550-21-1-0397, ONR Grant N00014-22-1-2110, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). KAW was partially supported by Stanford Data Science as a Data Science Scholar. MEL was supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301. EBF is a Chan Zuckerberg Biohub investigator.
2307.02754
Intent-driven Intelligent Control and Orchestration in O-RAN Via Hierarchical Reinforcement Learning
rApps and xApps need to be controlled and orchestrated well in the open radio access network (O-RAN) so that they can deliver a guaranteed network performance in a complex multi-vendor environment. This paper proposes a novel intent-driven intelligent control and orchestration scheme based on hierarchical reinforcement learning (HRL). The proposed scheme can orchestrate multiple rApps or xApps according to the operator's intent of optimizing certain key performance indicators (KPIs), such as throughput, energy efficiency, and latency. Specifically, we propose a bi-level architecture with a meta-controller and a controller. The meta-controller provides the target performance in terms of KPIs, while the controller performs xApp orchestration at the lower level. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves 7.5% and 21.4% increases in average system throughput with respect to two baselines, i.e., a single xApp baseline and a non-machine learning-based algorithm, respectively. Similarly, 17.3% and 37.9% increases in energy efficiency are observed in comparison to the same baselines.
Md Arafat Habib, Hao Zhou, Pedro Enrique Iturria-Rivera, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Yigit Ozcan, Melike Erol-Kantarci
2023-07-06T03:26:11Z
http://arxiv.org/abs/2307.02754v1
Intent-driven Intelligent Control and Orchestration in O-RAN Via Hierarchical Reinforcement Learning

###### Abstract

rApps and xApps need to be controlled and orchestrated well in the open radio access network (O-RAN) so that they can deliver a guaranteed network performance in a complex multi-vendor environment. This paper proposes a novel intent-driven intelligent control and orchestration scheme based on hierarchical reinforcement learning (HRL). The proposed scheme can orchestrate multiple rApps or xApps according to the operator's intent of optimizing certain key performance indicators (KPIs), such as throughput, energy efficiency, and latency. Specifically, we propose a bi-level architecture with a meta-controller and a controller. The meta-controller provides the target performance in terms of KPIs, while the controller performs xApp orchestration at the lower level. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves \(7.5\%\) and \(21.4\%\) increases in average system throughput with respect to two baselines, i.e., a single xApp baseline and a non-machine learning-based algorithm, respectively. Similarly, \(17.3\%\) and \(37.9\%\) increases in energy efficiency are observed in comparison to the same baselines.

O-RAN, rApps, xApp, hierarchical reinforcement learning, orchestration

## I Introduction

Open radio access network (O-RAN) facilitates openness and intelligence to support diverse traffic types and their requirements in 5G and beyond networks [1], as well as multi-vendor RAN deployments. In a multi-vendor environment, rApps and xApps can be hosted in a non-real-time RAN intelligent controller (non-RT-RIC) and a near-real-time RAN intelligent controller (near-RT-RIC). In the literature, xApps and rApps have been studied for resource and power allocation, beamforming and management, cell sleeping, traffic steering, and so on [2, 3, 4]. Advanced reinforcement learning (RL) algorithms can be used to develop intelligent network functions in O-RAN. However, a multi-rApp or multi-xApp scenario with a variety of AI-enabled Apps will require intelligent control and orchestration among the Apps to avoid performance degradation. Note that we focus on xApps as a case study, but our work generalizes to rApps as well. To elevate autonomy in O-RAN via xApp orchestration, intent-driven network optimization goals can play a pivotal role. The intent is defined as an optimization goal that is a high-level command given by the operator, usually in plain language, and it determines a key performance indicator (KPI) that the network should meet, such as "increase throughput by \(10\%\)" or "increase energy efficiency by \(5\%\)" [5]. To better support autonomous orchestration of the xApps in a multi-vendor environment, emphasis on operators' intents is crucial [6]. Intents aid in achieving agile, flexible, and simplified configuration of wireless networks with minimum possible intervention. Furthermore, intelligent intent-driven management has the ability to constantly acquire knowledge and adjust to changing network conditions by utilizing extensive real-time network data. The inclusion of intent-driven goals for intelligent xApp control and orchestration is a promising yet highly complex task, since there are multiple vendors involved with different network functions, and intents may trigger conflicting optimization goals in subsystems. There are a few works on conflict mitigation or xApp cohabitation in O-RAN.
For instance, Han _et al._ propose a conflict mitigation scheme among multiple xApps using team learning [7], and Polese _et al._ propose a machine learning (ML)-based pipeline for the cohabitation of multiple xApps in an O-RAN environment [8]. The work outlined in [9] introduces a method for achieving automation throughout the entire life cycle of xApps, beginning with the utilization scenario, requirements, design, verification, and ultimately, the deployment within networks. However, the operator intent is not involved in these works. To this end, we propose a hierarchical reinforcement learning (HRL) method for intent-driven xApp orchestration. Different from the previous works, the proposed scheme has a bi-level architecture, where we can pass the intents to the top-level hierarchy and process them as optimization goals for the lower-level controller to control and orchestrate xApps. Orchestration can avoid xApp conflicts and improve performance by combining xApps with similar performance objectives. The proposed method is compared with two baselines: a non-machine-learning (non-ML) solution and a single-xApp scenario. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves \(7.5\%\) and \(21.4\%\) increases in average system throughput, along with \(17.3\%\) and \(37.9\%\) increases in energy efficiency, compared to the single xApp and non-ML baselines, respectively.

The rest of the paper is organized as follows: Section II discusses the related works, followed by Section III which presents the system model elaborately. The proposed HRL-based xApp orchestration in O-RAN is covered in Section IV. Performance analysis and comparison of the proposed method along with the baselines are presented in Section V. Lastly, we present our conclusions in Section VI.

## II Related work

There are a few works that investigate ML-based xApps for RAN optimization and control. Polese _et al._ propose an ML pipeline for multiple xApps in an O-RAN environment [8]. Han _et al._ propose a conflict mitigation scheme among deep reinforcement learning (DRL)-based power allocation and resource allocation xApps [7]. Polese _et al._ propose an OrchestRAN scheme, in which network operators can specify high-level control objectives in the non-RT-RIC to sort out the optimal set of data-driven algorithms to fulfill the provided intent [10]. While the work presented in [10] focuses on selecting the appropriate machine learning models and their execution locations for given inputs from the operator, it does not put emphasis on the network operator's goals as optimization objectives to select and orchestrate xApps. An intent-driven orchestration of cognitive autonomous networks for RAN management is presented in [11], where the authors propose a generic design of intent-based management for controlling RAN parameters and KPIs. Zhang _et al._ propose an intent conflict resolution scheme to realize conflict avoidance in machine learning-based xApps [12]. A graph-based solution is proposed in [13] to determine the specific network function required to fulfill an intent. Compared with the existing literature, the main contribution of this work is that we propose an HRL scheme for intent-driven orchestration of xApps. The HRL scheme fits well with the inherent O-RAN hierarchy of non-RT-RIC and near-RT-RIC, and intent-based orchestration enables higher flexibility for network control and management.
The intents from the human operator are provided as goals for the system to achieve, which leads to the orchestration of xApps to achieve the provided goal.

## III System Model

### _System Model_

We consider an O-RAN-based downlink orthogonal frequency division multiplexing cellular system having \(B\) BSs serving \(U\) users simultaneously, where multiple small cells in the system are within the range of a macro cell. There are \(K\) classes of traffic in the system, and users are connected with multiple RATs via dual connectivity. There are \(Q\) classes of RATs (\(q_{1},q_{2},\ldots,q_{Q}\)), where \(q\) represents a certain access technology (LTE, 5G, etc.). The wireless system model considered in this work is presented in Fig. 1. The RIC platforms in the figure (non-RT and near-RT RIC) can host rApps and xApps, which are control and optimization applications operating at different time scales. We design three xApps, namely the traffic steering, cell sleeping, and beamforming xApps. In each xApp, we apply deep reinforcement learning for optimization, as introduced in the following.

Fig. 1: O-RAN-based system model with a macro cell and small cells.

#### Iii-A1 Traffic Steering xApp

The traffic steering xApp aims to achieve a simultaneous balance of QoS requirements for various traffic classes by introducing a traffic steering scheme based on a Deep Q-Network (DQN) [14]. We design the reward and state functions to ensure satisfactory performance, focusing on two essential KPIs: network delay and average system throughput. Traffic can be steered to a certain BS based on the experienced load, link quality, and traffic type. The details of this xApp can be found in [2].

#### Iii-A2 Cell Sleeping xApp

The cell sleeping xApp is designed to reduce power consumption in the system by turning off idle or less busy BSs. The xApp can perform cell sleeping based on the traffic load ratios and queue length of each BS. The energy consumption model for the BS is: \[P_{in}=\left\{\begin{array}{ll}P_{0}+\delta_{p}P_{out},&0<P_{out}\leq P_{max},\\ P_{sleep},&P_{out}=0,\end{array}\right. \tag{1}\] where \(P_{0}\) is the fixed power consumption, \(\delta_{p}\) is the slope of the load-dependent power consumption, \(P_{out}\) is the transmission power, \(P_{max}\) is the maximum transmission power, and \(P_{sleep}\) is the constant power consumption in sleep mode [15]. The goal of the cell sleeping xApp is to maximize energy efficiency as much as possible without overloading the active BSs. The optimization goal is formulated as follows: \[\begin{array}{ll}\underset{P_{b}}{\max}&\frac{\sum_{u\in U_{o}}\sum_{b\in B}T_{u,b}}{P_{b}}-\theta b_{u},\\ s.t.&(1),\end{array} \tag{2}\] where \(U_{o}\) is the set of the user equipments (UEs) connected to a certain BS, \(T\) represents the throughput, \(\theta\) is the penalty factor to reduce overloading, and \(b_{u}\) is the number of overloaded BSs. Turning off BSs can greatly decrease power consumption, as it reduces the number of active BSs serving the live network traffic. This poses a risk of overloading the active BSs; therefore, the penalty factor related to the number of overloaded BSs has been introduced to avoid excessive overloading. To address the formulated problem, the following MDP is formulated:

* **State:** The state set consists of \(S=\{q_{L},L_{R}\}\), where \(q_{L}\) is the queue length of the BSs, representing the load level, and \(L_{R}\) represents the traffic load ratio of a BS \(b\).
* **Action:** Turning a BS on or off constitutes the action set for the DQN implementation: \(A=\{ON,OFF\}\).
* **Reward:** The reward function is the same as eq. (2).

#### Iii-A3 Beamforming xApp

The third xApp is the beamforming xApp. We deploy band-switching BSs from 3.5 GHz to mmWave frequencies [16]. This allows us to support high-throughput traffic like enhanced mobile broadband (eMBB) via accurate intelligent beamforming. This xApp can control power based on the location of the UE, and it uses the minimum transmission power needed, which is energy-efficient. The xApp employs analog beamforming, and a multi-antenna setup is adopted where each BS deploys a uniform linear array (ULA) of \(M\) antennas [17]. The beamforming weights of every beamforming vector are implemented using constant-modulus phase shifters. We also assume that there is a beam steering-based codebook, \(F\), from which every beamforming vector is selected [17]. Every BS \(l\) has a transmit power \(P_{TX,l}\in P\), where \(P\) is the set of candidate transmit powers. Using this xApp, we want to optimize two metrics: throughput and energy efficiency. To this end, the following optimization problem is addressed: \[\begin{split}\max\sum_{l\in\{1,2,\ldots,L\}}\left[c_{1}\left(\frac{T_{k,b}}{T_{QoS}}\right)+c_{2}\left(\frac{\varepsilon}{\varepsilon_{max}}\right)\right],\\ s.t.\quad P_{TX,l}[t]\in P,\\ f_{l}[t]\in F,\end{split} \tag{3}\] where \(T_{k,b}\) is the throughput achieved by the system, \(T_{QoS}\) is the defined throughput requirement for a traffic type \(k\), \(\varepsilon\) represents the energy efficiency associated with the BS throughput and transmission power, \(\varepsilon_{max}\) is the maximum theoretical energy efficiency, and \(c_{1}\) and \(c_{2}\) are the weight factors. To solve the formulated problem, the following MDP is defined:

* **State:** UE coordinates are used as the set of states, \(S=\{C_{UE1},C_{UE2},\ldots,C_{UEN}\}\).
* **Action:** The action set consists of two elements: \(A=\{\alpha(\chi_{n}),\delta_{n}\}\). Here, \(\chi\) is the steering angle, and \(\alpha(\chi_{n})\) is the array steering vector in the direction \(\chi_{n}\) of the \(n\)-th element in the codebook. \(\delta_{n}\) accounts for the power level change.
* **Reward:** The reward function is the same as eq. (3) presented before.

## IV Proposed HRL-based xApp orchestration Scheme

RL problems can be formulated as MDPs where we have a set of states, actions, a transition probability, and a reward function (\(S,A,T,R\)). The RL agent in HRL consists of two controllers: a meta-controller and a controller [18]. The MDP for HRL has an added element, which is denoted as a set of goals (\(G\)). Depending on the current state, the meta-controller is responsible for generating high-level goals (\(G=\{g_{1},g_{2},\ldots,g_{n}\}\)) for the controller. After that, these goals are transformed into high-level policies. The controller chooses a low-level action \(a\) according to the high-level policies. This process from the controller yields an intrinsic reward (\(r_{in}\)). Finally, an extrinsic reward (\(r_{ex}\)) is given to the meta-controller from the environment, and it will provide the controller with a new goal (\(g_{t}\)). This section will discuss the xApp orchestration scheme via HRL.

### _xApp Coordination Using HRL_

The proposed O-RAN-based system architecture is presented in Fig. 2. RIC platforms can host rApps and xApps, which are applications operating at different time scales.
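To make this bi-level interaction concrete before detailing the coordination, here is a minimal, self-contained Python sketch: a tabular, bandit-style stand-in for the paper's DQN-based meta-controller and controller, where the toy environment, goal grid, and penalty constant are purely illustrative placeholders.

```python
import random
from collections import defaultdict

XAPPS = ["TS", "CS", "BF", "TS+CS", "TS+BF", "CS+BF"]  # controller actions
GOALS = [0.05, 0.10, 0.15]   # candidate KPI-increase targets (e.g., +10%)
RHO = 1.0                    # QoS-violation penalty, cf. eq. (4) below

q_meta = defaultdict(float)  # meta-controller values: (state, goal)
q_ctrl = defaultdict(float)  # controller values: (state, goal, xApp)

def toy_env_step(state, xapp):
    """Placeholder environment returning (next_state, kpi_gain, violations)."""
    return state, random.random(), random.randint(0, 2)

def episode(state, horizon=50, eps=0.1, lr=0.1):
    goal = max(GOALS, key=lambda g: q_meta[(state, g)])  # meta picks a goal
    total = 0.0
    for _ in range(horizon):
        if random.random() < eps:                        # epsilon-greedy xApp choice
            xapp = random.choice(XAPPS)
        else:
            xapp = max(XAPPS, key=lambda a: q_ctrl[(state, goal, a)])
        state, kpi, viol = toy_env_step(state, xapp)
        r_in = kpi - RHO * viol                          # intrinsic reward
        key = (state, goal, xapp)
        q_ctrl[key] += lr * (r_in - q_ctrl[key])         # bandit-style update
        total += r_in
    r_ex = total / horizon                               # extrinsic reward, cf. eq. (5)
    q_meta[(state, goal)] += lr * (r_ex - q_meta[(state, goal)])
```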
### _xApp Coordination Using HRL_

The proposed O-RAN-based system architecture is presented in Fig. 2. RIC platforms can host rApps and xApps, which are applications operating at different time scales. Three xApps have been defined in the previous sections. The rApp in the figure works as an input panel for the network operator and converts the operator's inputs into goals to be optimized; it also acts as the meta-controller in the non-RT-RIC. Let \(X\) be the set of xApps and \(Y\subseteq X\) a subset with at least one element (an xApp in our case) that can optimize the network performance based on the operator input. Let \(I\) be the set of candidate KPIs that an xApp can optimize and \(Z\) be the set of QoS requirements the system has to satisfy. Under these assumptions, the xApp orchestration problem that we want to address can be formulated as follows:

\[\begin{split}\max\sum_{i\in I}\sum_{z\in Z}(P_{i}-\rho\xi_{z}),\\ s.t.\quad\forall(X)\,\exists(Z):V(O)=1,\end{split} \tag{4}\]

where \(P_{i}\) is the performance metric the operator intends to improve, \(\rho\) is the penalty parameter for QoS requirement violations, and \(\xi_{z}\) is the number of UEs whose QoS requirements are violated. Lastly, \(V(O)\) is the proposition "an xApp can improve a performance metric", which evaluates to either 0 or 1.

As presented in Fig. 2, the rApp in the system is directly connected to the user panel where the operator may provide input to the system. The operator input is provided as the percentage increase of a certain KPI, e.g., \(x\%\) for a throughput increase or \(y\%\) for an energy efficiency increase, or any other intent stated in natural language. The rApp has a hierarchical deep Q-learning (h-DQN) framework [18]. The meta-controller (in the non-RT-RIC) takes the requested increase in throughput or energy efficiency as a goal, observes the state of the environment, and provides both the goal and the state to the controller in the near-RT-RIC, which hosts a bundle of xApps. This data passing is done via the A1 interface, by which the non- and near-RT-RIC are connected. The controller takes the action of choosing an xApp or a set of xApps based on the provided state and goal. In the following, we define the MDP for the meta-controller and the controller to address the xApp orchestration problem formulated in eq. (4).

* **State:** The set of states consists of the traffic flow types of different users in the network; UEs having similar traffic types are grouped together. \(S=\{T_{voice},..,T_{urllc},...,T_{gaming},...,T_{eMBB},..\}\). The elements of this set stand for the five different traffic types in the system. Both the meta-controller and the controller share the same states.
* **Action:** Selecting an xApp or a combination of xApps is the action performed by the controller, defined as \(A=\{A_{xApp1},A_{xApp1,2},...,A_{xAppN}\}\).
* **Intrinsic reward:** The intrinsic reward function (\(r_{in}\)) for the controller is \(r_{in}=P_{i}-\rho\xi_{z}\), which mirrors the objective of eq. (4).
* **Goal for the controller:** The increased throughput or increased energy efficiency level that can satisfy the operator intent is passed to the controller as a goal: \(G=\{tp_{1},tp_{2},...,tp_{n}\}\) for throughput-increasing intents or \(G=\{ee_{1},ee_{2},...,ee_{n}\}\) for energy-efficiency-increasing intents. Note that these goals can be generalized to other KPIs; however, for simplicity, we target throughput and energy efficiency.
* **Extrinsic reward:** The meta-controller is responsible for the overall performance of the system. Therefore, we set the extrinsic reward function for the meta-controller to the objective of the problem formulation presented in eq. (4).
The extrinsic reward is essentially the average of the intrinsic reward over the \(n\) steps taken under a given goal:

\[r_{ex}=\frac{1}{n}\sum_{\tau=1}^{n}r_{in,\tau}\quad\forall u\in U,\ \forall b\in B. \tag{5}\]

The whole process of xApp orchestration can be summarized as follows:

* **Step 1:** The operator's intent is provided as input, specifying which performance metric is to be improved.
* **Step 2:** These performance targets are provided as goals to the controller in the near-RT-RIC by the meta-controller rApp in the non-RT-RIC.
* **Step 3:** The controller selects an xApp or a combination of xApps to reach the target performance as closely as possible. The system learns based on the reward it receives for such xApp selections.
* **Step 4:** The selected xApps, with their own DRL-based functionalities, optimize the performance of the network in response to the intent of the operator.

### _Baseline Algorithms_

This section includes two baselines. The first baseline is a simulation of the same network scenario, based on the system model presented so far, in which no intelligent DRL-based xApp optimizes the network; instead, non-ML algorithms are used. For comparing the throughput performance of the proposed HRL-based system, we use the threshold-based traffic steering scheme proposed in [19]. It uses a predefined threshold, determined from the load at each station, the channel condition, and the user service type: the mean of these metrics is taken to obtain the threshold (\(T\)) value, and a weighted summation of the same parameters forms a variable (\(w\)). Traffic is then steered to another BS based on the \(w\) and \(T\) values. This baseline does not include cell sleeping; therefore, the BSs are always on. In our second baseline, we consider single-xApp scenarios, in which the proposed HRL-based xApp orchestration mechanism is compared against cases where only the traffic steering xApp, or only the cell sleeping xApp, is in action.

## V Performance Evaluation

### _Simulation setup_

A MATLAB-based simulation environment has been developed with one eNB and four gNBs serving as one macro cell and four small cells. In total, we deploy 60 UEs with five different traffic types: voice, gaming, video, URLLC, and eMBB. The different traffic types in the system have varying requirements in terms of different KPIs. The QoS requirements of the different traffic types have been defined based on our previous work [20]; the eMBB and URLLC traffic types have been added to test the system's compatibility. For the eMBB traffic type, we consider the packet size, \(T_{QoS}\), and \(D_{QoS}\) to be 1500 bytes, 100 Mbps, and 15 ms, respectively [21]. Lastly, the packet size and delay requirement for the URLLC traffic are set to 32 bytes and 2.5 ms, respectively.

Fig. 2: Intent-based xApp orchestration with macro and micro cells having different types of traffic.

A 5G NSA deployment in which different RATs (LTE and 5G NR) work together is used in the simulation environment. We deploy an architecture based on [22]. The carrier frequency for LTE is set to 800 MHz. For the 5G NR small cells, band-switching BSs are deployed at 3.5 GHz and 30 GHz. The BS transmission power for LTE and 5G NR is set to 38 dBm and 43 dBm, respectively [23]. For the HRL implementation, the initial learning rate is set to 0.95; to maintain stable learning performance, we reduce the learning rate periodically after a certain number of episodes. Additionally, the discount factor used is 0.3.
The simulation is conducted 10 times using MATLAB, and the average outcomes are presented along with a 95% confidence interval.

### _Simulation results_

Before conducting the performance evaluation of the proposed xApp orchestration scheme, we first present how the intent-oriented HRL-based orchestration scheme works. Fig. 3 shows that the operator intent of "increase throughput" leads to the selection of certain xApps. When there is a \(5\%\) throughput increase intent from the operator, after a few time slots there is a sharp increase in throughput. This is because xApp1 (traffic steering xApp) has been invoked. When a 5% increase is again given as an input, a combination of xApp1 and xApp3 (intelligent beamforming xApp) is selected. When the operator provides an intent to decrease power consumption by \(5\%\), we can see from Fig. 3 that there is a sharp decrease in throughput. This is because xApp1 and xApp3 have been terminated at the 461st time slot and xApp2 (cell sleeping xApp) has been invoked.

Fig. 3: Impact of operator intents on throughput.

Fig. 4 presents a similar graph, but this time it plots the energy efficiency over time. When there is an intent from the operator to achieve a "\(10\%\) increase in energy efficiency", we can see that xApp2 is initiated at the 131st time slot. This xApp performs cell sleeping and saves energy. For the next energy-efficiency-increase intent given by the operator, it can be seen that xApp2 and xApp3 work together. The proposed HRL-based algorithm has successfully orchestrated these two xApps for the desired performance gain.

Fig. 4: Impact of operator intents on energy efficiency.

Figs. 3 and 4 demonstrate the utility of the proposed system: it not only translates operator intents into optimization goals, but also orchestrates xApps so that the proper combination of xApps yields the desired performance output.

Fig. 5 shows the performance comparison between the proposed HRL-based xApp orchestration scheme and the baseline scenarios in terms of average system throughput. Results are obtained under a constant load of 6 Mbps. The proposed orchestration scheme achieves a \(21.4\%\) increase and a \(7.5\%\) increase in average system throughput compared to the non-ML algorithm and the single-xApp scenario (traffic steering xApp), respectively. This is due to the efficient orchestration mechanism, which triggers the optimal combination of xApps to reach better performance based on the operator intent.

Fig. 6 shows the performance comparison between the proposed HRL-based xApp orchestration scheme and the baseline scenarios in terms of average energy efficiency. The proposed orchestration scheme obtains a \(17.3\%\) increase and a \(37.9\%\) increase in average energy efficiency compared to the single-xApp scenario (cell sleeping xApp) and the non-ML baseline, respectively. As before, this is because the HRL-based orchestration mechanism incorporates multiple xApps to achieve better performance based on the operator intent. Note that we use traffic steering in the former comparison and cell sleeping in this one because they specifically optimize throughput and energy, respectively.

## VI Conclusions

In this paper, we show that the HRL-based intent-driven orchestration mechanism is highly effective not only in optimizing KPIs but also in providing great flexibility and control to the operator.
In this study, we have introduced a novel HRL-based xApp orchestration mechanism that can perform xApp management and provide recommendations for the best combination of xApps given the operator's intent. The optimal xApp orchestration scheme has led to a \(7.5\%\) increase in average system throughput and a \(17.3\%\) increase in energy efficiency compared to single xApp usage with no orchestration. In our future work, we plan to extend this orchestration to rApps and other xApps with complex KPI interactions. Fig. 4: Impact of operator intents on energy efficiency. Fig. 3: Impact of operator intents on throughput. ## Acknowledgement This work has been supported by MITACS and Ericsson Canada, and NSERC Canada Research Chairs and NSERC Collaborative Research and Training Experience Program (CREATE) under Grant 497981.
2304.14466
Nordic Vehicle Dataset (NVD): Performance of vehicle detectors using newly captured NVD from UAV in different snowy weather conditions
Vehicle detection and recognition in drone images is a complex problem that serves various safety purposes. These images are captured at oblique angles and pose several challenges, such as non-uniform illumination, degradation, blur, occlusion, and loss of visibility. Additionally, weather conditions play a crucial role in causing safety concerns and add another high level of challenge to the collected data. Over the past few decades, various techniques have been employed to detect and track vehicles in different weather conditions. However, detecting vehicles in heavy snow is still in its early stages because of a lack of available data. Furthermore, there has been no research on detecting vehicles in snowy weather using real images captured by unmanned aerial vehicles (UAVs). This study aims to address this gap by providing the scientific community with data on vehicles captured by UAVs in different settings and under various snow cover conditions in the Nordic region. The data covers different adverse weather conditions, such as overcast skies with snowfall, low-light and low-contrast conditions with patchy snow cover, high brightness, sunlight, fresh snow, and temperatures far below 0 degrees Celsius. The study also evaluates the performance of commonly used object detection methods such as YOLOv8, YOLOv5, and Faster R-CNN. Additionally, data augmentation techniques are explored, and those that enhance the detectors' performance in such scenarios are proposed. The code and the dataset will be available at https://nvd.ltu-ai.dev
Hamam Mokayed, Amirhossein Nayebiastaneh, Kanjar De, Stergios Sozos, Olle Hagner, Bjorn Backe
2023-04-27T18:55:43Z
http://arxiv.org/abs/2304.14466v1
Nordic Vehicle Dataset (NVD): Performance of vehicle detectors using newly captured NVD from UAV in different snowy weather conditions.

###### Abstract

Vehicle detection and recognition in drone images is a complex problem that serves various safety purposes. These images are captured at oblique angles and pose several challenges, such as non-uniform illumination, degradation, blur, occlusion, and loss of visibility. Additionally, weather conditions play a crucial role in causing safety concerns and add another high level of challenge to the collected data. Over the past few decades, various techniques have been employed to detect and track vehicles in different weather conditions. However, detecting vehicles in heavy snow is still in its early stages because of a lack of available data. Furthermore, there has been no research on detecting vehicles in snowy weather using real images captured by unmanned aerial vehicles (UAVs). This study aims to address this gap by providing the scientific community with data on vehicles captured by UAVs in different settings and under various snow cover conditions in the Nordic region. The data covers different adverse weather conditions, such as overcast skies with snowfall, low-light and low-contrast conditions with patchy snow cover, high brightness, sunlight, fresh snow, and temperatures far below 0 degrees Celsius. The study also evaluates the performance of commonly used object detection methods such as YOLOv8s, YOLOv5s, and Faster R-CNN. Additionally, data augmentation techniques are explored, and those that enhance the detectors' performance in such scenarios are proposed. The code and the dataset will be available at https://nvd.ltu-ai.dev

## 1 Introduction

In the Arctic region of Scandinavia, drones are used for monitoring purposes in search and rescue missions. Drones can be first on site when incidents such as car accidents or traffic congestion occur, providing an overview of the event scene, which can be lifesaving. In rural areas, long distances between cities and villages and harsh weather conditions such as snow, snow fog, snowstorms, and temperatures far below 0 degrees Celsius make search and rescue missions by drones difficult. During wintertime, the light conditions in the northern hemisphere are poor, and in upper northern Scandinavia, during the polar night, the sun never rises above the horizon. In wintertime, traffic monitoring for road maintenance purposes with drones is a timesaving and more environmentally friendly option than using cars or trucks to inspect the roads. Monitoring traffic bottlenecks in more urban areas is also of interest, for instance for early-morning commuters. In a snowy landscape with snow-covered cars and low-light conditions, it is difficult to detect cars from the air, even with the human eye. Detecting objects concealed by snow presents unique challenges compared to other scenarios, primarily because most existing detectors are trained on datasets that contain either images captured under normal weather conditions [1-2] or artificially generated snow images [3-5]. However, these models are not effective in detecting objects in snowy conditions, since snow hides many of the visual features crucial for object detection.
This paper assesses how well object detectors perform on a dataset captured by unmanned aerial vehicles (UAVs) in various winter weather conditions, ranging from light to complete snow cover. The goal is to investigate whether detectors perform poorly in such conditions and to highlight the importance of using adequate training datasets when developing detectors. The primary focus of this study is using UAV images to detect vehicles captured in a wide range of winter weather conditions with various degrees of snow cover, not limited to roads. To understand the novelty and uniqueness of our approach, we conducted an extensive search for research papers and projects with a scope like ours. We found datasets that use UAV images for vehicle detection but that have significant differences from our collected dataset, leading to different scopes and challenges. In the following section, we provide a technical summary of other drone-captured datasets that have been used in prior research.

## 2 UAV datasets

In this section, we analyze the research papers and projects that use images captured by UAV and compare their datasets to ours, with the aim of highlighting the key differences that make our research stand out. We aim to demonstrate the novelty and contribution of our research in the field of vehicle detection using UAV images in different weather conditions, such as heavy snow.

**VisDrone dataset** [6-8]: The need for computer vision in analyzing visual data collected from drones has led to the creation of a comprehensive benchmark dataset called VisDrone. Developed in China, this dataset was intended to facilitate various computer vision tasks related to drone imagery. The VisDrone2019 dataset represents an effort to merge the fields of computer vision and drone technology; however, the emphasis is on object detection regardless of the weather conditions, and it is not limited to vehicle detection. It does contain cars, but it also includes other kinds of objects, such as pedestrians and bicycles. Thus, the models built on top of this dataset are not expected to be specialized in vehicle detection under extreme weather conditions. It was also collected on a different continent, where the view from a drone can differ considerably from that of a European city. The benchmark dataset contains 288 video clips and 10,209 static images captured by drone-mounted cameras in various locations, environments, objects, and densities in China. The dataset was collected using different drone models, scenarios, and weather and lighting conditions. The frames are manually annotated with more than 2.6 million bounding boxes of objects of interest.

**UAV project** [9-11]: This dataset was slightly closer to our perspective than the others. It was designed to be a challenging dataset for existing object detection solutions trained on limited datasets. While it was intended to include various weather conditions, its distribution of weather conditions suggests that it only includes fog as an adverse weather condition. The vehicles annotated in this dataset are also present only on the road, which is again a main difference from our dataset. It is also mentioned that in some places the vehicles were too small to classify or to assess their motion, which is a key difference from our dataset, which aims to identify all vehicles, regardless of their size, as long as they are identifiable by us for annotation.
The dataset consists of 10 hours of raw video that make up the proposed UAVDT benchmark and was cut down to 100 sequences with roughly 80,000 representative frames in total. The sequences range in frame count from 83 to 2970. A UAV platform was used to record videos in a variety of metropolitan settings, including squares, highways, crossings, toll booths, arterial routes, and T-junctions. The video sequences are captured at 30 frames per second (fps) and at a 1080 x 540-pixel resolution. In the dataset, which included 2,700 automobiles, around 80,000 frames from the 10 hours of raw footage were annotated with 0.84 million bounding boxes.

**UAV-Vehicle-Detection-Dataset** [12-13]: This dataset was created to address the orientation and scale-invariance problem, with a focus on detecting and re-identifying vehicles. However, it differs from our research in that it is primarily concerned with identifying vehicles on roads, while our dataset and research aim to identify vehicles in any location. Additionally, the dataset only includes images captured under normal weather conditions, without adverse weather such as rain or snow. There is a similarity between this dataset and ours in terms of capturing vehicles from various angles, resulting in significant perspective distortions, but this is common in most UAV datasets. The training dataset for the vehicle detector is generated from three different sources: 154 images from the aerial-cars-dataset on GitHub, which comes from a video with no extreme weather conditions, 1374 images from the UAV-benchmark-M, and a dataset of 157 custom-labeled images. The proposed tracking-by-detection solution for live vehicle tracking runs at 11 frames per second on color videos of 2720p resolution to perform efficiently.

**Mimos drone dataset** [14-15]: This paper starts from a similar motivation to ours, namely protecting and securing specific areas, but pursues it through text detection. It follows a different path, aiming at identifying the text on the plates of the cars or any other text on the car. Thus, although the initial motivation is similar, the dataset used and the research itself are completely different from our scope. In addition, weather conditions are not considered at all in this research, whereas they play a key role in ours. The dataset consists of 1142 images, and most of the photos contain parking signs or traffic signals. This work focuses on images captured at low altitude, with ranges of 1-3, 3-5, and more than 7 meters at different angles. As a result, the dataset contains photos with tiny text and license plate numbers in low resolution.

**Data synthesizing** [16]: This paper was a great addition to our research, as it highlights the lack of datasets containing adverse weather conditions. Its aim is to build a model that renders weather conditions onto images, which is different from our scope, but the approach it follows has some interesting key points for us. Two datasets are used for training the model, the Flickr Weather Image Dataset and the CARISSMA Weather Image Dataset. Both contain images that are not recorded from a UAV, but they both contain different weather conditions, such as fog, rain, and snow, and they both contain car objects. The datasets focus only on street videos, and since they are not recorded from a UAV, the viewing angle is completely different.
Thus, even though this research has some similarities to our challenges, it is still completely different.

**UAV videos for traffic** [17]: This research's technical implementation is closer to our general goal, which is detecting vehicles through UAV data. However, the scope of this research is to build a model to extract traffic information, which means that vehicle detection is restricted to cars on the streets. Also, no context is given about the weather conditions, which is the challenge we aim to explore and solve. The data was captured by UAV over the main city roads of Chongqing, at a height of 200-250 meters above the ground. The video has a high resolution of 3840 x 2160 pixels. The data was created using the VOC2007 standard.

**Other available UAV datasets**: As there are many other datasets used for vehicle detection, we list some of them in the following table and cross-match them with ours based on the following criteria, as explained in Table 1 and Figure 1.

* Criteria 1 (C1): Data captured from UAV.
* Criteria 2 (C2): Data has vehicles.
* Criteria 3 (C3): Location is generic.
* Criteria 4 (C4): Varying weather conditions.

Figure 1: Sequential checking of the search criteria.

| Dataset | C1 | C2 | C3 | C4 | Additional info |
| --- | --- | --- | --- | --- | --- |
| DLR 3K [17] |  |  |  |  |  |
| VEDAI-512 [18] |  |  |  |  |  |
| VEDAI-1024 [19] |  |  |  |  |  |
| [20] |  |  |  |  | Aerial images from different platforms, not specifically from UAV; contains vehicles, but no information about the locations of the images (exclusively on streets or not) or the weather conditions |
| Stanford Drone [21] |  |  |  |  |  |
| CARPK [22] |  |  |  |  | Restricted to parking lots only |
| PUCR+ [22] |  |  |  |  |  |
| CyCAR [23] |  |  |  |  |  |
| [24] |  |  |  |  |  |
| UAVDT [25] |  |  |  |  | No extreme weather conditions included |
| MOR-UAV [26] |  |  |  |  | Contains different scenarios, such as nighttime, occlusion, and camera motion |
| BIRDSAI [27] |  |  |  |  |  |
| [28] |  |  |  |  |  |

Table 1: Applied search criteria over available vehicle datasets.

## 3 Nordic vehicle dataset (NVD)

### Data Capturing

The video datasets were acquired using a Freya unmanned aircraft system (Figure 2). The flights were conducted autonomously according to preprogrammed flight plans at altitudes varying from 120 m up to 250 m above ground level. The specifications of the Freya unmanned aircraft are given in Table 2, while the specifications of the camera used to capture the images are shown in Table 3.

| Airframe type | Flying wing |
| --- | --- |
| Propulsion | Electric, pusher propeller |
| Wing span | 120 cm |
| Take-off weight | 1.2 kg |
| Cruise speed | 13 m/s (47 km/h) |

Table 2: Specification of the Freya unmanned aircraft.

| Sensor | 1.0-type (13.2 x 8.8 mm) Exmor RS CMOS |
| --- | --- |
| Lens | f = 7.9 mm (35 mm format equivalent: 24 mm), F4.0 |
| Video recording | 1080p or 4K at 25 frames per second |
| Still image | 16 Mpix |

Table 3: Specifications of the onboard camera.

Figure 2: Freya unmanned aircraft.
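As a rough cross-check of the pixel sizes reported later in the paper, the following sketch estimates the ground sample distance (GSD) from the camera specifications in Table 3 under a simple pinhole model; this is an illustration of ours, not code from the NVD repository, and it ignores lens distortion and off-nadir viewing.

```python
# Hypothetical GSD helper based on the camera specifications above
# (13.2 mm sensor width, f = 7.9 mm); the paper reports GSD values of
# roughly 11-22 cm for the 120-250 m flight altitudes.

def gsd_cm(altitude_m, image_width_px=1920,
           sensor_width_mm=13.2, focal_mm=7.9):
    """Ground footprint of one pixel, in centimetres."""
    ground_width_m = altitude_m * sensor_width_mm / focal_mm
    return 100.0 * ground_width_m / image_width_px

for h in (120, 180, 250):
    print(f"{h:>3} m -> {gsd_cm(h):.1f} cm/px")
```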
### Data preparation

**Data annotation**: The CVAT tool [29] was used in both online and self-hosted server setups to annotate the captured images and videos. CVAT provides rectangular bounding boxes for each object in a format that can be used by different detectors.

**Data augmentation**: Data augmentation is a crucial technique for improving the performance of object detection models. By increasing the effective size of the dataset, data augmentation helps prevent overfitting and improves generalization. At the same time, it helps to create a more diverse training set, exposing the model to a wider range of examples. This is why we used Albumentations: since some of our data was recorded under normal weather conditions and we target a wide range of weather conditions, we employed Albumentations for weather simulation. The impact of using the Albumentations library on NVD is evaluated in Sec. 4. We initially used the Albumentations library to augment our data for training and testing, applying pixel-level transformations that simulate weather conditions such as snow, rain, and fog (a sketch of such a pipeline is given after the hyperparameter list below). To keep track of the augmented frames, we saved each modified image with a unique identifier. We also made sure that any bounding boxes in the original images remained accurate in the augmented image; to do this, we created new annotations that replicated the original bounding boxes and added the unique identifier to the filename. This offline method required an enormous amount of disk space and processing time, a restriction that led us to use YOLO's built-in augmentation, which is applied online. YOLO needs hyperparameters to define different configurations that impact the model training process. We therefore assigned values to the hyperparameters that influence data augmentation, which helps to enrich our dataset during training. Some of the hyperparameters we have set that affect data augmentation are listed below; the entire set can be accessed through the code available on GitHub.

* fl_gamma: 0.0 # focal loss gamma
* hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
* hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
* degrees: 45.0 # image rotation (+/- deg)
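A minimal version of the offline Albumentations pipeline described above might look like the following sketch; the transform probabilities, parameter values, and file names are illustrative, not the exact configuration used for NVD.

```python
import albumentations as A
import cv2

# Hypothetical offline weather-augmentation pipeline with YOLO-format
# bounding boxes kept consistent with the transformed image.
weather_aug = A.Compose(
    [
        A.RandomSnow(brightness_coeff=2.0, p=0.3),
        A.RandomFog(fog_coef_lower=0.2, fog_coef_upper=0.5, p=0.3),
        A.RandomRain(blur_value=3, p=0.2),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("frame_000123.jpg")          # hypothetical frame
bboxes = [(0.52, 0.41, 0.03, 0.02)]             # (xc, yc, w, h), normalized
out = weather_aug(image=image, bboxes=bboxes, class_labels=["car"])
aug_image, aug_bboxes = out["image"], out["bboxes"]
cv2.imwrite("frame_000123_aug.jpg", aug_image)  # unique id kept in filename
```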
**Flight height estimation**: As part of the data classification process, we used the flight height as one of the factors. To estimate it, we developed a method that uses the diagonal length of the bounding boxes in a frame (obtained from the annotation data), perspective geometry, and the maximum flight height reported by the UAV. The accuracy of the applied method is illustrated in Figure 3.

Figure 3: Estimation of flight altitude for different videos.

The size of a bounding box in an aerial image can be used as an indicator of the flight height. In this work, we use the diagonal length of the bounding box, denoted by \(l\), as the vehicle's size in the image plane in pixels, and we denote by \(L\) (in meters) the diagonal length of the vehicle in the real world. Figure 4 illustrates the diagonal length \(l\) of a bounding box in an aerial image; \(l\) is calculated as the Euclidean distance between the top-left and bottom-right corners of the bounding box. To estimate the flight height, we need to establish a relationship between \(l\) and the flight height \(H\) (in meters).

Figure 4: Bounding box size defined by its diagonal length \(l\).

Figure 5 depicts the perspective geometry of a camera placed at height \(H\), observing an object with a real diagonal length of \(L\) meters; the focal length of the camera is denoted by \(f\):

\[H=fL\lambda,\qquad\lambda=\frac{1}{l}.\]

Figure 5: Perspective geometry of a camera placed at height \(H\) observing an object with real diagonal length \(L\).

This relationship is used to estimate the flight height in each frame of the video. We assume that \(f\) is constant and that \(L\) is the same for every vehicle. In each frame, we set \(l\) to the mean bounding box size among all vehicles, so \(H\) is proportional to \(\lambda\). We then fit a fourth-degree polynomial to the values of \(\lambda\) across frames; this smoothing handles the numerous outliers produced by the above simplifications. After fitting the polynomial, we determine its maximum value \(\lambda_{max}\). Since we know the maximum flight height \(H_{max}\) for each video, we can calculate the flight height for each frame of the video using the formula:

\[H=\frac{H_{max}}{\lambda_{max}}\lambda.\]
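A compact implementation of this estimator might look as follows; the function and variable names are ours, and the box format (corner coordinates) is assumed.

```python
import numpy as np

# Per-video flight-height estimator following the method above:
# lambda = 1 / (mean bounding-box diagonal), smoothed with a
# fourth-degree polynomial and scaled by the known maximum altitude.

def diag(box):
    x1, y1, x2, y2 = box
    return np.hypot(x2 - x1, y2 - y1)

def estimate_heights(frames_boxes, h_max):
    """frames_boxes: list (one entry per frame) of lists of (x1,y1,x2,y2)."""
    lam = np.array([1.0 / np.mean([diag(b) for b in boxes])
                    for boxes in frames_boxes])
    t = np.arange(len(lam))
    lam_fit = np.polyval(np.polyfit(t, lam, 4), t)  # outlier smoothing
    return h_max / lam_fit.max() * lam_fit          # H = (Hmax/lam_max)*lam
```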
### Data Description

The Nordic vehicle dataset (NVD) comprises 22 videos of aerial footage captured in the north of Sweden, mostly in snowy weather conditions. The flight altitudes range from 120 to 250 meters, with varying snow cover and cloud cover. The annotated videos have a total of 8450 annotated frames, containing 26313 annotated cars. The resolution of the videos varies from 1920 x 1080 to 3840 x 2160, with a frame rate of 5 or 25 frames per second. The GSD (Ground Sample Distance), or pixel size, ranges from 11.1 cm to 22.2 cm, and some videos are stabilized to ensure smoother footage. Overall, our dataset provides a diverse collection of aerial images of cars in snowy conditions in northern Sweden, with annotated data that can be utilized for various safety applications in the region. The following image samples were taken from various videos under different conditions, and the vehicles have been enlarged for better illustration.

Figure 6: NVD samples.

## 4 Experimental Results

We incorporated three advanced detectors that are widely used in both academic research and industrial applications:

* YOLOv5s
* YOLOv8s
* Faster R-CNN

We assessed the performance of these detectors using NVD and examined how the augmentation methods applied during the data preparation stage affected their performance. For more information on the augmentation methods used, please refer to Section 3.2. The data was prepared as follows to train and evaluate the chosen detectors:

* Total frames = 8450
* Train size = 57% (4844 frames, 14985 vehicles)
* Validation size = 14% (1212 frames, 3991 vehicles)
* Test size = 28% (2394 frames, 7337 vehicles)

The performance of the state-of-the-art detectors was measured under different augmentation techniques applied to the NVD dataset in order to assess their impact. The results for the various augmentation techniques used are presented in Tables 5 and 6.

Figure 8: Vehicles detected by YOLOv5s_Au but not by YOLOv5s.

Figure 9: Vehicles detected by YOLOv8s_Au but not by YOLOv8s.

Figure 10: Detection results by Faster R-CNN.

| Model | AP | AP50 | AP75 | APs |
| --- | --- | --- | --- | --- |
| Faster R-CNN | 24.428 % | 46.219 % | 23.050 % | 35.262 % |

Table 6: Performance of Faster R-CNN on NVD.

## 5 Conclusion

Drones can be used for different safety-related purposes, such as detecting car accidents or traffic congestion, which can be lifesaving. However, the harsh weather conditions and low light during wintertime in the northern hemisphere make search and rescue missions by drones difficult. This work highlights the importance of using adequate training datasets when developing object detectors for drones. The Nordic vehicle dataset (NVD) has been prepared for use by the research community for better evaluation of detector performance in varying weather conditions. The results of the experiments show that simply fine-tuning the current state-of-the-art models or augmenting the data will not enable the models to achieve the best possible results. This indicates a need for current vehicle-detection research to utilize and benchmark such challenging data collected in difficult situations. Recently, a lot of research has been initiated on removing snow, rain, fog, etc. from images [30-31]; however, the effectiveness of deploying such methods in real-life snowy conditions with limited computational resources will be explored in future work.
2304.13309
Robustness of the intrinsic anomalous Hall effect in Fe3GeTe2 to a uniaxial strain
Fe3GeTe2 (FGT), a ferromagnetic van der Waals topological nodal line semimetal, has recently been studied. Using first-principles calculations and symmetry analysis, we investigate the effect of a uniaxial tensile strain on the nodal line and the resultant intrinsic anomalous Hall effect (AHE). Our results reveal their robustness to the in-plane strain. Moreover, the intrinsic AHE remains robust even for artificial adjustment of the atomic positions introduced to break the crystalline symmetries of FGT. When the spin-orbit coupling is absent, the nodal line degeneracy remains intact as long as the inversion symmetry or the two-fold screw symmetry is maintained, which reveal that the nodal line may emerge much more easily than previously predicted. This strong robustness is surprising and disagrees with the previous experimental report [Y. Wang et al., Adv. Mater. 32, 2004533 (2020)], which reports that a uniaxial strain of less than 1 % of the in-plane lattice constant can double the anomalous Hall resistance. This discrepancy implies that the present understanding of the AHE in FGT is incomplete. The possible origins of this discrepancy are discussed.
Mijin Lim, Byeonghyeon Choi, Minjae Ghim, Je-Geun Park, Hyun-Woo Lee
2023-04-26T06:32:00Z
http://arxiv.org/abs/2304.13309v1
Robustness of the intrinsic anomalous Hall effect in Fe\({}_{3}\)GeTe\({}_{2}\) to a uniaxial strain

###### Abstract

Fe\({}_{3}\)GeTe\({}_{2}\) (FGT), a ferromagnetic van der Waals topological nodal line semimetal, has recently been studied. Using first-principles calculations and symmetry analysis, we investigate the effect of a uniaxial tensile strain on the nodal line and the resultant intrinsic anomalous Hall effect (AHE). Our results reveal their robustness to the in-plane strain. Moreover, the intrinsic AHE remains robust even for artificial adjustment of the atomic positions introduced to break the crystalline symmetries of FGT. When the spin-orbit coupling is absent, the nodal line degeneracy remains intact as long as the inversion symmetry or the two-fold screw symmetry is maintained, which reveals that the nodal line may emerge much more easily than previously predicted. This strong robustness is surprising and disagrees with the previous experimental report [Y. Wang _et al._, Adv. Mater. **32**, 2004533 (2020)], which reports that a uniaxial strain of less than 1 % of the in-plane lattice constant can double the anomalous Hall resistance. This discrepancy implies that the present understanding of the AHE in FGT is incomplete. The possible origins of this discrepancy are discussed.

+
Footnote †: Corresponding author: [email protected]

## I Introduction

Two-dimensional magnetic van der Waals (vdW) materials have been investigated intensely in recent years [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. In particular, Fe\({}_{3}\)GeTe\({}_{2}\) (FGT) has attracted attention as a ferromagnetic topological nodal line semimetal candidate [19; 20; 21; 22; 16]. The coexistence of the ferromagnetic ordering and the nontrivial electronic topology gives rise to interesting Berry phase phenomena in this material, such as the intrinsic anomalous Nernst effect [23; 19] and the intrinsic anomalous Hall effect (AHE) [24; 25; 16]. This topological nodal line originates from the symmetries of the layered structure of FGT, which connect the _d_-orbitals of Fe atoms in the adjacent layers. As the nodal line degeneracy is orbital-driven, it appears in the vanishing spin-orbit coupling (SOC) limit and is tunable depending on the magnetization direction. The SOC-induced band gap becomes the largest when the spin orientation is completely out-of-plane. The most substantial Berry curvature then appears, resulting in a tremendous intrinsic AHE [16]. Strain engineering, an efficient method for controlling the electronic structure of vdW materials, is actively being studied in many scientific branches, from emerging quantum phenomena to next-generation information device technologies [26; 27; 28; 29; 30]. Studies have already dealt with strain-induced changes in the magnetic properties [31; 32; 33; 34] and transport properties [35; 36; 37; 38] of FGT. In particular, the experimental finding that a uniaxial strain of less than 1 % can cause a twofold increase in the size of the AHE is particularly intriguing [32]. As the vast AHE within FGT comes from the band topology, this significant change seems to result from the symmetry breaking caused by strains. Also, manipulating the AHE through strain has already been explored in other two-dimensional magnetic materials such as CrI\({}_{3}\)[39; 40] and CrTe\({}_{2}\)[41].
Therefore, it is interesting to investigate the strain-induced phenomena in FGT and eventually understand the principles underlying the AHE difference between FGT and other vdW magnets. In this paper, we compute the intrinsic anomalous Hall conductivity (IAHC) in strained structures via the first-principles method to comprehend the impact of a uniaxial strain on the AHE within FGT. We also investigate how the nodal line is affected by strains. Surprisingly, our results show that uniaxial tensile strains do not significantly affect the intrinsic AHE and the nodal line degeneracy. Even when more constraints on the two bands forming the nodal line are broken by artificially modifying the atomic positions, the overall IAHC is not varied much. Also, when the SOC is excluded, so the time-reversal symmetry (\(T\)) exists, the nodal line is maintained unless either an inversion (\(P\)) or a two-fold screw axis symmetry (\(\tilde{C}_{2z}=\{C_{2z}|\frac{1}{2}\frac{z}{2}\}\)) is broken. This imposes a less stringent constraint on the symmetry as compared to the earlier theoretical study of the topological nodal line in FGT [16]. While the robustness of the intrinsic AHE and the nodal line is surprising in itself, it disagrees with the experimental result [32] that reports two-fold increase of the anomalous Hall resistance upon the strain of less than 1 %. This discrepancy implies that our understanding of the AHE in FGT needs to be improved. The possible origins of the discrepancy are discussed. The organization of this paper is as follows. In Sec. II, we introduce the uniaxial strains, artificial lattice distortions, the resultant changes in the symmetries, and the computational details. In Sec. III, we show how uniaxial strains change the IAHC and the nodal line, and discuss with a symmetry analysis. In Sec. IV, we summarize our conclusions. ## II Method Bulk crystalline FGT consists of AB-stacked alternating atomic layers of honeycomb lattices, where two Fe atoms are positioned vertically above and below the center of each hexagon [Fig. 1]. It belongs to the space group \(D_{6h}^{4}\) (\(P6_{3}/mmc\), No.194), whose generators are the sixfold screw rotation (\(\tilde{C}_{6z}=C_{6z}|\frac{1}{2}\hat{z}\)), the two-fold rotation (\(C_{2x}\)), and the inversion (\(P\)). According to the previous study, the following combinations of the symmetries protect the two-fold nodal line degeneracy along the KH symmetry line: \(\tilde{C}_{6z}\cdot P\), \(M_{x}\cdot P\), and \(P\cdot T\) (at the K point), \(C_{3z}\equiv(\tilde{C}_{6z})^{2}\) and either \(\tilde{C}_{6z}\cdot M_{x}\) or \(P\cdot T\) (at any point between the K and H points), and \(\tilde{M}_{z}\cdot P\cdot T\) (on the \(k_{z}=\pi\) plane, including the H point) [16]. We consider tensile strains along three different directions (\(\theta\)) to break some of them: armchair (AC, \(\pi/2\)), zigzag (ZZ, 0), and chiral (CH, \(\pi/12\)) [Fig. 1(a)]. Since the unit cell is hexagonal, the AC strain along \(\pi/2\) direction is equivalent to the strain along \(\pi/6\) direction. The CH strain along \(\pi/12\) direction is heading towards the center of the other two strains. The magnitude of the strains is between \(0\sim 5\) % of the in-plane lattice constant. The strained FGT has lower symmetries than the pristine FGT. Table 1 shows the symmetries of each strained FGT. 
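As a geometric illustration of these deformations, the sketch below builds the strained in-plane lattice vectors for the AC, ZZ, and CH strain directions; the in-plane lattice constant and the neglect of Poisson contraction are simplifying assumptions of ours, not the relaxed structures used in the calculations.

```python
import numpy as np

# A uniaxial tensile strain of magnitude eps along the unit vector
# n(theta) stretches in-plane positions as r -> (I + eps * n n^T) r;
# the Poisson contraction perpendicular to the strain axis is ignored.

def strained_lattice(a, eps, theta):
    n = np.array([np.cos(theta), np.sin(theta)])
    F = np.eye(2) + eps * np.outer(n, n)         # deformation gradient
    a1 = np.array([a, 0.0])                      # hexagonal lattice vectors
    a2 = np.array([-a / 2, np.sqrt(3) * a / 2])
    return F @ a1, F @ a2

# AC (pi/2), ZZ (0), and CH (pi/12) strains of 1 % and 5 %:
for label, theta in [("AC", np.pi / 2), ("ZZ", 0.0), ("CH", np.pi / 12)]:
    for eps in (0.01, 0.05):
        a1, a2 = strained_lattice(3.99, eps, theta)  # a ~ 3.99 A (assumed)
        print(label, eps, np.round(a1, 4), np.round(a2, 4))
```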
Moreover, to examine the effect of the symmetry breaking further, we apply additional lattice distortions to the 1 % and 5 % AC-strained structures to make them possess still lower symmetries. Because the two wave functions constituting the nodal line are mainly composed of the \(d\)-orbitals of Fe, the positions of the Fe atoms forming the hexagonal lattices are chosen to be moved. Through this process, we make four different structures with the following crystalline symmetries: one with only \(P\), one with only \(\tilde{M}_{z}\cdot P\), one with only \(\tilde{M}_{z}\), and one with all three of them. The size of the movements is set equal to the distance the Fe atoms move purely due to the strains: 0.0173 Å for the 1 % strain and 0.0864 Å for the 5 % strain. We note that the lattice constants are adopted from the previous report [42], and the atomic configurations are given in the supplemental material [43] as Tables SI-SIII. The first-principles calculations are composed of three steps. First, all structures are relaxed with a total energy convergence threshold of 1.36\(\times 10^{-3}\) eV and a force convergence threshold of 0.0257 eV/Å, while the lattice parameters are fixed. Then the electronic structure of each relaxed structure is obtained. This step is performed using the QUANTUM-ESPRESSO [44] package with the projector augmented wave pseudopotentials [45] from PSlibrary [46] and the revised Perdew-Burke-Ernzerhof exchange-correlation functional [47]. The SOC is also considered. A Monkhorst-Pack [48] \(\mathbf{k}\)-grid of 20\(\times\)20\(\times\)5 is used, and the cutoff energy of the wavefunctions is chosen to be 1225 eV. We set the magnetization direction to be the z-axis direction in Fig. 1. Second, the maximally localized Wannier functions (MLWFs) are obtained from the Kohn-Sham states using the code WANNIER90 [49]. We set the initial projections to be \(d_{z^{2}}\), \(d_{xz}\), \(d_{yz}\), \(d_{x^{2}-y^{2}}\), \(d_{xy}\) for Fe, and \(p_{z}\), \(p_{x}\), \(p_{y}\) for Ge and Te. From 178 Kohn-Sham states, 96 MLWFs are obtained. We set the frozen window to 2 eV above the Fermi energy for the disentanglement of the inner and outer spaces. Third, the energy eigenvalue, the total Berry curvature (\(\Omega_{\alpha\beta}\)), and the IAHC (\(\sigma_{\alpha\beta}^{\text{AH}}\)) are evaluated with the MLWFs. We use the Kubo formula within linear response theory to compute the total Berry curvature and the IAHC (a schematic implementation of this sum is sketched at the end of this section):

\[\sigma_{\alpha\beta}^{\text{AH}} =\frac{e^{2}}{\hbar}\frac{1}{V_{cell}N_{k}}\sum_{\mathbf{k}}(-1) \Omega_{\alpha\beta}(\mathbf{k}), \tag{1}\]
\[\Omega_{\alpha\beta}(\mathbf{k}) =\sum_{n}f_{n}(\mathbf{k})\Omega_{n,\alpha\beta}(\mathbf{k}),\] (2)
\[\Omega_{n,\alpha\beta}(\mathbf{k}) =-2\text{Im}\sum_{m\neq n}\frac{v_{nm,\alpha}(\mathbf{k})v_{mn,\beta}(\mathbf{k})}{(\varepsilon_{n,\mathbf{k}}-\varepsilon_{m,\mathbf{k}})^{2}+\Gamma^{2}} \tag{3}\]

where \(v_{nm,\alpha}(\mathbf{k})=\langle n\mathbf{k}|\partial_{k_{\alpha}}\hat{H}(\mathbf{k})|m\mathbf{k}\rangle\) are the elements of the velocity operator, and \(\Gamma\) is a smearing parameter whose unit is energy. We set \(\Gamma\) and \(k_{B}T\) in the Fermi-Dirac distribution function to 0.0129 eV, corresponding to 150 K, below the Curie temperature of bulk FGT [50; 51]. The summation is performed over a uniform \(\mathbf{k}\)-grid of 120\(\times\)120\(\times\)60. To visualize the nodal line, we investigate the energy gap between the two bands forming the nodal line in the absence of the SOC. We compute it on the \(k_{y}=0\) plane and on several \(k_{z}=l\pi\) (\(0\leq l\leq 1\)) planes near the KH symmetry line [Figs. 3(a) and 3(c)].

Figure 1: Structure of the pristine FGT (a) viewed along the z-axis and (b) viewed along the x-axis. A uniaxial strain \(T\) is indicated by the angle \(\theta\) to the x-axis; the strains considered are armchair (AC, \(\pi/2\)), zigzag (ZZ, 0), and chiral (CH, \(\pi/12\)).
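For readers who want to reproduce the structure of this Kubo sum, the following NumPy sketch mirrors eqs. (1)-(3); the Hamiltonian, its derivatives, the \(e^{2}/\hbar\) prefactor, and the chemical potential are placeholders of ours rather than the Wannier-interpolated quantities used in the paper.

```python
import numpy as np

GAMMA = 0.0129  # smearing (eV), ~150 K as in the text
KT = 0.0129

def fermi(e, mu=0.0):
    return 1.0 / (1.0 + np.exp((e - mu) / KT))

def berry_curvature_xy(Hk, dHx, dHy):
    """Eq. (3) for one k-point: Hk is an n x n Hermitian matrix,
    dHx/dHy its derivatives along kx/ky."""
    e, u = np.linalg.eigh(Hk)
    vx = u.conj().T @ dHx @ u      # velocity matrix elements v_nm
    vy = u.conj().T @ dHy @ u
    omega = np.zeros_like(e)
    for n in range(len(e)):
        for m in range(len(e)):
            if m == n:
                continue
            de2 = (e[n] - e[m]) ** 2 + GAMMA ** 2
            omega[n] += -2.0 * np.imag(vx[n, m] * vy[m, n]) / de2
    return e, omega

def iahc(H, dH, kpts, mu=0.0):
    """Eqs. (1)-(2), up to the e^2/hbar/V_cell prefactor: sum of
    -f_n(k) * Omega_n(k) over a list of k-points."""
    total = 0.0
    for k in kpts:
        e, omega = berry_curvature_xy(H(k), *dH(k))
        total += np.sum(fermi(e, mu) * (-1.0) * omega)
    return total / len(kpts)
```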
## III Results

Figure 2(a) shows the band structures of the pristine, 1 %, and 5 % AC-strained FGT. The red- and blue-colored bands are related to the two-fold nodal line degeneracy. As the magnitude of the applied tensile strain increases, the gap between the two bands near the K point grows, while it stays zero at the H point. This result is due to the breaking of \(\tilde{C}_{6z}\) by the AC strain, which is required for the existence of the degeneracy at the K point and at any point between the K and H points. However, as \(\tilde{M}_{z}\cdot P\cdot T\) is unaffected by the AC strain, the degeneracy at the H point remains. When the SOC is engaged, it separates the colored bands, so the red- and blue-colored bands become the black bands located just above and below them, respectively. These two bands contribute to the total Berry curvature with opposite signs. When the strain intensity increases, both bands shift below the Fermi energy. As a result, the total Berry curvature, a sum of the contributions of all bands below the Fermi energy, decreases as the magnitude of the strain increases [Fig. 2(b)]. In Fig. 2(c), we present the dependence of the IAHC on the Fermi energy, obtained from structures deformed by AC strains of various magnitudes. At the exact Fermi energy (\(E-E_{F}=0\)), the IAHC from the pristine FGT is 232.49 (\(\Omega\cdot\mathrm{cm}\))\({}^{-1}\), a value consistent with previous experimental results [14; 17]. Notably, the IAHC remains almost constant at this point, irrespective of the intensity of the strain. In detail, as the strain strength increases to 2.5 %, the IAHC decreases linearly by 5.3 % to 220.23 (\(\Omega\cdot\mathrm{cm}\))\({}^{-1}\). Conversely, for a higher strain of 5 %, the IAHC increases to 244.71 (\(\Omega\cdot\mathrm{cm}\))\({}^{-1}\), which is 5.3 % larger than that of the pristine FGT. Also, over a wide range of \(E\), the IAHC is virtually unaffected by strains under 1 %. Noticeable differences appear only for strong strains of 2.5 % and 5 %. To be precise, in the region just below the Fermi energy (\(E-E_{F}\in[-0.2,0.0]\)), where the intrinsic AHE is associated with the band topology and has already been probed experimentally [52], the 5 % AC strain causes the IAHC to rise to about 2.5 times that obtained from the pristine FGT. The dependence of the IAHC on the direction of the applied strain is illustrated in Fig. 2(d). As above, in the region where the intrinsic AHE of FGT is related to the topological nodal line, the IAHC obtained from the 1 % strained structures is nearly independent of the direction of the strain. Similarly, for the 5 % strains, although there is a contrast between their IAHC and that of the undeformed FGT, the directional difference among themselves in this region is insignificant. Also, even though the CH strain breaks more symmetries than the strains in the other two directions, the IAHC from the CH-strained FGT lies between those from the AC- and ZZ-strained structures. These results may suggest that the existence of the nodal line degeneracy is not affected by any in-plane strain.
Figure 2: (a) Band structures of the undeformed, 1 %, and 5 % AC-strained FGT along the KH symmetry line with SOC (solid, black) and without SOC (colored, dashed). (b) Corresponding total Berry curvatures. (c) The Fermi energy dependence of the intrinsic anomalous Hall conductivity (IAHC) in \(x\) % (\(0\leq x\leq 5\)) AC-strained FGT. (d) The Fermi energy dependence of the IAHC in 1 % (left) and 5 % (right) AC-, ZZ-, and CH-strained FGT. Dotted lines represent the IAHC obtained from the AC-strained FGT with additional lattice distortions that further lower the system's symmetry. In (c), results for different strain directions closely overlap and are almost indistinguishable.

The IAHC obtained from the lattice-distorted structures shows an even more striking result [dotted lines in Fig. 2(d)]: even though the symmetry constraints weaken, there is no apparent change in the IAHC. In order to examine the effect of a strain on the nodal line, the energy difference between the two bands responsible for the nodal line is investigated in the absence of SOC. Before performing the calculations, we search for the region of momentum space where the nodal line will be located through a symmetry analysis, which is given in the supplemental material [43]. We find that in the presence of \(\tilde{M}_{y}\), the degeneracy should appear on the \(k_{y}=0\) plane [\(\Lambda\Gamma K^{\prime}\)H\({}^{\prime}\) plane, Fig. 3(a)]. To account for cases where \(\tilde{M}_{y}\) does not exist, such as the CH-strained FGT, we also investigate several \(k_{z}\) planes [Fig. 3(c)]. Figure 3(b) shows how the nodal line moves on the \(k_{y}=0\) plane due to the 1 % AC, ZZ, and CH strains. In all cases, the degeneracy on the \(k_{z}=\pi\) plane is clearly preserved, since the Kramers' degeneracy in this plane is protected by the symmetry \(\tilde{M}_{z}\cdot P\), which is not broken by any in-plane strain. As predicted by the symmetry analysis, the nodal line persists on the \(k_{y}=0\) plane for the AC and ZZ strains, but disappears for the CH strain. Nevertheless, as there is a nodal point near the KH symmetry line on each \(k_{z}\in[0,\pi]\) plane, their continuous connection can form the nodal line degeneracy [Figs. 3(c) and (d)]. For reference, the same results are also obtained in the case of 5 % strain, as given in the supplemental material [43]. The results so far align with the previous report, which shows that the nodal line is protected by \(P\) and \(T\) in the absence of SOC [53]. In our situation, none of the considered strains disrupts \(P\), and \(T\) remains intact due to the exclusion of SOC. To go one step further, we also compute the band gap in the lattice-distorted structures; the detailed calculation results are given in the supplemental material [43]. Here, we summarize them. In the structure where both \(P\) and \(\tilde{M}_{z}\) exist (naturally, their product \(\tilde{M}_{z}\cdot P\) also exists), the nodal line appears just as in the CH-strained FGT: although it is not located on the \(k_{y}=0\) plane, nodal points appear on every \(k_{z}\) plane. In the structure with only \(P\), the \(k_{z}=\pi\) plane is no longer a nodal plane due to the breaking of \(\tilde{M}_{z}\cdot P\); however, a nodal point still appears on each \(k_{z}\) plane. In the structure with only the \(\tilde{M}_{z}\) symmetry, the nodal line degeneracy is lifted in general.
However, we find that the nodal line degeneracy persists even after the \(\tilde{M}_{z}\) symmetry is broken, provided the \(\tilde{M}_{z}\cdot P\) symmetry is maintained. Also, the Kramers' degeneracy on the \(k_{z}=\pi\) plane is clearly visible. Therefore, this nodal line cannot be distinguished from that in the structure where both \(P\) and \(\tilde{M}_{z}\) exist.

Figure 3: Effects of uniaxial strains on the nodal line degeneracy. (a) The Brillouin zone (BZ) of pristine FGT. The red-colored, dashed line represents the KH symmetry line. Two blue-colored lines schematically show the nodal lines moved by the AC and ZZ strains. (b) The **k**-resolved band gap. The horizontal axis of each figure is a segment of a straight line connecting \(\Gamma\) and K, and the width is equal to 3 % of the distance between them. The dashed line is equivalent to the KH symmetry line. (c) The BZ of pristine FGT viewed along the \(k_{z}\) axis. Three blue-colored arrows show how a nodal point near the KH symmetry line moves on a \(k_{z}\) plane when a uniaxial strain is engaged. (d) The **k**-resolved band gap on three \(k_{z}\) planes in the case of 1 % CH-strained FGT. The length of the axes is equal to 6 % of the distance between the \(\Gamma\) and K points.

From now on, we present a symmetry analysis that demonstrates how the combination of the two symmetries \(\tilde{M}_{z}\cdot P\) and \(T\) protects the topological nodal line within FGT, an extension of the previous study [53], which shows that the combination of \(P\) and \(T\) protects the topological nodal line. The Hamiltonian of the strained FGT is modeled as

\[H(\vec{k})=h_{0}(\vec{k})\,\sigma_{0}+\vec{h}(\vec{k})\cdot\vec{\sigma}, \tag{4}\]

where \(\vec{k}=(k_{x},k_{y},k_{z})\), \(\vec{\sigma}\) is the vector of Pauli matrices in the basis of the nodal-line-related states \(\psi_{A}\) and \(\psi_{B}\), and \(h_{0}(\vec{k})\) and \(\vec{h}(\vec{k})\) are real coefficients. In the absence of the strain, the nodal line results from the two bands (both are superpositions of \(\psi_{A}\) and \(\psi_{B}\)) touching each other. The operator \(T\tilde{M}_{z}P\) maps the wavevector \((k_{x},k_{y},k_{z})\) to \((k_{x},k_{y},-k_{z})\) due to the actions of \(\tilde{M}_{z}\cdot P\) and \(T\), which transform \((k_{x},k_{y},k_{z})\) into \((-k_{x},-k_{y},k_{z})\) and \((-k_{x},-k_{y},-k_{z})\), respectively. The Schrödinger equation for \(H(\vec{k})\) is given by

\[H(\vec{k})u_{n}(\vec{k})=E_{n}(\vec{k})u_{n}(\vec{k}), \tag{5}\]

where \(u_{n}(\vec{k})=[\psi_{A}(\vec{k}),\psi_{B}(\vec{k})]^{T}\) is the state of the \(n\)-th band, and \(E_{n}(\vec{k})\) is the corresponding energy. The state is a superposition of the state \(\psi_{A}\) in the A layer and the state \(\psi_{B}\) in the B layer of the AB-stacked FGT. To analyze the constraints on \(H(\vec{k})\) due to the symmetries \(\tilde{M}_{z}P\) and \(T\), we examine their effect on the Schrödinger equation. We have

\[T\tilde{M}_{z}PH(\vec{k})u_{n}(\vec{k})=T\tilde{M}_{z}PE_{n}(\vec{k})u_{n}(\vec{k}). \tag{6}\]

It follows that

\[H(\vec{k}^{\prime})T\tilde{M}_{z}Pu_{n}(\vec{k})=E_{n}(\vec{k})T\tilde{M}_{z}Pu_{n}(\vec{k}), \tag{7}\]

where \(\vec{k}^{\prime}=(k_{x},k_{y},-k_{z})\). Since \(T\) can be replaced with the complex conjugation operator \(K\), the Schrödinger equation resulting from the action of \(\tilde{M}_{z}P\) and \(K\) is represented as follows, using \(\tilde{M}_{z}Pu_{n}(\vec{k})=[\psi_{B}(-\vec{k}^{\prime}),\psi_{A}(-\vec{k}^{\prime})]^{T}\).
\[H(\vec{k}^{\prime})\begin{pmatrix}\psi_{B}^{*}(\vec{k}^{\prime})\\ \psi_{A}^{*}(\vec{k}^{\prime})\end{pmatrix}=E_{n}(\vec{k})\begin{pmatrix}\psi_{B}^{*}(\vec{k}^{\prime})\\ \psi_{A}^{*}(\vec{k}^{\prime})\end{pmatrix}, \tag{8}\]

where \(E_{n}(\vec{k})=E_{n}(\vec{k}^{\prime})\). Since

\[\begin{pmatrix}\psi_{B}^{*}(\vec{k})\\ \psi_{A}^{*}(\vec{k})\end{pmatrix}=K\begin{pmatrix}\psi_{B}(\vec{k})\\ \psi_{A}(\vec{k})\end{pmatrix}, \tag{9}\]

we apply \(K\) to both sides of Eq. (8) to utilize the property \(K^{2}=1\). This results in

\[KH(\vec{k}^{\prime})K\begin{pmatrix}\psi_{B}(\vec{k}^{\prime})\\ \psi_{A}(\vec{k}^{\prime})\end{pmatrix}=E_{n}(\vec{k})K^{2}\begin{pmatrix}\psi_{B}(\vec{k}^{\prime})\\ \psi_{A}(\vec{k}^{\prime})\end{pmatrix}. \tag{10}\]

Furthermore, since

\[\begin{pmatrix}\psi_{B}(\vec{k})\\ \psi_{A}(\vec{k})\end{pmatrix}=\sigma_{x}\begin{pmatrix}\psi_{A}(\vec{k})\\ \psi_{B}(\vec{k})\end{pmatrix}, \tag{11}\]

we multiply both sides of Eq. (10) from the left by \(\sigma_{x}\) to take advantage of the property \(\sigma_{x}^{2}=I_{2}\), resulting in

\[\sigma_{x}KH(\vec{k}^{\prime})K\sigma_{x}\begin{pmatrix}\psi_{A}(\vec{k}^{\prime})\\ \psi_{B}(\vec{k}^{\prime})\end{pmatrix}=E_{n}(\vec{k})\sigma_{x}^{2}\begin{pmatrix}\psi_{A}(\vec{k}^{\prime})\\ \psi_{B}(\vec{k}^{\prime})\end{pmatrix}. \tag{12}\]

The left side of Eq. (12) is \(H(\vec{k}^{\prime})u_{n}(\vec{k}^{\prime})\), leading to the following relation for the Hamiltonian in momentum space:

\[\sigma_{x}KH(\vec{k}^{\prime})K\sigma_{x}=H(\vec{k}^{\prime}). \tag{13}\]

Using the relation

\[K\begin{pmatrix}\sigma_{x}\\ \sigma_{y}\\ \sigma_{z}\end{pmatrix}K=\begin{pmatrix}\sigma_{x}\\ -\sigma_{y}\\ \sigma_{z}\end{pmatrix}, \tag{14}\]

the coefficients of the Hamiltonian in Eq. (13) transform into

\[\sigma_{x}\left[h_{0}(\vec{k}^{\prime})\sigma_{0}+h_{x}(\vec{k}^{\prime})\sigma_{x}-h_{y}(\vec{k}^{\prime})\sigma_{y}+h_{z}(\vec{k}^{\prime})\sigma_{z}\right]\sigma_{x}=H(\vec{k}^{\prime}). \tag{15}\]

Furthermore, by using the relation

\[\sigma_{x}\begin{pmatrix}\sigma_{x}\\ \sigma_{y}\\ \sigma_{z}\end{pmatrix}\sigma_{x}=\begin{pmatrix}\sigma_{x}\\ -\sigma_{y}\\ -\sigma_{z}\end{pmatrix}, \tag{16}\]

we obtain

\[h_{0}(\vec{k}^{\prime})\sigma_{0}+h_{x}(\vec{k}^{\prime})\sigma_{x}+h_{y}(\vec{k}^{\prime})\sigma_{y}-h_{z}(\vec{k}^{\prime})\sigma_{z}=H(\vec{k}^{\prime}). \tag{17}\]

Comparing Eq. (17) with the definition of \(H(\vec{k}^{\prime})\) in Eq. (4), the following constraints on the coefficients of \(H(\vec{k}^{\prime})\) are derived:

\[h_{x}(\vec{k}^{\prime})=h_{x}(\vec{k}^{\prime}), \tag{18}\]
\[h_{y}(\vec{k}^{\prime})=h_{y}(\vec{k}^{\prime}), \tag{19}\]
\[h_{z}(\vec{k}^{\prime})=-h_{z}(\vec{k}^{\prime}). \tag{20}\]

As a result, we derive \(h_{z}(\vec{k})=0\), and the Hamiltonian and energy eigenvalues can be represented as

\[H(\vec{k})=h_{0}(\vec{k})\sigma_{0}+h_{x}(\vec{k})\sigma_{x}+h_{y}(\vec{k})\sigma_{y}, \tag{21}\]
\[E(\vec{k})=h_{0}(\vec{k})\pm\sqrt{h_{x}(\vec{k})^{2}+h_{y}(\vec{k})^{2}}. \tag{22}\]

Consequently, the two energy eigenvalues become degenerate for values of \(k_{x}\) and \(k_{y}\) that satisfy the two constraints \(h_{x}(\vec{k})=0\) and \(h_{y}(\vec{k})=0\) for a given value of \(k_{z}\). This situation amounts to a band touching within the given \(k_{z}\) plane. Since the number of free variables (\(k_{x}\) and \(k_{y}\)) matches the number of constraints, the band touching occurs generically for each \(k_{z}\). By connecting the band touching points (\(k_{x}\), \(k_{y}\)) for each \(k_{z}\) from \(k_{z}=0\) to \(\pi\), one obtains the nodal line.
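To make this counting argument concrete, the following minimal sketch (ours, not part of Refs. [43; 53]) scans each \(k_{z}\) plane of a toy two-band model for the minimum of the gap \(2\sqrt{h_{x}^{2}+h_{y}^{2}}\); the coefficient functions \(h_{x}\) and \(h_{y}\) are hypothetical illustrative choices, with \(h_{z}=0\) enforced by the \(T\tilde{M}_{z}P\) symmetry. Tracing the band-touching point plane by plane from \(k_{z}=0\) to \(\pi\) reconstructs a nodal line.

```python
import numpy as np

# Toy coefficients (assumed forms, not the first-principles FGT Hamiltonian);
# h_z is omitted because the T*Mz*P symmetry forces h_z(k) = 0.
def h_x(kx, ky, kz):
    return np.cos(kx) - np.cos(ky) + 0.1 * np.cos(kz)

def h_y(kx, ky, kz):
    return np.sin(kx) * np.sin(ky)

def nodal_point(kz, n=400):
    """Locate the minimum of the gap E+ - E- = 2*sqrt(hx^2 + hy^2) on a kz plane."""
    ks = np.linspace(-np.pi, np.pi, n)
    KX, KY = np.meshgrid(ks, ks, indexing="ij")
    gap = 2.0 * np.hypot(h_x(KX, KY, kz), h_y(KX, KY, kz))
    i, j = np.unravel_index(np.argmin(gap), gap.shape)
    return ks[i], ks[j], gap[i, j]

# One band-touching point per kz plane; connecting them gives the nodal line.
for kz in np.linspace(0.0, np.pi, 11):
    kx0, ky0, g = nodal_point(kz)
    print(f"kz = {kz:5.2f}: touching near (kx, ky) = ({kx0:+.2f}, {ky0:+.2f}), gap = {g:.1e}")
```

In this toy model the gap minimum is numerically close to zero on every \(k_{z}\) plane, in line with the argument that the two free variables (\(k_{x}\), \(k_{y}\)) generically satisfy the two constraints \(h_{x}=h_{y}=0\).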
In the presence of uniaxial strains, the presence of either \(\tilde{M}_{z}\cdot P\) or \(P\) leads to the formation of nodal lines. The AHE is related to the degeneracy of these nodal lines, which is lifted by SOC. Therefore, we expect the magnitude of the IAHC to remain largely unchanged in the presence of such strains. This behavior is observed when either \(\tilde{M}_{z}\cdot P\) or \(P\) is present. However, when \(\tilde{M}_{z}\) is present alone, the nodal line degeneracy is lifted weakly at 1 % strain and significantly at 5 %, as shown in Figs. S17 and S19 in the supplementary material [43]. Nevertheless, the IAHC remains largely unaffected at 1 % strain since the energy gap after the degeneracy lifting is small compared to the gap widened by SOC.

Lastly, we compare our calculation results with the recent experimental result [32]. Contrary to our theoretical results, the experiment reports that the anomalous Hall resistance increases two-fold due to a weak strain of less than 1 %. Thus, our calculation results disagree with the experimental result. Although the origin of this discrepancy is unclear, it is evident that the strain effect on the AHE in FGT requires further study both theoretically and experimentally. Here we discuss possible origins of the discrepancy. A possible origin is the AHE of extrinsic origin. While our calculation is limited to the AHE of intrinsic origin, the extrinsic contributions to the AHE may be important. In transition-metal ferromagnets such as Fe, Co, and Ni, it has been demonstrated that the AHE can receive an extrinsic contribution of up to 10-30 % or more [54; 55]. Therefore, it is plausible that a significant extrinsic effect may also be present in FGT, contrary to the commonly adopted assumption that the AHE in FGT is dominated by the intrinsic contribution. Another possible origin is the interplay between the strain and the electron-electron interaction. The interaction effect may be considered in the density functional theory calculation by introducing the Coulomb interaction parameter \(U\). For the pristine FGT, the IAHC does not change significantly regardless of whether \(U\) is considered. For this reason, we excluded the contribution of \(U\) in our density functional theory calculations. However, a more systematic study of the \(U\) effect is necessary. Still another possible origin is the concurrent variation of the longitudinal resistivity \(\rho_{xx}\) with the strain in the experiment [32]. Considering the relation between the anomalous Hall conductivity \(\sigma_{yx}\) and the anomalous Hall resistivity \(\rho_{yx}\), \(\sigma_{yx}=\rho_{yx}/(\rho_{xx}^{2}+\rho_{yx}^{2})\), the discrepancy between our theoretical calculation results and the experimental result can be resolved if \(\rho_{xx}\) increases concurrently with the strain applied in the experiment: the simultaneous variations of \(\rho_{yx}\) in the numerator and of \(\rho_{xx}^{2}+\rho_{yx}^{2}\approx\rho_{xx}^{2}\) in the denominator may cancel each other and leave \(\sigma_{yx}\) unaltered. Further theoretical and experimental investigation is required to better understand the effect of the strain on the AHE in FGT.

## IV Conclusions

In this study, we have investigated the effect of a uniaxial strain on the intrinsic AHE and the topological nodal line in FGT using first-principles calculations and model analysis. Our results demonstrate the robustness of both the AHE and nodal line against in-plane strain and artificial lattice distortions.
Specifically, we have shown that nodal line degeneracy is preserved when either \(P\) or \(\tilde{M}_{z}\cdot P\) is conserved, indicating that nodal lines may be more easily observed than previously thought. Our findings suggest that the fundamental symmetries identified in this study can be utilized to replicate the topological AHE observed in FGT across a broad range of magnetic materials by selecting space groups that meet the symmetry criteria. Furthermore, our results reveal that the AHE is a robust property of the material, with a high degree of resistance to external perturbations such as strain. These findings have significant implications for the development and optimization of magnetic materials for a wide range of applications, including spintronics and quantum information. ###### Acknowledgements. We thank Daegeun Jo, Seungyun Han and Wooil Yang for fruitful discussions. This research was supported by the National Research Foundation (NRF) of Korea (Grant No. 2020R1A2C2013484). Supercomputing resources including technical supports were provided by the Supercomputing Center, Korea Institute of Science and Technology Information (Contract No. KSC-2021-CRE-0283).
2301.06864
Show me what you want: Inverse reinforcement learning to automatically design robot swarms by demonstration
Automatic design is a promising approach to generating control software for robot swarms. So far, automatic design has relied on mission-specific objective functions to specify the desired collective behavior. In this paper, we explore the possibility to specify the desired collective behavior via demonstrations. We develop Demo-Cho, an automatic design method that combines inverse reinforcement learning with automatic modular design of control software for robot swarms. We show that, only on the basis of demonstrations and without the need to be provided with an explicit objective function, Demo-Cho successfully generated control software to perform four missions. We present results obtained in simulation and with physical robots.
Ilyes Gharbi, Jonas Kuckling, David Garzón Ramos, Mauro Birattari
2023-01-17T13:18:01Z
http://arxiv.org/abs/2301.06864v1
Show me what you want: Inverse reinforcement learning to automatically design robot swarms by demonstration

###### Abstract

Automatic design is a promising approach to generating control software for robot swarms. So far, automatic design has relied on mission-specific objective functions to specify the desired collective behavior. In this paper, we explore the possibility to specify the desired collective behavior via demonstrations. We develop Demo-Cho, an automatic design method that combines inverse reinforcement learning with automatic modular design of control software for robot swarms. We show that, only on the basis of demonstrations and without the need to be provided with an explicit objective function, Demo-Cho successfully generated control software to perform four missions. We present results obtained in simulation and with physical robots.

## I Introduction

Swarm robotics is an approach to control large groups of autonomous robots [1, 2, 3]. It is considered a prominent research direction [4] and has attained a notable position in the literature [5, 6, 7, 8, 9, 10, 11, 12]. A robot swarm is a decentralized system and consists of relatively simple robots that can perceive and interact with the environment only in their local neighborhood. A swarm is a self-organizing system, that is, its collective behavior emerges from the interactions of its individual robots. The design challenge in swarm robotics is to program the individual robots so that a desired collective behavior emerges. Several methods have been proposed for specific classes of missions [13, 14, 15, 16, 17, 18, 19, 20, 21]. Yet, due to the many unpredictable interactions within the swarm, no generally-applicable and principled method exists to design a desired collective behavior [22, 23, 24].

Automatic off-line design has proven to be a viable approach for the design of control software for robot swarms [25, 26, 27, 28, 29]--other related approaches exist [30, 31, 32]. In automatic off-line design, an optimization algorithm searches the space of possible instances of control software to find one that maximizes a given mission-specific objective function, which measures the performance of the swarm. The objective function is typically assessed through simulations. The selected instance of control software is then uploaded to real robots, which are then deployed in the target environment to perform the mission. Notably, no human intervention beyond the specification of the mission takes place [32].

The objective function is part of the formal specification of the mission at hand. Defining an objective function is challenging and requires familiarity with mathematical modeling. This is a task that requires the attention of a skilled professional and cannot be performed by an end user. The problem of defining an appropriate objective function is similar to the problem that in the reinforcement learning literature goes by the name of _reward shaping_: the definition of a reward function that facilitates learning a desired policy [33]. Inverse reinforcement learning is an approach to address this problem: instead of learning a policy that maximizes a given reward function, inverse reinforcement learning algorithms learn a reward function from demonstrations of an optimal behavior. The learned reward function can then be used to generate a policy that reproduces the demonstrated behavior.
Inverse reinforcement learning is motivated by the fact that, for some classes of problems, demonstrating an optimal behavior is easier than defining a properly shaped reward function [34, 35]. One of the earliest proposed approaches to inverse reinforcement learning is _apprenticeship learning_ [35]. Given demonstrations of the desired behavior, the apprenticeship learning algorithm iterates between i) learning a policy based on an intermediate reward function and ii) learning a new intermediate reward function based on the behavior of the previously generated policies. The algorithm stops if the behavior of the current policy is sufficiently close to the provided demonstrations.

We contend that inverse reinforcement learning can be adopted in the framework of the automatic design of control software for robot swarms: instead of defining a mission-specific objective function, we can provide demonstrations of the desired swarm behavior and let an inverse reinforcement learning algorithm infer an objective function to automatically generate the control software that produces the desired behavior itself. In this work, we focus on desired behaviors that can be described through the final position of the robots.

## II Related Work

Inverse reinforcement learning has already found application in robotics: Krishnan et al. proposed SWIRL, an inverse reinforcement learning algorithm to learn various robot tasks, including parallel parking and surgical cutting along a line [36]. The robot successfully learned the tasks from demonstrations and the learned policies were robust to perturbations, such as different initial positions. Inverse reinforcement learning was also studied in the scope of multi-agent systems. Natarajan et al. used inverse reinforcement learning to develop a centralized controller that coordinates multiple traffic lights [37]. Song et al. used inverse reinforcement learning to design policies in general Markov games [38]. In swarm robotics, Sosic et al. used inverse reinforcement learning to learn swarm policies from trajectories obtained from simulations of two particle models [39]. The results show that the swarm was able to replicate the behavior of both particle models. However, the design process required the complete behavior to be already pre-implemented so as to serve as a demonstration.

Besides inverse reinforcement learning, other approaches have been adopted in swarm robotics to learn collective behaviors from demonstrations. Li et al. proposed Turing learning, a method that enables robots to imitate the behavior of other pre-programmed robots, without the need to manually specify the set of features that describe the desired behavior [40]. However, the approach assumes that an implementation of the desired behavior exists and can be used to generate demonstrations. Alharthi et al. extracted swarm behaviors from video demonstrations and used evolutionary algorithms to synthesize control software in the form of behavior trees [41]. Also in this case, the approach requires that an implementation of the desired behavior exists.

## III Apprenticeship Learning

Reinforcement learning problems are commonly modelled as a Markov decision process \(M=(S,A,T,\gamma,R)\) [42]. A reinforcement learning algorithm learns a policy \(\pi:S\to A\) that maximizes the expected sum of discounted rewards: \(E_{s_{0}}[V_{M}^{\pi}(s_{0})]=E_{s_{0}}[\sum_{t}\gamma^{t}R(s_{t})|\pi]\), with \(s_{0},\ldots,s_{t}\in S\). In inverse reinforcement learning, the reward function \(R\) is not provided.
Instead, demonstrations of the desired behavior are given in the form of sequences of states. It is assumed that a "true" reward function \(R^{*}\) exists and it is such that the policy \(\pi^{*}\) that maximizes the value function based on \(R^{*}\) would generate the given demonstrations. In apprenticeship learning [35], it is furthermore assumed that there exists some mapping \(\phi:S\rightarrow[0,1]^{k}\) that maps the states of the system to a \(k\)-dimensional vector of features. The "true" reward function \(R^{*}\) is assumed to be a linear combination of the features: \(R^{*}(s)=w^{*}\cdot\phi(s)\), where \(w^{*}\in\mathbb{R}^{k}\) and \(s\in S\). For every policy \(\pi\), a feature expectation can be defined as \(\mu(\pi)=E_{s_{0}}[\sum_{t}\gamma^{t}\phi(s_{t})|\pi]\in\mathbb{R}^{k}\). It follows that, for \(R^{*}\), \(E_{s_{0}}[V_{M}^{\pi}(s_{0})]=w^{*}\cdot\mu(\pi)\). When the expectation cannot be computed formally, it can be replaced by an empirical estimate \(\hat{\mu}(\pi)\) computed on the basis of sampled trajectories. With \(\mu_{E}\), we indicate the feature expectation of the provided demonstrations.

```
Require: \(\phi\), \(\mu_{E}\)
  Select a random initial policy \(\pi_{0}\)
  Compute \(\mu_{0}:=\mu(\pi_{0})\)
  repeat
    Compute \(w_{i+1}\) by fitting an SVM on \(\mu_{E}\) and all \(\mu_{i}\)
    Learn policy \(\pi_{i+1}\) on rewards \(R_{i+1}(s)=w_{i+1}\cdot\phi(s)\)
    Compute \(\mu_{i+1}:=\mu(\pi_{i+1})\)
  until Stopping criterion met
  return \(w_{i+1}\) as \(w^{*}\)
```

**Algorithm 1** Apprenticeship learning [35]

Algorithm 1 shows the pseudo-code of the apprenticeship learning algorithm. Given the mapping \(\phi\) and the feature expectation \(\mu_{E}\) of the demonstrations, the algorithm iteratively refines the vector of weights \(w\), until the observed feature expectation \(\mu_{i}\) approximates \(\mu_{E}\). At every iteration, a support vector machine [43] is fitted on \(\mu_{E}\) and all encountered \(\mu_{i}\). Its coefficients are used as \(w_{i+1}\), the vector of weights that defines the reward function. A new policy \(\pi_{i+1}\) is learned on \(R_{i+1}(s)=w_{i+1}\cdot\phi(s)\) and its feature expectation \(\mu_{i+1}\) is added to the set of feature expectations used to fit the support vector machine in the following iteration. The algorithm stops when a stopping criterion is met--for example, after a given number of iterations or when a criterion of similarity between the demonstrated and generated behavior is met.

## IV Designing Robot Swarms by Demonstration

As shown in Section II, all demonstration-based methods proposed in swarm robotics so far require that at least some robots exist that can demonstrate the desired behavior. This clearly prevents the existing approaches from being used to generate new behaviors. It is our contention that this results from the fact that, in the existing literature, demonstrations have always been conceived as descriptions of _how_ the robots should behave. In this work, we consider demonstrations as descriptions of _what_ the swarm should accomplish. Specifically, we focus here on the class of missions in which what the robots should accomplish is to position themselves in the environment according to a desired distribution. In this case, a demonstration is a desired final position. Although this class of missions does not cover all possible missions of interest in swarm robotics, it includes a large share of the missions that have been studied in the literature [44, 45].
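As an illustration of Algorithm 1, the sketch below (ours, not the implementation used in this paper) mirrors the loop with scikit-learn's LinearSVC playing the role of the SVM step; `learn_policy` and `feature_expectation` are hypothetical placeholders for the mission-specific optimization of the control software and for the empirical estimate \(\hat{\mu}(\pi)\) obtained from simulations.

```python
import numpy as np
from sklearn.svm import LinearSVC

def apprenticeship_learning(mu_E, learn_policy, feature_expectation,
                            max_iterations=50, tol=1e-2):
    """Sketch of Algorithm 1: infer reward weights w* from demonstrations.

    mu_E -- feature expectation of the demonstrations (k-dimensional vector)
    learn_policy(w) -- placeholder returning a policy optimized for
                       the reward R(s) = w . phi(s)
    feature_expectation(pi) -- placeholder for the empirical estimate of mu(pi)
    """
    rng = np.random.default_rng(0)
    pi = learn_policy(rng.standard_normal(len(mu_E)))  # stands in for a random pi_0
    mus = [feature_expectation(pi)]                    # mu_0
    w = np.zeros(len(mu_E))
    for _ in range(max_iterations):
        # Separate mu_E (label +1) from all mu_i seen so far (label -1);
        # the normal of the separating hyperplane gives the new weights w_{i+1}.
        X = np.vstack([mu_E] + mus)
        y = np.array([1] + [-1] * len(mus))
        w = LinearSVC(C=1000.0).fit(X, y).coef_[0]
        w /= np.linalg.norm(w)
        pi = learn_policy(w)                           # e.g., one design run
        mus.append(feature_expectation(pi))
        if np.linalg.norm(mu_E - mus[-1]) < tol:       # similarity criterion
            break
    return w                                           # returned as w*
```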
We propose Demo-Cho, an automatic design method that combines apprenticeship learning (see Section III) with Chocolate, a state-of-the-art automatic off-line design method to generate control software for robot swarms [12, 46]. Demo-Cho generates control software for the e-puck robot, a two-wheel robot [47, 48], extended by a Linux extension board [49] and a range-and-bearing board [50] (see Figure 1). Its sensors and actuators were formalized through a reference model, namely RM1.1 [51]. According to RM1.1, the robot is endowed with 8 proximity sensors that can perceive obstacles and other robots, 8 light sensors that can perceive a light source, 3 ground sensors that can detect if the floor is white, black or gray, and a range-and-bearing board that provides the number of neighbors perceived and a vector pointing to their center of mass. The robot is also endowed with two wheels whose velocity can be independently controlled. We assume that the robots operate in a bounded arena in which the floor is gray and some regions might be white or black. Outside the arena, there is a light source that is switched on in some missions and off in others. In Demo-Cho, the end user can provide demonstrations of the desired final positions of the robots2. Demo-Cho then uses the apprenticeship learning algorithm to compute a candidate objective function and Chocolate to generate control software based on it. Demo-Cho stops after a fixed number of iterations.

Footnote 2: See the supplementary material at https://iridia.ulb.ac.be/supp/IridiaSupp2022-003/

Concerning the feature mapping \(\phi\), the features we adopted to describe the final position of the robots are based on the distance of each robot from relevant landmarks. Notably, we consider two classes of landmarks: black or white regions and the nearest peer of each robot. We scale distances to the interval \([0,1]\) according to \(10^{-2x/d}\), where \(d\) is the arena's diameter and \(x\) is the distance to the landmark. Concerning the distance from the regions, if the shortest straight path between the robot and the region is obstructed by a wall, the feature value is set to \(0\). It is worth noting that the set of features is mission-dependent, as the number of black and white regions possibly varies between missions. Yet, the construction of this mapping is fully automatic and does not require the intervention/analysis of a human expert. Because all robots of the swarm are interchangeable, the features form an unordered set. To cast them into a vector in a meaningful way so that the apprenticeship learning algorithm can operate on them, we sort them first by the landmark and then in descending order. To give an example, in the feature vector \((\phi_{l1,1},\phi_{l1,2},...,\phi_{l1,n},\phi_{l2,1},...)\), \(\phi_{l1,1}\) is the feature corresponding to the distance of the nearest robot to landmark \(l1\), \(\phi_{l1,2}\) is the one corresponding to the distance of the second nearest robot to \(l1\), etc.

## V Experimental Setup

### _Design methods_

To appraise the performance of the control software generated by Demo-Cho, we also present the results obtained by Chocolate and EvoStick. Chocolate designs control software in the form of a probabilistic finite-state machine, assembled from behavioral and conditional modules that are hand-crafted once and for all in a mission-agnostic way [46].
EvoStick is an implementation of the classical neuro-evolutionary approach and designs control software in the form of a feed-forward artificial neural network [27]. Notably, both Chocolate and EvoStick require the actual objective function, whereas Demo-Cho does not.

Fig. 1: The e-puck robot and its reference model RM1.1.

Fig. 2: Missions and an example of a demonstration.

### _Missions_

We assess Demo-Cho on four missions that were already studied in the literature. For each of them, an objective function is available because it was defined as part of their specifications in the original works that introduced them. We report the original objective functions here and we assume that they are accurate representations of the desired collective behaviors. All missions take place in the same dodecagonal arena of approximately \(5\,\mathrm{m}^{2}\). For all missions, the swarm size is fixed to 20 robots.

In Homing [12], the swarm must explore the arena and aggregate in the home area represented by a circular black region with radius of \(30\,\mathrm{cm}\) (see Figure 2(a)). The original objective function is \(F_{Homing}=N(T)\), where \(N(t)\) is the number of robots in the home area at time \(t\) and \(T=180\,\mathrm{s}\) is the mission duration.

In AAC [46] (aggregation with ambient cues), the swarm must aggregate as quickly as possible in a target area represented by a circular black region with radius of \(30\,\mathrm{cm}\). Additionally, the arena contains a white circular region with radius of \(30\,\mathrm{cm}\), and a light source is placed outside of the arena (see Figure 2(b)). The original objective function is \(F_{AAC}=\sum_{t=1}^{T}N(t)\), where \(N(t)\) is the number of robots in the target area at time \(t\) and \(T=180\,\mathrm{s}\) is the mission duration.

In SAC [52] (shelter with ambient cues), the swarm must aggregate as quickly as possible in a shelter that can only be accessed from one side. The shelter is indicated by a white rectangular area of \(25\,\mathrm{cm}\) by \(15\,\mathrm{cm}\) and delimited by three walls, leaving an opening only on one side. The floor in the arena behind the opening of the shelter is black and a light source is placed outside the arena, facing the open side of the shelter (see Figure 2(c)). For technical reasons regarding the encoding of the environment in the simulator, the black region is composed of three contiguous rectangular sub-regions, one behind the shelter and one on each of its sides. The original objective function is \(F_{SAC}=\sum_{t=1}^{T}N(t)\), where \(N(t)\) is the number of robots in the shelter at time \(t\) and \(T=180\,\mathrm{s}\) is the mission duration.

In CFA [46] (coverage with forbidden areas), the swarm must spread through the arena while avoiding the forbidden areas represented by three black circular regions with radii of \(30\,\mathrm{cm}\) (see Figure 2(d)). The original objective function is \(E[d(T)]\), the expected distance between a generic point in the arena and the closest robot not on a forbidden area at the end of the run, where \(T=180\,\mathrm{s}\) is the experiment duration. To be consistent with the other missions in which the objective function is to be maximized, we reformulate the objective function as \(F_{CFA}=250-E[d(T)]\), where \(250\) is the theoretical maximum value of \(E[d(T)]\).

### _Protocol_

For each mission, we provided five demonstrations of the final position of the robot swarm to be used by Demo-Cho--see the supplementary material2.
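Such a demonstration can be processed with a few lines of code. The sketch below (ours; all names are illustrative) computes the feature vector \(\phi\) of Section IV for a demonstrated final configuration--omitting the wall-occlusion check that sets a feature to 0--and evaluates two of the original objective functions; SAC and CFA follow the same pattern.

```python
import numpy as np

def feature_vector(robots, landmarks, d):
    """Features phi: scaled distances 10^(-2x/d), sorted per landmark, descending."""
    feats = []
    for lm in landmarks:                      # centers of black/white regions
        x = np.linalg.norm(robots - lm, axis=1)
        feats += sorted(10.0 ** (-2.0 * x / d), reverse=True)
    dist = np.linalg.norm(robots[:, None] - robots[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)            # nearest-peer landmark
    feats += sorted(10.0 ** (-2.0 * dist.min(axis=1) / d), reverse=True)
    return np.array(feats)

def n_in_region(positions, center, radius):
    """N(t): number of robots inside a circular region at one time step."""
    return int(np.sum(np.linalg.norm(positions - center, axis=1) <= radius))

def f_homing(trajectory, home, r):            # F_Homing = N(T)
    return n_in_region(trajectory[-1], home, r)

def f_aac(trajectory, target, r):             # F_AAC = sum over t of N(t)
    return sum(n_in_region(p, target, r) for p in trajectory)
```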
We ran 10 independent design processes for each of the three design methods under analysis. All design methods adopt the same simulator: ARGoS3 [53]. Demo-Cho was run for 50 iterations, each with a budget of \(10\,000\) simulation runs. Chocolate and EvoStick were run with a design budget of \(10\,000\) simulation runs and optimized the original objective function. All in all, this grants Demo-Cho a budget that is fifty times larger than the one of Chocolate and EvoStick. The goal of this protocol is not to achieve a fair comparison between the three design methods, which could be a rather complex endeavour, see the discussion in Section VII. Indeed, Chocolate and EvoStick have the clear advantage of being fed with an objective function; the larger budget allocated to Demo-Cho is intended to compensate somehow for the fact that Demo-Cho has to infer the objective function from the given demonstrations. In this context, we felt that the primary concern was to provide an appropriate budget to each automatic design process: the one performed by Chocolate and EvoStick, and each of the 50 ones performed within each execution of Demo-Cho. Following our previous experience, we allocated to each of these design processes a budget of \(10\,000\) simulations. Concerning the choice of the number of iterations to be taken as a stopping criterion for Demo-Cho, as no previous literature exists on this issue, we fixed it to a sufficiently large number to make sure that the algorithm had time to converge to a meaningful solution--see the discussion in Section VI, where we comment _a posteriori_ on this choice in the light of the results obtained through the present study.

We assessed the resulting instances of control software once in simulation and once in reality. In the experiments with the robots, performance was measured automatically using a tracking system [54]. We provide both a qualitative and a quantitative assessment of the performance of the swarms generated by the three methods under analysis. The qualitative assessment is based on visual inspection of the generated behaviors. The quantitative assessment is based on the mission-specific objective function, the same one that Chocolate and EvoStick optimize within the design process. For a detailed discussion of this choice, we refer the reader to Section VII. We report the results in the form of notched boxplots. In the boxplots, the upper and lower hinges correspond to the first and third quartiles. The whiskers extend to the largest value of the sample but no further than 1.5 times the interquartile range from the hinge. Data beyond the whiskers are outliers and are represented by points. We also report the median of the sample, represented by a line in the box, and a 95% confidence interval, represented by notches extending from the median line. If the notches of two boxplots do not overlap, we can conclude that the difference between the medians of the two samples is statistically significant. The source code, experiment files, and results of all experiments are available as supplementary material2.

Footnote 2: See the supplementary material at https://iridia.ulb.ac.be/supp/IridiaSupp2022-003/

## VI Results

Improvements to the control software generated by Demo-Cho kept occurring across the iterations, to then become rare after 40 iterations.
Future work should be devoted to gaining a deeper insight into the issue by observing the development of the improvement over an even larger number of iterations. When assessed in reality, all three methods showed a drop in performance--as is often the case in the automatic design of robot swarms [55]. In the missions Homing, SAC, and CFA, the three design methods achieved similar performance in reality. In AAC, Demo-Cho and Chocolate achieved similar performance in reality and outperformed EvoStick. On the basis of these results, we can argue that learning from demonstrations--as opposed to optimizing a given objective function--does not appear to have any major impact on the ability of a modular design method to cross the reality gap.

Fig. 3: Experimental results obtained in simulation (narrow white boxes) and reality (wide gray boxes).

Fig. 4: The four missions in reality.

Fig. 5: Heat maps of the average weight vectors learned by Demo-Cho.

Figure 5 shows the weights \(w\) learned by Demo-Cho, averaged per mission. Some general observations can be made for the four missions. For each group of features--those relating to the same landmark--Demo-Cho tends to put larger weights on the features of lower value, that is, those corresponding to the robots that are the farthest from the landmark. Indeed, minimizing the distance of the farthest robots also guarantees that the distance of all robots is minimized. When looking at the weights for the specific missions, we can observe the following. In Homing, the distance to the black region was selected by Demo-Cho as the most important feature. Albeit to a lesser extent, the distance to the nearest neighbor was considered important as well. Thus, the design process rewarded behaviors that aggregate tightly in the home area. Also in AAC, Demo-Cho selects the distance to the black region as the most important feature. Unlike in Homing, however, the distance to the nearest neighbor was not considered important; neither was the one to the white region. For this mission, the design process rewarded behaviors that aggregate in the target area. The tightness of the aggregation possibly resulted implicitly, as all robots must fit in the target area. In SAC, the design process selected two important features: the distance to the white region and the one to the nearest peer. The selection of these two features can be interpreted to describe an aggregation behavior in the shelter. Curiously, unlike for the other features, Demo-Cho assigned the highest weight to the feature associated with the sixth farthest robot from the white region, rather than the feature associated with the farthest one. This might be explained by the fact that it is unlikely that all the robots eventually reach the shelter: five robots outside the shelter at the end of the experimental run is a common outcome. Additionally, we observe three features that Demo-Cho penalizes through the assignment of a negative weight: the distance of the nearest robot to each of the black regions. Maximizing the distance between the nearest robot and a landmark guarantees that the distance of all robots is maximized. In CFA, Demo-Cho selected three groups of features as important: the distance to each of the black regions. In this case, the weights were selected to favor the presence of the robots near each of the black regions: the highest weight is associated with the feature corresponding to the distance of the robot closest to the landmark.
Additionally, Demo-Cho slightly penalizes the features corresponding to the distances of the fifth to eighth nearest robots from the landmark. As a result, the design process aimed to keep the robots close to the forbidden areas without favoring an aggregation. Additionally, some importance is placed on the features describing the inter-robot distance: a slightly positive weight is associated with the distance of nearest peers. The interpretation of the weights is particularly straightforward for Homing, AAC, and SAC, while it is less intuitive for CFA. Indeed, in CFA, one could have expected more emphasis on the inter-robot distance and the penalization of the distance to the forbidden areas. Nonetheless, excluding two outliers, the performance achieved by Demo-Cho in this mission is satisfactory and the behavior of the robots appears to be meaningful at visual inspection--see supplementary videos[2].

## VII Conclusions

In this work, we have presented Demo-Cho, an automatic method for designing control software of robot swarms that combines inverse reinforcement learning with automatic modular design. Instead of optimizing an explicitly defined objective function, Demo-Cho generates control software based on provided demonstrations. In our experiments, Demo-Cho was able to create satisfactory behaviors to perform four missions that were previously studied in the literature. Expressing a desired outcome in terms of a mathematical function is unintuitive and requires the attention of an expert. Specifying desired behaviors through demonstrations is natural and intuitive and could allow even end users without any technical expertise to specify their desired behaviors.

In the experiments presented in this paper, we accept the original assumption made by the proponents of the missions that the objective function accurately specifies the desired behavior. We therefore use this objective function for the final assessment of the behaviors produced by Demo-Cho on the basis of the given demonstrations. However, this way of assessing performance is viable only for missions that have already been specified via the definition of an objective function. A general protocol to assess behaviors generated from demonstrations could be defined on the basis of an appropriate metric that measures the degree of similarity between the given demonstrations and the generated behavior. Yet, the goal would not be to reproduce the demonstrations but to generalize with respect to them. An appropriate protocol could take inspiration from the classical cross-validation and leave-one-out procedures typically adopted in machine learning. A protocol should also be defined to compare in a fair way methods based on demonstrations with traditional ones that optimize a given objective function. The latter clearly have an advantage over the former, which have to infer an objective function from the given examples. An appropriate protocol should also test traditional methods on an objective function other than the one they used at design time. For example, two experts might define one objective function each. One of these objective functions could be used by the traditional methods in the design phase; the other could be used to test both traditional methods and demonstration-based ones. This would put the two methods on the same footing for what concerns the evaluation.
In the future, we will extend Demo-Cho to missions that can be represented through the final position of elements other than the robots--e.g., objects to be clustered, gathered, or spread in the environment. Additionally, we will investigate the minimum number of demonstrations necessary to design a desired behavior and, more generally, the impact that the number of demonstrations and their diversity have on the quality of the behaviors that can be obtained.
2310.01692
Synthesis of infrared Stokes spectra in an evolving solar chromospheric jet
Chromospheric jets are plausible agents of energy and mass transport in the solar chromosphere, although their driving mechanisms have not yet been elucidated. Magnetic field measurements are key for distinguishing the driving mechanisms of chromospheric jets. We performed a full Stokes synthesis in the infrared range with a realistic radiative magnetohydrodynamics simulation that generated a chromospheric jet to predict spectro-polarimetric observations from the Sunrise Chromospheric Infrared spectro-Polarimeter (SCIP) onboard the SUNRISE III balloon telescope. The jet was launched by the collision between the transition region and an upflow driven by the ascending motion of the twisted magnetic field at the envelope of the flux tube. This motion is consistent with upwardly propagating non-linear Alfvenic waves. The upflow could be detected as continuous Doppler signals in the CaII 849.8 nm line at the envelope where the dark line core intensity and strong linear polarisation coexist. The axis of the flux tube was bright in both FeI 846.8 nm and CaII 849.8 nm lines with down-flowing plasma inside it. The structure, time evolution, and Stokes signals predicted in our study will improve the physical interpretation of future spectro-polarimetric observations with SUNRISE III/SCIP.
T. Matsumoto, K. Kawabata, K. Katsukawa, H. Iijima, C. Quintero Noda
2023-10-02T23:13:09Z
http://arxiv.org/abs/2310.01692v1
# Synthesis of infrared Stokes spectra in an evolving solar chromospheric jet

###### Abstract

Chromospheric jets are plausible agents of energy and mass transport in the solar chromosphere, although their driving mechanisms have not yet been elucidated. Magnetic field measurements are key for distinguishing the driving mechanisms of chromospheric jets. We performed a full Stokes synthesis in the infrared range with a realistic radiative magnetohydrodynamics simulation that generated a chromospheric jet to predict spectro-polarimetric observations from the Sunrise Chromospheric Infrared spectro-Polarimeter (SCIP) onboard the SUNRISE III balloon telescope. The jet was launched by the collision between the transition region and an upflow driven by the ascending motion of the twisted magnetic field at the envelope of the flux tube. This motion is consistent with upwardly propagating non-linear Alfvenic waves. The upflow could be detected as continuous Doppler signals in the Ca ii 849.8 nm line at the envelope where the dark line core intensity and strong linear polarisation coexist. The axis of the flux tube was bright in both Fe i 846.8 nm and Ca ii 849.8 nm lines with downflowing plasma inside it. The structure, time evolution, and Stokes signals predicted in our study will improve the physical interpretation of future spectro-polarimetric observations with SUNRISE III/SCIP.

keywords: Sun: photosphere - Sun: chromosphere - Sun: infrared - Sun: magnetic fields - MHD - radiative transfer.

## 1 Introduction

Chromospheric jets are collimated eruptions that exist ubiquitously in the chromosphere. Various types of jets, such as spicules (Beckers, 1968), surges (Roy, 1973), and anemone jets (Shibata et al., 2007) have been reported so far. Significant amounts of energy and mass are considered to be transported via chromospheric jets; however, the driving mechanisms have not yet been elucidated. Because the magnetic field plays a crucial role in the dynamics, spectro-polarimetry could be the key observation to determine the driving mechanism.

The collision between a magnetohydrodynamic (MHD) shock and the transition region is a promising mechanism for accelerating chromospheric jets (Osterbrock, 1961). When a certain amount of energy is released in the deep layer, it produces an acoustic wave that propagates upward. As the acoustic wave propagates upward, it evolves into a shock owing to the atmospheric stratification and hits the transition region (Shibata et al., 1982). When the shock hits the transition region, it breaks into shocks and a contact discontinuity, below which the chromospheric materials are lifted. There are several candidates for the origin of the MHD shock. First, the magnetic pressure and centrifugal force associated with Alfven waves generate magneto-acoustic waves via nonlinear mode conversion (Hollweg et al., 1982; Kudoh & Shibata, 1999). Horizontal magnetic signals followed by propagating shocks are expected in this process. Second, a sudden downflow caused by processes such as magnetic pumping (Kato et al., 2011, 2016) or convective collapse (Parker, 1978; Spruit, 1979) pulls down the plasma in the flux tubes to produce rebounding slow shocks. When the flux tubes are twisted owing to a photospheric vortex (Brandt et al., 1988; Bonet et al., 2008), the perturbed twist propagates with the slow shock.
In such a case, it might be difficult to distinguish between the Alfven wave model and the rebound shock model because both processes may contribute to feeding the magneto-acoustic waves. Third, magnetic reconnection between pre-existing and newly emerged fields will be the agent for energy release in the chromosphere (Shibata et al., 2007). The photospheric magnetic features, as well as the chromospheric current structure, could provide key evidence for reconnection.

To determine the origin of chromospheric jets, we need to infer the properties of the magnetised atmosphere at multiple layers, which requires observing spectral lines that form at different heights. Unfortunately, these transitions also fall far apart in the wavelength spectrum, requiring strictly simultaneous observations of the Sun at multiple wavelengths. Polarimetric observations will provide magnetic field information, such as mixed polarity at the foot point, current concentrations near the magnetic inversion lines, and helical fields, including rich information on the driving process. Line-of-sight structures also help in understanding the height of the energy source, which can be deduced from the localised temperature increase (Robustini et al., 2018), bidirectional flows (Nelson et al., 2019), and propagating wave signals (Tziotziou et al., 2020). The dynamical nature of polarimetric signals in the chromosphere has been revealed in several contexts (Joshi & de la Cruz Rodriguez, 2018; Siu-Tapia et al., 2020; Morosin et al., 2022); the demand for theoretical predictions of polarimetric signals in chromospheric jets is increasing, to properly interpret the complex observations. The theoretical predictions will help interpret the upcoming polarimetric observations with a wide spectral range, as well as high polarimetric sensitivity, such as SUNRISE III1, the Daniel K. Inouye Solar Telescope (Rimmele et al., 2020, DKIST), and the European Solar Telescope (Quintero Noda et al., 2022, EST), which could provide smoking-gun evidence for the origin of the chromospheric jets.

Footnote 1: The official website of the SUNRISE III project can be found at https://www.mps.mpg.de/solar-physics/surrise

This study aims to predict the Stokes signals that will be observed by SUNRISE III/SCIP (Katsukawa et al., 2020, Sunrise Chromospheric Infrared Spectro-Polarimeter). SCIP will be deployed aboard the third flight of the SUNRISE stratospheric balloon-borne solar telescope (Barthol et al., 2011). The instrument is a slit-scanning spectro-polarimeter capable of simultaneously observing two wide spectral bands, covering the 770 nm and 850 nm ranges. These bands include both photospheric lines, such as Fe i (846.8 nm) and K i (766.5 nm & 769.9 nm), as well as lower chromospheric lines, such as Ca ii (849.8 nm & 854.2 nm). Additionally, SCIP features a high polarimetric accuracy of \(3\times 10^{-4}\) (1 sigma) and a high spatial resolution of 0.2 arcsec, owing to the negligible effect of atmospheric seeing, providing a powerful tool for studying the fine-scale magnetic fields in the solar chromosphere. By utilizing the unique capabilities of SCIP, this study has the potential to deepen our understanding of the magnetic activities in the solar chromosphere. Forward modeling using full Stokes synthesis of radiative MHD simulations is a robust tool for interpreting complex observed spectra and simulating the capabilities of a given instrument or mission.
A series of studies have been devoted to predicting the potential of the spectral lines to be observed by SCIP to infer the properties of, e.g., magnetic pumping (Quintero Noda et al., 2017) and chromospheric jets (Quintero Noda et al., 2019). Although polarisation signals from the helical magnetic field associated with chromospheric jets are expected to be detectable with SUNRISE III/SCIP, their origin and relationship with the jet have not been investigated yet. Hence, we expand on the previous work using a time-series analysis to investigate further the driving processes and the evolution of the chromospheric jet. Our analysis suggests that Alfvenic motion in the flux tube generates an upflow that drives the chromospheric jet. Consequently, strong linear polarisation with dark, blue-shifted, and arc-like structures around the flux-tube axis will be detectable with the Ca ii infrared line. Along the axis of the flux tube, bright and red-shifted cores are continuously observed in both the Fe i (846.8 nm) and Ca ii (849.8 nm) lines. Careful choice of the observing configuration, including the field of view and integration time of SUNRISE III/SCIP, can capture all these features and help distinguish the origin of the jets. This paper is organised as follows: Section 2 presents the methods and dataset, and Section 3 details our data analysis and results with discussions. Finally, the conclusions of the study can be found in Section 4.

## 2 Methods and dataset

In this study, the full Stokes profiles were derived using a radiative MHD simulation combined with a Stokes synthesis code. Infrared spectra emerging from a simulated chromospheric jet were modeled to predict future observations from SUNRISE III/SCIP. Time-series analysis of the synthesised spectra enabled us to distinguish the driving mechanisms of the jet that could not be clarified by spectropolarimetry with a single snapshot.

### Numerical simulation

As the reference solar atmosphere, we used a dataset from a radiative MHD simulation identical to that used in Iijima & Yokoyama (2017). The simulation was conducted using the RAMENS code, and the full details of the simulation can be found in Iijima & Yokoyama (2015) and Iijima (2016). In brief, the RAMENS code is dedicated to simulating a realistic solar atmosphere in which various types of physics are coupled. In addition to the ideal MHD equations, gravity, Spitzer-type thermal conduction, an equation of state under the assumption of LTE, including the latent heat of partial ionisation, and radiative cooling are included in the model. Radiative cooling is computed as a combination of optically thick radiative cooling with the gray approximation and optically thin cooling. Based on these assumptions, the RAMENS code enables us to provide a realistic solar atmosphere.

Figure 1: Summary of the time evolution of the analysed jet. (a) Height of the transition region, whose temperature was defined to be \(4\times 10^{4}\) K (the maximum value was taken in the x = 2.7 Mm plane), as a function of time. (b)–(d) Vertical velocity, \(v_{\mathrm{z}}\), at y = 7.8 Mm. The dashed lines in (b)–(d) indicate the height of the transition region. Temporal evolution (\(\Delta t\) = -1, 0, 1 min) is displayed from left to right. Blue represents upflows while red indicates downflowing materials.

The MHD dataset represents a unipolar quiet region with an average magnetic flux density of 10 G. The total duration analysed was approximately 22 min (t \(\equiv\) [396, 418] min) with a temporal cadence of 5 s.
The horizontal domain is 9 Mm in both the x- and y-directions with a periodic boundary. The vertical domain ranges from -2 to 14 Mm and includes both the surface convection zone and the corona. The height \(\mathrm{z=0}\) Mm roughly corresponds to the surface where the optical depth is unity at the continuum wavelengths at 500 nm. Among the chromospheric jets appearing in the dataset, we focused on the jet investigated in Iijima & Yokoyama (2017). The jet erupted from t = 404 min to t = 413 min, showing a parabolic trajectory (Fig. 1a). Hereafter, for convenience, we use a different time coordinate, \(\Delta t\)\(\equiv\) t \(-\) 404 min, to set \(\Delta t\) = 0 when the jet started. Starting from \(\Delta t\) = 0, the jet reached its peak height of approximately 11 Mm at \(\Delta t\) = 5 min and ended at \(\Delta t\) = 9 min. A photospheric magnetic sheet existed below the jet to form a flux tube before the formation of the jet. At \(\Delta t\) = -1 min, there appears to be a blob of upflow (the upflow A in Fig. 1b) that will collide with the transition region to drive the jet at \(\Delta t\) = 0 min (Fig. 1c). After upflow A hits the transition region, chromospheric materials are lifted to form the jet.

### Synthesis of the Stokes profiles

We synthesised full-Stokes profiles in the infrared range, including the Ca ii 849.8 nm and Fe i 846.8 nm lines. We focused on the Ca ii 849.8 nm line rather than the more commonly used 854.2 nm line in this study because the polarisation signals turned out to be stronger in the 849.8 nm line, although SCIP can observe both spectral lines simultaneously. The RH code was used to synthesise these lines (Uitenbroek, 2001). The equations of radiative transfer and statistical equilibrium under non-LTE conditions are solved in the RH code. Among the four modules specific for different geometries, rhf1d was used to calculate the emergent spectra assuming a 1D plane-parallel geometry. This geometry module allows us to calculate the Stokes profiles on a column-by-column basis, which can be appropriate for the Ca ii lines where horizontal scattering does not have significant effects (Leenaarts et al., 2009). We used the RH code version 2, last modified on 2020 May 1. Several modifications were made to the input files required by the RH code. First, complete redistribution was assumed for all spectral lines. Second, the LTE hydrogen population was calculated using an abundance identical to that used in the RAMENS code. The rest of the configurable parameters were identical to their default settings. A spatial and spectral degradation was applied to the synthetic Stokes profiles to reproduce the spatial and spectral resolution of SUNRISE III/SCIP. For the spatial and spectral convolution, Gaussian kernels with FWHMs of 0.2 \(\times\) (\(\lambda\)/854 nm) arcsec and 4 pm were applied, respectively. Although the spectral resolution has a small dependence on wavelength, we used a constant value of 4 pm for simplicity. Because there were several pixels with poor convergence in the Stokes synthesis (less than 1%), we ignored these pixels during the convolution process. To simplify the analysis, six representative variables were mainly used rather than showing all the Stokes profiles: the intensity at the line core at the rest wavelength, \(I_{\mathrm{core}}\), the Doppler velocity \(V_{\mathrm{LOS}}\), the maximum amplitudes of circular (MCP) and linear (MLP) polarisation, the total linear polarisation (TLP), and the linear polarisation azimuth (LPA).
The centre-of-mass wavelength of \(I_{\mathrm{cont}}-I\) at 11 spectral points near the line centre is used to determine the line-of-sight velocity for each line. MCP and MLP are defined as follows:

\[\mathrm{MCP}\equiv\max_{|\lambda-\lambda_{0}|<\Delta\lambda}\left(|V(\lambda)|\right), \tag{1}\]
\[\mathrm{MLP}\equiv\max_{|\lambda-\lambda_{0}|<\Delta\lambda}\left(\sqrt{Q(\lambda)^{2}+U(\lambda)^{2}}\right), \tag{2}\]

where \(\Delta\lambda\) = 20 pm (five spectral sampling sizes) for each line and \(\lambda_{0}\) is the central wavelength in the rest frame. The TLP is defined as

\[\mathrm{TLP}\equiv\sqrt{\langle Q\rangle^{2}+\langle U\rangle^{2}}, \tag{3}\]

where

\[\langle f\rangle\equiv\frac{1}{2\Delta\lambda}\int_{\lambda_{0}-\Delta\lambda}^{\lambda_{0}+\Delta\lambda}f(\lambda)\,d\lambda. \tag{4}\]

Following the equations given in Landi Degl'Innocenti & Landolfi (2004), the LPA, which provides an estimation of the magnetic field azimuth, is defined as

\[\phi=\frac{1}{2}\arctan\left(\frac{U}{Q}\right), \tag{5}\]

where the spectral points with the maximum amplitudes of \(Q\) and \(U\) in \(|\lambda-\lambda_{0}|<\Delta\lambda\) were used. One of the main advantages of our approach is that it analyses the time series of Stokes profiles throughout the entire evolution of the jet. With this approach, we attempted to specify the spectral properties of the driving process of a jet that were not obtained in the previous study of Quintero Noda et al. (2019).

## 3 Results and discussion

Overall, the results presented below show that the jet was driven by a blob of upflow connected to the twisted field lines above a flux tube. A bright down-flowing core and a dark up-flowing envelope structure with strong linear polarisation signals can be detected using spectro-polarimetric observations from SUNRISE III/SCIP.

### Estimation of formation layers

To roughly estimate the formation layers of each line, Pearson's product moment correlations were obtained between temperature & \(I_{\mathrm{core}}\), horizontal field strength & MLP, vertical field strength & MCP, and vertical velocity & \(V_{\mathrm{LOS}}\) at each height. The peaks in the correlations from the Fe i line ranged within 0-0.3 Mm (Fig. 2a), while those from the Ca ii line ranged within 0.5-1.0 Mm (Fig. 2b). These analyses will help us understand the relationship between vertical structures and synthetic data, although further analysis is necessary to retrieve exact height information.

### Evolution of the synthesized observables during the jet

The synthetic MCP signals capture the evolution of the axial field of the jet (columns a-d in Fig. 3). From \(\Delta t\) = \(-\)6 min, the two flux sheets (B & C in column a of Fig. 3) in the granular lanes approached the intersection of the granular lanes. The lower half of sheet B merged with sheet C at \(\Delta t\) = - 4 min to create a flux tube where the foot point of the jet is located. The upper half of sheet B was moved up to create another flux tube. These flux sheets produced strong polarisation signals well above \(10^{-3}\) of \(I_{\rm cont}\) in the Fe i line during the lifetime of the jet. A cadence of 1 min is sufficient to capture the flux-sheet merger (\(\Delta\)t = - 4 min). At a height of z = 0.6 Mm, flux sheet B expanded to create a round-shaped axial field of the jet. The axial field was surrounded by a negative spiral-like structure after the flux-sheet merger (column c in Fig. 3). We found that the negative structure was a part of the twisted field, whose elevation angle was locally negative.
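As a practical aside before continuing with the polarisation maps, the six diagnostics defined in Eqs. (1)-(5) are straightforward to reproduce. The sketch below is ours and independent of the RH code, with assumed conventions: a wavelength array in nm and Stokes profiles normalized to \(I_{\rm cont}\).

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def stokes_diagnostics(wav, I, Q, U, V, I_cont, lam0, dlam=0.020):
    """Diagnostics of Eqs. (1)-(5), evaluated over |lam - lam0| < dlam (nm)."""
    win = np.abs(wav - lam0) < dlam
    # Doppler velocity from the centre-of-mass wavelength of I_cont - I
    # (the paper uses 11 spectral points near the line core)
    depth = I_cont - I[win]
    lam_com = np.sum(wav[win] * depth) / np.sum(depth)
    v_los = C_KMS * (lam_com - lam0) / lam0
    mcp = np.max(np.abs(V[win]))                        # Eq. (1)
    mlp = np.max(np.hypot(Q[win], U[win]))              # Eq. (2)
    q_avg = np.trapz(Q[win], wav[win]) / (2.0 * dlam)   # <Q>, Eq. (4)
    u_avg = np.trapz(U[win], wav[win]) / (2.0 * dlam)
    tlp = np.hypot(q_avg, u_avg)                        # Eq. (3)
    q_pk = Q[win][np.argmax(np.abs(Q[win]))]            # peak-amplitude points
    u_pk = U[win][np.argmax(np.abs(U[win]))]
    lpa = 0.5 * np.arctan(u_pk / q_pk)                  # Eq. (5)
    return v_los, mcp, mlp, tlp, lpa
```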
Although the axial-field components have strong polarisation signals in the Ca ii line, the twisted field with a negative LOS component exhibited weak signals (column d in Fig. 3). The twisted field structure around the axis of the jet was spatially resolved with reasonable linear polarisation signals (columns e-h in Fig. 3) that are above the noise level expected for SCIP. At the photospheric height, the twisted field around the flux tube appears after the flux merger (\(\Delta\)t = - 4 min) and gradually disappears after the jet reaches its maximum height (\(\Delta\)t = 5 min). The photospheric twist revealed strong linear polarisation signals around the envelope of the axial field. A tornado-like structure is continuously observed at the chromospheric heights (column g in Fig. 3). The corresponding structure was found in the MLP signals, although the twist direction was not available owing to the 180\({}^{\circ}\) ambiguity. The magnetic twist became increasingly prominent until \(\Delta\)t = 5 min, and gradually disappeared by \(\Delta\)t = 9 min (column h in Fig. 3).

From the line-core images, we can infer that the jet had a hot core and cool envelopes (columns a-d in Fig. 4). A bright spot of approximately 0.2 Mm appeared in the line core intensity of the Fe i line above the flux tube axis after the flux merger. This enhancement continued until at least \(\Delta\)t = 5 min, and disappeared at \(\Delta\)t = 9 min. The temperature at the photospheric heights also exhibited similar behaviour. At the chromospheric heights, a hot core and cool envelope appeared well before and after the jet initiation (\(\Delta\)t = -6, -4, and 5 min). The corresponding structure was found in the line-core images of Ca ii as a bright core and dark envelope. This feature can sometimes be observed as the magnetic tornadoes or chromospheric swirls found in past observations (Wedemeyer-Bohm & Rouppe van der Voort, 2009; Wedemeyer-Bohm et al., 2012). The Doppler signals help to find the descending core and ascending envelope of the jet (columns e-h in Fig. 4). The granulation pattern was observed in the Doppler signals of the Fe i line. There was a small downflow near the axis of the flux tube, particularly after the flux sheet merger, which is consistent with the magnetic vortex appearing in a previous simulation (Kitiashvili et al., 2013). At the chromospheric heights, the flux tube axis tended to have a downward flow, while the envelope tended to have an upflow in the Ca ii line. However, it was challenging to distinguish the upflow A in the x-y plane at the chromospheric heights from a single snapshot. This was because the fluctuations in the vertical velocity were dominated by acoustic waves that hindered the determination of the velocity in the jet. A blue shift was sometimes obtained at the same location as the jet-related upflow, although a time-series analysis, such as a sit-and-stare observation, was necessary to confirm that the blue shift was related to the jet upflow.

### Sit-and-stare observation

A set of time-distance diagrams for the Fe i 846.8 nm and Ca ii 849.8 nm lines was derived to simulate a sit-and-stare observation (Fig. 5). The slit was located at the centre of the photospheric magnetic concentration (x = 2.1 Mm) along the y-direction. At the photospheric heights, we found that the localised enhancement of the line core intensity continued throughout the lifetime of the jet (Fig. 5a).
At the same location, the continuous MCP signals overlapped with the line-core enhancement, with strong MLP signals at the envelope (Fig. 5b). Moreover, the down-flow signals of 2-3 km s\({}^{-1}\) were associated with the hot and magnetised core of the flux tube (Fig. 5c). At the chromospheric heights, both the dark envelope and the bright core can be well distinguished, especially in the later phase of the jet (\(\Delta t\) > 5 min) (Fig. 5d). The axial field was detectable using MCP because the signal of approximately 2 % of \(I_{\rm cont}\) was well above the polarimetric sensitivity of SUNRISE III/SCIP (\(1\sigma=3\times 10^{-4}\) of \(I_{\rm cont}\)) (Fig. 5e). The amplitude of the MLP signals in the arc-like structure was weaker, so it would be difficult to distinguish signal from noise using MLP alone in this case. Because the arc-like structure had a width of 0.5 Mm and a lifetime of more than 5 min, spatio-temporal binning will increase the S/N ratio. The arc structure in MLP originated from the field lines at the envelope of the flux tube, which were writhing counterclockwise. An upward Doppler velocity of approximately 1 km s\({}^{-1}\) above the arc indicates the upward motion of the writhing flux tube (Fig. 5f). Because the fluctuations of the Doppler velocity in the ambient region were comparable to those of the flux tube, it was necessary to combine the Doppler and polarimetric signals to infer the velocity of the flux tube.

Figure 2: Pearson's product moment correlations between synthetic data in the (a) Fe i and (b) Ca ii lines and MHD variables as a function of height. The black, orange, blue, and green lines represent the correlation between temperature & \(I_{\rm core}\), horizontal field strength (\(B_{\rm h}=\sqrt{B_{\rm x}^{2}+B_{\rm y}^{2}}\)) & MLP, vertical field strength & MCP, and vertical velocity & \(V_{\rm LOS}\), respectively. Symbols designate the height where each correlation is maximum.

### Detection of linear polarization signals

Analysing the linear polarisation signals in chromospheric lines is challenging because they are generally less sensitive to the magnetic field due to their lower effective Lande factors. So, in our case, it may be advisable to use the reference variable TLP instead of MLP to analyse the Ca ii linear polarisation signals. This is because the averaging operation over wavelength reduces the noise level, whereas the maximum operation increases the noise level. For instance, when applying Gaussian noise of \(4\times 10^{-4}\) of \(I_{\rm cont}\), both MLP and TLP signals became comparable to the noise level (Fig. 6a & b). Performing a spatial binning (the spatial sampling increasing from 0.1 to 0.2 arcsec) increased the signal-to-noise ratio up to 6 (Fig. 6c & d). The arc structure was still visible owing to its spatial scale of approximately 1 Mm, although further binning destroyed the structure. Performing an additional binning in the time domain (temporal sampling increasing from 10 to 20 s) further increases the signal-to-noise ratio to more than 6 (Fig. 6e & f) in the most prominent phase. To optimize the observations with SCIP, it is important to consider the trade-off between temporal cadence and polarimetric accuracy, given that SCIP is a slit-scanning spectro-polarimeter. In this study, the magnetic arc had a spatial scale of approximately 1 Mm, and its evolution could be effectively resolved with a cadence of less than 2 minutes.
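The binning gains quoted above follow from uncorrelated Gaussian noise averaging down as the square root of the number of summed samples. The short sketch below makes the arithmetic explicit; the arc amplitude of \(1.2\times 10^{-3}\) of \(I_{\rm cont}\) is an assumed, illustrative value chosen to reproduce the quoted S/N levels:

```python
import numpy as np

sigma_pix = 4e-4   # assumed 1-sigma noise per pixel, in units of I_cont
signal = 1.2e-3    # assumed TLP amplitude of the magnetic arc (illustrative)

def snr(n_spatial, n_temporal, sig=signal, noise=sigma_pix):
    # Averaging n uncorrelated samples reduces Gaussian noise by sqrt(n)
    return sig / (noise / np.sqrt(n_spatial * n_temporal))

print(snr(1, 1))   # raw 0.1 arcsec / 10 s sampling:      S/N = 3.0
print(snr(4, 1))   # 2x2 spatial binning (0.2 arcsec):    S/N = 6.0
print(snr(4, 2))   # plus 2x temporal binning (20 s):     S/N ~ 8.5
```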
By selecting an integration time of 10.24 seconds, the slit scan covers an area of 1 arcsec times the slit size in 1.9 minutes. In addition, this integration time will produce a polarization accuracy of \(4\times 10^{-4}\) of \(I_{\rm cont}\) (1\(\sigma\)) without spatial or temporal summing in the Ca ii 849.8 nm line. Although the FOV is narrower than the magnetic arc, a S/N ratio greater than 6 can be obtained by spatially summing up to two pixels (Fig. 6c). A narrower FOV can further improve the S/N ratio by temporal binning.

### Driving process of the jet

During the jet, the foot point of the twisted field sank and swirled counterclockwise from the edge to the centre of the magnetic concentration in the photosphere. Simultaneously, the twisted part moved upward and became a vertical axial field (Fig. 7). At \(\Delta t\) = -1 min, upflow A, which was about to collide with the transition region, was connected to the twisted field (Fig. 7a). We temporally traced a group of field lines that were connected to the upflowing material assuming an ideal MHD process, and found that the twisted field lines evolved into the axial field with time (Fig. 7b & c). As the twisted part moved upward, part of the cool material inside the flux tube was lifted to the corona to form the jet, while the other part flowed down along the axial field. After the field lines became the flux tube's axial field, the ambient plasma continuously supplied another twisted field, which maintained the tornado-like structures, as observed in the horizontal field evolution (columns g & h in Fig. 3).

Figure 3: Time evolution of the synthetic images (MCP & MLP) and the related MHD variables during the jet (-6 min \(<\Delta t<\) +9 min). From left to right column, (a) \(B_{z}\) at z = 0.27 Mm, (b) MCP for the Fe i line, (c) \(B_{z}\) at z = 0.62 Mm, (d) MCP for the Ca ii line, (e) \(B_{\rm h}\) at z = 0.06 Mm, (f) MLP for the Fe i line, (g) \(B_{\rm h}\) at z = 0.56 Mm, and (h) MLP for the Ca ii line are shown. The red arrows in columns (e) and (g) indicate the horizontal component of the magnetic field vector. The black line segments in columns (f) and (h) represent the LPA where polarization signals were larger than 3\(\times 10^{-4}\) of \(I_{\rm cont}\).

There are several candidates for the triggering process of upflow A. The first candidate is the flux merger at \(\Delta t\) = -4 min. By temporally tracing field lines in sheets B & C, we found that the field lines were swirling counterclockwise when approaching the intersection of the granular lanes. Because B\({}_{\rm z}>\) 0 in the flux sheets and the current density in the \(z\)-direction was negative (inferred from column g in Fig. 3), the additional counterclockwise vorticity drove the upward propagating Alfvenic motion. This Alfvenic motion produces upflow A through nonlinear wave pressure (Hollweg et al., 1982). The second candidate is the downflow along the axis of the flux tube. This downflow contributes to the upflow via the rebounding process (Sterling & Hollweg, 1988; Kato et al., 2016). The third candidate is magnetic reconnection between the twisted field and its overlying field. This reconnection may contribute to the upflow via slingshot effects, similar to the reconnection above the emerging flux (Yokoyama & Shibata, 1996). A combination of the three candidates could contribute to the upflow, although further theoretical investigation is needed to quantitatively measure the degree of contribution of each process.
### Summary of the evolving jet

Herein, we briefly summarise the important physics and observables during the evolution of the chromospheric jet.

* At \(\Delta t\) = -6 min, two flux sheets B and C were approaching the intersection of granular lanes. This motion can be detected by tracing enhanced MCP signals or bright I\({}_{\rm core}\) in the Fe i line (Fig. 3 & 4).
* At \(\Delta t\) = -4 min, the two flux sheets merged. After the flux merger, the down-flowing hot core was continuously observed in the Fe i line at the foot point during the lifetime of the jet (Fig. 4 & 5).
* At \(\Delta t\) = 0 min, the twisted field line moved up to collide with the transition region (Fig. 1 & 7). The twisted field lines can be observed as arc-like linear polarization signals with sufficiently strong amplitude in the Ca ii line (Fig. 3 & 6). The arc-like structure was dark in the line core, showing blueshifted velocities (Fig. 5). Moreover, a bright core and a dark envelope in Ca ii would be useful to specify the jet structure.
* The magnetic-tornado-like structure with a hot core and cool envelope continued to exist during the lifetime of the jet (0 \(<\Delta t<\) 9 min). This feature would be detectable from the I\({}_{\rm core}\), MLP, and TLP signals in Ca ii (Fig. 3, 4, 5, and 6).

## 4 Conclusion

Although there are several candidate mechanisms for generating chromospheric jets, it has been difficult to conclusively determine the driving mechanism directly from observations. Because the magnetic field plays a key role, spectro-polarimetric observations with high spatial and spectral resolution are essential for distinguishing between competing mechanisms. We found that a flux-sheet merger, a bright and down-flowing core, and arc-like linear polarisation signals with a dark and blue-shifted envelope are expected from chromospheric jets, although we analysed only one chromospheric jet appearing in the RMHD simulation. These properties are the manifestation of the transition process from the ambient twisted field to the axial field, which is consistent with upwardly propagating nonlinear Alfvenic waves. The triggering process is still under investigation, although we considered that the key mechanisms would be the flux merger, the downflow along the core, or magnetic reconnection above the twisted field. Our prediction will be a useful tool for distinguishing the driving mechanisms of chromospheric jets when combined with future observations, such as SUNRISE III, DKIST, or EST. An appropriate time cadence, integration time, and field of view should be selected to fully capture the characteristic magnetic field associated with the chromospheric jets.

Figure 4: Time evolution of the synthetic images (\(I_{\rm core}\) & \(V_{\rm LOS}\)) and the related MHD variables during the jet (-6 min \(<\Delta t<\) 9 min). From left to right column, (a) \(T_{\rm e}\) at z = 0.27 Mm, (b) \(I_{\rm core}\) for the Fe i line, (c) \(T_{\rm e}\) at z = 0.77 Mm, (d) \(I_{\rm core}\) for the Ca ii line, (e) \(V_{\rm z}\) at z = 0.09 Mm, (f) \(V_{\rm LOS}\) for the Fe i line, (g) \(V_{\rm z}\) at z = 1.01 Mm, and (h) \(V_{\rm LOS}\) for the Ca ii line are shown.

## Acknowledgements

We would like to express our sincere gratitude to the reviewer for the invaluable feedback and constructive suggestions on our manuscript. Numerical computations for the radiation MHD simulation were performed on a Cray XC30 supercomputer at the Centre for Computational Astrophysics, National Astronomical Observatory of Japan.
Numerical analyses and Stokes synthesis were performed on the analysis servers at the Centre for Computational Astrophysics at the National Astronomical Observatory of Japan. A part of this study was conducted using computational resources from the Centre for Integrated Data Science at the Institute for Space-Earth Environmental Research, Nagoya University, through a joint research program. This work was supported by JSPS KAKENHI Grant Number JP18H05234 (PI: Y. Katsukawa).

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.

## References

* Barthol et al. (2011) Barthol P., et al., 2011, Sol. Phys., 268, 1
* Beckers (1968) Beckers J. M., 1968, Sol. Phys., 3, 367
* Bonet et al. (2008) Bonet J. A., Marquez I., Sanchez Almeida J., Cabello I., Domingo V., 2008, ApJ, 687, L131
* Brandt et al. (1988) Brandt P. N., Scharmer G. B., Ferguson S., Shine R. A., Tarbell T. D., 1988, Nature, 335, 238
* Hollweg et al. (1982) Hollweg J. V., Jackson S., Galloway D., 1982, Sol. Phys., 75, 35
* Iijima (2016) Iijima H., 2016, PhD thesis, University of Tokyo, Department of Earth and Planetary Environmental Science
* Iijima & Yokoyama (2015) Iijima H., Yokoyama T., 2015, ApJ, 812, L30
* Iijima & Yokoyama (2017) Iijima H., Yokoyama T., 2017, ApJ, 848, 38
* Joshi & de la Cruz Rodriguez (2018) Joshi J., de la Cruz Rodriguez J., 2018, A&A, 619, A63
* Kato et al. (2016) Kato Y., Steiner O., Hansteen V., Gudiksen B., Wedemeyer S., Carlsson M., 2016, ApJ, 827, 7
* Kato et al. (2011) Kato Y., Steiner O., Steffen M., Suematsu Y., 2011, ApJ, 730, L24
* Katsukawa et al. (2020) Katsukawa Y., del Toro Iniesta J. C., Solanki S. K., Kubo M., Hara H., Shimizu T., Oba T., Kawabata Y., Tsuzuki T., Uraguchi F., Nodomi Y., Shimota K., Tamura T., Suematsu Y., Ishikawa R., Kano R., Matsumoto T., Ichimoto K., Nagata S., Quintero Noda C., Anan T., Orozco Suarez D., Balaguer Jimenez M., Lopez Jimenez A. C., Cobos Carrascosa J. P., Feller A., Riethmueller T., Gandorfer A., Lagg A., 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 11447 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Sunrise Chromospheric Infrared spectroPolarimeter (SCIP) for SUNRISE III: system design and capability. p. 114470Y
* Kitiashvili et al. (2013) Kitiashvili I. N., Kosovichev A. G., Lele S. K., Mansour N. N., Wray A. A., 2013, ApJ, 770, 37
* Kudoh & Shibata (1999) Kudoh T., Shibata K., 1999, ApJ, 514, 493
* Landi Degl'Innocenti & Landolfi (2004) Landi Degl'Innocenti E., Landolfi M., 2004, Polarization in Spectral Lines. Vol. 307
* Leenaarts et al. (2009) Leenaarts J., Carlsson M., Hansteen V., Rouppe van der Voort L., 2009, ApJ, 694, L128
* Morosin et al. (2022) Morosin R., de la Cruz Rodriguez J., Diaz Baso C. J., Leenaarts J., 2022, A&A, 664, A8
* Nelson et al. (2019) Nelson C. J., Freij N., Bennett S., Erdelyi R., Mathioudakis M., 2019, ApJ, 883, 115
* Osterbrock (1961) Osterbrock D. E., 1961, ApJ, 134, 347
* Parker (1978) Parker E. N., 1978, ApJ, 221, 368
* Quintero Noda et al. (2022) Quintero Noda C., et al., 2022, A&A, 666, A21
* Quintero Noda et al. (2019) Quintero Noda C., Iijima H., Katsukawa Y., Shimizu T., Carlsson M., de la Cruz Rodriguez J., Ruiz Cobo B., Orozco Suarez D., Oba T., Anan T., Kubo M., Kawabata Y., Ichimoto K., Suematsu Y., 2019, MNRAS, 486, 4203
* Quintero Noda et al.
(2017) Quintero Noda C., Kato Y., Katsukawa Y., Oba T., de la Cruz Rodriguez J., Carlsson M., Shimizu T., Orozco Suarez D., Ruiz Cobo B., Kubo M., Anan T., Ichimoto K., Suematsu Y., 2017, MNRAS, 472, 727
* Rimmele et al. (2020) Rimmele T. R., et al., 2020, Sol. Phys., 295, 172
* Robustini et al. (2018) Robustini C., Leenaarts J., de la Cruz Rodriguez J., 2018, A&A, 609, A14
* Roy (1973) Roy J. R., 1973, Sol. Phys., 28, 95
* Shibata et al. (2007) Shibata K., Nakamura T., Matsumoto T., Otsuji K., Okamoto T. J., Nishizuka N., Kawate T., Watanabe H., Nagata S., UeNo S., Kitai R., Nozawa S., Tsuneta S., Suematsu Y., Ichimoto K., Shimizu T., Katsukawa Y., Tarbell T. D., Berger T. E., Lites B. W., Shine R. A., Title A. M., 2007, Science, 318, 1591
* Shibata et al. (1982) Shibata K., Nishikawa T., Kitai R., Suematsu Y., 1982, Sol. Phys., 77, 121
* Siu-Tapia et al. (2020) Siu-Tapia A. L., Bellot Rubio L. R., Orozco Suarez D., Gafeira R., 2020, A&A, 642, A128
* Spruit (1979) Spruit H. C., 1979, Sol. Phys., 61, 363
* Sterling & Hollweg (1988) Sterling A. C., Hollweg J. V., 1988, ApJ, 327, 950
* Tziotziou et al. (2020) Tziotziou K., Tsiropoula G., Kontogiannis I., 2020, A&A, 643, A166
* Uitenbroek (2001) Uitenbroek H., 2001, ApJ, 557, 389
* Wedemeyer-Bohm & Rouppe van der Voort (2009) Wedemeyer-Bohm S., Rouppe van der Voort L., 2009, A&A, 507, L9
* Wedemeyer-Bohm et al. (2012) Wedemeyer-Bohm S., Scullion E., Steiner O., Rouppe van der Voort L., de La Cruz Rodriguez J., Fedun V., Erdelyi R., 2012, Nature, 486, 505
* Yokoyama & Shibata (1996) Yokoyama T., Shibata K., 1996, PASJ, 48, 353

Figure 5: Time-distance diagram of (a) \(I_{\rm core}\), (b) MCP, and (c) \(V_{\rm LOS}\) derived from the synthetic spectra of the Fe i 846.8 nm line, and (d) \(I_{\rm core}\), (e) MCP, and (f) \(V_{\rm LOS}\) derived from the synthetic spectra of the Ca ii 849.8 nm line. Contours of the 2, 4, and 6 (1, 2, and 3) sigma levels in MLP were over-plotted in panels (a) to (c) ((d) to (f)). The slit position was taken along x = 2.1 Mm.

Figure 6: Linear polarization signals from the Ca ii 849.8 nm line at \(\Delta t\) = 5 min: (a) TLP with noise, (b) MLP with noise, (c) TLP with noise, binned by 2 in the spatial domain, (d) MLP with noise, binned by 2 in the spatial domain, (e) TLP with noise, binned by 2 in both the spatial and time domains, (f) MLP with noise, binned by 2 in both the spatial and time domains. The colour corresponds to the S/N ratio. For temporal binning, the data at \(\Delta t\) = 5 min - 10 s were also used.
2310.10947
Weyl channels for multipartite systems
Quantum channels, a subset of quantum maps, describe the unitary and non-unitary evolution of quantum systems. We study a generalization of the concept of Pauli maps to the case of multipartite high dimensional quantum systems through the use of the Weyl operators. The condition for such maps to be valid quantum channels, i.e. complete positivity, is derived in terms of Fourier transform matrices. From these conditions, we find the extreme points of this set of channels and identify an elegant algebraic structure nested within them. In turn, this allows us to expand upon the concept of "component erasing channels" introduced in earlier work by the authors. We show that these channels are completely characterized by elements drawn from finite cyclic groups. An algorithmic construction for such channels is presented and the smallest subsets of erasing channels which generate the whole set are determined.
Tomas Basile, Jose Alfredo de Leon, Alejandro Fonseca, Francois Leyvraz, Carlos Pineda
2023-10-17T02:45:47Z
http://arxiv.org/abs/2310.10947v1
# Weyl channels for multipartite systems

###### Abstract

Quantum channels, a subset of quantum maps, describe the unitary and non-unitary evolution of quantum systems. We study a generalization of the concept of Pauli maps to the case of multipartite high dimensional quantum systems through the use of the Weyl operators. The condition for such maps to be valid quantum channels, i.e. complete positivity, is derived in terms of Fourier transform matrices. From these conditions, we find the extreme points of this set of channels and identify an elegant algebraic structure nested _within_ them. In turn, this allows us to expand upon the concept of 'component erasing channels' introduced in earlier work by the authors. We show that these channels are completely characterized by elements drawn from finite cyclic groups. An algorithmic construction for such channels is presented and the smallest subsets of erasing channels which generate the whole set are determined.

pacs: 03.65.Yz, 03.65.Ta, 05.45.Mt

## I Introduction

The description of open quantum systems [1; 2] serves a twofold purpose. Firstly, it lies at the core of the measurement problem [3; 4], thus bearing a fundamental interest. On the other hand, it describes quantum systems where the inevitable interaction with an environment is taken into account [5]. For most implementations of quantum devices it is crucial to understand and control such unwanted interaction. In both cases, a natural language for such a description is that of quantum channels, which have been the subject of intense research [6].

The properties of a quantum channel dictate the characteristics of the associated quantum dynamics. In the realm of qubits, the set of quantum channels has been explored and, thanks to a better understanding of its geometry, several physical properties of the set, such as divisibility [7; 8; 9], non-Markovianity [10; 11], and channel capacity [12], among others, have been unraveled. In a previous paper [13] we proposed and studied a class of channels acting on multi-qubit systems that either erased or preserved the Pauli components of the state. These are the so-called Pauli component erasing (PCE) maps, which are an important subset of the Pauli maps. We found that every PCE channel corresponds uniquely to a vector subspace of a discrete vector space. Such channels can be associated with measurements and asymptotic Lindbladian evolution.

Moreover, most of the applications in the field of quantum information have been built upon qubits. Nevertheless, many real-world realizations of quantum systems have more than two levels that can be used to provide an important technical advantage. Such an advantage is indeed employed to develop several important tasks like quantum cryptography [14; 15], quantum computation [16; 17; 18], violation of Bell inequalities [19], and randomness generation [20], among others. For this reason, the study of high-dimensional and multiparticle systems is of relevance.

In this article, we introduce the concept of Weyl channels for systems composed of many particles, allowing each of these to have a different dimension. We begin by defining these channels in sec. II as diagonal channels in the basis of multi-particle Weyl matrices, which are tensor products of the well-known Weyl matrices. Moving forward, we proceed to diagonalize the Choi-Jamiolkowski matrix, revealing a linear relationship between the eigenvalues and those of the channel.
From this, we find two significant properties of the set of Weyl channels: (1) its extreme points in sec. III, and (2) a subgroup structure of all Weyl channels in sec. IV. Then, in sec. V we extend the notion of _component erasing_ channels by introducing the Weyl erasing channels. Given their semigroup property, we describe the generator subset by means of the aforementioned algebraic structure of Weyl channels. Finally, we wrap up and conclude in sec. VI.

## II Weyl channels

A well-known generalization of the Pauli matrices to arbitrary \(d\)-dimensional Hilbert spaces was introduced by Weyl [21] and involves the following unitary matrices [22]:

\[U(m,n)=\sum_{k=0}^{d-1}\omega^{mk}\left|k\right\rangle\left\langle k+n\right|.\tag{1}\]

Here we introduce the notation we shall use throughout: \(\omega\) is the primitive \(d\)-th root of unity \(\exp(2\pi i/d)\). All arithmetical operations over latin indices are taken modulo \(d\). We will further be mainly concerned with systems of \(N\) qudits, for which we introduce the following standard notations:

\[U(\vec{m},\vec{n})=\bigotimes_{\alpha=1}^{N}U(m_{\alpha},n_{\alpha}).\tag{2}\]

Greek indices will always run over a range from \(1\) to \(N\), and the arithmetic operations over them will always be the usual ones. When the range is not specified, it will be from \(1\) to \(N\). We now write for example

\[U(\vec{m},\vec{n})=\sum_{\vec{k}}\omega^{\vec{m}\cdot\vec{k}}\left|\vec{k}\right\rangle\left\langle\vec{k}+\vec{n}\right|,\tag{3}\]

the notational conventions being self-explanatory. Note further that all our results can routinely be extended to the more complicated case in which the different particles in the \(N\)-particle system have different dimensions, \(d_{\alpha}\). The "vectors" \(\vec{m}\) are then replaced by lists of integers, with \(0\leq m_{\alpha}\leq d_{\alpha}-1\). Whereas this complicates the notation considerably, no points of essential interest are thereby introduced. We thus leave it to the interested reader to develop these issues. When non-trivial points arise in this respect, we shall explicitly point this out.

These unitary matrices satisfy certain elementary properties:

\[\begin{split}\mathrm{tr}\,U(m,n)^{\dagger}U(m^{\prime},n^{\prime})&=d\,\delta_{mm^{\prime}}\delta_{nn^{\prime}}\\ U(m,n)U(m^{\prime},n^{\prime})&=\omega^{m^{\prime}n}U(m+m^{\prime},n+n^{\prime})\\ U(m,n)U(m^{\prime},n^{\prime})&=\omega^{m^{\prime}n-mn^{\prime}}U(m^{\prime},n^{\prime})U(m,n)\\ U(m,n)^{\dagger}&=\omega^{mn}U(-m,-n)\end{split}\tag{4}\]

as well, of course, as their vectorial equivalents.

We now define Weyl maps and the corresponding channels: any density matrix on the space of \(N\) qudits, that is, on \((\mathbb{C}^{d})^{\otimes N}\), can be expressed as

\[\rho=\frac{1}{d^{N}}\sum_{\vec{m},\vec{n}}\alpha(\vec{m},\vec{n})U(\vec{m},\vec{n})\tag{5}\]

where \(\alpha(\vec{m},\vec{n})\) satisfies \(\alpha(\vec{m},\vec{n})=\omega^{\vec{m}\cdot\vec{n}}\alpha^{*}(-\vec{m},-\vec{n})\) in order for \(\rho\) to satisfy the condition of hermiticity. More intricate conditions need to be satisfied in order to yield a positive matrix, but we shall not be concerned with these. A Weyl map is now defined as follows

\[\rho\rightarrow\rho^{\prime}=\mathcal{E}[\rho]=\frac{1}{d^{N}}\sum_{\vec{m},\vec{n}}\tau(\vec{m},\vec{n})\alpha(\vec{m},\vec{n})U(\vec{m},\vec{n}).\tag{6}\]

Here the \(\tau(\vec{m},\vec{n})\) are complex numbers, whereas \(\rho\) is the density matrix given in (5).
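To make the algebra concrete, the following short numerical sketch (our illustration, not part of the paper) builds the single-qudit operators of eq. (1) and checks the product and orthogonality relations of eq. (4):

```python
import numpy as np

def weyl(m, n, d):
    """Single-qudit Weyl operator U(m,n) = sum_k w^{mk} |k><k+n|, eq. (1)."""
    w = np.exp(2j * np.pi / d)
    U = np.zeros((d, d), dtype=complex)
    for k in range(d):
        U[k, (k + n) % d] = w ** (m * k)
    return U

d = 3
w = np.exp(2j * np.pi / d)
for m, n, mp, nq in [(1, 2, 2, 1), (0, 1, 1, 0), (1, 1, 1, 1)]:
    A, B = weyl(m, n, d), weyl(mp, nq, d)
    # eq. (4), second line: U(m,n) U(m',n') = w^{m'n} U(m+m', n+n')
    assert np.allclose(A @ B, w ** (mp * n) * weyl((m + mp) % d, (n + nq) % d, d))
    # eq. (4), first line: tr U(m,n)^dag U(m',n') = d delta_{mm'} delta_{nn'}
    assert np.isclose(np.trace(A.conj().T @ B), d * (m == mp) * (n == nq))
```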
In other words, if the \(U(\vec{m},\vec{n})\) are viewed as generators of the vector space of all Hermitian matrices, the Weyl maps act _diagonally_ on this set. We now wish to find the conditions necessary and sufficient for \(\mathcal{E}\) to be a quantum channel, that is, to be trace and hermiticity preserving, as well as completely positive. For the former two conditions, we require

\[\tau(\vec{m},\vec{n})=\tau(-\vec{m},-\vec{n})^{*},\tag{7a}\]
\[\tau(0,0)=1.\tag{7b}\]

To verify complete positivity, we must check the circumstances under which the Choi-Jamiolkowski matrix, given by

\[\mathcal{D}=\frac{1}{d^{N}}\sum_{\vec{m},\vec{n}}\tau(\vec{m},\vec{n})U(\vec{m},\vec{n})\otimes U(\vec{m},\vec{n})^{*}\tag{8}\]

is positive semidefinite. Interestingly, eq. (8) is the corresponding Choi-Jamiolkowski matrix even if the \(U(\vec{m},\vec{n})\) are not Weyl operators but an arbitrary basis of Hilbert-Schmidt space, as shown in Appendix A.

To specify the criteria for the matrix in eq. (8) to be positive semidefinite, we evaluate its eigenvalues \(\lambda(\vec{r},\vec{s})\). This is easily done after noticing that the various elements of the sum, namely the \(U(\vec{m},\vec{n})\otimes U(\vec{m},\vec{n})^{*}\), all commute for arbitrary values of \(\vec{m}\) and \(\vec{n}\), as readily follows from (4):

\[\begin{split}\Big{(}U(m,n)\otimes U(m,n)^{*}\Big{)}\Big{(}U(m^{\prime},n^{\prime})\otimes U(m^{\prime},n^{\prime})^{*}\Big{)}&=\Big{(}U(m,n)U(m^{\prime},n^{\prime})\Big{)}\otimes\Big{(}U(-m,n)U(-m^{\prime},n^{\prime})\Big{)}\\ &=\Big{(}U(m+m^{\prime},n+n^{\prime})\Big{)}\otimes\Big{(}U(-(m+m^{\prime}),n+n^{\prime})\Big{)}\end{split}\tag{9}\]

The symmetry of the final expression proves the claim, and the extension to the case of arbitrary \(N\) is straightforward. It now remains to determine the eigenvalues of \(U(\vec{m},\vec{n})\), which, given its tensor product structure [see eq. (2)], can be reduced to the single-qudit case of \(U(m,n)\). These can be calculated directly by studying the recursion relation that follows from the eigenvalue equation for the Weyl operators, see Appendix B. One can then readily see that the eigenvalues \(\mu(r,s)\) of \(U(m,n)\otimes U(m,n)^{*}\) take the form

\[\mu(r,s)=\omega^{mr-ns},\tag{10}\]

where \(r\) and \(s\) are arbitrary integers modulo \(d\) that serve as labels for the eigenvalue. The degeneracy pattern of these eigenvalues is complicated, but since our focus is on the positivity of \(\mathcal{D}\), we do not need to consider these details. The set of eigenvalues of \(U(\vec{m},\vec{n})\otimes U(\vec{m},\vec{n})^{*}\) is then given by

\[\mu(\vec{r},\vec{s})=\omega^{\vec{m}\cdot\vec{r}-\vec{n}\cdot\vec{s}}.\tag{11}\]

The condition for the positive semidefiniteness of \(\mathcal{D}\) is thus that, for all \(\vec{r}\) and \(\vec{s}\),

\[d^{-N}\sum_{\vec{m},\vec{n}}\tau(\vec{m},\vec{n})\omega^{\vec{m}\cdot\vec{r}-\vec{n}\cdot\vec{s}}=\lambda(\vec{r},\vec{s})\geq 0.\tag{12}\]

Note that condition (7a) on \(\tau(\vec{m},\vec{n})\) straightforwardly shows that the left-hand side of (12) is real, so that the inequality is meaningful. The \(\lambda(\vec{r},\vec{s})\) are the eigenvalues of \(\mathcal{D}\). They can also be used to characterize the Weyl channel \(\mathcal{E}\). Inverting the relation (12) we get

\[\tau(\vec{m},\vec{n})=d^{-N}\sum_{\vec{r},\vec{s}}\lambda(\vec{r},\vec{s})\omega^{-\vec{m}\cdot\vec{r}+\vec{n}\cdot\vec{s}},\tag{13a}\]
\[\sum_{\vec{r},\vec{s}}\lambda(\vec{r},\vec{s})=d^{N}.\tag{13b}\]
Here (13b) follows from \(\operatorname{tr}\mathcal{D}=d^{N}\), which is a consequence of (7b) and (8). From (8) and (10) it follows that the \(\tau(\vec{m},\vec{n})\) and the \(\lambda(\vec{r},\vec{s})\) are connected by the following _linear_ relationship:

\[\tau(\vec{m},\vec{n})=\sum_{\vec{r},\vec{s}}\bigotimes_{\alpha}[F_{\alpha}\otimes F_{\alpha}^{*}]\,(\vec{m},\vec{n};\vec{r},\vec{s})\lambda(\vec{r},\vec{s}),\tag{14}\]

where \(F_{\alpha}\) is the quantum Fourier transform matrix for dimension \(d_{\alpha}\) in the general case, and of dimension \(d\) in the case we shall generally study (see Fig. 1). We have therefore obtained a full characterization of Weyl channels: choosing arbitrary \(\lambda(\vec{r},\vec{s})\) that are non-negative and add up to \(d^{N}\), the \(\tau(\vec{m},\vec{n})\) given by (13a) define a Weyl channel. It is important to highlight that the set of channels introduced in the present work is different from other generalizations of Pauli channels introduced previously [23; 24; 25]. On the other hand, similar expressions for the eigenvalues associated with random unitary channels on single \(d\)-level systems have been presented in [26; 27].

## III Set of extreme points

The set of Weyl channels is clearly convex, since equations (13) imply that any Weyl channel is given by a convex sum of channels of the form

\[\tau_{\vec{r}_{0},\vec{s}_{0}}(\vec{m},\vec{n})=\omega^{-\vec{m}\cdot\vec{r}_{0}+\vec{n}\cdot\vec{s}_{0}},\tag{15}\]

where \(\vec{r}_{0},\vec{s}_{0}\) are fixed vectors whose elements are integer numbers modulo \(d\). Furthermore, we can see that the set of Weyl channels is in fact a \(d^{2N}-1\) dimensional simplex. Recall that all eigenvalues \(\lambda(\vec{r},\vec{s})\) of the Choi-Jamiolkowski matrix of a Weyl channel must be non-negative and sum up to \(d^{N}\) [see eq. (13b)]. The set of the \(\lambda(\vec{r},\vec{s})\) is thus the standard \(d^{2N}-1\) dimensional simplex. Since the connection (12) between the \(\lambda\)'s and the \(\tau\)'s is linear and invertible, the set of all \(\tau\)'s is also a \(d^{2N}-1\) dimensional simplex. Note however that the \(\tau\)'s are complex, so they are a priori part of a bigger \(2d^{2N}\) dimensional real vector space. Nonetheless, conditions (7) (which are automatically satisfied by the formulae (12)) additionally limit the \(\tau\)'s so that the number of degrees of freedom is back to \(d^{2N}-1\).

Moreover, the extreme points of the simplex of Weyl channels are given by the \(d^{2N}\) channels of equation (15). This is because the extreme points of the \(\lambda\)'s simplex are clearly

\[\lambda(\vec{r},\vec{s})=d^{N}\delta_{\vec{r},\vec{r}_{0}}\delta_{\vec{s},\vec{s}_{0}}.\tag{16}\]

Therefore, those of the set of Weyl channels are given by applying the transformation (13a) to these extreme points, obtaining as a result the channels of equation (15). In fact, these channels are the only Weyl channels with the property that for all \(\vec{m},\vec{n}\), \(|\tau(\vec{m},\vec{n})|=1\), as shown in the following theorem.

**Theorem 1**.: _A Weyl channel is an extreme point of the set of Weyl channels if and only if \(|\tau(\vec{m},\vec{n})|=1\) for all \(\vec{m},\vec{n}\)._

Proof.: We have already proved that extreme points are of the form (15), and therefore satisfy \(|\tau(\vec{m},\vec{n})|=1\), so we only need to prove the converse. Equation (13a) says that

\[\tau(\vec{m},\vec{n})=d^{-N}\sum_{\vec{r},\vec{s}}\lambda(\vec{r},\vec{s})\omega^{-\vec{m}\cdot\vec{r}+\vec{n}\cdot\vec{s}}.\tag{17}\]
Recall that the \(d^{-N}\lambda(\vec{r},\vec{s})\) are non-negative and add up to 1. It follows from the triangle inequality that if the sum on the right-hand side of (17) has more than one term, then \(|\tau(\vec{m},\vec{n})|<1\). Thus, a Weyl channel with \(|\tau(\vec{m},\vec{n})|=1\) must have all \(\lambda(\vec{r},\vec{s})\) equal to 0 except one, say \(\lambda(\vec{r}_{0},\vec{s}_{0})\). In other words, if the Choi-Jamiolkowski matrix of a Weyl channel has only one eigenvalue \(\lambda(\vec{r}_{0},\vec{s}_{0})\) different from zero, then that Weyl channel is an extreme point.

The simplest case that illustrates the result of this theorem are the single-qubit Weyl quantum channels. Given that the Weyl operators for \(d=2\) reduce to the Pauli operators, the extreme points for these are the vertices of the well-known tetrahedron of qubit quantum channels [28], which we illustrate in Fig. 2(a).

We can now characterize in greater detail these extreme points. For a given value of \(\vec{r}_{0},\vec{s}_{0}\), the effect of the channel given by (15) on a Weyl matrix \(U(\vec{m},\vec{n})\) is

\[\begin{split}\mathcal{E}_{\vec{r}_{0},\vec{s}_{0}}\left[U(\vec{m},\vec{n})\right]&=\omega^{-\vec{r}_{0}\cdot\vec{m}+\vec{s}_{0}\cdot\vec{n}}U(\vec{m},\vec{n})\\ &=U(\vec{s}_{0},\vec{r}_{0})U(\vec{m},\vec{n})U(\vec{s}_{0},\vec{r}_{0})^{\dagger}.\end{split}\tag{18}\]

We see therefore that the extreme points of the set of all Weyl channels are unitary channels. Since all Weyl channels are convex combinations of the extreme points [29], it immediately follows that all the Weyl channels are simply random unitary channels, constructed from the Weyl unitaries.

## IV A Mathematical Structure Within Weyl Channels

In this section, we focus on a subset of Weyl channels with physical relevance and mathematical beauty. We consider the Weyl channels which, when iterated infinitely, converge to channels that completely erase, preserve, or introduce phases to the projections of the density matrix onto the Weyl operator basis. Our main results include the characterization of the group property of this particular subset and a method to determine these channels. Specifically, we show that the corresponding channels can be obtained by identifying all subgroups of \(\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\) and their homomorphisms to \(\mathbb{Z}_{d}\).

### Subgroup property of Weyl channels

**Theorem 2**.: _Let \(\tau(\vec{m},\vec{n})\) and \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})\) have both norm 1. Then so does \(\tau(\vec{m}+\vec{m}^{\prime},\vec{n}+\vec{n}^{\prime})\) and additionally_

\[\tau(\vec{m}+\vec{m}^{\prime},\vec{n}+\vec{n}^{\prime})=\tau(\vec{m},\vec{n})\tau(\vec{m}^{\prime},\vec{n}^{\prime})\tag{19}\]

Proof.: From (13) it follows that, quite generally, the \(\tau(\vec{m},\vec{n})\) are convex combinations of complex numbers of the form \(\omega^{k}\), with \(k\) an integer. Nonetheless, the only such convex combinations having norm 1 are themselves numbers of the form \(\omega^{k}\), with \(k\) an integer; therefore \(\tau(\vec{m},\vec{n})\) and \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})\) are, under the hypotheses of the theorem, of the form \(\omega^{k}\) and \(\omega^{k^{\prime}}\), respectively.
Setting \(\tau(\vec{m},\vec{n})=\omega^{k}\) in (13a) and conveniently rewriting the equation results in

\[\omega^{k}=d^{-N}\sum_{l}\omega^{l}\sum_{\vec{r},\vec{s}}\lambda(\vec{r},\vec{s})\delta_{-\vec{m}\cdot\vec{r}+\vec{n}\cdot\vec{s},l}\tag{20}\]

and a similar expression for \(\omega^{k^{\prime}}\), replacing \(\vec{m}\) and \(\vec{n}\) with their primed versions. Since the \(\lambda(\vec{r},\vec{s})\) are positive and sum up to \(d^{N}\), it follows that the right-hand side is a convex sum of \(\omega^{l}\). For it to equal \(\omega^{k}\), an extreme point, only the term with \(l=k\) can be different from zero. It follows that \(\lambda(\vec{r},\vec{s})=0\) whenever \(-\vec{m}\cdot\vec{r}+\vec{n}\cdot\vec{s}\neq k\) or \(-\vec{m}^{\prime}\cdot\vec{r}+\vec{n}^{\prime}\cdot\vec{s}\neq k^{\prime}\). From this follows straightforwardly that again \(\lambda(\vec{r},\vec{s})=0\) whenever

\[-(\vec{m}+\vec{m}^{\prime})\cdot\vec{r}+(\vec{n}+\vec{n}^{\prime})\cdot\vec{s}\neq(k+k^{\prime})\tag{21}\]

which implies

\[\tau(\vec{m}+\vec{m}^{\prime},\vec{n}+\vec{n}^{\prime})=\omega^{k+k^{\prime}}=\tau(\vec{m},\vec{n})\tau(\vec{m}^{\prime},\vec{n}^{\prime}).\tag{22}\]

Figure 1: Visualization of the argument of the matrix elements \(\bigotimes_{\alpha}\left[F_{\alpha}\otimes F_{\alpha}^{*}\right](\vec{m},\vec{n};\vec{r},\vec{s})\) for different dimensions and numbers of particles; rows and columns are indexed by the double indices \((\vec{m},\vec{n})\) and \((\vec{r},\vec{s})\), respectively. This matrix maps the \(\tau(\vec{m},\vec{n})\) of a Weyl map to the eigenvalues \(\lambda(\vec{r},\vec{s})\) of its Choi-Jamiolkowski matrix, see eqs. (12) and (14). We show plots for systems of (a) qubits, (b) qutrits, and (c) single qudits. Notice that not only the total dimension is relevant, but also the number of particles; for instance, compare \(N=3\), \(d=2\) with \(N=1\), \(d=8\).

This means that the set of all \((\vec{m},\vec{n})\) such that \(\tau(\vec{m},\vec{n})\) has norm 1 forms an _additive subgroup_ of the abelian group

\[\mathcal{G}=\mathbb{Z}_{d}^{\oplus N}\oplus\mathbb{Z}_{d}^{\oplus N}\tag{23}\]

with respect to vector addition modulo \(d\). Note that this is one case where a significant difference arises when the \(N\) particles have different dimensions \(d_{\alpha}\); the vectors \((\vec{m},\vec{n})\) would then belong to the group

\[\mathcal{G}=\left(\bigoplus_{\alpha=1}^{N}\mathbb{Z}_{d_{\alpha}}\right)\oplus\left(\bigoplus_{\alpha=1}^{N}\mathbb{Z}_{d_{\alpha}}\right).\tag{24}\]

In other words, the set \(\mathcal{H}\subseteq\mathcal{G}\) on which \(\tau(\vec{m},\vec{n})\) has norm 1 forms a subgroup of \(\mathcal{G}\), and \(\tau\) can be seen as a homomorphism from \(\mathcal{H}\) to \(\mathbb{Z}_{d}\). To determine all \(\tau(\vec{m},\vec{n})\) of Weyl channels satisfying eq. (19), we must proceed in 2 steps:

1. Determine all subgroups \(\mathcal{H}\subseteq\mathcal{G}\).
2. Determine all homomorphisms from \(\mathcal{H}\) to \(\mathbb{Z}_{d}\).

In the following section we present an algorithm to determine all subgroups \(\mathcal{H}\) of \(\mathcal{G}\) and homomorphisms from \(\mathcal{H}\) to \(\mathbb{Z}_{d}\). We wish to remark that, given a quantum channel for which some of its coefficients \(\tau(\vec{m},\vec{n})\) satisfy eq. (19), the remaining coefficients are not necessarily null. They are, however, still restricted by the complete positivity condition, eq. (12).

### Weyl channels

To determine all \(\tau(\vec{m},\vec{n})\) of Weyl channels satisfying eq.
(19) we proceed in two steps. The first step involves identifying all subgroups of \(\mathcal{G}\) [cf. eq. (24)]. We begin by stating two relevant facts about finite abelian groups, and then discuss how to find the subgroups for the more general case of an abelian group \(\mathcal{G}\), which encompasses the majority of our discussion in this section. After that, we describe the second step, which is how to determine all phases of \(\tau(\vec{m},\vec{n})\) of a Weyl erasing channel by determining all homomorphisms from a subgroup to the roots of unity. We refer the reader to Appendix E, where the algorithm presented in this section is illustrated with several examples.

Whenever \(p\) and \(q\) are coprime, the group \(\mathbb{Z}_{pq}\) is isomorphic to \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{q}\). Therefore, we may use the prime decomposition of \(d_{\alpha}\) to separate each \(\mathbb{Z}_{d_{\alpha}}\) in eq. (24) as a sum of cyclic groups of prime power order. We proceed in this way for all \(\alpha\) in eq. (24), and then we group the terms corresponding to different primes, so \(\mathcal{G}\) can be written as

\[\mathcal{G}=\bigoplus_{p}\mathcal{G}_{p},\tag{25}\]

with \(\mathcal{G}_{p}=\bigoplus_{i}\mathbb{Z}_{p^{k_{i}}}\), for each prime \(p\) that appears in the decomposition of any of the \(d_{\alpha}\). Since every abelian group of order \(mn\), with \(m\) and \(n\) coprime, is the direct sum of an abelian group of order \(m\) and one of order \(n\), we can directly construct all subgroups of \(\mathcal{G}\) by finding all the subgroups of each \(\mathcal{G}_{p}\). In other words, although \(\mathcal{G}\) in eq. (24) may have a complicated decomposition, we focus only on determining the subgroups of \(\mathcal{G}_{p}\), which will be convenient to write as

\[\mathcal{G}_{p}=\bigoplus_{\alpha=1}^{r}\mathbb{Z}_{p^{M_{\alpha}}}\tag{26}\]

where the \(M_{\alpha}\) are in non-increasing order. The group \(\mathcal{G}_{p}\) is associated with the sequence \(\overline{M}=M_{1}\,\ldots\,M_{r}\), which is a _partition_ of \(M=\sum_{\alpha}M_{\alpha}\). Therefore, we will refer to \(\mathcal{G}_{p}\) as a group of type \(\overline{M}\). Furthermore, for any partition of \(M\) there exists an abelian group of order \(p^{M}\) that is unique up to an isomorphism [30]. If one has a subgroup \(\mathcal{H}_{p}\) of \(\mathcal{G}_{p}\), the corresponding partition, let us call it \(\overline{N}\), satisfies \(N_{\alpha}\leq M_{\alpha}\). On the other hand, once the non-increasing order for the partition of the group \(\mathcal{G}_{p}\) has been fixed, the corresponding partitions for the subgroups \(\mathcal{H}_{p}\) inherit a well-defined order from the group, and cannot therefore be taken in non-increasing order.

Another important fact about finite abelian groups is that they all have a basis; that is, they can be generated by the integer combinations of a set of elements. In our particular case, a simple way of choosing a basis is by picking a generating element for each cyclic group in eq. (26). We denote them \(\vec{e}_{\alpha}\), and therefore an arbitrary \(h\in\mathcal{G}_{p}\) can be _uniquely_ expressed as

\[h=\sum_{\alpha=1}^{r}n_{\alpha}\vec{e}_{\alpha},\tag{27}\]

where \(n_{\alpha}\in\mathbb{Z}_{p^{M_{\alpha}}}\), and the multiplication of a group element by an integer \(m\) is defined as the addition of the group element to itself repeated \(m\) times. The number \(r\) of elements in the basis is independent of the choice of basis and is known as the group's rank, \(r\).
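For concreteness, the decomposition of eq. (25) is easy to compute. The following helper (an illustration of ours, not the authors' code) takes the particle dimensions \(d_{\alpha}\) and returns, for each prime \(p\), the exponents \(M_{\alpha}\) defining \(\mathcal{G}_{p}\):

```python
from collections import defaultdict

def prime_power_decomposition(dims):
    """Split G = (+)_a (Z_{d_a} (+) Z_{d_a}) into its components G_p, eq. (25).

    Returns {p: exponents M_alpha in non-increasing order}, so that
    G_p = (+)_alpha Z_{p^M_alpha}.
    """
    exps = defaultdict(list)
    for d in dims:
        for _ in range(2):        # each particle contributes Z_d (+) Z_d
            n, p = d, 2
            while n > 1:          # trial division: d = prod_p p^k
                k = 0
                while n % p == 0:
                    n //= p
                    k += 1
                if k:
                    exps[p].append(k)
                p += 1
    return {p: sorted(ks, reverse=True) for p, ks in exps.items()}

# Two particles of dimensions 12 and 2, i.e. G = (Z_12 (+) Z_2) (+) (Z_12 (+) Z_2):
print(prime_power_decomposition([12, 2]))
# -> {2: [2, 2, 1, 1], 3: [1, 1]}: G_2 is of type 2 2 1 1 and G_3 of type 1 1
```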
The general idea for finding all subgroups of \(\mathcal{G}_{p}\) is to determine a subset of subgroups such that, upon applying all automorphisms \(T:\mathcal{G}_{p}\mapsto\mathcal{G}_{p}\), all others are found. We will say that two subgroups of \(\mathcal{G}_{p}\) are \(T\)-isomorphic when there is an automorphism \(T\) mapping one to the other. Then, to find the subgroups of \(\mathcal{G}_{p}\) we first determine any subset with the maximum number of subgroups that are not \(T\)-isomorphic. We call these "representative subgroups". By definition, applying all automorphisms \(T\) (which we describe how to find in Appendix C) to the representative subgroups, all other subgroups of \(\mathcal{G}_{p}\) are found.

Note the difference between the concept of isomorphism for the subgroups and the concept of \(T\)-isomorphism. The latter depends not only on the group structure of the subgroup \(\mathcal{H}\), but also on the way in which it is embedded in the group \(\mathcal{G}_{p}\). For instance, we can embed the group \(\mathbb{Z}_{2}\) in the group \(\mathbb{Z}_{2^{2}}\oplus\mathbb{Z}_{2}\) either as a subgroup of the first summand or as a subgroup of the second. In other words, the partition \(\overline{M}\) describing the full group is \(\overline{M}=2\,1\) and the subgroup \(\mathbb{Z}_{2}\) can be embedded with a partition \(0\,1\) as well as \(1\,0\). The two subgroups, being both isomorphic to \(\mathbb{Z}_{2}\), are abstractly isomorphic, but that isomorphism cannot be extended to an isomorphism of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\).

All subgroups of \(\mathcal{G}_{p}\) are found applying all its automorphisms \(T\) to the subgroups generated by the bases

\[\mathcal{B}=\{p^{s_{1}}\vec{e}_{1},\ldots,p^{s_{r}}\vec{e}_{r}\}\quad 0\leq s_{\alpha}\leq M_{\alpha}.\tag{28}\]

Nevertheless, two different selections \(\mathbb{S}=\{s_{\alpha}\}\) may determine bases of \(T\)-isomorphic subgroups, in other words, subgroups that are connected by an automorphism of \(\mathcal{G}_{p}\). For example, consider a group \(\mathcal{G}_{p}\) of type \(\overline{M}=2\,2\,1\,1\). The selections \(\mathbb{S}=0\,1\,0\,1\) and \(\mathbb{S}^{\prime}=1\,0\,1\,0\) determine bases of \(T\)-isomorphic subgroups, because the automorphism defined as \(T(\vec{e}_{1})=\vec{e}_{2}\), \(T(\vec{e}_{2})=\vec{e}_{1}\), \(T(\vec{e}_{3})=\vec{e}_{4}\) and \(T(\vec{e}_{4})=\vec{e}_{3}\) maps one to the other. From each of these sets of \(T\)-isomorphic subgroups we can pick an arbitrary element, which will be called _the representative subgroup_.

To find the representative subgroups we need a criterion to determine when two bases \(\mathcal{B}\) of the form (28) generate \(T\)-isomorphic groups. Let us denote \(\tilde{M}_{1},\ldots,\tilde{M}_{q}\) the \(q\) different values in the sequence of numbers in \(\overline{M}\) (for instance, if \(\overline{M}=2211\), then \(q=2\) and \(\tilde{M}_{1}=2,\tilde{M}_{2}=1\)). Furthermore, we define the subset \(S_{j}=\{s_{\alpha},\,\forall\,\alpha:M_{\alpha}=\tilde{M}_{j}\}\) of \(\mathbb{S}\); that is, \(S_{j}\) is the subset of \(\mathbb{S}\) formed by all the \(s_{\alpha}\) whose indices \(\alpha\) correspond to the \(M_{\alpha}\) that are equal to \(\tilde{M}_{j}\). Then, the criterion is the following: two different sets \(\mathbb{S}\) and \(\mathbb{S}^{\prime}\) determine bases of \(T\)-isomorphic subgroups whenever their corresponding subsets \(S_{j}\) and \(S^{\prime}_{j}\) are the same for all \(j\).
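This criterion admits a compact implementation: reduce each selection to a canonical form by sorting the \(s_{\alpha}\) within every block of equal \(M_{\alpha}\), and identify two selections when their canonical forms coincide. A minimal sketch of ours, under the assumptions above:

```python
from itertools import groupby, product

def representative_selections(M):
    """One selection S = (s_1, ..., s_r), 0 <= s_a <= M_a, per class of
    T-isomorphic subgroup bases of eq. (28), for a group of type M.

    Two selections are identified when, block by block (runs of equal M_a),
    they contain the same multiset of exponents s_a."""
    blocks = [len(list(grp)) for _, grp in groupby(M)]
    reps = set()
    for S in product(*[range(Ma + 1) for Ma in M]):
        canon, i = [], 0
        for size in blocks:
            canon.append(tuple(sorted(S[i:i + size])))
            i += size
        reps.add(tuple(canon))
    return sorted(reps)

# Group of type M = 2 2 1 1, as in the example above:
for rep in representative_selections([2, 2, 1, 1]):
    print(rep)
# ((0, 1), (0, 1)) is the class containing both S = 0 1 0 1 and S' = 1 0 1 0
```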
We are ready to describe the complete algorithm to determine all the subgroups of a given group \(\mathcal{G}\). First, decompose \(\mathcal{G}\) as a sum of prime power order groups \(\mathcal{G}_{p}\). For every \(\mathcal{G}_{p}\), find all sets \(\mathbb{S}=\{s_{\alpha}\}\) and discriminate between them to keep only the ones that determine representative subgroups. Then apply to them all automorphisms \(T\) of \(\mathcal{G}_{p}\), so that all subgroups of \(\mathcal{G}_{p}\) will be found, albeit with repetitions. A description of the group of automorphisms of an arbitrary abelian group \(\mathcal{G}\) is provided in [31], and the technique is summarized for completeness' sake in Appendix C. Finally, to find the subgroups of \(\mathcal{G}\), take the direct sums of all the different subgroups of each \(\mathcal{G}_{p}\).

Furthermore, a way to count the total number of subgroups of \(\mathcal{G}_{p}\) is already known in the literature. Since any abelian group of prime power order can only have subgroups that are also of prime power order, subgroups of order \(p^{L}\), with \(L<M\), can also be characterized by a partition \(\overline{L}\) of \(L\). It is shown in [30; 32] that necessary and sufficient conditions for the partition \(\overline{L}\) to correspond to a possible subgroup of the group determined by the partition \(\overline{M}\) of \(M\) are

\[L_{\alpha}=0\quad(\alpha>r),\tag{29a}\]
\[L_{\alpha}\leq M_{\alpha},\tag{29b}\]
\[L_{\alpha}\geq L_{\alpha+1}.\tag{29c}\]

An expression for the number of different subgroups of type \(\overline{L}\) is already known in the literature. For that matter, we refer the reader to Appendix F.

Figure 2: Simplexes of (a) the \(\tau(m,n)\) of all single-qubit Weyl channels, in which we identify \(\tau(0,1)=\tau_{x}\), \(\tau(1,0)=\tau_{y}\), and \(\tau(1,1)=\tau_{z}\), as usual, and similarly for the \(\lambda\)s; and of (b) the eigenvalues \(\lambda(m,n)\) of the corresponding Choi-Jamiolkowski matrix. Additionally, we depict in (c) and (d) the extreme points of (a) and the Weyl erasing generators, respectively.

To fully determine the coefficients \(\tau(\vec{m},\vec{n})\) with norm 1, we interpret them as a function that maps \(\mathcal{H}\) to the group of roots of unity \(\omega^{j}\). We consider \(\tau(\vec{m},\vec{n})=\omega^{\phi(\vec{m},\vec{n})}\); thus we are looking for all homomorphisms \(\phi:\mathcal{H}\mapsto\mathbb{Z}_{p^{M_{1}}}\). To determine one such function uniquely, it is sufficient to specify the values of \(\phi\) on a basis of \(\mathcal{H}\), as described in Appendix D.

Note that all the above remarks greatly simplify when \(d\) is a prime number. In that case the group \(\mathcal{G}\) is additionally a vector space. The set of subgroups can then be described as the set of vector subspaces using the usual techniques of linear algebra. All the partitions described above then reduce to partitions of the type where \(M_{\alpha}\) is either 1 or 0, and the partition is fully characterized by the number of its non-zero elements, which corresponds to the subspace's _dimension_. Finally, the homomorphism \(\tau\) can be described as a linear map from the vector space \(\mathcal{G}\) to the field \(\mathbb{Z}_{d}\), which is once more straightforwardly described in terms of linear algebra.

## V Weyl erasing channels

### Generalities

In this section we focus on a particular class of Weyl channels: those for which \(|\tau(\vec{m},\vec{n})|=0\) or 1.
In other words, we will discuss Weyl channels that completely erase, preserve or introduce specific phases to the projections of the density matrix of a system of qudits onto the Weyl matrix basis. We will refer to this subset of Weyl channels as Weyl erasing channels.

Weyl erasing channels are an interesting subset of Weyl channels, as they arise as the limits of infinitely repeated compositions of Weyl channels. For instance, the infinite composition of any Weyl channel for which all \(|\tau(\vec{m},\vec{n})|<1\), except \(\tau(\vec{0},\vec{0})=1\), results in the completely depolarizing channel. In the general case, the repeated application of a Weyl channel may not converge to a single Weyl erasing channel, but may instead oscillate asymptotically between two or more such channels. For example, applying many times the single-qubit Weyl channel depicted with an asterisk in Fig. 2(a) asymptotically oscillates between the channel collapsing the Bloch sphere onto the \(y\) axis and another channel collapsing it onto the \(y\) axis while reflecting it across the \(x\)-\(z\) plane. This oscillation continues indefinitely between \(\vec{\tau}=(1,0,1,0)\) and \(\vec{\tau}^{\prime}=(1,0,-1,0)\).

In the following, we derive a Kraus representation of Weyl erasing channels that will provide insight into the physical implementation of these channels. We then focus on deriving an expression for the eigenvalues of the Choi-Jamiolkowski matrix of Weyl erasing channels exclusively, as we already have an expression for all Weyl channels [_c.f._ (12)]. For this, we begin by presenting an algorithm that uses the mathematical machinery developed in section IV.2 to find all \(\tau(\vec{m},\vec{n})\) of any Weyl erasing channel:

1. Find all sets of indices \(\{(\vec{m},\vec{n}):|\tau(\vec{m},\vec{n})|=1\}\) by determining all subgroups \(\mathcal{H}\subset\mathcal{G}\), with \(\mathcal{G}\) as in (25).
2. Find the values of \(\tau(\vec{m},\vec{n})=\omega^{\phi(\vec{m},\vec{n})}\) for all \((\vec{m},\vec{n})\in\mathcal{H}\) by determining all homomorphisms \(\phi:\mathcal{H}\mapsto\bigoplus_{p}\mathbb{Z}_{p^{M_{1}(p)}}\), with \(M_{1}(p)\) such that \(\mathbb{Z}_{p^{M_{1}(p)}}\) denotes the largest order cyclic group for every \(\mathcal{G}_{p}\) in (25).
3. Assign \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})=0\) for all \((\vec{m}^{\prime},\vec{n}^{\prime})\notin\mathcal{H}\).

While an exhaustive enumeration of all Weyl erasing channels for a large number \(N\) of particles is not practical, the construction provides insights into the mathematical structure of this set; a minimal single-qubit illustration of the three steps is sketched below. In fact, we show examples of Weyl erasing channels for a single qubit, a 4-level system, and a system composed of both of them in Figs. 4-8. We remark that the algorithms to determine the \(\tau(\vec{m},\vec{n})\) of Weyl and Weyl erasing channels are the same up until determining all homomorphisms. Then, for the former one may assign any value to the \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})\) with \((\vec{m}^{\prime},\vec{n}^{\prime})\notin\mathcal{H}\), as long as they keep the channel completely positive, whereas for the latter one must assign the value zero to all \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})\) with \((\vec{m}^{\prime},\vec{n}^{\prime})\notin\mathcal{H}\).

Let us now evaluate the eigenvalues of the Choi matrix of a Weyl erasing channel. To be consistent with the previous section, we consider Weyl erasing channels of a system of \(N\) particles, each with dimension a power of \(p\), so the group in question is \(\mathcal{G}_{p}\) [see eq. (26)]. The group's rank is then \(r=2N\).
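As announced, here is a minimal sketch (ours, by brute force; feasible only for small systems) of the three-step construction for a single qubit, where \(\mathcal{G}=\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) and \(\omega=-1\). It finds eleven Weyl erasing channels, namely the five with \(\tau\in\{0,1\}\) plus six carrying a \(-1\) phase, and verifies complete positivity through eq. (12):

```python
from itertools import combinations, product

d = 2
G = [(m, n) for m in range(d) for n in range(d)]     # indices of Z_2 (+) Z_2
plus = lambda a, b: ((a[0] + b[0]) % d, (a[1] + b[1]) % d)

def is_subgroup(H):
    return (0, 0) in H and all(plus(a, b) in H for a in H for b in H)

# Step 1: all subgroups of G (brute force over subsets is enough here)
subgroups = []
for r in range(len(G) + 1):
    for S in combinations(G, r):
        H = frozenset(S) | {(0, 0)}
        if is_subgroup(H) and H not in subgroups:
            subgroups.append(H)

# Steps 2-3: homomorphisms phi: H -> Z_2 give tau = (-1)^phi on H, 0 outside
channels = set()
for H in subgroups:
    Hl = sorted(H)
    for vals in product(range(d), repeat=len(Hl)):
        phi = dict(zip(Hl, vals))
        if all(phi[plus(a, b)] == (phi[a] + phi[b]) % d for a in Hl for b in Hl):
            tau = {g: (-1.0) ** phi[g] if g in H else 0.0 for g in G}
            # complete positivity, eq. (12): lambda(r, s) >= 0 for all (r, s)
            lam = [sum(tau[(m, n)] * (-1) ** (m * r - n * s) for m, n in G) / d
                   for r, s in G]
            assert min(lam) > -1e-12
            channels.add(tuple(tau[g] for g in G))

print(len(channels))   # -> 11 single-qubit Weyl erasing channels
```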
Recall these channels have \(\tau(\vec{m},\vec{n})=0\) for all \((\vec{m},\vec{n})\notin\mathcal{H}\); thus, we find from (17)

\[\lambda(\vec{r},\vec{s})=d^{-N}\sum_{(\vec{m},\vec{n})\in\mathcal{H}}\tau(\vec{m},\vec{n})\prod_{\alpha=1}^{N}\omega_{\alpha}^{m_{\alpha}r_{\alpha}-n_{\alpha}s_{\alpha}},\tag{30}\]

where \(\omega_{\alpha}=\exp\left(2\pi i/p^{M_{2\alpha}}\right)\). Since the composition of two Weyl channels amounts simply to multiplying both sets of \(\tau(\vec{m},\vec{n})\), we can see that the \(\tau(\vec{m},\vec{n})\) of Weyl erasing channels are those of an extreme Weyl channel [_c.f._ (15)] for all \((\vec{m},\vec{n})\in\mathcal{H}\), and \(\tau(\vec{m}^{\prime},\vec{n}^{\prime})=0\) for all \((\vec{m}^{\prime},\vec{n}^{\prime})\notin\mathcal{H}\). Hence, substituting \(\tau(\vec{m},\vec{n})\), and considering \(\omega_{p^{\alpha}}^{k}=\omega_{p^{\beta}}^{p^{(\alpha-\beta)k}}\), \(\alpha>\beta\), we can write

\[\lambda(\vec{r},\vec{s})=d^{-N}\sum_{(\vec{m},\vec{n})\in\mathcal{H}}\omega_{p^{M_{1}}}^{f(\vec{m},\vec{n})},\tag{31}\]

where \(f(\vec{m},\vec{n})=\sum_{\alpha=1}^{N}p^{M_{1}-M_{2\alpha-1}}(m_{\alpha}(r_{\alpha}-r_{0,\alpha})-n_{\alpha}(s_{\alpha}-s_{0,\alpha}))\). To evaluate this expression we note that \(f\) is a homomorphism \(f:\mathcal{G}_{p}\mapsto\mathbb{Z}_{p^{M_{1}}}\). Therefore, the sum evaluates to zero unless \(f\) maps \(\mathcal{H}\) to the trivial group:

\[\lambda(\vec{r},\vec{s})=\begin{cases}d^{-N}|\mathcal{H}|&f(\vec{m},\vec{n})=0\text{ for all }(\vec{m},\vec{n})\in\mathcal{H}\\ 0&\text{otherwise},\end{cases}\tag{32}\]

where \(|\mathcal{H}|\) is defined as the number of elements of \(\mathcal{H}\). Furthermore, let us consider the group \(\mathcal{H}^{\perp}\) of all \((\vec{r},\vec{s})\) for which \(f(\vec{m},\vec{n})=0\) on all of \(\mathcal{H}\) when \(\vec{r}_{0}=\vec{s}_{0}=0\). The only non-zero \(\lambda(\vec{r},\vec{s})\) are those with indices \((\vec{r},\vec{s})\) such that \((\vec{r}-\vec{r}_{0},\vec{s}-\vec{s}_{0})\in\mathcal{H}^{\perp}\).

Finally, having obtained the eigenvalues of the Choi-Jamiolkowski matrix of Weyl erasing channels, we describe their canonical Kraus representation. Recall that the Weyl matrices are the Kraus operators of Weyl channels, with probabilities \(\lambda(\vec{r},\vec{s})/d^{N}\) [_c.f._ eq. (18)]. It then follows that the Kraus operators of Weyl erasing channels are the subset of Weyl matrices \(U(\vec{r},\vec{s})\), with \((\vec{r}-\vec{r}_{0},\vec{s}-\vec{s}_{0})\in\mathcal{H}^{\perp}\), each with probability \(|\mathcal{H}|/|\mathcal{G}|=1/|\mathcal{H}^{\perp}|\).

### Generators

In the following, we investigate the smallest subset of Weyl erasing channels which, under composition, generates the whole set. For the sake of simplicity, we start by finding the generators of Weyl channels with \(\tau(\vec{m},\vec{n})=0\) or 1, as these are Weyl channels characterized only by subgroups of \(\mathcal{G}\). Subsequently, we move to the most general Weyl erasing channels that either preserve, erase or introduce phases to the density matrix. We shall determine those subgroups that are indecomposable, in the sense that they cannot be generated as the non-trivial composition of two Weyl channels. We call these the generator subgroups. We consider once again the group \(\mathcal{G}\) shown in eq. (25); thus, we first discuss how to determine the generator subgroups of \(\mathcal{G}_{p}\), and, from those, determine the generator subgroups of \(\mathcal{G}\).
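Before constructing the generators explicitly, a quick numerical check of eqs. (30)-(32) may be helpful. The sketch below (ours) takes a single ququart, \(d=4\), the subgroup \(\mathcal{H}\) generated by \((m,n)=(2,0)\), and the homomorphism with \(\tau(2,0)=-1\) (choices made purely for illustration), and confirms that the Choi eigenvalues equal \(d^{-N}|\mathcal{H}|\) on exactly \(|\mathcal{G}|/|\mathcal{H}|\) labels and vanish elsewhere:

```python
import numpy as np
from itertools import product

d = 4                               # single ququart: G = Z_4 (+) Z_4, |G| = 16
w = np.exp(2j * np.pi / d)

# tau on the subgroup H = {(0,0), (2,0)}; tau = 0 outside H
tau_H = {(0, 0): 1.0, (2, 0): -1.0}

# Choi eigenvalues, eq. (12) restricted to the support H as in eq. (30)
lam = {(r, s): sum(t * w ** (m * r - n * s)
                   for (m, n), t in tau_H.items()).real / d
       for r, s in product(range(d), repeat=2)}

support = [k for k, v in lam.items() if v > 1e-12]
print(len(support))                          # 8 = |G| / |H|, a coset of H-perp
print({round(v, 6) for v in lam.values()})   # {0.0, 0.5}: 0.5 = |H| / d^N
print(round(sum(lam.values()), 6))           # 4.0 = d^N, eq. (13b)
```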
Similarly to what we did in section IV.2, the generator subgroups \(V_{p}\) of \(\mathcal{G}_{p}\) can be found by constructing representative generator subgroups \(V_{p}^{*}\) and applying all automorphisms of \(\mathcal{G}_{p}\) to them. We claim that a representative subgroup is a generator \(V_{p}\) of \(\mathcal{G}_{p}\) if and only if its basis is of the form

\[\mathcal{B}_{V_{p}}=\{\vec{e}_{1},\ldots,\vec{e}_{j-1},p^{s_{j}}\vec{e}_{j},\vec{e}_{j+1},\ldots,\vec{e}_{r}\},\quad 1\leq s_{j}\leq M_{j}.\tag{33}\]

This is verified as follows. Consider a subgroup \(\mathcal{H}_{p}\) with basis (28) such that its set of values \(\mathbb{S}=\{s_{\alpha}\}_{\alpha}\) has two (or more) values \(s_{\beta},s_{\gamma}\neq 0\). That is, \(\mathcal{H}_{p}\) has the basis \(\mathcal{B}_{\mathcal{H}_{p}}=\{\vec{e}_{1},\ldots,\vec{e}_{\beta-1},p^{s_{\beta}}\vec{e}_{\beta},\ldots,\vec{e}_{\gamma-1},p^{s_{\gamma}}\vec{e}_{\gamma},\ldots,\vec{e}_{r}\}\). Then, \(\mathcal{H}_{p}\) can be expressed as the non-trivial intersection of the group \(V_{p}^{\prime}\) with basis \(\mathcal{B}_{V_{p}^{\prime}}=\{\vec{e}_{1},\ldots,\vec{e}_{\beta-1},p^{s_{\beta}}\vec{e}_{\beta},\ldots,\vec{e}_{r}\}\) and another group \(V_{p}^{\prime\prime}\) with basis \(\mathcal{B}_{V_{p}^{\prime\prime}}=\{\vec{e}_{1},\ldots,\vec{e}_{\gamma-1},p^{s_{\gamma}}\vec{e}_{\gamma},\ldots,\vec{e}_{r}\}\).

Now, we check that a group \(\mathcal{H}_{p}\) with a basis of the form (33) cannot be expressed as an intersection of two subgroups containing \(\mathcal{H}_{p}\)_strictly_. Note that groups \(\mathcal{H}_{p}^{\prime}\) satisfying \(\mathcal{H}_{p}\subsetneq\mathcal{H}_{p}^{\prime}\subset\mathcal{G}_{p}\) must have a basis of the form \(\{\vec{e}_{1},\cdots,p^{s_{j}^{\prime}}\vec{e}_{j},\cdots,\vec{e}_{r}\}\), with \(s_{j}^{\prime}<s_{j}\). Therefore, if we have two such groups \(\mathcal{H}_{p}^{\prime}\) and \(\mathcal{H}_{p}^{\prime\prime}\), then the group arising from the intersection \(\mathcal{H}_{p}^{\prime}\cap\mathcal{H}_{p}^{\prime\prime}\) has the basis \(\{\vec{e}_{1},\cdots,p^{\max(s_{j}^{\prime},s_{j}^{\prime\prime})}\vec{e}_{j},\cdots,\vec{e}_{r}\}\), which does not generate \(\mathcal{H}_{p}\): since \(\max(s_{j}^{\prime},s_{j}^{\prime\prime})<s_{j}\), the integer span of \(p^{\max(s_{j}^{\prime},s_{j}^{\prime\prime})}\vec{e}_{j}\) is strictly larger than the integer span of \(p^{s_{j}}\vec{e}_{j}\).

Finally, to find all generator subgroups \(V\) of \(\mathcal{G}=\bigoplus_{p}\mathcal{G}_{p}\) we proceed as follows. We begin by finding the generator subgroups of \(\mathcal{G}_{p}\). Then, the generator subgroups \(V\) are the subgroups of the form

\[V=\bigoplus_{p}H_{p},\tag{34}\]

where \(H_{p}=\mathcal{G}_{p}\) for all \(p\) except for one \(p=p^{\prime}\), for which \(H_{p^{\prime}}=V_{p^{\prime}}\). To see this, notice that \(V\) is a generator since any other subgroup \(\mathcal{H}^{\prime}\) strictly containing it must also have \(H_{p}^{\prime}=\mathcal{G}_{p}\) for \(p\neq p^{\prime}\) and \(V_{p^{\prime}}\subsetneq H_{p^{\prime}}^{\prime}\). Therefore, for two such subgroups \(\mathcal{H}^{\prime}\) and \(\mathcal{H}^{\prime\prime}\) to generate \(V\), \(H_{p^{\prime}}^{\prime}\) and \(H_{p^{\prime}}^{\prime\prime}\) should generate \(V_{p^{\prime}}\), which is impossible since \(V_{p^{\prime}}\) is a generator of \(\mathcal{G}_{p^{\prime}}\).
On the other hand, any subgroup of \(\mathcal{G}\) that is composed as the sum of the groups \(\mathcal{G}_{p}\) except for two (or more) primes \(p^{\prime}\) and \(p^{\prime\prime}\), at which the summand is a corresponding generator subgroup, may be generated by two subgroups \(\mathcal{H}\) of the form (34) with suitable generators \(H_{p^{\prime}}=V_{p^{\prime}}\) and \(H_{p^{\prime\prime}}=V_{p^{\prime\prime}}\). To summarize up to this point: to determine the generators of Weyl erasing channels with \(\tau(\vec{m},\vec{n})=0,1\) one is only required to determine the generator subgroups of the corresponding group in which the indices \((\vec{m},\vec{n})\) of \(\tau(\vec{m},\vec{n})=1\) live. To determine the generators of Weyl erasing channels that also introduce phases one must still find generator subgroups, and a suitable homomorphism for each of them as well. More specifically, a Weyl erasing generator with \(|\tau(\vec{m},\vec{n})|=0,1\) is completely characterized by a generator subgroup of \(\mathcal{G}\) and a homomorphism \(\phi:\mathcal{G}\mapsto\bigoplus\mathbb{Z}_{p^{M_{2}(p)}}\), where \(\mathbb{Z}_{p^{M_{2}(p)}}\) is the largest order cyclic group for every \(\mathcal{G}_{p}\). The composition of two channels with subgroups \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) and homomorphisms \(\phi_{1}\) and \(\phi_{2}\) yields a channel corresponding to \(\mathcal{H}_{1}\cap\mathcal{H}_{2}\) and to the homomorphism \(\phi_{1}+\phi_{2}\). Therefore, for every generator subgroup \(V\), if we fix a basis of the space of homomorphisms on \(V\), then the channels corresponding to each generator subgroup \(V\), together with any basis element mapping \(V\) onto the whole group \(\mathbb{Z}_{p^{M_{2}(p)}}\), form a set of generators. For instance, we show a set of Weyl erasing generators of a 4-level system in Fig. 3. To avoid misunderstandings, note that the "generators" for channels with \(\tau(\vec{m},\vec{n})=0,1\), those characterized completely by a generator subgroup (and, one may say, also by the homomorphism \(\phi=0\)), are not generators of the whole set of Weyl erasing channels (\(|\tau(\vec{m},\vec{n})|=0,1\)). The former channels can always be obtained through iterated composition of generators with a non-zero homomorphism. For example, the single-qubit Weyl erasing channels depicted with a blue point in Fig. 2(a), except for the one located at \(\vec{\tau}=(0,0,0)\), are Weyl generator channels with \(\tau(m,n)=0,1\). However, they can be obtained by composing two times each of the three Weyl generator channels depicted with a red point. Those are the generators of all Weyl erasing channels of a single qubit.

## VI Conclusions

In this paper, we explored a class of quantum channels of multipartite systems with different-dimensional particles. Our focus was on extending the study of Pauli and Weyl channels; the former have been studied for systems of many qubits and the latter for single \(d\)-level systems. We began by introducing the multi-particle Weyl operators \(U(\vec{m},\vec{n})\) to define a Weyl map as a diagonal map in this basis; its eigenvalues are denoted by \(\tau(\vec{m},\vec{n})\). We derived the constraints on \(\tau(\vec{m},\vec{n})\) for a Weyl map to preserve the trace and hermiticity of the density matrix, as well as to be completely positive. For the latter, we diagonalized its Choi-Jamiolkowski matrix and found the linear relationship between these eigenvalues and \(\tau(\vec{m},\vec{n})\). Several features of Weyl channels emerged from our study.
We identified the extreme points of the set, which correspond to Weyl operators, highlighting the random unitary nature of Weyl channels. Additionally, we established a subgroup structure within a Weyl channel, showing that the indices \((\vec{m},\vec{n})\) with \(|\tau_{\vec{m},\vec{n}}|=1\) form a subgroup of the direct sum of the groups \(\mathbb{Z}_{d_{\alpha}}\oplus\mathbb{Z}_{d_{\alpha}}\), where \(d_{\alpha}\) represents the dimension of the \(\alpha\)-th particle. Furthermore, we introduced Weyl erasing channels, which are Weyl channels that either preserve, erase, or introduce phases to the Weyl operator basis. These extend the concept of _component erasing_ channels. Given that all channels of this type exhibit the subgroup structure, with the remaining \(\tau_{\vec{m},\vec{n}}\) set to zero, we were able to find the smallest subset which generates, under composition, the whole set. Our work contributes to the understanding of many-body quantum channels and, moreover, to the dynamics of \(d\)-level systems, which have growing importance for both theoretical and practical purposes. Future research directions include investigating divisibility, non-Markovianity, channel capacity, and the subset of entanglement-breaking channels, among other properties of the Weyl channel set.

###### Acknowledgements.

Support by projects CONACyT 285754, 254515 and UNAM-PAPIIT IG100518, IG101421 is acknowledged. J. A. d. L. acknowledges a scholarship from CONACyT. J. A. d. L. would like to thank Cristian Alvarez for valuable discussions about finite groups. A. F. acknowledges funding by Fundacao de Amparo a Ciencia e Tecnologia do Estado de Pernambuco - FACEPE, through processes BFP-0168-1.05/19 and BFP-0115-1.05/21. D. D. acknowledges OPTIQUTE APVV-18-0518, DESCOM VEGA-2/0183/21 and Stefan Schwarz Support Fund.

## Appendix A Computation of the Choi-Jamiolkowski matrix

This appendix demonstrates that the Choi-Jamiolkowski matrix of any diagonal map \(\mathcal{E}\) takes the form \[\mathcal{D}=\frac{1}{d^{N}}\sum_{\vec{m},\vec{n}}\tau(\vec{m},\vec{n})U(\vec{ m},\vec{n})\otimes U(\vec{m},\vec{n})^{*}, \tag{10}\] whenever \(U(\vec{m},\vec{n})\) form an orthogonal basis of unitaries [23]. The Choi-Jamiolkowski matrix of a quantum map \(\mathcal{E}\) is defined as follows: \[\mathcal{D}=\frac{1}{d^{N}}\sum_{\vec{k},\vec{l}}\mathcal{E}\left[\left|\vec{ k}\right\rangle\left\langle\vec{l}\right|\right]\otimes\left(\left|\vec{k} \right\rangle\left\langle\vec{l}\right|\right). \tag{11}\] We may now express \(\left|\vec{k}\right\rangle\left\langle\vec{l}\right|\) in terms of any orthogonal basis of unitaries \(U(\vec{m},\vec{n})\): \[\left|\vec{k}\right\rangle\!\left\langle\vec{l}\right|=\frac{1}{d^{N}}\sum_ {\vec{m},\vec{n}}\mathrm{Tr}\left(U^{\dagger}(\vec{m},\vec{n})\left|\vec{k} \right\rangle\!\left\langle\vec{l}\right|\right)U(\vec{m},\vec{n}). \tag{12}\] Substituting this in the expression (11) for \(\mathcal{D}\) yields the computation below.

Figure 3: Weyl erasing generators for a single-particle of dimension \(d=4\). Concatenating in all possible ways the corresponding channels one obtains all the Weyl quantum channels of a 4-level system with \(|\tau(m,n)|=0,1\).
\[\frac{1}{d^{N}}\sum_{\vec{k},\vec{l}}\mathcal{E}\left(\left|\vec{k}\right\rangle\!\left\langle\vec{l}\right|\right)\otimes\left|\vec{k}\right\rangle\!\left\langle\vec{l}\right| =\frac{1}{d^{2N}}\sum_{\vec{k},\vec{l},\vec{m},\vec{m}^{\prime}, \vec{n},\vec{n}^{\prime}}\mathcal{E}\left[\mathrm{Tr}\left(U^{\dagger}(\vec{m},\vec{n})\left|\vec{k}\right\rangle\!\left\langle\vec{l}\right|\right)U(\vec{m },\vec{n})\right]\otimes\left[\mathrm{Tr}\left(U^{\dagger}(\vec{m}^{\prime}, \vec{n}^{\prime})\left|\vec{k}\right\rangle\!\left\langle\vec{l}\right| \right)U(\vec{m}^{\prime},\vec{n}^{\prime})\right] \tag{4a}\] \[=\frac{1}{d^{2N}}\sum_{\vec{k},\vec{l},\vec{m},\vec{m}^{\prime}, \vec{n},\vec{n}^{\prime}}\mathrm{Tr}\left(U^{\dagger}(\vec{m},\vec{n})\left| \vec{k}\right\rangle\!\left\langle\vec{l}\right|\right)\mathrm{Tr}\left(U( \vec{m}^{\prime},\vec{n}^{\prime})\left|\vec{l}\right\rangle\!\left\langle \vec{k}\right|\right)\tau(\vec{m},\vec{n})U(\vec{m},\vec{n})\otimes U(\vec{m }^{\prime},\vec{n}^{\prime})^{*}, \tag{4b}\] where we have used the complex conjugate of (12). Then, using the definition of the trace it follows \[=\frac{1}{d^{2N}}\sum_{\vec{k},\vec{l},\vec{m},\vec{m}^{\prime}, \vec{n},\vec{n}^{\prime}}\langle l|U^{\dagger}(\vec{m},\vec{n})|k\rangle\ \langle k|U(\vec{m}^{\prime},\vec{n}^{\prime})|l\rangle\tau(\vec{m},\vec{n})U (\vec{m},\vec{n})\otimes U(\vec{m}^{\prime},\vec{n}^{\prime})^{*} \tag{4c}\] \[=\frac{1}{d^{N}}\sum_{\vec{m},\vec{n}}\tau(\vec{m},\vec{n})U( \vec{m},\vec{n})\otimes U(\vec{m},\vec{n})^{*}. \tag{4d}\] This expression can also be obtained using a recent result of Siewert, in which he derives an expression for the maximally entangled state in terms of an arbitrary orthogonal basis [33].

## Appendix B Eigenvalues of \(U(m,n)\) and of \(U(m,n)\otimes U(m,n)^{*}\)

We will find the eigenvalues of \(U(m,n)\). Since it is unitary, we will express the eigenvalues as \(\omega^{c}\), with \(c\in\mathbb{R}\). Let us consider an eigenvector \(|\phi\rangle=\sum_{r}\phi(r)|r\rangle\) with eigenvalue \(\xi=\omega^{c}\). The eigenvalue equation for \(U(m,n)\) leads to the following relation: \[\phi(r+n)=\omega^{-mr}\omega^{c}\phi(r). \tag{5}\] Starting with an arbitrary index \(r\) and applying this recursion equation \(l-1\) times, we obtain \[\phi(r+nl)=\omega^{-lmr-\frac{1}{2}l(l-1)mn+cl}\phi(r). \tag{6}\] In the particular case in which \(l=l^{\prime}:=\frac{d}{\gcd(d,n)}\) we may use that \(l^{\prime}n\) is a multiple of \(d\), so: \[\phi(r)=\omega^{-l^{\prime}mr-\frac{1}{2}l^{\prime}(l^{\prime}-1)mn+cl^{ \prime}}\phi(r), \tag{7}\] which implies that (for values of \(r\) such that \(\phi(r)\neq 0\)): \[-l^{\prime}mr-\frac{1}{2}l^{\prime}(l^{\prime}-1)mn+cl^{\prime}=sd \tag{8}\] for some integer \(s\). Therefore: \[c=\frac{sd}{l^{\prime}}+\frac{1}{2}(l^{\prime}-1)mn+mr=\gcd(d,n)s+mr+\frac{1}{ 2}(l^{\prime}-1)mn. \tag{9}\] So we conclude that all eigenvalues of \(U(m,n)\) necessarily have the form \[\omega^{\gcd(d,n)s+mr+\frac{1}{2}(l^{\prime}-1)mn} \tag{10}\] for \(s\) and \(r\) integers. Furthermore, taken modulo \(d\), the set \(\{\gcd(d,n)s+mr\mid s,r\in\mathbb{Z}_{d}\}\) is equivalent to \(\{ns+mr\mid s,r\in\mathbb{Z}_{d}\}\), which is also equivalent to \(\{\gcd(m,n)k\mid k\in\mathbb{Z}_{d}\}\). Therefore, the \(d\) eigenvalues of \(U(m,n)\) are \[\xi=\omega^{\gcd(m,n)k+\frac{1}{2}(l^{\prime}-1)mn}. \tag{11}\] From this, it is straightforward that the eigenvalues of \(U(m,n)\otimes U(m,n)^{*}\) are \[\mu(r,s)=\omega^{\gcd(m,n)k-\gcd(m,n)h}.
\tag{12}\] This set is equivalent to \[\mu(r,s)=\omega^{mr-ns} \tag{13}\] where \(r,s\) are integers modulo \(d\).

## Appendix C Automorphisms of finite abelian groups

In the following we describe the bijective homomorphisms \(T\) of an arbitrary finite abelian group. Without loss of generality we limit ourselves to groups that are the direct sum of groups of the type \(\mathbb{Z}_{p^{M}}\), specifically \[\mathcal{G}=\bigoplus_{\alpha=1}^{r}\mathbb{Z}_{p^{M_{\alpha}}}. \tag{14}\] To fix notations, we shall work with a fixed basis \(\vec{e}_{\alpha}\), \(1\leq\alpha\leq r\), where \(r\) is the _rank_ of \(\mathcal{G}\). The map \(T\) is therefore uniquely determined by the values of \(T\vec{e}_{\alpha}\). Since \(\vec{e}_{\alpha}\) is a basis, we can write \[T\vec{e}_{\alpha}=\sum_{\beta=1}^{r}t_{\alpha\beta}\vec{e}_{\beta}. \tag{15}\] The \(t_{\alpha\beta}\) are then uniquely determined, if we view them as homomorphisms from \(\mathbb{Z}_{p^{M_{\beta}}}\) to \(\mathbb{Z}_{p^{M_{\alpha}}}\). Since such homomorphisms can always be expressed through multiplication by some appropriate number, the expression given in (15) is meaningful. Now let us specify more precisely the range of variation of the \(t_{\alpha\beta}\). We distinguish two cases:

1. \(M_{\alpha}\leq M_{\beta}\): in this case any number modulo \(p^{M_{\alpha}}\) will do, and two different such numbers provide different homomorphisms.
2. \(M_{\alpha}>M_{\beta}\): in this case, the number needs to be a multiple of \(p^{M_{\alpha}-M_{\beta}}\), since otherwise it is not possible to define the map. In that case, we may describe \(t_{\alpha\beta}\) as \(p^{M_{\alpha}-M_{\beta}}\tau_{\alpha\beta}\), where \(\tau_{\alpha\beta}\) is an arbitrary number modulo \(p^{M_{\beta}}\).

Consider now the matrix \(T\) in greater detail, and just as in the main text, let us denote by \(\tilde{M}_{1},\cdots,\tilde{M}_{q}\) the _distinct_ values of \(M_{\alpha}\) in _strictly decreasing order_. We define \(\nu_{\alpha}\) to be the number of times \(\tilde{M}_{\alpha}\) appears in the original sequence. This defines a division of the matrix \(T\) into _blocks_ of size \(\nu_{\alpha}\times\nu_{\beta}\), where \(1\leq\alpha,\beta\leq q\). We first take the elements \(t_{\alpha\beta}\) modulo \(p\). As a consequence of observation (2), all blocks with \(\alpha<\beta\) are filled with zeros, whereas all other blocks have arbitrary entries. It thus follows that the matrix is invertible modulo \(p\) if and only if all the diagonal blocks are invertible. The number of invertible \(\nu_{\alpha}\times\nu_{\alpha}\) matrices modulo \(p\) is given by \[I_{\alpha}=\prod_{\beta=1}^{\nu_{\alpha}}\left(p^{\nu_{\alpha}}-p^{\beta-1} \right). \tag{19}\] One sees this by observing that we may first choose an arbitrary non-zero vector of length \(\nu_{\alpha}\) in \(p^{\nu_{\alpha}}-1\) different ways, then choose a second vector linearly independent of the first, and so on. All the other entries in the blocks below the diagonal, that is, the \(t_{\alpha\beta}\) with \(\alpha>\beta\), can be chosen arbitrarily. If we thus define \[K_{0}=\sum_{1\leq\beta<\alpha\leq q}\nu_{\alpha}\nu_{\beta}, \tag{20}\] then the total number of possible forms of the matrix \(T\) modulo \(p\) is \[N(p)=p^{K_{0}}\prod_{\alpha=1}^{q}I_{\alpha}. \tag{21}\] We now need to work out the number of ways this can be extended to the full matrix, where the entries have the full range of variation specified above.
Note first that the condition of invertibility carries over automatically upon extension, as the inverse matrix of \(T\) modulo \(p\) can be extended uniquely to the inverse of the extended matrix. To the entries on or below the diagonal, that is, with \(t_{\alpha\beta}\) such that \(\alpha\geq\beta\), we can add any number of the form \(p\tau_{\alpha\beta}\), where \(\tau_{\alpha\beta}\) is an arbitrary number taken modulo \(p^{\tilde{M}_{\alpha}-1}\). So the number of possibilities of extending these blocks is given by \(p^{K_{1}}\), where \[K_{1}=\sum_{1\leq\beta\leq\alpha\leq q}(\tilde{M}_{\alpha}-1)\nu_{\alpha}\nu_ {\beta}. \tag{22}\] For the blocks above the diagonal, that is, the blocks with \(t_{\alpha\beta}\) such that \(\alpha<\beta\), they are of the form \(p^{\tilde{M}_{\alpha}-\tilde{M}_{\beta}}\tau_{\alpha\beta}\), with \(\tau_{\alpha\beta}\) a number modulo \(p^{\tilde{M}_{\beta}}\), so that the total number of ways of extending the blocks above the diagonal is \(p^{K_{2}}\), with \[K_{2}=\sum_{1\leq\alpha<\beta\leq q}\tilde{M}_{\beta}\nu_{\alpha}\nu_{\beta}. \tag{23}\] The final result for the total number of automorphisms is thus given by \[N_{tot}(M_{1},\ldots,M_{r})=p^{K_{0}+K_{1}+K_{2}}\prod_{\alpha=1}^{q}I_{ \alpha}. \tag{24}\]

## Appendix D Homomorphisms from \(\mathcal{H}\) to the cyclic group \(\mathbb{Z}_{d}\)

Here we describe the set of homomorphisms \(\phi\) from an abelian group of the form \(\mathcal{H}=\bigoplus_{\alpha=1}^{r}\mathbb{Z}_{p^{M_{\alpha}}}\) to the cyclic group \(\mathbb{Z}_{p^{M_{1}}}\). As always, the numbers \(M_{\alpha}\) are ordered in decreasing order. We may as always choose a basis \(\vec{e}_{\alpha}\) of \(\mathcal{H}\), each element having order \(p^{M_{\alpha}}\). The homomorphism \(\phi\) is then uniquely determined by a set of homomorphisms \(\phi_{\alpha}\) from the cyclic groups \(\mathbb{Z}_{p^{M_{\alpha}}}\) to \(\mathbb{Z}_{p^{M_{1}}}\). Whenever \(M_{1}=M_{\alpha}\), \(\phi_{\alpha}\) simply reduces to multiplication by an arbitrary number \(r_{\alpha}\) modulo \(p^{M_{1}}\). On the other hand, if \(M_{\alpha}<M_{1}\), then \(\phi_{\alpha}\) is given by multiplication by a number of the form \(p^{M_{1}-M_{\alpha}}r_{\alpha}\), where \(r_{\alpha}\) is an arbitrary number modulo \(p^{M_{\alpha}}\). If we therefore define \(\nu\) as the number of indices with \(M_{\alpha}=M_{1}\), so that \(M_{\nu}\geq M_{1}\) but \(M_{\nu+1}<M_{1}\), and \(\nu=0\) if \(M_{\alpha}<M_{1}\) for all \(\alpha\), then \(\phi\) can be expressed as follows: \[\phi\left(\sum_{\alpha}c_{\alpha}\vec{e}_{\alpha}\right) = \vec{\phi}\cdot\vec{c}:=\sum_{\alpha}\phi_{\alpha}c_{\alpha} \tag{25a}\] \[\phi_{\alpha} = \begin{cases}p^{M_{1}-M_{\alpha}}s_{\alpha}&(\alpha\geq\nu)\\ t_{\alpha}&(\alpha<\nu)\end{cases} \tag{25b}\] where \(s_{\alpha}\) and \(t_{\alpha}\) are numbers modulo \(p^{M_{\alpha}}\) and \(p^{M_{1}}\) respectively. The total number of such homomorphisms is therefore given by \(p^{K}\) with \[K=\nu M_{1}+\sum_{\alpha=\nu+1}^{r}M_{\alpha}. \tag{26}\]

## Appendix E Examples

To illustrate the application of the mathematical tools presented in the main text, we will provide detailed examples of how to identify the Weyl channels as described in section IV. Remember that said channels are characterized by a subgroup \(\mathcal{H}\) of the group of indices \(\mathcal{G}\) (which corresponds to the indices whose \(\tau\)'s have norm \(1\)) and a homomorphism (which gives the phases to each \(\tau\)).
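Before turning to the examples, here is a minimal brute-force sanity check of the count \(p^{K}\) in eq. (26); the toy group \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\) is an assumption chosen purely for illustration.

```python
from itertools import product

# Minimal sanity check of the homomorphism count p^K in eq. (26), for
# H = Z_4 (+) Z_2 mapped to Z_4 (p = 2, M = (2, 1), nu = 1, so that
# K = nu*M_1 + M_2 = 2 + 1 = 3 and p^K = 8).
# phi(x, y) = (a*x + b*y) mod 4 is well defined iff b annihilates the
# order-2 generator, i.e. 2*b = 0 mod 4.
count = sum(1 for a, b in product(range(4), repeat=2) if (2 * b) % 4 == 0)
print(count)  # prints 8 = 2^3
```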
The examples will be arranged in increasing generality, starting with a system of one qudit of prime dimension and ending with the most general case of many qudits of arbitrary dimensions.

### Single particle with prime dimension

Here we show how to follow the algorithm described in section IV for the case of one qudit with prime dimension \(d=p\). In this case, the group of indices for the \(\tau\)'s is simply \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\), and we search for all its subgroups and then all homomorphisms to \(\mathbb{Z}_{p}\). While we do this in general, we simultaneously show the specific case for \(p=2\) (a single qubit). We start by determining the types of subgroups of \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\) and a representative subgroup of each type. For that, we take the following steps:

* We select a basis for \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\); the simplest would be \(\{\vec{e}_{1},\vec{e}_{2}\}:=\{(1,0),(0,1)\}\).
* Define \(M_{\alpha}\) as the number such that \(p^{M_{\alpha}}\) is the order of \(\vec{e}_{\alpha}\). In this case, \(M_{1}=M_{2}=1\) and therefore the partition of the group is \(\bar{M}=M_{1}M_{2}=11\).
* Find all the sets \(\mathbb{S}=\{s_{\alpha}\}\) with \(0\leq s_{\alpha}\leq M_{\alpha}\). In this case, there are four such sets: \(\{0,0\},\{0,1\},\{1,0\},\{1,1\}\).
* For each set, we define the basis \(\mathcal{B}=\{p^{s_{1}}\vec{e}_{1},p^{s_{2}}\vec{e}_{2}\}\), and therefore get the following bases: \[\mathbb{S}=\{0,0\}\rightarrow\mathcal{B}=\{p^{0}\vec{e}_{1},p^{0} \vec{e}_{2}\}=\{\vec{e}_{1},\vec{e}_{2}\}\;,\;\mathbb{S}=\{0,1\}\rightarrow \mathcal{B}=\{p^{0}\vec{e}_{1},p^{1}\vec{e}_{2}\}=\{\vec{e}_{1}\},\] (14) \[\mathbb{S}=\{1,0\}\rightarrow\mathcal{B}=\{p^{1}\vec{e}_{1},p^{0 }\vec{e}_{2}\}=\{\vec{e}_{2}\}\;,\;\mathbb{S}=\{1,1\}\rightarrow\mathcal{B}= \{p^{1}\vec{e}_{1},p^{1}\vec{e}_{2}\}=\{\}.\] (15) Notice that \(p\vec{e}_{\alpha}=(0,0)\), which does not contribute to the basis.
* Now we only keep bases that are not T-isomorphic, so as to avoid unnecessary redundancies when applying automorphisms in the next step. As mentioned in the main text, we do so by first defining the sequence of numbers \(\tilde{M}_{1},\cdots,\tilde{M}_{q}\) given by the \(q\) different values in the sequence of numbers in \(\bar{M}\). In this case, as \(\bar{M}=11\), there is only one number in said sequence, being \(\tilde{M}_{1}=1\). Then, we define the subsets \(S_{j}=\{s_{\alpha},\forall\alpha:M_{\alpha}=\tilde{M}_{j}\}\). In this case, we only have one such subset, which happens to be the whole set, \(S_{1}=\{s_{1},s_{2}\}\). As described in the main text, two different bases of the ones described in the previous step are T-isomorphic if all their sets \(S_{j}\) are the same. In this case, this means that the bases constructed from \(\mathbb{S}=\{1,0\}\) and \(\mathbb{S}=\{0,1\}\) are T-isomorphic. Therefore, we may keep only one of those bases; say we keep \(\{0,1\}\) and discard the other one, so that the representative bases are: \[\{\vec{e}_{1},\vec{e}_{2}\},\;\{\vec{e}_{1}\},\;\{\}.\] (16)

The subgroups generated by these bases are the "representative subgroups". Then, to find all possible subgroups, we need to find all automorphisms of \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\) and apply them to these representative groups, so that we can obtain all subgroups of each type starting from the representatives.
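The following minimal sketch carries out these two steps for \(p=2\); the same computation is done by hand in the qubit example below, so the code is only a cross-check.

```python
from itertools import product

# Sketch of the two steps just described, for p = 2: list the
# automorphisms of Z_2 (+) Z_2 and apply them to the representative
# subgroup generated by e_1 = (1, 0).
autos = [((a, b), (c, d))
         for a, b, c, d in product(range(2), repeat=4)
         if (a * d - b * c) % 2 == 1]           # invertible mod 2
print(len(autos))                               # 6 automorphisms

rep = {(0, 0), (1, 0)}                          # representative subgroup
orbit = {frozenset(((a * x + c * y) % 2, (b * x + d * y) % 2)
                   for x, y in rep)
         for (a, b), (c, d) in autos}
for H in orbit:
    print(sorted(H))
# Prints the three order-2 subgroups {(0,0),(1,0)}, {(0,0),(0,1)},
# {(0,0),(1,1)}, matching eq. (12) in the qubit example below.
```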
As shown in eq. (15), automorphisms of \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\) are determined by a matrix \(t_{\alpha\beta}\) with dimensions \(r\times r\), where \(r\) is the number of elements in the basis of the group. Therefore, in this case the automorphisms are characterized by \(2\times 2\) matrices, where each entry \(t_{\alpha\beta}\) can be a number modulo \(p\). Furthermore, these entries are constrained by the conditions given in Appendix C, and since in this case \(M_{1}=M_{2}\), all \(t_{\alpha\beta}\) fall into case \(1\) of the aforementioned conditions. This implies that all \(t_{\alpha\beta}\) are numbers modulo \(p^{M_{\alpha}}=p\). This gives a total of \(p^{4}\) possible matrices, but we only need to keep those that are invertible. For example, for the special case of one qubit, we construct all \(2\times 2\) matrices such that all entries \(t_{\alpha\beta}\) are numbers modulo \(2\), and out of these \(16\) matrices, only \(6\) of them are invertible: \[T_{1}=\begin{pmatrix}0&1\\ 1&1\end{pmatrix},\quad T_{2} =\begin{pmatrix}1&0\\ 1&1\end{pmatrix},\quad T_{3}=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}, \tag{17}\] \[T_{4}=\begin{pmatrix}1&1\\ 1&0\end{pmatrix},\quad T_{5} =\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad T_{6}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \tag{18}\] Therefore, these matrices represent the \(6\) possible automorphisms of \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\). To find all the subgroups, we simply apply all automorphisms on each representative subgroup found in eq. (16). For the case of one qubit, the result is as follows:

* \(\{\vec{e}_{1},\vec{e}_{2}\}\): The group generated by this basis is the whole group, and when we apply any automorphism, we always get back the whole group because automorphisms are invertible. Therefore, the only group here is the whole group \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\).
* \(\{\vec{e}_{1}\}\): For this basis, the representative subgroup it generates is \(\{(0,0),(1,0)\}\). As before, we apply all the automorphisms to this subgroup. We can see an example for the case of one qubit: when applying \(T_{1}\) to the element \((1,0)\), we get as a result \((0,1)\), so applying \(T_{1}\) to the subgroup gives as a result \(\{(0,0),(0,1)\}\). Similarly, when using the other automorphisms in the case of one qubit, we get the following subgroups (excluding repetitions): \[\{(0,0),(1,0)\},\quad\{(0,0),(0,1)\},\quad\{(0,0),(1,1)\}.\] (12)
* \(\{\}\): In this case, the representative subgroup is the trivial \(\{(0,0)\}\) and applying any automorphism leaves this subgroup intact.

Therefore, we have found all subgroups of \(\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\). For the case of \(p=2\), they are: the complete group, the subgroups obtained in eq. (12) and the trivial subgroup \(\{(0,0)\}\), see Fig. 4. That is, up until this point we have found the sets of indices \(\{m,n\}\) for which \(\tau(m,n)\) can have norm \(1\) in a Weyl channel. Now we find all homomorphisms \(\phi:\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\rightarrow\mathbb{Z}_{p}\). A homomorphism is characterized by its value on each element of the basis, \(\phi_{\alpha}:=\phi(\vec{e}_{\alpha})\). To know the possible values of \(\phi_{\alpha}\), we first define \(\nu\) as in Appendix D, as the number such that \(M_{\nu}\geq n\) but \(M_{\nu+1}<n\) (where \(n\) is the exponent of the co-domain of \(\phi\); in this case the co-domain is \(\mathbb{Z}_{p}\), so that \(n=1\) and we can see that \(\nu=2\)). The possible values of \(\phi_{\alpha}\) are given by the cases in eq.
(25b):

* \(\phi_{1}\): Since \(\alpha=1<\nu=2\), we have the second case and therefore \(\phi_{1}\) is a number modulo \(p^{n}=p\).
* \(\phi_{2}\): Since \(\alpha=2\geq\nu=2\), we have the first case, so that \(\phi_{2}=p^{n-M_{2}}s_{2}=s_{2}\) with \(s_{2}\) a number modulo \(p^{M_{2}}=p\). Therefore, \(\phi_{2}\) is also a number modulo \(p\).

To find all possible homomorphisms, we determine all possible pairs \(\phi_{1},\phi_{2}\). Since \(\phi_{1},\phi_{2}\) can be any number modulo \(p\), we have a total of \(p^{2}\) homomorphisms. In the special case of one qubit, they are: \(\phi_{1}=\phi_{2}=0;\ \phi_{1}=0,\phi_{2}=1;\ \phi_{1}=1,\phi_{2}=0;\ \text{and}\ \phi_{1}=\phi_{2}=1\). We show in Fig. 5 all Weyl erasing channels of a single qubit.

### Single particle with \(d=p^{n}\)

Now we generalize to the case of a single particle with \(d=p^{n}\). To have a concrete example to show, we consider a particle with \(d=2^{2}=4\).

* We select a basis of \(\mathbb{Z}_{p^{n}}\oplus\mathbb{Z}_{p^{n}}\), for example, \(\{\vec{e}_{1},\vec{e}_{2}\}=\{(1,0),(0,1)\}\).
* Define \(M_{\alpha}\) as the number such that \(p^{M_{\alpha}}\) is the order of \(\vec{e}_{\alpha}\). In this case, \(M_{1}=M_{2}=n\), so the partition of the group is \(\bar{M}=M_{1}M_{2}=nn\). For the special case of a \(4\)-level system, the partition is \(\bar{M}=22\).
* Find all the sets \(\mathbb{S}=\{s_{\alpha}\}\) with \(0\leq s_{\alpha}\leq M_{\alpha}\). For the case of a \(4\)-level system, there are nine such sets: \(\{0,0\}\), \(\{0,1\}\), \(\cdots\), \(\{2,1\}\), \(\{2,2\}\).
* For each set, we define a basis \(\mathcal{B}=\{p^{s_{1}}\vec{e}_{1},p^{s_{2}}\vec{e}_{2}\}\). For example, for a \(4\)-level system, the bases are: \[\mathbb{S}=\{0,0\}\rightarrow\mathcal{B}=\{\vec{e}_{1},\vec{e}_{2}\}\;,\;\mathbb{S}=\{0,1\}\rightarrow\mathcal{B}=\{\vec{e}_{1},2\vec{e}_{2}\}\;,\;\mathbb{S}=\{0,2\}\rightarrow\mathcal{B}=\{\vec{e}_{1}\},\] \[\mathbb{S}=\{1,0\}\rightarrow\mathcal{B}=\{2\vec{e}_{1},\vec{e}_{2}\}\;,\;\mathbb{S}=\{1,1\}\rightarrow\mathcal{B}=\{2\vec{e}_{1},2\vec{e}_{2}\}\;,\;\mathbb{S}=\{1,2\}\rightarrow\mathcal{B}=\{2\vec{e}_{1}\},\] \[\mathbb{S}=\{2,0\}\rightarrow\mathcal{B}=\{\vec{e}_{2}\}\;,\;\mathbb{S}=\{2,1\}\rightarrow\mathcal{B}=\{2\vec{e}_{2}\}\;,\;\mathbb{S}=\{2,2\}\rightarrow\mathcal{B}=\{\}.\]

Figure 4: Single-qubit Weyl erasing channels with \(\tau(m,n)=0,1\). These are completely characterized by the sets \(\{(m,n):\tau(m,n)=1\}\), which are the subgroups of \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\).

Figure 5: Single-qubit Weyl erasing channels with \(|\tau(m,n)|=0,1\). These are completely characterized by two elements: (i) the sets \(\{(m,n):|\tau(m,n)|=1\}\), which are the subgroups of \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), and (ii) all homomorphisms \(\phi:\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\mapsto\mathbb{Z}_{2}\).

* We define the sequence of numbers \(\tilde{M}_{1},\cdots,\tilde{M}_{q}\) given by the \(q\) different values in the sequence \(\bar{M}\). In this case, we only have \(\tilde{M}_{1}=n\). Then, we define the subsets \(S_{j}=\{s_{\alpha},\forall\alpha:M_{\alpha}=\tilde{M}_{j}\}\); in this case we only have \(S_{1}=\{s_{1},s_{2}\}\). As said before, different bases are T-isomorphic if all their sets \(S_{j}\) are the same, and we only need to keep one of them. Therefore, for the case of one 4-level system, we only have to keep the following bases, which generate the representative subgroups: \[\{\vec{e}_{1},\vec{e}_{2}\},\ \{\vec{e}_{1},2\vec{e}_{2}\},\ \{\vec{e}_{1}\},\ \{2\vec{e}_{1},2\vec{e}_{2}\},\ \{2\vec{e}_{1}\},\ \{\}.\]

Once again, automorphisms are characterized by \(2\times 2\) matrices \(t_{\alpha\beta}\). Since \(M_{1}=M_{2}=n\), all \(t_{\alpha\beta}\) fall into the first case of Appendix C, which implies that all \(t_{\alpha\beta}\) are numbers modulo \(p^{M_{\alpha}}=p^{n}\). This gives a total of \(p^{4n}\) possible matrices, of which we only keep those that are invertible (have non-zero determinant modulo \(p\)). For example, in the case of a 4-level system, there are 96 such matrices.
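The count of 96 invertible matrices can be verified directly; the following one-line brute force (a cross-check, not part of the algorithm) uses the fact that a \(2\times 2\) matrix over \(\mathbb{Z}_{4}\) is invertible exactly when its determinant is a unit modulo 4, i.e. odd.

```python
from itertools import product

# Brute-force check that there are 96 automorphisms of Z_4 (+) Z_4:
# 2x2 matrices over Z_4 with odd (hence invertible) determinant.
count = sum(1 for a, b, c, d in product(range(4), repeat=4)
            if (a * d - b * c) % 2 == 1)
print(count)  # prints 96
```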
**Find all subgroups:** As before, to find all subgroups of \(\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\), we apply all automorphisms to each of the representative subgroups found in the first step and omit duplicates. As always, these subgroups describe the indices \((m,n)\) for which \(\tau(m,n)\) can have norm 1. We show in Fig. 6 some Weyl erasing channels of a 4-level system that are completely characterized by subgroups of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\). We find all homomorphisms \(\phi:\mathbb{Z}_{p^{n}}\oplus\mathbb{Z}_{p^{n}}\rightarrow\mathbb{Z}_{p^{n}}\). As in the last case, the homomorphism is characterized by two values \(\phi_{1}=\phi(\vec{e}_{1}),\phi_{2}=\phi(\vec{e}_{2})\). Using Appendix D, we find that \(\phi_{1}\) and \(\phi_{2}\) are both numbers modulo \(p^{n}\). To find all possible homomorphisms, we determine all possible pairs \(\phi_{1},\phi_{2}\), which gives a total of \(p^{2n}\) homomorphisms. For the case of a 4-level system, the 16 homomorphisms are given by all pairs of numbers \(\phi_{1},\phi_{2}\) modulo 4. We show in Fig. 7 some Weyl erasing channels for a 4-level system.

### Single particle with arbitrary dimension

Now we consider a single particle with arbitrary dimension \(d\), which can be written through its prime factorization as \(d=\prod_{i=1}^{K}p_{i}^{n_{i}}\). In this case, what we have done in the last examples does not apply directly, since it only applies to groups of the form \(\mathbb{Z}_{p^{n_{1}}}\oplus\mathbb{Z}_{p^{n_{2}}}\oplus\cdots\oplus\mathbb{Z}_{p^{n_{K}}}\) (notice that all the groups in the sum are powers of the same prime). However, we can still find the subgroups of \(\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\). To do it, we use the fact that \(\mathbb{Z}_{pq}\simeq\mathbb{Z}_{p}\oplus\mathbb{Z}_{q}\) whenever \(p\) and \(q\) are coprime. Therefore, \(\mathbb{Z}_{d}\simeq\mathbb{Z}_{p_{1}^{n_{1}}}\oplus\cdots\oplus\mathbb{Z}_{p _{K}^{n_{K}}}\), and after reordering we have that: \[\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\simeq\bigoplus_{i=1}^{K}\mathbb{Z}_{p_{i} ^{n_{i}}}\oplus\mathbb{Z}_{p_{i}^{n_{i}}}. \tag{10}\] Furthermore, it is a well known fact that subgroups of \(F_{1}\oplus F_{2}\), with \(F_{1}\) and \(F_{2}\) groups of coprime orders, are obtained as cartesian products of subgroups of \(F_{1}\) with subgroups of \(F_{2}\). Therefore, because of the decomposition of eq. (10), we can find the subgroups of \(\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\) by obtaining all the subgroups of each \(\mathbb{Z}_{p_{i}^{n_{i}}}\oplus\mathbb{Z}_{p_{i}^{n_{i}}}\) (which can be done as in the last example) and then taking all their possible cartesian products.

Figure 6: Some 4-level system Weyl erasing channels with \(\tau(m,n)=0,1\). Each of those is completely characterized by a set \(\{(m,n):\tau(m,n)=1\}\), which is a subgroup of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\).

To find the homomorphisms \(\phi:\mathbb{Z}_{d}\oplus\mathbb{Z}_{d}\rightarrow\mathbb{Z}_{d}\), we picture \(\phi\) as going from \(\bigoplus_{i=1}^{K}\mathbb{Z}_{p_{i}^{n_{i}}}\oplus\mathbb{Z}_{p_{i}^{n_{i}}}\) to \(\bigoplus_{i=1}^{K}\mathbb{Z}_{p_{i}^{n_{i}}}\). Any such homomorphism can be written as the direct sum of homomorphisms \(\phi_{i}:\mathbb{Z}_{p_{i}^{n_{i}}}\oplus\mathbb{Z}_{p_{i}^{n_{i}}}\rightarrow \mathbb{Z}_{p_{i}^{n_{i}}}\), which we obtained in the last example. Therefore, by constructing all such direct sums, we obtain all the homomorphisms we were looking for.
For example, if \(d=12\) all we need to do is find all the subgroups and homomorphisms of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\) and of \(\mathbb{Z}_{3}\oplus\mathbb{Z}_{3}\), and then take cartesian products of these subgroups and the direct sum of the homomorphisms.

### N particles of prime power dimension

Now we consider a system consisting of \(N\) particles, each with dimension \(p^{n_{i}}\) for \(i=1,\cdots,N\), ordered such that \(n_{1}\geq n_{2}\geq\cdots\geq n_{N}\) (notice that the prime \(p\) is the same for all particles). In this case, the problem is to find all the subgroups of \(\mathcal{G}=\mathbb{Z}_{p^{n_{1}}}\oplus\mathbb{Z}_{p^{n_{1}}}\oplus\cdots \oplus\mathbb{Z}_{p^{n_{N}}}\oplus\mathbb{Z}_{p^{n_{N}}}\) and the homomorphisms from \(\mathcal{G}\) to \(\mathbb{Z}_{p^{n_{1}}}\). As an example, we will work out a system of one qubit and one 4-level system. Similarly to the other examples, to find the representative subgroups we take the following steps:

* Select a basis of \(\mathcal{G}\). For example, in the case of a qubit and a 4-level system, the group is \(\mathcal{G}=\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\oplus \mathbb{Z}_{2}\), and we can choose a basis \(\{\vec{e}_{1},\vec{e}_{2},\vec{e}_{3},\vec{e}_{4}\}\), with \(\vec{e}_{1}=(1,0,0,0)\quad\vec{e}_{2}=(0,1,0,0)\quad\vec{e}_{3}=(0,0,1,0)\quad \vec{e}_{4}=(0,0,0,1)\), where the first two entries add mod 4 and the last two add mod 2.
* Next, we find the partition of \(\mathcal{G}\). For the qubit and 4-level system, the orders of \(\vec{e}_{1}\) and \(\vec{e}_{2}\) are 4 and the orders of \(\vec{e}_{3},\vec{e}_{4}\) are 2, so that the partition of the group is \(\bar{M}=M_{1}M_{2}M_{3}M_{4}=2211\).
* We find all the sets \(\mathbb{S}=\{s_{\alpha}\}\) with \(0\leq s_{\alpha}\leq M_{\alpha}\); in this case there are 36 such sets.
* For each set \(\mathbb{S}\), we define the basis \(\mathcal{B}=\{p^{s_{1}}\vec{e}_{1},p^{s_{2}}\vec{e}_{2},p^{s_{3}}\vec{e}_{3},p ^{s_{4}}\vec{e}_{4}\}\).
* As before, some of the bases created this way are redundant, since they are T-isomorphic. To eliminate this redundancy, we first define \(\tilde{M}_{1},\cdots,\tilde{M}_{q}\) given by the \(q\) different values of the numbers in \(\bar{M}\). In the example of a 4-level system and a qubit, we have that \(\tilde{M}_{1}=2,\tilde{M}_{2}=1\). Then, we define the sets \(S_{j}=\{s_{\alpha},\forall\alpha:M_{\alpha}=\tilde{M}_{j}\}\), which in this case are \(S_{1}=\{s_{1},s_{2}\}\) and \(S_{2}=\{s_{3},s_{4}\}\). Finally, bases are \(T\)-isomorphic if their corresponding sets \(S_{j}\) are equal. For example, the bases that come from the sets \(\mathbb{S}=\{2,1,1,0\}\) and \(\mathbb{S}^{\prime}=\{1,2,0,1\}\) are \(T\)-isomorphic, since \(S_{1}=S^{\prime}_{1}=\{2,1\}\) and \(S_{2}=S^{\prime}_{2}=\{1,0\}\).
Therefore, after eliminating redundant bases and keeping only one of each batch, we get the following 18 bases: \[\mathbb{S}=\{0,0,0,0\}\rightarrow\mathcal{B}=\{\vec{e}_{1},\vec{e}_{2}, \vec{e}_{3},\vec{e}_{4}\}\;,\;\mathbb{S}=\{0,0,0,1\}\rightarrow\mathcal{B}=\{ \vec{e}_{1},\vec{e}_{2},\vec{e}_{3}\}\;,\;\mathbb{S}=\{0,0,1,1\}\rightarrow \mathcal{B}=\{\vec{e}_{1},\vec{e}_{2}\},\] \[\mathbb{S}=\{0,1,0,0\}\rightarrow\mathcal{B}=\{\vec{e}_{1},2\vec {e}_{2},\vec{e}_{3},\vec{e}_{4}\}\;,\;\mathbb{S}=\{0,1,0,1\}\rightarrow \mathcal{B}=\{\vec{e}_{1},2\vec{e}_{2},\vec{e}_{3}\}\;,\;\mathbb{S}=\{0,1,1,1\} \rightarrow\mathcal{B}=\{\vec{e}_{1},2\vec{e}_{2}\},\] \[\mathbb{S}=\{0,2,0,0\}\rightarrow\mathcal{B}=\{\vec{e}_{1},\vec{e }_{3},\vec{e}_{4}\}\;,\;\mathbb{S}=\{0,2,0,1\}\rightarrow\mathcal{B}=\{\vec{e}_{ 1},\vec{e}_{3}\}\;,\;\mathbb{S}=\{0,2,1,1\}\rightarrow\mathcal{B}=\{\vec{e}_{1} \},\] \[\mathbb{S}=\{1,1,0,0\}\rightarrow\mathcal{B}=\{2\vec{e}_{1},2\vec {e}_{2},\vec{e}_{3},\vec{e}_{4}\}\;,\;\mathbb{S}=\{1,1,0,1\}\rightarrow \mathcal{B}=\{2\vec{e}_{1},2\vec{e}_{2},\vec{e}_{3}\}\;,\;\mathbb{S}=\{1,1,1,1\} \rightarrow\mathcal{B}=\{2\vec{e}_{1},2\vec{e}_{2}\},\] \[\mathbb{S}=\{2,1,0,0\}\rightarrow\mathcal{B}=\{2\vec{e}_{2},\vec{e }_{3},\vec{e}_{4}\}\;,\;\mathbb{S}=\{2,1,0,1\}\rightarrow\mathcal{B}=\{2\vec{e}_{2}, \vec{e}_{3}\}\;,\;\mathbb{S}=\{2,1,1,1\}\rightarrow\mathcal{B}=\{2\vec{e}_{2}\},\] \[\mathbb{S}=\{2,2,0,0\}\rightarrow\mathcal{B}=\{\vec{e}_{3},\vec{e }_{4}\}\;,\;\mathbb{S}=\{2,2,0,1\}\rightarrow\mathcal{B}=\{\vec{e}_{3}\}\;,\; \mathbb{S}=\{2,2,1,1\}\rightarrow\mathcal{B}=\{\}\] As in the other cases, these bases generate the representative subgroups of the group. As before, the automorphisms are described by matrices \(t_{\alpha\beta}\). For the special case of a qubit and a 4-level system, the matrices are of dimensions \(4\times 4\) (because there are 4 elements in the basis), and the conditions on the entries \(t_{\alpha\beta}\) can be found using the cases described in Appendix C, which lead to:

* \(t_{11}\): \(M_{1}=M_{1}\), so that \(t_{11}\) is a number modulo \(p^{M_{1}}=2^{2}=4\).
* \(t_{12}\): \(M_{1}=M_{2}\), so that \(t_{12}\) is a number modulo \(p^{M_{1}}=2^{2}=4\).
* \(t_{13}\): \(M_{1}>M_{3}\), so that \(t_{13}=p^{M_{1}-M_{3}}\tau_{13}=2\tau_{13}\) with \(\tau_{13}\) a number modulo \(p^{M_{3}}=2\). Therefore, the possible values are 0 and 2.
* The same can be done for the rest of the values, and we find that \(t_{11},t_{12},t_{21},t_{22}\in\{0,1,2,3\}\); \(t_{13},t_{14},t_{23},t_{24}\in\{0,2\}\) and \(t_{31},t_{32},t_{33},t_{34},t_{41},t_{42},t_{43},t_{44}\in\{0,1\}\).

Then, running through all possible matrices with these entries and keeping only the invertible ones, we find 147456 matrices. As before, to find all subgroups, we apply these automorphisms to every representative subgroup and discard repetitions. This procedure gives us the 249 subgroups of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), some of which are shown in Fig. 8. Finally, we find the homomorphisms \(\phi:\mathcal{G}\rightarrow\mathbb{Z}_{p^{n_{1}}}\). For the case of a qubit and a 4-level system, we need the homomorphisms \(\phi:\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2} \rightarrow\mathbb{Z}_{4}\). As before, we need to follow the procedure mentioned in Appendix D. In this case, \(n=2\) and therefore \(\nu=2\).
The homomorphisms \(\phi\) are characterized by the values on the basis, \(\phi_{\alpha}=\phi(\vec{e}_{\alpha})\), which have to follow the conditions of eq. (25b), which lead to:

* \(\phi_{1}\): Since \(\alpha=1<2=\nu\), we are in the second case of eq. (25b), thus \(\phi_{1}\) is a number modulo \(p^{n}=4\).
* \(\phi_{2}\): Since \(\alpha=2=\nu\), we are in the first case of eq. (25b), thus \(\phi_{2}=p^{n-M_{2}}s_{2}=s_{2}\) with \(s_{2}\) a number modulo \(p^{M_{2}}=4\).
* \(\phi_{3}\): Since \(\alpha=3>2\), \(\phi_{3}=p^{n-M_{3}}s_{3}=2s_{3}\) with \(s_{3}\) a number modulo \(p^{M_{3}}=2\), so that \(\phi_{3}=0,2\).
* \(\phi_{4}\): Similarly to \(\phi_{3}\), we find that \(\phi_{4}=0,2\).

Therefore, the homomorphisms for a qubit and a 4-level system are given by the 4 numbers \(\phi_{1},\phi_{2},\phi_{3},\phi_{4}\) with \(\phi_{1},\phi_{2}\in\{0,1,2,3\}\) and \(\phi_{3},\phi_{4}\in\{0,2\}\), for a total of 64 possibilities.

### Most General Case

In the most general case we have \(N\) particles, each with arbitrary dimension \(d_{i}\), and so the group under consideration is \(\mathcal{G}=\bigoplus_{i=1}^{N}\mathbb{Z}_{d_{i}}\oplus\mathbb{Z}_{d_{i}}\). Then, in this direct sum, we can first separate each \(\mathbb{Z}_{d_{i}}\) as a sum of cyclic groups of prime power orders, as was done in Section E.3 of this appendix. Then, having written \(\mathcal{G}\) as a direct sum of cyclic groups of prime power order, we collect together the cyclic groups whose order is a power of 2, then the cyclic groups of order a power of 3, 5, 7, and so on for each prime. After this, we can find the subgroups and homomorphisms of each of these collections as was done in Section E.4. Finally, the subgroups of \(\mathcal{G}\) can be found as cartesian products of subgroups of different collections.

## Appendix F Number of subgroups per type \(\overline{L}\)

An expression for the number of different subgroups of type \(\overline{L}\) is already known in the literature. To introduce this expression we first need to consider the Ferrers graph of \(\overline{L}\), that is, an arrangement of squares of which the first \(L_{1}\) are in the first row, the next \(L_{2}\) in the second, and so on. Then, the conjugate partition \(\overline{L}^{\prime}\) is defined as the partition whose Ferrers graph is obtained from that of \(\overline{L}\) by interchanging rows and columns. Similarly, the partition \(\overline{M}^{\prime}\) is defined as the conjugate partition of \(\overline{M}\). The number of subgroups \(\mathcal{H}\) of type \(\overline{L}\) of \(\mathcal{G}_{p}\) is given by \[\prod_{\alpha\geq 1}p^{M^{\prime}_{\alpha+1}(L^{\prime}_{\alpha}-M^{\prime}_{ \alpha})}\left[\begin{array}{c}L^{\prime}_{\alpha}-M^{\prime}_{\alpha+1}\\ M^{\prime}_{\alpha}-M^{\prime}_{\alpha+1}\end{array}\right]_{p}, \tag{11}\] where the symbol \[\left[\begin{array}{c}n\\ m\end{array}\right]_{p}=\prod_{s=1}^{m}\frac{p^{n-s+1}-1}{p^{m-s+1}-1} \tag{12}\] denotes the number of vector subspaces of dimension \(m\) in a vector space of dimension \(n\) over the field \(\mathbb{Z}_{p}\). The proof is rather intricate, and we refer the reader to the relevant literature, such as [32; 34]. However, the key fact is that the number of subgroups obtained by our algorithm can be compared with (11) to check that all subgroups of a given partition \(\overline{L}\) have been found.
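As a concrete instance of this comparison, the following brute-force sketch (an independent cross-check, not part of the algorithm of Appendix E) enumerates the full subgroup lattice of \(\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) directly and recovers the total of 249 subgroups quoted in the qubit plus 4-level example.

```python
from itertools import product

# Brute-force enumeration of the subgroup lattice of
# Z_4 (+) Z_4 (+) Z_2 (+) Z_2, grown one generator at a time.
mods = (4, 4, 2, 2)
G = list(product(*(range(m) for m in mods)))
zero = (0,) * len(mods)

def add(g, h):
    return tuple((x + y) % m for x, y, m in zip(g, h, mods))

def extend(H, g):
    """Subgroup generated by the subgroup H and one extra element g
    (valid in an abelian group: <H, g> is the union of the cosets H + k*g)."""
    S, kg = set(), zero
    while True:
        S |= {add(h, kg) for h in H}
        kg = add(kg, g)
        if kg == zero:
            return frozenset(S)

subgroups, frontier = set(), {frozenset({zero})}
while frontier:                 # grow the lattice until nothing new appears
    subgroups |= frontier
    frontier = {extend(H, g) for H in frontier for g in G} - subgroups

print(len(subgroups))  # prints 249, matching the count quoted above
```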
2304.12017
Small data solutions for the Vlasov-Poisson system with a trapping potential
In this paper, we study small data solutions for the Vlasov-Poisson system with the simplest external potential, for which unstable trapping holds for the associated Hamiltonian flow. We prove sharp decay estimates in space and time for small data solutions to the Vlasov-Poisson system with the unstable trapping potential $\frac{-|x|^2}{2}$ in dimension two or higher. The proofs are obtained through a commuting vector field approach. We exploit the uniform hyperbolicity of the Hamiltonian flow, by making use of the commuting vector fields contained in the stable and unstable invariant distributions of phase space for the linearized system. In dimension two, we make use of modified vector field techniques due to the slow decay estimates in time. Moreover, we show an explicit teleological construction of the trapped set in terms of the non-linear evolution of the force field.
Anibal Velozo Ruiz, Renato Velozo Ruiz
2023-04-24T11:33:36Z
http://arxiv.org/abs/2304.12017v1
# Small data solutions for the Vlasov-Poisson system with a trapping potential

###### Abstract.

In this paper, we study small data solutions for the Vlasov-Poisson system with the simplest external potential, for which unstable trapping holds for the associated Hamiltonian flow. We prove sharp decay estimates in space and time for small data solutions to the Vlasov-Poisson system with the unstable trapping potential \(\frac{-|x|^{2}}{2}\) in dimension two or higher. The proofs are obtained through a commuting vector field approach. We exploit the uniform hyperbolicity of the Hamiltonian flow, by making use of the commuting vector fields contained in the stable and unstable invariant distributions of phase space for the linearized system. In dimension two, we make use of modified vector field techniques due to the slow decay estimates in time. Moreover, we show an explicit teleological construction of the trapped set in terms of the non-linear evolution of the force field.

###### Contents

* 1 Introduction
* 2 Preliminaries
* 3 Decay of velocity averages for the linearized system
* 4 Small data solutions for the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\)
* 5 The two-dimensional case
* 6 The trapped set of the characteristic flow

## 1. Introduction

In this paper, we study the evolution in time of collisionless many-particle systems on \(\mathbb{R}^{n}\), which are described statistically by a distribution function on phase space that satisfies a non-linear PDE system motivated by kinetic theory. More precisely, we investigate the non-linear dynamics of solutions \(f(t,x,v)\) to the _Vlasov-Poisson system with an external potential_ \(\Phi(x)\), given by \[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f-(\nabla_{x}\Phi+\mu\nabla_{x}\phi )\cdot\nabla_{v}f=0,\\ \Delta_{x}\phi=\rho(f),\\ \rho(f)(t,x):=\int_{\mathbb{R}^{n}}f(t,x,v)dv,\\ f(t=0,x,v)=f_{0}(x,v),\end{cases} \tag{1}\] where \(t\in[0,\infty)\), \(x\in\mathbb{R}^{n}_{x}\), \(v\in\mathbb{R}^{n}_{v}\), and \(\mu\in\{1,-1\}\) is a fixed constant. According to the value of \(\mu\), the interaction between the particles of the system is either _attractive_ (when \(\mu=1\)), or _repulsive_ (when \(\mu=-1\)). The nonlinearity of this classical kinetic PDE system arises from the mean field generated by the many-particle system, through the gradient of the solution to the Poisson equation, which is determined in terms of the so-called _spatial density_ \(\rho(f)\), defined by integrating the distribution function in the velocity variables. The Vlasov-Poisson system with an external potential \(\Phi\) describes a collisionless many-particle system for which the trajectories described by its particles are set by the mean field generated by the many-particle system, and an external potential \(\Phi\) motivated by specific considerations of the problem at hand. External potentials have been previously used in the literature to study collisional and collisionless many-particle systems in kinetic theory [11, 12, 13, 14, 15, 16, 17, 18, 19]. The Vlasov-Poisson system with an external potential is motivated by the classical _Vlasov-Poisson system_, given precisely by the Vlasov-Poisson system with a vanishing external potential. The Vlasov-Poisson system was originally introduced for the study of galactic dynamics by Jeans [10], when the interaction between the particles of the system is attractive (\(\mu=1\)). In this setup, the field \(\nabla_{x}\phi\) is also known as the _gravitational field_.
Independently, the Vlasov-Poisson system was introduced for the study of plasma physics by Vlasov [10], when the interaction between the particles of the system is repulsive (\(\mu=-1\)). In this setup, the field \(\nabla_{x}\phi\) is also known as the _electric field_. We note that in the plasma physics case, the many-particle system (1) is composed of a single species of particles without global neutrality. The field \(\nabla_{x}\phi\) for the Vlasov-Poisson system with an external potential has the same meaning in both the attractive and the repulsive case. Subsequently, the Vlasov-Poisson system has been widely used to research collisionless many-particle systems in astrophysics [11] and plasma physics [12]. The Vlasov-Poisson system is a non-linear transport-elliptic type PDE system whose rich dynamics have been extensively studied in the scientific literature. The first well-posedness result for this PDE system was obtained by Okabe and Ukai [13], who proved global well-posedness in dimension two and local well-posedness in dimension three. Later in time, a large class of non-trivial stationary solutions for this system was constructed [1, 14, 15]. Seminal independent works by Pfaffelmoser [16] and Lions-Perthame [17] of the early nineties proved _global well-posedness_ for the Vlasov-Poisson system in dimension three (see also Schaeffer's proof [18]). These global well-posedness results can be adapted to incorporate an external potential \(\Phi(x)\), as long as \(\nabla_{x}\Phi\) has Lipschitz regularity (see the introduction of [10]). However, the description of the non-linear dynamics of solutions to the Vlasov-Poisson system for arbitrary finite energy data is not yet fully understood. Nonetheless, non-linear perturbative stability results for stationary solutions of this PDE system have been proved. Orbital stability under spherically symmetric perturbations has been proved for several non-increasing spherically symmetric stationary solutions [19, 20, 18, 13, 15]. We stress the work by Lemou, Mehats, and Raphael [15], who proved orbital stability under _arbitrary perturbations_ for a large class of non-increasing spherically symmetric stationary solutions previously considered in the literature. We also comment on the asymptotic stability of a point charge for the repulsive Vlasov-Poisson system in dimension three by Pausader, Widmayer, and Yang [21]. The first asymptotic stability result for solutions to the Vlasov-Poisson system was obtained by Bardos and Degond [1], who studied the evolution in time of small data solutions for the Vlasov-Poisson system for compactly supported initial data, using the method of characteristics. Later on, this small data global existence result for the Vlasov-Poisson system was improved by Hwang, Rendall and Velasquez [14], who proved optimal time (but not spatial) decay estimates for higher order derivatives of the spatial density for compactly supported data, again using the method of characteristics. More recently, the stability of the vacuum solution for the Vlasov-Poisson system a la Bardos-Degond was revisited by Smulevici [17], who proved stability based upon energy estimates using a vector field method. As a result, Smulevici [17] obtained propagation in time of a global energy bound, in terms of commuted Vlasov fields associated with conservation laws of the free transport operator, and optimal space and time decay estimates for the spatial density induced by the distribution function.
Later, Duan [18] simplified the functional framework used to prove the stability of the vacuum solution for the Vlasov-Poisson system in [17]. See [19] for another proof of the stability of vacuum using methods coming from dispersive PDEs. In this paper, we are interested in stability results for _dispersive_ collisionless many-particle systems for which the dynamics described by the particles of the system are _hyperbolic_. Motivated by this class of many-particle systems, we consider the Vlasov-Poisson system with the simplest external potential, for which _unstable trapping_ is expected to hold for the Hamiltonian flow associated to small data solutions of this system. For the purposes of this paper, we say that _unstable trapping_ holds for a Hamiltonian flow in \(\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\) if the trajectories of the flow escape to infinity for every point in phase space, except for a non-trivial set of measure zero for which the future of every trajectory of the flow is always bounded. More precisely, we study the non-linear dynamics of small data solutions for the Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\). We note that unstable trapping holds trivially for the linear Vlasov equation with the external potential \(\frac{-|x|^{2}}{2}\). As a result, we prove asymptotic stability for small data solutions to the Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\) in dimension greater than or equal to two, by using a commuting vector field method a la Smulevici. We investigate this toy model in order to offer insights on the study of stability results for dispersive collisionless many-particle systems for which the associated Hamiltonian flow is hyperbolic. This dispersive behaviour holds locally for 1D Hamiltonian flows arising from potentials with a global maximum in a neighborhood of the associated hyperbolic fixed point. An important example of dispersive collisionless many-particle systems, for which the Hamiltonian flow is hyperbolic, is given by many-particle systems in the exterior of black hole backgrounds which admit a _normally hyperbolic trapped set_[16, 17].1 Footnote 1: We stress the trapped set in the exterior of black hole backgrounds is _eventually absolutely \(r\)-normally hyperbolic_ for every \(r\) according to [14, Chapter 1, Definition 4].

### The main results

In this manuscript, we investigate the non-linear dynamics of small data solutions for the Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\), given by \[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f+x\cdot\nabla_{v}f-\mu\nabla_{x} \phi\cdot\nabla_{v}f=0,\\ \Delta_{x}\phi=\rho(f),\\ \rho(f)(t,x):=\int_{\mathbb{R}^{n}}f(t,x,v)dv,\\ f(t=0,x,v)=f_{0}(x,v),\end{cases} \tag{2}\] where \(t\in[0,\infty)\), \(x\in\mathbb{R}_{x}^{n}\), \(v\in\mathbb{R}_{v}^{n}\), and \(\mu\in\{1,-1\}\) is a fixed constant. The local well-posedness theory for this PDE system is standard (see for instance [19, Section 3]).
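For orientation, it is worth recording the explicit linearized flow, a standard computation underlying the estimates below. When \(\phi\equiv 0\), the characteristics solve \(\dot{X}=V\), \(\dot{V}=X\), so that \[X(t)=\frac{x+v}{2}e^{t}+\frac{x-v}{2}e^{-t},\qquad V(t)=\frac{x+v}{2}e^{t}- \frac{x-v}{2}e^{-t}.\] Every trajectory with \(x+v\neq 0\) escapes to infinity at the exponential rate \(e^{t}\), while the trajectories starting on \(\{x+v=0\}\) converge to the origin. This hyperbolic dichotomy is the source of the weights \(e^{t}\) in the decay estimates below, and of the trapped set \(\{x+v=0\}\) of the linearized system.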
In dimension greater than two, we study the evolution in time of small initial distribution functions \(f_{0}:\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\to[0,\infty)\), in the energy space defined by a higher order Sobolev norm: \[\mathcal{E}_{N}[f]:=\sum_{|\alpha|\leq N}\sum_{Z^{\alpha}\in\lambda^{|\alpha|} }\|Z^{\alpha}f\|_{L^{1}_{x,v}},\] where \(Z^{\alpha}\) are differential operators of order \(|\alpha|\), obtained as compositions of vector fields in a class \(\lambda\) of commuting vector fields for the linear Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\). This linear Vlasov equation corresponds to the linearization of the Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\), with respect to its vacuum solution. See Subsection 2.2 for the precise definition of \(\lambda\). **Theorem 1.1**.: _Let \(n\geq 3\) and \(N\geq 2n\). There exists \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0})\), if the initial data \(f_{0}\) for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) on \(\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\) satisfies \(\mathcal{E}_{N}[f_{0}]\leq\epsilon\), then the corresponding solution \(f\) for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) exists globally, and it satisfies the following estimates for every \(t\in[0,\infty)\) and \(x\in\mathbb{R}^{n}\):_ 1. _Global energy estimate:_ \[\mathcal{E}_{N}[f(t)]\leq 2\epsilon.\] 2. _Decay in space and time of the spatial density for any multi-index_ \(\alpha\) _of order_ \(|\alpha|\leq N-n\)_:_ \[|\rho(Z^{\alpha}f)(t,x)|\leq\frac{C_{N,n}\epsilon}{(e^{t}+|x|)^{n}},\] _as well as improved decay estimates for its derivatives_ \[|\partial_{x}^{\alpha}\rho(f)(t,x)|\leq\frac{C_{N,n}\epsilon}{(e^{t}+|x|)^{n+ |\alpha|}},\] _where_ \(C_{N,n}>0\) _is a uniform constant depending only on_ \(n\) _and_ \(N\)_._ In the two-dimensional case, we study the evolution in time of small initial distribution functions \(f_{0}:\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\to[0,\infty)\), in the energy space defined by a higher order Sobolev norm: \[\mathcal{E}_{N}^{m}[f]:=\sum_{|\alpha|\leq N}\sum_{Y^{\alpha}\in\lambda_{m}^{ |\alpha|}}\|Y^{\alpha}f\|_{L^{1}_{x,v}},\] where \(Y^{\alpha}\) are differential operators of order \(|\alpha|\), obtained as compositions of modified vector fields in a class \(\lambda_{m}\). The vector fields in \(\lambda_{m}\) are modifications of the commuting vector fields for the linear Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\) in \(\lambda\). See Subsection 5.1 for the precise definition of \(\lambda_{m}\). **Theorem 1.2**.: _Let \(N\geq 7\). There exists \(\epsilon_{0}>0\) such that for all \(\epsilon\in(0,\epsilon_{0})\), if the initial data \(f_{0}\) for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) on \(\mathbb{R}_{x}^{2}\times\mathbb{R}_{v}^{2}\) satisfies \(\mathcal{E}_{N}[f_{0}]\leq\epsilon\), then the corresponding solution \(f\) for the two dimensional Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) exists globally, and it satisfies the following estimates for every \(t\in[0,\infty)\) and \(x\in\mathbb{R}^{2}\):_ 1. _Global energy estimate:_ \[\mathcal{E}_{N}^{m}[f(t)]\leq 2\epsilon.\] 2.
_Decay in space and time of the spatial density for any multi-index_ \(\alpha\) _of order_ \(|\alpha|\leq N-2\)_:_ \[|\rho(Z^{\alpha}f)(t,x)|\leq\frac{C_{N}\epsilon}{(e^{t}+|x|)^{2}},\] _as well as improved decay estimates for its derivatives_ \[|\partial_{x}^{\alpha}\rho(f)(t,x)|\leq\frac{C_{N}\epsilon}{(e^{t}+|x|)^{2+| \alpha|}},\] _where_ \(C_{N}>0\) _is a uniform constant depending only on_ \(N\)_._ _Remark 1_.: 1. The proofs of Theorem 1.1 and Theorem 1.2 fit into the general framework of the vector field method for dispersive collisionless kinetic equations developed in [14], using weighted Sobolev estimates in terms of commuting vector fields. As a result, we obtain sharp decay estimates in space and time for the induced spatial density by exploiting the weights of the corresponding commuting vector fields. 2. We exploit the uniform hyperbolicity of the non-linear Hamiltonian flow, by making use of the commuting vector fields contained in the stable and unstable invariant distributions of phase space2 for the linearized system. In dimension two, we make use of modified vector field techniques due to the slow decay estimates in time. The modifications to the commuting vector fields for the linearized system _grow linearly in time_. This is in contrast with previous applications of modified vector fields to collisionless kinetic equations where the modifications _grow logarithmically in time_. See for instance [14, 15, 16]. As a result, we obtain exponential decay in time for the induced spatial density. The rate of exponential decay for the spatial density coincides with the _sum of all positive Lyapunov exponents_ of the Hamiltonian flow. Footnote 2: We refer to a _distribution_ in phase space \(\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\) as a map \((x,v)\mapsto\Delta_{(x,v)}\subseteq T_{(x,v)}(\mathbb{R}^{n}_{x}\times\mathbb{ R}^{n}_{v})\), where \(\Delta_{(x,v)}\) are vector subspaces satisfying suitable conditions (in the standard sense used in differential geometry). 3. The decay assumed in the velocity variable of the initial distribution functions in Theorem 1.1 and Theorem 1.2 is _optimal_. The integrability in the velocity variable of the distribution function is required to make sense of the Poisson equation classically. Similar assumptions are made for derivatives of \(f\). In particular, Theorem 1.1 and Theorem 1.2 allow initial distribution functions with infinite total Hamiltonian energy. Let \(f\) be a small data solution of (2), according to the assumptions in Theorem 1.1 or Theorem 1.2. The particle dynamics along which the distribution function \(f\) is transported correspond to the _characteristic flow_ given by \[\frac{d}{dt}X(t,x,v)=V(t,x,v),\qquad\frac{d}{dt}V(t,x,v)=X(t,x,v)-\mu\nabla_{x }\phi(t,X(t,x,v)), \tag{3}\] with the initial data \(X(0,x,v)=x\) and \(V(0,x,v)=v\). The characteristics are well-defined by the classical Cauchy-Lipschitz theorem. In the proofs of Theorem 1.1 and Theorem 1.2, we show that \(\nabla_{x}\phi\) decays exponentially in time. Thus, the characteristic flow (3) determines a decaying perturbation of the linearized particle system as \(t\to\infty\). For this reason, one expects that unstable trapping holds for the characteristic flow (3). **Definition 1.1.1**.: Let \((X(t,x,v),V(t,x,v))\) be a solution of the characteristic flow (3). We say that \((x,v)\) _escapes to infinity_ if \(\|(X(t),V(t))\|\to\infty\) as \(t\to\infty\). If \((x,v)\) does not escape to infinity, we call \((x,v)\) _trapped_.
We denote by \(\Gamma_{+}\subset\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\) the union of all trapped \((x,v)\). We call \(\Gamma_{+}\) the _trapped set_. Since the force field \(\nabla_{x}\phi\) decays exponentially in time, the origin \(\{x=0,v=0\}\) is _formally_ a fixed point of (3) when \(t\to\infty\). Applying the stable manifold theorem [11, Theorem 2.6] for decaying perturbations of time-translation-invariant dynamical systems with hyperbolic trapping, the set \[W^{s}(0,0):=\Big{\{}(x,v)\in\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}:(X(t,x,v),V(t,x,v))\to(0,0)\text{ as }t\to\infty\Big{\}}\] defines the stable manifold of the origin. Furthermore, \(W^{s}(0,0)\) is an \(n\)-dimensional invariant manifold of class \(C^{1}\) which converges to the trapped set of the linearized system \(\{x+v=0\}\) when \(t\to\infty\). In the specific case of a decaying perturbation (3) of the linearized particle system, we identify _explicitly_ the stable manifold \(W^{s}(0,0)\) in terms of the non-linear evolution in time of the force field \(\nabla_{x}\phi\). Moreover, we characterize the trapped set \(\Gamma_{+}\) as the stable manifold \(W^{s}(0,0)\). **Theorem 1.3**.: _Let \(f_{0}\) be an initial data for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) on \(\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\), such that \(\mathcal{E}_{N}[f_{0}]<\epsilon_{0}\). Let \(\Gamma_{+}\) be the trapped set of the characteristic flow (3) associated to the corresponding solution \(f\) of (2). Then, the trapped set \(\Gamma_{+}\) is equal to the \(n\)-dimensional stable manifold of the origin \(W^{s}(0,0)\) of class \(C^{N-n-1}\). Moreover, the trapped set is characterized as_ \[\Gamma_{+}=\Big{\{}(x,v):x+v=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}, \tag{4}\] _where we have_ \[\Big{|}\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{|}\leq C_{N,n}\epsilon_{0},\] _with \(C_{N,n}>0\) a uniform constant depending only on \(n\) and \(N\)._ _Remark 2_.: 1. In the proof of Theorem 1.3, we first characterize the set \(W^{s}(0,0)\) as the right hand side of (4), and later we show this set is a non-empty invariant manifold of class \(C^{1}\). We find the characterization (4) of the trapped set by integrating in time the characteristic flow. In particular, we do _not_ apply the stable manifold theorem [11, Theorem 2.6] to obtain Theorem 1.3. 2. The characterization of the trapped set (4) gives an _explicit teleological construction_ of \(\Gamma_{+}\) in terms of the non-linear evolution in time of the force field \(\nabla_{x}\phi\). In particular, one can easily show that \(\Gamma_{+}\) converges quantitatively to the trapped set of the linearized system \(\{x+v=0\}\) when \(t\to\infty\). ### 1.1. Previous non-linear stability results for dispersive collisionless many-particle systems using vector field methods Vector field methods have been developed to obtain _robust_ techniques to prove asymptotic stability results for stationary solutions of non-linear evolution equations. We stress the classical vector field method developed by Klainerman [11] for the study of the wave equation in Minkowski spacetime, which allows one to prove quantitative decay estimates in space and time for solutions to the wave equation, based on weighted Sobolev estimates, using energy norms in terms of commuting vector fields arising from the symmetries of spacetime.
The vector field method has proven to be a powerful technique for the study of quasilinear systems of wave equations, such as the Einstein vacuum equations [10, 14]. The study of vector field methods for dispersive collisionless many-particle systems was pioneered by Smulevici [17], who developed a vector field method for this class of kinetic systems, inspired by the classical vector field method for wave equations introduced in [11]. Specifically, Smulevici [17] proved the stability of the vacuum solution for the Vlasov-Poisson system using this methodology. Later, Duan [18] simplified the functional framework used to prove the asymptotic stability result in [17]. Smulevici [17] was motivated by the work of Fajman, Joudioux, and Smulevici [19], who developed a vector field method to prove decay estimates in space and time for the spatial density induced by solutions to the _relativistic_ Vlasov equation in Minkowski spacetime.3 Furthermore, [19] made use of a vector field method to prove stability results for the vacuum solution of the Vlasov-Nordstrom system. Later on, Fajman, Joudioux, and Smulevici [19] once again used a vector field method to study dispersive collisionless many-particle systems in a neighborhood of Minkowski spacetime, under the geometric framework of general relativity. In other words, the authors of [19] proved the stability of Minkowski spacetime as a solution of the Einstein-Vlasov system. We emphasize that Taylor and Lindblad [18] independently proved the stability of Minkowski spacetime as a solution of the Einstein-Vlasov system by also using a vector field method. Footnote 3: In general relativity, many-particle systems can be composed of particles moving at the speed of light, whose mass vanishes. Nonetheless, we only comment on stability results for relativistic collisionless many-particle systems for which the mass of their particles is one. The vector field method for dispersive collisionless many-particle systems has also been used by Bigorgne [14, 15, 16], in order to prove the stability of vacuum for the relativistic Vlasov-Maxwell system in dimension greater than or equal to three. Wang [17] obtained another proof of the stability of vacuum for the relativistic Vlasov-Maxwell system in dimension three, by using a combination of the vector field method and Fourier techniques. We emphasize that the stability of the vacuum solution for the relativistic Vlasov-Maxwell system had first been shown by Glassey and Schaeffer [10] using the method of characteristics. ### Outline of the paper The remainder of the paper is structured as follows. * **Section 2.** We study the linearization of the non-linear Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\), with respect to its vacuum solution. We introduce the class of vector fields used to define the energy norm seen in Theorem 1.1. We conclude with some basic lemmata for the commuted equations. * **Section 3.** We prove weighted Sobolev inequalities for the induced spatial density of a distribution function by making use of commuting vector fields. We obtain decay in space and time of the spatial density induced by solutions to the linear Vlasov equation with the potential \(\frac{-|x|^{2}}{2}\). * **Section 4.** We prove global existence of small data solutions for the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) in dimension greater than two.
* **Section 5.** We prove global existence of small data solutions for the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) in dimension two using modified vector fields. * **Section 6.** We characterize the trapped set of the characteristic flow associated to the small data solutions studied in the previous sections. _Acknowledgements._ RVR would like to express his gratitude to his advisors Mihalis Dafermos and Clement Mouhot for their continued guidance and encouragement. RVR would also like to thank Leo Bigorgne and Jacques Smulevici for many helpful discussions. RVR received funding from the ANID grant 72190188, the Cambridge Trust grant 10469706, and the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant 101034255. AVR received funding from the grant FONDECYT Iniciacion 11220409. ## 2. Preliminaries In this section, we introduce the set of commuting vector fields used to study dispersion for the non-linear Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\), building upon the dynamics defined by the flow map associated to the characteristics of the linear Vlasov equation with the same external potential. Furthermore, we prove useful lemmata which are going to be applied in the following section to show weighted Sobolev inequalities for the induced spatial density of a distribution function. In the rest of the paper, the notation \(A\lesssim B\) is used repeatedly to specify that there exists a universal constant \(C>0\) such that \(A\leq CB\), where \(C\) depends only on the dimension \(n\), the corresponding order of Sobolev regularity, or other fixed constants. ### The Vlasov equation with the potential \(\frac{-|x|^{2}}{2}\) In this subsection, we study the dynamics of the linearization of the non-linear Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) with respect to its vacuum solution, which is given by the _linear Vlasov equation with the trapping potential_\(\frac{-|x|^{2}}{2}\) taking the form \[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f+x\cdot\nabla_{v}f=0,\\ f(t=0,x,v)=f_{0}(x,v),\end{cases} \tag{5}\] where \(f_{0}:\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\to[0,\infty)\) is a sufficiently regular initial data. We emphasize that this linear Vlasov equation is a transport equation along the Hamiltonian flow given by \[\frac{dx^{i}}{dt}=v^{i},\qquad\frac{dv^{i}}{dt}=x^{i}, \tag{6}\] defined by the Hamiltonian system \((\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n},H)\) in terms of the Hamiltonian \[H(x,v):=\frac{1}{2}\sum_{i=1}^{n}(v^{i})^{2}-\frac{1}{2}\sum_{i=1}^{n}(x^{i})^{2}.\] The Hamiltonian system \((\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n},H)\) is _completely integrable in the sense of Liouville_ due to the \(n\) independent conserved quantities in involution \[H^{i}(x,v):=\frac{1}{2}(v^{i})^{2}-\frac{1}{2}(x^{i})^{2},\] where \(i\in\{1,2,\ldots,n\}\), whose sum yields the total Hamiltonian \(H\). In particular, we can write an explicit solution for the linear Vlasov equation (5) by computing the flow map precisely. **Lemma 2.1.1**.: _Let \(f_{0}\) be an initial data for the Vlasov equation (5). Then, the corresponding solution \(f\) to the Vlasov equation (5) is given by_ \[f(t,x,v)=f_{0}\Big{(}x\cosh t-v\sinh t,v\cosh t-x\sinh t\Big{)}.
\tag{7}\] Proof.: Integrating directly the Hamiltonian flow (6) satisfied by the characteristics of the linear Vlasov equation, we obtain \[(X^{i}_{\mathscr{L}}+V^{i}_{\mathscr{L}})(t)=e^{t}(X^{i}_{\mathscr{L}}+V^{i}_{\mathscr{L}})(0),\qquad(X^{i}_{\mathscr{L}}-V^{i}_{\mathscr{L}})(t)=e^{-t}(X^{i}_{\mathscr{L}}-V^{i}_{\mathscr{L}})(0),\] for every \(i\in\{1,2,\ldots,n\}\). As a result, the flow map \(\phi_{t}:\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\to\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\) defined by the characteristics of the Vlasov equation (5) is given by \[\phi_{t}(x,v):=(X_{\mathscr{L}}(t),V_{\mathscr{L}}(t))=\Big{(}x\cosh t+v\sinh t,x\sinh t+v\cosh t\Big{)}, \tag{8}\] which allows us to write the solution of the linear Vlasov equation (5) as \[f(t,x,v)=f_{0}(\phi_{-t}(x,v))=f_{0}\big{(}X_{\mathscr{L}}(-t),V_{\mathscr{L}}(-t)\big{)}=f_{0}\Big{(}x\cosh t-v\sinh t,v\cosh t-x\sinh t\Big{)},\] in terms of the initial distribution function \(f_{0}\). ### Macroscopic and microscopic vector fields In this subsection, we introduce classes of vector fields contained in the tangent space of phase space used to study the dispersion of small data solutions for the non-linear Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\), motivated by the explicit dynamics of the linear Vlasov equation (5). For this purpose, we introduce the following terminology: we say that a vector field is _macroscopic_ if it is contained in the tangent space of \(\mathbb{R}^{n}_{x}\), and we say that a vector field is _microscopic_ if it is contained in the tangent space of \(\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\). Let us consider the generator of the Hamiltonian flow defined by the characteristics of the linear Vlasov equation (5) given by \[X:=v\cdot\nabla_{x}+x\cdot\nabla_{v}, \tag{9}\] and observe that the linear Vlasov equation (5) can be written as \[(\partial_{t}+X)f=0.\] The commutators between the vector fields \(\partial_{x^{i}}\), \(\partial_{v^{i}}\) and \(X\) are given by \[[\partial_{x^{i}},X]=\partial_{v^{i}},\quad[\partial_{v^{i}},X]=\partial_{x^{i}},\quad[\partial_{x^{i}},\partial_{v^{i}}]=0,\quad\text{for every $i\in\{1,2,\ldots,n\}$}.\] This allows us to exhibit several vector fields that commute with equation (5). More precisely, let us consider the following commuting microscopic vector fields 1. unstable vector fields \(U_{i}:=e^{t}(\partial_{x^{i}}+\partial_{v^{i}})\), 2. stable vector fields \(S_{i}:=e^{-t}(\partial_{x^{i}}-\partial_{v^{i}})\), 3. scaling in phase space \(L:=\sum_{i=1}^{n}x^{i}\partial_{x^{i}}+v^{i}\partial_{v^{i}}\), 4. rotations \(R_{ij}:=x^{i}\partial_{x^{j}}-x^{j}\partial_{x^{i}}+v^{i}\partial_{v^{j}}-v^{j}\partial_{v^{i}}\), and define \[\lambda:=\Big{\{}U_{i},S_{i},L,R_{ij}\Big{\}},\quad\lambda_{0}:=\Big{\{}U_{i},L,R_{ij}\Big{\}},\] where \(i,j\in\{1,2,\ldots,n\}\). The collection of microscopic vector fields \(\lambda\) is used to set the energy space on which the distribution functions in this paper are defined. **Lemma 2.2.1**.: _Let \(f\) be a regular solution of the Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\). Then, \(Zf\) is also a solution of this equation for every \(Z\in\lambda\)._ Proof.: Observe that \([\partial_{t}+X,Z]=0\), for every \(Z\in\lambda\). Thus, we have \[(\partial_{t}+X)(Zf)=Z(\partial_{t}+X)f+[\partial_{t}+X,Z]f=0,\] since \(f\) is a solution of the linear Vlasov equation. Therefore, \(Zf\) is a solution as well.
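As an illustration of the proof of Lemma 2.2.1, one can check the commutation relation directly for the unstable vector field \(U_{i}=e^{t}(\partial_{x^{i}}+\partial_{v^{i}})\): differentiating the coefficient \(e^{t}\) in time and using the commutators above, \[[\partial_{t}+X,U_{i}]=e^{t}(\partial_{x^{i}}+\partial_{v^{i}})+e^{t}[X,\partial_{x^{i}}+\partial_{v^{i}}]=e^{t}(\partial_{x^{i}}+\partial_{v^{i}})-e^{t}(\partial_{v^{i}}+\partial_{x^{i}})=0.\] The computation for the stable fields \(S_{i}\) is identical, with the sign of the exponential weight reversed.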
Observe that for every sufficiently regular solution \(f\) to the linear Vlasov equation, the norm \(\|f(t)\|_{L^{1}_{x,v}}\) is constant in time. In particular, we have that \[\|f(t)\|_{L^{1}_{x,v}}=\|f(0)\|_{L^{1}_{x,v}},\] for every \(t\geq 0\). A similar conservation law for derivatives of the distribution function follows from Lemma 2.2.1. **Corollary 2.2.2**.: _Let \(f_{0}\) be a sufficiently regular initial data for the Vlasov equation (5). Then, the corresponding solution \(f\) to the Vlasov equation (5) satisfies that_ \[\|Zf(t)\|_{L^{1}_{x,v}}=\|Zf(0)\|_{L^{1}_{x,v}},\] _for every \(t\geq 0\), and every vector field \(Z\in\lambda\)._ _Remark 3_.: In Section 3, we prove optimal space and time decay estimates for the spatial density induced by solutions to the linear Vlasov equation (5), even though spatial derivatives of the distribution function grow exponentially in time. This follows by using the commuting vector fields of the Vlasov equation contained in the invariant distributions of phase space, since the spatial derivatives of the distribution function can be written as \[\partial_{x^{i}}f(t,x,v) =\frac{1}{2}(\partial_{x^{i}}-\partial_{v^{i}})f(t,x,v)+\frac{1} {2}(\partial_{x^{i}}+\partial_{v^{i}})f(t,x,v)\] \[=\frac{1}{2}e^{t}(\partial_{x^{i}}-\partial_{v^{i}})f_{0}(x_{0}, v_{0})+\frac{1}{2}e^{-t}(\partial_{x^{i}}+\partial_{v^{i}})f_{0}(x_{0},v_{0}),\] in terms of a point \((x_{0},v_{0})\) in the support of the initial distribution function \(f_{0}\). Let us also consider the macroscopic vector fields associated to the microscopic vector fields previously defined by \[U^{x}_{i}=e^{t}\partial_{x^{i}},\quad S^{x}_{i}=e^{-t}\partial_{x^{i}},\quad L ^{x}=\sum_{i=1}^{n}x^{i}\partial_{x^{i}},\quad R^{x}_{ij}=x^{i}\partial_{x^{j }}-x^{j}\partial_{x^{i}},\] and define \[\Lambda=\Big{\{}U^{x}_{i},S^{x}_{i},L^{x},R^{x}_{ij}\Big{\}},\quad\Lambda_{0}= \Big{\{}U^{x}_{i},L^{x},R^{x}_{ij}\Big{\}},\] for \(i,j\in\{1,\ldots,n\}\). The set of macroscopic vector fields \(\Lambda\) and the set of microscopic vector fields \(\lambda\) are precisely related to each other by the following result for the study of the spatial density of an arbitrary distribution function. **Lemma 2.2.3**.: _Let \(f\) be a sufficiently regular distribution function. Then, the derivatives of the induced spatial density satisfy_ \[U_{i}^{x}\rho(f) =\rho(U_{i}f), L^{x}\rho(f)=\rho(Lf)+n\rho(f),\] \[S_{i}^{x}\rho(f) =\rho(S_{i}f), R_{ij}^{x}\rho(f)=\rho(R_{ij}f)\] _for every \(i,j\in\{1,2,\ldots,n\}\)_ ### Macroscopic and microscopic differential operators Let \((Z^{i})_{i}\) be an arbitrary ordering of the microscopic vector fields contained in \(\lambda\). In the following, we use a multi-index notation for the microscopic differential operators of order \(|\alpha|\) given by the composition \[Z^{\alpha}:=Z^{\alpha_{1}}Z^{\alpha_{2}}\ldots Z^{\alpha_{n}},\] for every multi-index \(\alpha\in\mathbb{N}^{n}\). We denote by \(\lambda^{|\alpha|}\) the family of microscopic differential operators obtained as a composition of \(|\alpha|\) vector fields in \(\lambda\). Furthermore, we can uniquely associate a macroscopic differential operator to any microscopic differential operator \(Z^{\alpha}\in\lambda^{|\alpha|}\) by replacing every microscopic vector field \(Z\) by the corresponding macroscopic vector field \(Z^{x}\). By a small abuse of notation, we denote also by \(Z^{\alpha}\) the associated macroscopic differential operator to an arbitrary microscopic differential operator \(Z^{\alpha}\). 
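For instance, for the second order microscopic operator \(Z^{\alpha}=U_{1}L\in\lambda^{2}\), the associated macroscopic operator is \(U_{1}^{x}L^{x}=e^{t}\partial_{x^{1}}\big{(}\sum_{i=1}^{n}x^{i}\partial_{x^{i}}\big{)}\), obtained by replacing each microscopic factor by its macroscopic counterpart.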
We denote by \(\Lambda^{|\alpha|}\) the family of macroscopic differential operators of order \(|\alpha|\) obtained as a composition of \(|\alpha|\) vector fields in \(\Lambda\). Finally, we denote by \(\partial_{x}^{\alpha}\) a standard macroscopic differential operator \[\partial_{x}^{\alpha}:=\partial_{x^{1}}^{\alpha_{1}}\partial_{x^{2}}^{\alpha_{2}}\ldots\partial_{x^{n}}^{\alpha_{n}},\] for every multi-index \(\alpha\in\mathbb{N}^{n}\). In the following, we show that the ordering of the vector fields chosen to build differential operators can be fixed without loss of generality, modulo some uniform constants. **Lemma 2.3.1**.: _Let \(\Omega\in\{\lambda,\lambda_{0},\Lambda,\Lambda_{0}\}\). Let \(\alpha\) and \(\beta\) be two multi-indices. Then, the commutator between \(Z^{\alpha}\in\Omega^{|\alpha|}\) and \(Z^{\beta}\in\Omega^{|\beta|}\) is given by_ \[[Z^{\alpha},Z^{\beta}]=\sum_{|\gamma|\leq|\alpha|+|\beta|-1}\sum_{Z^{\gamma}\in\Omega^{|\gamma|}}C_{\gamma}^{\alpha\beta}Z^{\gamma},\] _for some constant coefficients \(C_{\gamma}^{\alpha\beta}\)._ Proof.: Observe that \[[U_{i},R_{ij}]=U_{j},\quad[S_{i},R_{ij}]=S_{j},\quad[L,R_{ij}]=0,\quad[R_{ij},R_{jk}]=R_{ik},\] \[[U_{i},L]=U_{i},\quad[S_{i},L]=S_{i},\quad[U_{i},S_{j}]=0,\quad[U_{i},U_{j}]=0,\quad[S_{i},S_{j}]=0,\] for \(i,j\in\{1,\ldots,n\}\), and note that the same commutation relations hold if we replace \(Z\in\lambda\) by the associated macroscopic vector fields \(Z^{x}\in\Lambda\). This argument proves the result for \(|\alpha|=|\beta|=1\). The general statement follows by induction. Moreover, we can use the microscopic differential operators previously discussed to build conservation laws for higher order derivatives of a sufficiently regular solution of the Vlasov equation (5) as in Corollary 2.2.2. **Corollary 2.3.2**.: _Let \(f_{0}\) be a sufficiently regular initial data for the Vlasov equation (5). Then, the corresponding solution \(f\) to the Vlasov equation (5) satisfies_ \[\|Z^{\alpha}f(t)\|_{L^{1}_{x,v}}=\|Z^{\alpha}f(0)\|_{L^{1}_{x,v}},\] _for every \(t\geq 0\), and every multi-index \(\alpha\)._ In the following, we state a key vector field identity to obtain quantitative decay estimates in space and time for the spatial density induced by an arbitrary distribution function in terms of a higher order energy norm according to the weighted Sobolev inequalities proven in the following section. For this purpose, we first recall the relation \[|x|^{2}\partial_{x^{j}}=\sum_{i=1}^{n}x^{i}R^{x}_{ij}+x^{j}L^{x},\] noticed in [10, Lemma 2.5] between the macroscopic rotations and the macroscopic scaling. As a result, we have \[|x|\partial_{x^{j}}=\sum_{i=1}^{n}\frac{x^{i}}{|x|}R^{x}_{ij}+\frac{x^{j}}{|x|}L^{x},\] which allows us to prove the following useful lemma. **Lemma 2.3.3**.: _For any multi-index \(\alpha\), we have_ \[(e^{t}+|x|)^{|\alpha|}\partial_{x}^{\alpha}=\sum_{|\beta|\leq|\alpha|}\sum_{Z^{\beta}\in\Lambda^{|\beta|}_{0}}C_{\beta}Z^{\beta}, \tag{10}\] _for some uniformly bounded functions \(C_{\beta}\)._ We conclude this subsection by relating the macroscopic and microscopic differential operators in the same manner as in Lemma 2.2.3. **Lemma 2.3.4**.: _Let \(f\) be a sufficiently regular distribution function and let \(\alpha\) be a multi-index.
Then, there exist constant coefficients \(C_{\beta}^{\alpha}\) such that_ \[Z^{\alpha}\rho(f)=\rho(Z^{\alpha}f)+\sum_{|\beta|\leq|\alpha|-1}C_{\beta}^{\alpha}\rho(Z^{\beta}f), \tag{11}\] _where the vector fields in the left hand side are macroscopic, whereas the ones in the right hand side are microscopic._ ### The commuted equations Let us denote the non-linear transport operator applied to the distribution function in the Vlasov-Poisson system with the external potential \(\frac{-|x|^{2}}{2}\) by \[\mathbf{T}_{\phi}:=\partial_{t}+v\cdot\nabla_{x}+x\cdot\nabla_{v}-\mu\nabla_{x}\phi\cdot\nabla_{v},\] where the field \(\nabla_{x}\phi\) is defined through the Poisson equation \(\Delta\phi=\rho(f)\). **Lemma 2.4.1**.: _There exist constant coefficients \(C_{\beta\gamma}^{\alpha}\) such that_ \[[\mathbf{T}_{\phi},Z^{\alpha}]=\sum_{|\gamma|+|\beta|\leq|\alpha|,\,|\beta|\leq|\alpha|-1}C_{\beta\gamma}^{\alpha}\nabla_{x}Z^{\gamma}\phi\cdot\nabla_{v}Z^{\beta}, \tag{12}\] _where the vector fields \(Z^{\alpha}\in\lambda^{|\alpha|}\), \(Z^{\gamma}\in\Lambda^{|\gamma|}\), and \(Z^{\beta}\in\lambda^{|\beta|}\)._ Proof.: For each vector field \(Z^{i}\in\lambda\), we can easily compute \[[\mathbf{T}_{\phi},Z^{i}]=\mu\sum_{k=1}^{n}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)\partial_{v^{k}},\] where \(c_{i}=-2\) if \(Z^{i}=L\), otherwise, \(c_{i}=0\). This verifies equation (12) for \(|\alpha|=1\). We argue inductively on \(|\alpha|\) to prove the general case. Observe that \[[\mathbf{T}_{\phi},Z^{i}Z^{\alpha}]=[\mathbf{T}_{\phi},Z^{i}]Z^{\alpha}+Z^{i}[\mathbf{T}_{\phi},Z^{\alpha}].\] Since \([\mathbf{T}_{\phi},Z^{i}]Z^{\alpha}\) has the required form, it remains to analyse the second term. Note that \[Z^{i}[\mathbf{T}_{\phi},Z^{\alpha}] =\sum_{\beta,\gamma}C^{\alpha}_{\beta\gamma}\sum_{k=1}^{n}Z^{i}(\partial_{x^{k}}Z^{\gamma}\phi)\partial_{v^{k}}Z^{\beta}+\partial_{x^{k}}Z^{\gamma}\phi Z^{i}(\partial_{v^{k}}Z^{\beta})\] \[=\sum_{\beta,\gamma}C^{\alpha}_{\beta\gamma}\sum_{k=1}^{n}\partial_{x^{k}}(Z^{i}Z^{\gamma}\phi)\partial_{v^{k}}Z^{\beta}+\partial_{x^{k}}Z^{\gamma}\phi\partial_{v^{k}}Z^{i}Z^{\beta}\] \[\qquad+\sum_{\beta,\gamma}C^{\alpha}_{\beta\gamma}\sum_{k=1}^{n}[Z^{i},\partial_{x^{k}}]Z^{\gamma}\phi\partial_{v^{k}}Z^{\beta}+\partial_{x^{k}}Z^{\gamma}\phi[Z^{i},\partial_{v^{k}}]Z^{\beta},\] where we have applied \(Z^{i}\) to equation (12), which is our inductive assumption. In the last equality the first term has the correct form and the second one behaves nicely for all choices of \(Z^{i}\): if \(Z^{i}\) is stable or unstable, then the commutators vanish; if \(Z^{i}\) is a rotation, the summations cancel out; and if \(Z^{i}=L\), we note that \([L,\partial_{x^{i}}]=-\partial_{x^{i}}\) and \([L,\partial_{v^{i}}]=-\partial_{v^{i}}\). Therefore, the sum always has the required form. **Lemma 2.4.2**.: _Let \(f\) be a sufficiently regular distribution function, and let \(\phi\) be the solution to the Poisson equation \(\Delta\phi=\rho(f)\). Then, for any multi-index \(\alpha\) the function \(Z^{\alpha}\phi\) satisfies the equation_ \[\Delta Z^{\alpha}\phi=\sum_{|\beta|\leq|\alpha|}C^{\alpha}_{\beta}Z^{\beta}\rho(f),\] _for some constant coefficients \(C^{\alpha}_{\beta}\)._ Proof.: Note that \([\Delta,Z]=0\) for any \(Z\in\Lambda\setminus\{L^{x}\}\), and that \([\Delta,L^{x}]=2\Delta\). For \(|\alpha|=1\) the result holds trivially.
For higher order derivatives we proceed by induction and use that \[\Delta Z^{i}Z^{\alpha}\phi=Z^{i}\Delta Z^{\alpha}\phi+[\Delta,Z^{i}]Z^{\alpha}\phi,\] noticing that \([\Delta,Z^{i}]\) is either equal to zero, or to a multiple of \(\Delta\). ## 3. Decay of velocity averages for the linearized system In this section, we begin by proving weighted Sobolev inequalities for arbitrary finite energy distribution functions by exploiting the weights contained in the set of macroscopic vector fields \(\Lambda_{0}\) and the set of microscopic vector fields \(\lambda_{0}\). As a result, we prove sharp quantitative decay estimates in space and time for the spatial density induced by solutions to the linear Vlasov equation (5). We also obtain improved decay estimates for derivatives of the spatial density. ### Weighted Sobolev inequalities First, we prove a weighted Sobolev inequality for the spatial density induced by arbitrary finite energy distribution functions. **Proposition 3.1.1**.: _For every sufficiently regular distribution function \(f\), the induced spatial density satisfies that_ \[|\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\alpha|\leq n}\sum_{Z^{\alpha}\in\lambda_{0}^{|\alpha|}}\|Z^{\alpha}f\|_{L^{1}_{x,v}}, \tag{13}\] _for every \(t\geq 0\) and every \(x\in\mathbb{R}^{n}\)._ Proof.: Given a point \((t,x)\in\mathbb{R}\times\mathbb{R}^{n}\), we set the function \(\widetilde{\rho}:\mathbb{R}^{n}\to\mathbb{R}\) given by \(\widetilde{\rho}(y)=\rho(f)(t,x+(e^{t}+|x|)y)\). Applying the standard Sobolev inequality, we have that \[|\rho(f)(t,x)|=|\widetilde{\rho}(0)|\leq\sum_{|\alpha|\leq n}\|\partial_{y}^{\alpha}\widetilde{\rho}\|_{L^{1}(B_{n}(0,1/2))}, \tag{14}\] where \(B_{n}(0,1/2)\) denotes the open ball in \(\mathbb{R}^{n}_{y}\) of radius \(1/2\). By the chain rule, we have that \[\partial_{y^{j}}\widetilde{\rho}(y)=(e^{t}+|x|)\partial_{x^{j}}\rho(f)(t,x+(e^{t}+|x|)y).\] Hence, the derivatives \(\partial_{y}^{\alpha}\widetilde{\rho}\) can be bounded for every \(y\in B_{n}(0,1/2)\) and \(|\alpha|\leq n\) by \[|\partial_{y}^{\alpha}\widetilde{\rho}(y)| =(e^{t}+|x|)^{|\alpha|}|\partial_{x}^{\alpha}\rho(f)(t,x+(e^{t}+|x|)y)|\] \[\lesssim(e^{t}+|x+(e^{t}+|x|)y|)^{|\alpha|}|\partial_{x}^{\alpha}\rho(f)(t,x+(e^{t}+|x|)y)|\] \[\lesssim\sum_{|\beta|\leq|\alpha|}\sum_{Z^{\beta}\in\Lambda_{0}^{|\beta|}}|Z^{\beta}\rho(f)(t,x+(e^{t}+|x|)y)|,\] where in the second inequality we have compared \(\min_{y\in B_{n}(0,1/2)}e^{t}+|x+(e^{t}+|x|)y|\) with \(e^{t}+|x|\), and in the last inequality we have used Lemma 2.3.3. Integrating in the \(y\) coordinate, applying the change of variables \(z=(e^{t}+|x|)y\), and using the Sobolev inequality (14), we obtain \[|\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\beta|\leq n}\sum_{Z^{\beta}\in\Lambda_{0}^{|\beta|}}\|Z^{\beta}\rho(f)\|_{L^{1}_{x}}. \tag{15}\] Finally, we use Lemma 2.3.4 to conclude the proof of the proposition. We proceed to prove another weighted Sobolev inequality for the spatial density induced by _absolute values_ of arbitrary finite energy distribution functions. The proof follows by a slightly different argument from the one used for Proposition 3.1.1.
**Proposition 3.1.2**.: _For every sufficiently regular distribution function \(f\), the spatial density induced by its absolute value satisfies that_ \[\rho(|f|)(t,x)\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\alpha|\leq n}\sum_{Z^{\alpha}\in\lambda_{0}^{|\alpha|}}\|Z^{\alpha}f\|_{L^{1}_{x,v}}, \tag{16}\] _for every \(t\geq 0\) and every \(x\in\mathbb{R}^{n}\)._ Proof.: Similarly as in the proof of Proposition 3.1.1, we define a real-valued function \(\widetilde{\psi}:B_{n}(0,1/2)\to\mathbb{R}\) given by \[\widetilde{\psi}(y):=\int_{\mathbb{R}^{n}}|f|\Big{(}t,x+(e^{t}+|x|)y,v\Big{)}dv.\] Using a 1D Sobolev inequality with \(\delta=\frac{1}{4n}\), we have \[\widetilde{\psi}(0)\leq C\int_{|y_{1}|\leq\delta^{1/2}}|\partial_{y_{1}}\widetilde{\psi}(y_{1},0,\ldots,0)|+|\widetilde{\psi}(y_{1},0,\ldots,0)|dy_{1}, \tag{17}\] where we have used that for a function \(\psi\in W^{1,1}\), the absolute value of \(\psi\) belongs to \(W^{1,1}\), and satisfies \(|\partial|\psi||\leq|\partial\psi|\). Moreover, the derivative in the previous integral can be written as \[\partial_{y_{1}}\widetilde{\psi}(y_{1},0,\ldots,0) =\int_{\mathbb{R}^{n}}e^{t}\partial_{x^{1}}|f|\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}dv\] \[\qquad+\int_{\mathbb{R}^{n}}|x|\partial_{x^{1}}|f|\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}dv.\] The first integral term of the derivative above can be estimated using integration by parts in the velocity variables to obtain \[\Big{|}\int_{\mathbb{R}^{n}}e^{t}\partial_{x^{1}}|f|\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}dv\Big{|}\leq\int_{\mathbb{R}^{n}}\Big{|}e^{t}(\partial_{x^{1}}+\partial_{v^{1}})f\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}\Big{|}dv,\] and similarly for the second integral term of the derivative above. As a result, we have that \[|\partial_{y_{1}}\widetilde{\psi}(y_{1},0,\ldots,0)|\leq\sum_{Z\in\lambda}\int_{\mathbb{R}^{n}}\Big{|}Zf\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}\Big{|}dv, \tag{18}\] which can be used to estimate \(\widetilde{\psi}\) by \[\widetilde{\psi}(0) \leq\sum_{Z\in\lambda}\int_{|y_{1}|\leq\delta^{1/2}}\int_{\mathbb{R}^{n}}\Big{|}Zf\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}\Big{|}dvdy_{1}\] \[\quad+\int_{|y_{1}|\leq\delta^{1/2}}\int_{\mathbb{R}^{n}}\Big{|}f\Big{(}t,x+(e^{t}+|x|)(y_{1},0,\ldots,0),v\Big{)}\Big{|}dvdy_{1}.\] Iterating this argument for all the variables in space, we obtain \[\widetilde{\psi}(0)\leq\sum_{Z^{\alpha}\in\lambda^{|\alpha|}}\int_{y\in B_{n}(0,1/2)}\int_{\mathbb{R}^{n}}\Big{|}Z^{\alpha}f\Big{(}t,x+(e^{t}+|x|)y,v\Big{)}\Big{|}dvdy, \tag{19}\] from which the proof of the proposition follows using the change of variables \(z=(e^{t}+|x|)y\). Finally, we obtain improved decay estimates for derivatives of the spatial density by applying the weighted Sobolev inequality in Proposition 3.1.1 combined with Lemma 2.2.3 and Lemma 2.3.3.
**Proposition 3.1.3** (Improved decay estimates for derivatives of the spatial density).: _For every sufficiently regular distribution function \(f\), the induced spatial density satisfies that_ \[|\partial_{x}^{\alpha}\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n+|\alpha|}}\sum_{|\beta|\leq n+|\alpha|}\sum_{Z^{\beta}\in\lambda_{0}^{|\beta|}}\|Z^{\beta}f\|_{L^{1}_{x,v}}, \tag{20}\] _for every \(t\geq 0\), every \(x\in\mathbb{R}^{n}\), and every multi-index \(\alpha\)._ Proof.: Applying the estimate (15) obtained in the proof of Proposition 3.1.1, we have that \[|\partial_{x}^{\alpha}\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\beta|\leq n}\sum_{Z^{\beta}\in\Lambda_{0}^{|\beta|}}\|Z^{\beta}\partial_{x}^{\alpha}\rho(f)\|_{L^{1}_{x}}. \tag{21}\] The improved decay for derivatives of the spatial density follows by commuting the differential operators \(Z^{\beta}\) and \(\partial_{x}^{\alpha}\), and using Lemma 2.3.3. ### Applications to solutions to the Vlasov equation with the potential \(\frac{-|x|^{2}}{2}\) The weighted Sobolev inequality in Proposition 3.1.1 shows that the spatial density induced by solutions to the linear Vlasov equation (5) decays quantitatively in space and time. **Corollary 3.2.1**.: _Let \(f_{0}\) be a sufficiently regular initial data for the Vlasov equation (5). Then, the induced spatial density for the corresponding solution \(f\) to the Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\) satisfies_ \[|\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\alpha|\leq n}\sum_{Z^{\alpha}\in\lambda_{0}^{|\alpha|}}\|Z^{\alpha}f_{0}\|_{L^{1}_{x,v}}, \tag{22}\] _for every \(t\geq 0\), and every \(x\in\mathbb{R}^{n}\)._ The weighted Sobolev inequality in Proposition 3.1.2 shows that the spatial density induced by the absolute value of solutions to the linear Vlasov equation (5) decays quantitatively in space and time. **Corollary 3.2.2**.: _Let \(f_{0}\) be a sufficiently regular initial data for the Vlasov equation (5). Then, the induced spatial density for the corresponding solution \(f\) to the Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\) satisfies_ \[\rho(|f|)(t,x)\lesssim\frac{1}{(e^{t}+|x|)^{n}}\sum_{|\alpha|\leq n}\sum_{Z^{\alpha}\in\lambda_{0}^{|\alpha|}}\|Z^{\alpha}f_{0}\|_{L^{1}_{x,v}}, \tag{23}\] _for every \(t\geq 0\), and every \(x\in\mathbb{R}^{n}\)._ Finally, the improved decay estimates for derivatives of the spatial density in Proposition 3.1.3 show that the derivatives of the spatial density induced by solutions to the linear Vlasov equation (5) decay quantitatively in space and time. **Corollary 3.2.3**.: _Let \(f_{0}\) be a sufficiently regular initial data for the Vlasov equation (5). Then, the induced spatial density for the corresponding solution \(f\) to the Vlasov equation with the trapping potential \(\frac{-|x|^{2}}{2}\) satisfies_ \[|\partial_{x}^{\alpha}\rho(f)(t,x)|\lesssim\frac{1}{(e^{t}+|x|)^{n+|\alpha|}}\sum_{|\beta|\leq n+|\alpha|}\sum_{Z^{\beta}\in\lambda_{0}^{|\beta|}}\|Z^{\beta}f_{0}\|_{L^{1}_{x,v}}, \tag{24}\] _for every \(t\geq 0\), every \(x\in\mathbb{R}^{n}\), and every multi-index \(\alpha\)._ ## 4.
Small data solutions for the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) In this section, we study the evolution in time of sufficiently regular small data solutions \(f\) for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) in the energy space defined by the norm \[\mathcal{E}_{N}[f]:=\sum_{|\alpha|\leq N}\sum_{Z^{\alpha}\in\lambda^{|\alpha|} }\|Z^{\alpha}f\|_{L^{1}_{x,v}},\] where \(N\in\mathbb{N}\). We emphasize that this energy norm is stronger than the energy norms used to obtain weighted Sobolev inequalities in the previous section. Nonetheless, we can still use this norm to prove quantitative decay estimates for the spatial density induced by solutions of the non-linear system, a crucial ingredient of the global existence result. More precisely, we have included the vector fields contained in the stable invariant distribution of phase space in the energy norm used in this section. We incorporate these vector fields, as together with the unstable vector fields, they generate the standard basis \(\{\partial_{x^{i}},\partial_{v^{i}}\}\) of the tangent space of \(\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}.\) We make use of this fact in the proof of Theorem 1.1. ### The bootstrap assumption The proof of Theorem 1.1 follows by a standard continuity argument. We aim to prove that for \(\epsilon>0\) sufficiently small, if the initial data satisfies \(\mathcal{E}_{N}[f_{0}]\leq\epsilon\), then, the global energy estimate \(\mathcal{E}_{N}[f(t)]\leq 2\epsilon\) holds for every \(t\geq 0\). For this purpose, we define \[T:=\sup\Big{\{}t\geq 0:\mathcal{E}_{N}[f(s)]\leq 2\epsilon\text{ for every }s\in[0,t]\Big{\}}. \tag{25}\] In the following, we show that the energy of the distribution function satisfies \(\mathcal{E}_{N}[f(t)]\leq\frac{3\epsilon}{2}\) for every \(t\in[0,T]\). Therefore, the supremum (25) is infinite, and we obtain global existence of small data solutions for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\). ### Proof of Theorem 1.1 By the standard energy estimate for the commuted distribution function \(Z^{\alpha}f\) in \(L^{1}_{x,v}\), we obtain \[\|Z^{\alpha}f(t)\|_{L^{1}_{x,v}}\leq\|Z^{\alpha}f(0)\|_{L^{1}_{x,v}}+\int_{0}^ {t}\|\mathbf{T}_{\phi}(Z^{\alpha}f)(s)\|_{L^{1}_{x,v}}ds. \tag{26}\] Furthermore, we write the non-linear Vlasov equation for the commuted distribution function \(Z^{\alpha}f\) as \[\mathbf{T}_{\phi}(Z^{\alpha}f)=\sum_{|\beta|\leq|\alpha|-1,\,|\gamma|+|\beta| \leq|\alpha|}C^{\alpha}_{\beta\gamma}\nabla_{x}Z^{\gamma}\phi\cdot\nabla_{v}Z ^{\beta}f, \tag{27}\] by using the commutator in Lemma 2.4.1. We write the gradient in the velocity variables on the previous identity using the stable and unstable vector fields contained in \(\lambda\) by \[\partial_{v^{i}}Z^{\beta}f=\frac{1}{2e^{t}}\Big{(}e^{t}(\partial_{x^{i}}+ \partial_{v^{i}})Z^{\beta}f\Big{)}-\frac{e^{t}}{2}\Big{(}e^{-t}(\partial_{x^{i }}-\partial_{v^{i}})Z^{\beta}f\Big{)}, \tag{28}\] to obtain the bound \[\|\mathbf{T}_{\phi}(Z^{\alpha}f)\|_{L^{1}_{x,v}}\lesssim e^{t}\Big{(}\sum_{1 \leq|\beta|\leq|\alpha|,\,|\gamma|+|\beta|\leq|\alpha|+1}\|\nabla_{x}Z^{ \gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}}\Big{)} \tag{29}\] for the non-linear contribution in the energy estimate for the commuted distribution function \(Z^{\alpha}f\). 
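Note that the decomposition (28) is simply the algebraic identity \[\frac{1}{2e^{t}}U_{i}-\frac{e^{t}}{2}S_{i}=\frac{1}{2}(\partial_{x^{i}}+\partial_{v^{i}})-\frac{1}{2}(\partial_{x^{i}}-\partial_{v^{i}})=\partial_{v^{i}},\] so the exponential factor \(e^{t}\) in (29) is contributed entirely by the stable part \(-\frac{e^{t}}{2}S_{i}\), while the unstable part only carries the bounded factor \(\frac{1}{2e^{t}}\). This loss of \(e^{t}\) explains why the non-linear contribution below ultimately decays like \(e^{-(n-2)t}\), which is integrable in time precisely when \(n\geq 3\), and why the two-dimensional case requires the modified vector fields of Section 5.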
In order to bound the non-linear terms \(\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}}\), we follow the strategy used by Duan [14] to prove the stability of the vacuum solution for the Vlasov-Poisson system. More precisely, we make use of the explicit form of the Green function for the Poisson equation in \(\mathbb{R}^{n}\) to estimate the gradient \(\nabla_{x}Z^{\gamma}\phi\) combined with the bootstrap assumption to bound the derivatives of the distribution function \(Z^{\beta}f\) in \(L^{1}_{x,v}\). For this purpose, we need the following elementary estimate proved in [14, Lemma 3.2]. **Lemma 4.2.1**.: _For every \(n\geq 2\), there exists a uniform constant \(C_{n}>0\) depending only on \(n\), such that for every \(x\in\mathbb{R}^{n}\) we have_ \[\int_{\mathbb{R}^{n}}\frac{dy}{|y|^{n-1}(1+|x+y|)^{n}}\leq C_{n}.\] As a consequence of Lemma 4.2.1, we obtain decay in time for the integral term \[\int_{\mathbb{R}^{n}}\frac{1}{|y|^{n-1}(e^{t}+|x-y|)^{n}}dy=\frac{1}{e^{(n-1)t }}\int_{\mathbb{R}^{n}}\frac{1}{|y^{\prime}|^{n-1}(1+|y^{\prime}-\frac{x}{e^{ t}}|)^{n}}dy^{\prime}\lesssim\frac{1}{e^{(n-1)t}}, \tag{30}\] by using the change of variables \(y=e^{t}y^{\prime}\). We use the estimate (30) to prove decay for the gradient \(\nabla_{x}Z^{\gamma}\phi\). We improve the bootstrap assumption (25) using the following technical lemma to bound the non-linear terms \(\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}}\). **Lemma 4.2.2**.: _Under the bootstrap assumption (25), the corresponding solution \(f\) to the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) satisfies_ \[\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^ {2}}{e^{(n-1)t}} \tag{31}\] _for every \(t\in[0,T]\), and for any multi-indices \(\beta\), \(\gamma\) such that \(|\beta|\leq N\), \(|\gamma|\leq N\), and \(|\beta|+|\gamma|\leq N+1\)._ Proof.: Combining the commuted Poisson equation in Lemma 2.4.2 with the relation between the macroscopic and microscopic vector fields established in Lemma 2.3.4, we obtain \[\Delta Z^{\gamma}\phi=\sum_{|\gamma^{\prime}|\leq|\gamma|}C_{\gamma^{\prime}} ^{\gamma}\rho(Z^{\gamma^{\prime}}f),\] for some fixed coefficients \(C_{\gamma^{\prime}}^{\gamma}\). We use the Green function for the Poisson equation in \(\mathbb{R}^{n}\) to write the solution of the commuted Poisson equation as \[Z^{\gamma}\phi(t,x)=\sum_{|\gamma^{\prime}|\leq|\gamma|}\int_{\mathbb{R}^{n}} \frac{C_{n}C_{\gamma^{\prime}}^{\gamma}}{|y|^{n-2}}\rho(Z^{\gamma^{\prime}}f) (t,x-y)dy,\] whose gradient can be estimated directly by \[|\nabla_{x}Z^{\gamma}\phi(t,x)|\lesssim\sum_{|\gamma^{\prime}|\leq|\gamma|} \int_{\mathbb{R}^{n}}\frac{1}{|y|^{n-1}}\rho(|Z^{\gamma^{\prime}}f|)(t,x-y)dy. \tag{32}\] By the weighted Sobolev inequality for the absolute value of distribution functions in Proposition 3.1.2, we estimate \[\rho(|Z^{\gamma^{\prime}}f|)(t,x-y) \lesssim\frac{1}{(e^{t}+|x-y|)^{n}}\sum_{|\beta^{\prime\prime}| \leq|\gamma^{\prime}|+n}\|Z^{\beta^{\prime\prime}}f\|_{L^{1}_{x,v}}\] \[\lesssim\frac{1}{(e^{t}+|x-y|)^{n}}\sum_{|\beta^{\prime\prime}| \leq N}\|Z^{\beta^{\prime\prime}}f\|_{L^{1}_{x,v}}\] \[\lesssim\frac{\epsilon}{(e^{t}+|x-y|)^{n}},\] for every \(|\gamma^{\prime}|\leq N-n\). 
Hence, the solution of the commuted Poisson equation satisfies that for every \(|\gamma|\leq N-n\), we have \[|\nabla_{x}Z^{\gamma}\phi(t,x)|\lesssim\epsilon\sum_{|\gamma^{\prime}|\leq| \gamma|}\int_{\mathbb{R}^{n}}\frac{1}{|y|^{n-1}(e^{t}+|x-y|)^{n}}dy\lesssim \frac{\epsilon}{e^{(n-1)t}},\] where we have used the estimate (30) in the last inequality. As a result, the left hand side of (31) can be bounded by \[\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}} \lesssim\frac{\epsilon}{e^{(n-1)t}}\|Z^{\beta}f\|_{L^{1}_{x,v}}\] \[\lesssim\frac{\epsilon}{e^{(n-1)t}}\sum_{|\beta|\leq N}\|Z^{\beta }f\|_{L^{1}_{x,v}}\] \[\lesssim\frac{\epsilon^{2}}{e^{(n-1)t}},\] for every \(|\gamma|\leq N-n\). Otherwise, if \(|\gamma|>N-n\) then \(|\beta|\leq N-n\), since \(|\beta|+|\gamma|\leq N+1\) and \(N\geq 2n\). It follows from the bound \(|\beta|\leq N-n\) and Proposition 3.1.2 that \[\rho(|Z^{\beta}f|)(t,x)\lesssim\frac{\epsilon}{(e^{t}+|x|)^{n}}. \tag{33}\] Therefore \[\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}} =\int|\nabla_{x}Z^{\gamma}\phi(t,x)|\rho(|Z^{\beta}f|)(t,x)dx\] \[\lesssim\epsilon\sum_{|\gamma^{\prime}|\leq|\gamma|}\iint\frac{1} {|y|^{n-1}}\rho(|Z^{\gamma^{\prime}}f|)(t,x-y)\frac{1}{(e^{t}+|x|)^{n}}dxdy,\] \[\lesssim\epsilon\sum_{|\gamma^{\prime}|\leq|\gamma|}\iint\frac{1} {|y|^{n-1}}\rho(|Z^{\gamma^{\prime}}f|)(t,z)\frac{1}{(e^{t}+|z+y|)^{n}}dzdy,\] \[\lesssim\epsilon\sum_{|\gamma^{\prime}|\leq|\gamma|}\int\rho(|Z^{ \gamma^{\prime}}f|)(t,z)\bigg{(}\int\frac{1}{|y|^{n-1}(e^{t}+|z+y|)^{n}}dy \bigg{)}dz\] \[\lesssim\frac{\epsilon}{e^{(n-1)t}}\sum_{|\gamma^{\prime}|\leq N }\|Z^{\gamma^{\prime}}f\|_{L^{1}_{x,v}}\] \[\lesssim\frac{\epsilon^{2}}{e^{(n-1)t}},\] where we have used the change of variables \(z=x-y\) and the previous estimates (30), (32), and (33). The quantitative decay estimate for the non-linear terms \(\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\) given by Lemma 4.2.2 shows that the \(L^{1}_{x,v}\) norm of the non-linear contribution in the energy estimate for the commuted distribution function \(Z^{\alpha}f\) satisfies that for every \(t\in[0,T]\) we have \[\|\mathbf{T}_{\phi}(Z^{\alpha}f)\|_{L^{1}_{x,v}} \lesssim e^{t}\Big{(}\sum_{1\leq|\beta|\leq|\alpha|,\,|\gamma|+| \beta|\leq|\alpha|+1}\|\nabla_{x}Z^{\gamma}(\phi)Z^{\beta}f\|_{L^{1}_{x,v}} \Big{)}\] \[\lesssim\frac{\epsilon^{2}}{e^{(n-2)t}},\] by using the bound (29) previously obtained. Therefore, the energy \(\mathcal{E}_{N}[f]\) of the solution to the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) is bounded for every \(t\in[0,T]\) by \[\mathcal{E}_{N}[f(t)]\leq\mathcal{E}_{N}[f(0)]+C\epsilon^{2}\int_{0}^{t}\frac {ds}{e^{(n-2)s}}, \tag{34}\] where \(C>0\) is a uniform constant depending only on \(n\) and \(N\). We emphasize that the time integral in the right hand side of (34) is uniformly bounded for any \(t\geq 0\), due to the exponential decay in time of \(\exp(-(n-2)t)\) in dimension \(n\geq 3\). As a result, the bootstrap assumption (25) is improved provided \(\epsilon>0\) is sufficiently small so that \[\mathcal{E}_{N}[f(t)]\leq\frac{3}{2}\epsilon,\] where we have used the smallness assumption \(\mathcal{E}_{N}[f(0)]\leq\epsilon\) on the initial distribution function. This concludes the proof of the global energy estimate (i). Finally, note that the decay estimates in space and time for the induced spatial density in (ii) follow from applying the global energy estimate (i) combined with Proposition 3.1.2, and Proposition 3.1.3. ## 5. 
The two-dimensional case In this section, we study the evolution in time of sufficiently regular small data solutions \(f\) for the Vlasov-Poisson system with the trapping potential \(\frac{-|x|^{2}}{2}\) in dimension two. ### The modified vector fields We recall the class \(\lambda\) of commuting vector fields given by \[\lambda=\Big{\{}U_{i},S_{i},L,R_{ij}\Big{\}},\] where \(i,j\in\{1,2\}\). Let \((Z^{i})_{i}\) be an arbitrary ordering of the microscopic vector fields in \(\lambda\). For each vector field \(Z^{i}\in\lambda\), we compute \[[\mathbf{T}_{\phi},Z^{i}]=\mu\sum_{k=1}^{2}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)\partial_{v^{k}},\] where \(c_{i}=-2\) if \(Z^{i}=L\), otherwise, \(c_{i}=0\). This commutator can be written in terms of the vector fields in \(\lambda\) by using the identity \[\partial_{v^{k}}=\frac{1}{2e^{t}}e^{t}(\partial_{x^{k}}+\partial_{v^{k}})-\frac{e^{t}}{2}e^{-t}(\partial_{x^{k}}-\partial_{v^{k}}). \tag{35}\] Using this decomposition, we gain an exponentially growing factor, which does not allow us to close the energy estimate. We avoid this problem by considering a modified set of vector fields of the form \[Y^{i}=Z^{i}-\sum_{k=1}^{2}\varphi_{k}^{i}(t,x,v)S_{k},\] where \(\varphi_{k}^{i}\) are sufficiently regular functions that vanish at \(t=0\). For every modified vector field \(Y^{i}\), we have \[[\mathbf{T}_{\phi},Y^{i}]=\mu\sum_{k=1}^{2}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)\partial_{v^{k}}-\sum_{k=1}^{2}\mathbf{T}_{\phi}(\varphi_{k}^{i})S_{k}-\mu\sum_{k,j=1}^{2}\varphi_{k}^{i}\partial_{x^{j}}S_{k}\phi\partial_{v^{j}}.\] Using the decomposition (35), we have \[[\mathbf{T}_{\phi},Y^{i}] =\frac{\mu}{2e^{t}}\sum_{k=1}^{2}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)U_{k}-\frac{e^{t}\mu}{2}\sum_{k=1}^{2}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)S_{k}-\sum_{k=1}^{2}\mathbf{T}_{\phi}(\varphi_{k}^{i})S_{k}\] \[\quad-\frac{\mu}{2e^{t}}\sum_{k,j=1}^{2}\varphi_{k}^{i}\partial_{x^{j}}S_{k}\phi U_{j}+\frac{e^{t}\mu}{2}\sum_{k,j=1}^{2}\varphi_{k}^{i}\partial_{x^{j}}S_{k}\phi S_{j}.\] We remove the slower decaying terms by setting \(\mathbf{T}_{\phi}(\varphi_{k}^{i})=-\frac{\mu}{2}e^{t}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)\). **Definition 5.1.1**.: Let \(\{Z^{i}\}_{i}\) be an ordering of \(\lambda\). The modified vector fields \(Y^{i}\) are defined as \[Y^{i}:=Z^{i}-\sum_{k=1}^{n}\varphi_{k}^{i}(t,x,v)S_{k},\qquad S_{k}:=e^{-t}(\partial_{x^{k}}-\partial_{v^{k}}),\] where 1. \(\varphi_{k}^{i}\equiv 0\) if \(Z^{i}\) is a stable vector field, i.e. \(Z^{i}=S_{k}\). 2. If \(Z^{i}\) is not a stable vector field, then \(\varphi_{k}^{i}(t,x,v)\) is determined by \[\mathbf{T}_{\phi}(\varphi_{k}^{i})=-\frac{\mu}{2}e^{t}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi),\qquad\varphi_{k}^{i}(0,x,v)=0,\] where \(c_{i}=-2\) if \(Z^{i}=L\), otherwise, we set \(c_{i}=0\). The set of modified vector fields is denoted by \(\lambda_{m}\). Throughout the paper, we denote by \(Y\) a generic modified vector field in \(\lambda_{m}\). We use a multi-index notation for the microscopic differential operators of order \(|\alpha|\) given by the composition \[Y^{\alpha}=Y^{\alpha_{1}}Y^{\alpha_{2}}\dots Y^{\alpha_{n}},\] for every multi-index \(\alpha\). We denote by \(\lambda_{m}^{|\alpha|}\) the family of microscopic differential operators obtained as a composition of \(|\alpha|\) vector fields in \(\lambda_{m}\). We denote by \(\mathcal{M}\) the set of all functions \(\{\varphi_{k}^{i}\}\). We also denote by \(\varphi\) a generic function in \(\mathcal{M}\).
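To see why the modifications grow only linearly in time, as asserted in Remark 1, note that part 2 of Definition 5.1.1 determines \(\varphi_{k}^{i}\) by integration along the non-linear characteristics: writing \((X(s),V(s))\) for the characteristic through \((x,v)\), we have \[\varphi_{k}^{i}(t,X(t),V(t))=-\frac{\mu}{2}\int_{0}^{t}e^{s}\,\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)(s,X(s))\,ds.\] Under the bootstrap assumption \(|\nabla_{x}Z^{\alpha}\phi(t,x)|\leq\epsilon^{\frac{1}{2}}e^{-t}\) made below, the integrand is of size \(\epsilon^{\frac{1}{2}}\) uniformly in \(s\), so that \(|\varphi_{k}^{i}|\lesssim\epsilon^{\frac{1}{2}}(1+t)\), consistently with the bootstrap assumptions on \(\varphi\).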
**Definition 5.1.2**.: We say that \(P(\varphi)\) is a multilinear form of degree \(d\) and signature less than \(k\) if \(P(\varphi)\) is of the form \[P(\varphi)=\sum_{\begin{subarray}{c}|\alpha_{1}|+\dots+|\alpha_{d}|\leq k\\ (\varphi_{1},\dots,\varphi_{d})\in\mathcal{M}^{d}\end{subarray}}C_{\bar{\alpha},\bar{\varphi}}\prod_{j=1,\dots,d}Y^{\alpha_{j}}(\varphi_{j}),\] where \(\alpha_{j}\) are multi-indices, and \(C_{\bar{\alpha},\bar{\varphi}}\) are uniform constants depending on \(\bar{\alpha}=(\alpha_{1},\dots,\alpha_{d})\) and \(\bar{\varphi}=(\varphi_{1},\dots,\varphi_{d})\). ### Properties of the modified vector fields In this subsection, we study the main properties of the modified vector fields that will be used later in the main proofs. **Lemma 5.2.1**.: _For any multi-index \(\alpha\), we have_ \[[\mathbf{T}_{\phi},Y^{\alpha}]=\sum_{d=0}^{|\alpha|+1}\sum_{i=1}^{2}\sum_{|\beta|,|\gamma|\leq|\alpha|}P^{\alpha i}_{d\gamma\beta}(\varphi)\partial_{x^{i}}Z^{\gamma}(\phi)Y^{\beta}, \tag{36}\] _where \(P^{\alpha i}_{d\gamma\beta}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\alpha|-1\) and \(k+|\gamma|+|\beta|\leq|\alpha|+1\)._ Proof.: As a first step, we show the commutation formula when \(|\alpha|=1\). After using the equation satisfied by the coefficients \(\varphi_{k}^{i}\), we have \[[\mathbf{T}_{\phi},Y^{i}]=\frac{\mu}{2e^{t}}\sum_{k=1}^{2}\partial_{x^{k}}(Z^{i}\phi+c_{i}\phi)U_{k}-\frac{\mu}{2e^{t}}\sum_{k,j=1}^{2}\varphi_{k}^{i}\partial_{x^{j}}S_{k}\phi U_{j}+\frac{e^{t}\mu}{2}\sum_{k,j=1}^{2}\varphi_{k}^{i}\partial_{x^{j}}S_{k}\phi S_{j}.\] The desired identity is obtained by rewriting the unstable vector fields as \(U_{k}=Y^{k}+\sum_{l=1}^{2}\varphi_{l}^{k}S_{l}\), where \(Y^{k}\) denotes the modified vector field associated to \(U_{k}\). The general case is proven by induction. Assume that the commutation formula holds for some multi-index \(\alpha\) and let \(Y\in\lambda_{m}\) be an arbitrary modified vector field. By the identity \[[\mathbf{T}_{\phi},YY^{\alpha}]=[\mathbf{T}_{\phi},Y]Y^{\alpha}+Y[\mathbf{T}_{\phi},Y^{\alpha}],\] it is enough to show that the terms in the right-hand side have the correct form. The first term \([\mathbf{T}_{\phi},Y]Y^{\alpha}\) is treated using the case when \(|\alpha|=1\). The second term \(Y[\mathbf{T}_{\phi},Y^{\alpha}]\) generates three different types of terms. The terms \[Y(P^{\alpha i}_{d\gamma\beta}(\varphi))\partial_{x^{i}}Z^{\gamma}(\phi)Y^{\beta}\] have the correct form, since the multilinear forms have the same degree and their signature increases by one. The terms \[P^{\alpha i}_{d\gamma\beta}(\varphi)Y(\partial_{x^{i}}Z^{\gamma}(\phi))Y^{\beta}\] also have the correct form, since \(Y\) is schematically of the form \(Z-\varphi S\), so \[P^{\alpha i}_{d\gamma\beta}(\varphi)Y(\partial_{x^{i}}Z^{\gamma}(\phi))Y^{\beta}=\sum_{|\gamma^{\prime}|\leq|\gamma|+1}P^{\prime\alpha i}_{d\gamma^{\prime}\beta}(\varphi)\partial_{x^{i}}Z^{\gamma^{\prime}}(\phi)Y^{\beta},\] where \(P^{\prime\alpha i}_{d\gamma^{\prime}\beta}\) are multilinear forms of degree at most \(d+1\) with the same signature. Finally, the last terms are of the form \(P^{\alpha i}_{d\gamma\beta}(\varphi)\partial_{x^{i}}Z^{\gamma}(\phi)YY^{\beta}\) which also satisfy the required properties.
**Lemma 5.2.2**.: _For any multi-index \(\alpha\), we have_ \[Z^{\alpha}=\sum_{d=0}^{|\alpha|}\sum_{|\beta|\leq|\alpha|}P^{\alpha}_{d\beta}( \varphi)Y^{\beta},\] _where \(P^{\alpha}_{d\beta}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) with \(k\leq|\alpha|-1\) and \(k+|\beta|\leq|\alpha|\)._ Proof.: The lemma holds for \(|\alpha|=1\), since \(Z^{i}=Y^{i}+\sum_{k=1}^{2}\varphi_{k}^{i}S_{k}\) where \(S_{k}\) is also a modified vector field. An inductive argument shows the general case. **Lemma 5.2.3**.: _For every multi-index \(\alpha\), we have_ \[\rho(Z^{\alpha}f)=\sum_{d=0}^{|\alpha|}\sum_{|\beta|\leq|\alpha|}\rho(Q_{d \beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f)+\sum_{j=1}^{|\alpha|}\sum_{d= 1}^{|\alpha|+1}\sum_{|\beta|\leq|\alpha|}\frac{1}{e^{2jt}}\rho(P_{d\beta}^{ \alpha j}(\varphi)Y^{\beta}f),\] _where \(Q_{d\beta}^{\alpha}(\partial_{x}\varphi)\) are multilinear forms with respect to \(\partial_{x}\varphi\) of degree \(d\) and signature less than \(k^{\prime}\) such that \(k^{\prime}\leq|\alpha|-1\) and \(k^{\prime}+d+|\beta|\leq|\alpha|\), and \(P_{d\beta}^{\alpha j}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\alpha|\) and \(k+|\beta|\leq|\alpha|\)._ Proof.: Given \(Z\in\lambda\), the modified vector field corresponding to \(Z\) is denoted by \(Y\in\lambda_{m}\). We will use the schematic notations \(Y=Z-\varphi S\) and \(Z=Y+\varphi S\) instead of the lengthy formulae given before. We will also use the notation \(e^{t}\partial_{x}+e^{t}\partial_{v}-\varphi S\) to denote a generic modified vector field. We denote a generic modified vector field by \(Y^{\prime}\), and a generic coefficient by the letter \(\varphi^{\prime}\in\mathcal{M}\). Firstly, we prove the lemma in the case when \(|\alpha|=1\). Given \(Z\in\lambda\), we have \[\int Z(g)dv =\int(Z-\varphi S+\varphi S)(g)dv\] \[=\int Y(g)dv+\int\varphi S(g)dv\] \[=\int Y(g)dv+\int\frac{\varphi}{e^{2t}}(e^{t}(\partial_{x}g+ \partial_{v}g)-\varphi S(g)+\varphi S(g)-2e^{t}\partial_{v}g)dv\] \[=\int Y(g)dv+\int\frac{\varphi}{e^{2t}}(Y^{\prime}(g)+\varphi S( g))dv-2\int\frac{\varphi}{e^{t}}\partial_{v}gdv.\] The first and second terms of the right-hand side of the last line have the correct form. For the last term, we integrate by parts in the velocity variable \[-\int\frac{\varphi}{e^{t}}\partial_{v}gdv =\frac{1}{e^{t}}\int\partial_{v}\varphi gdv\] \[=\frac{1}{e^{2t}}\int(e^{t}\partial_{x}\varphi+e^{t}\partial_{v }\varphi-\varphi^{\prime}S\varphi+\varphi^{\prime}S\varphi-e^{t}\partial_{x} \varphi)gdv\] \[=\frac{1}{e^{2t}}\int(Y^{\prime}\varphi+\varphi^{\prime}S\varphi )gdv-\frac{1}{e^{t}}\int\partial_{x}(\varphi)gdv,\] where now all terms are of the correct form. This proves the lemma when \(|\alpha|=1\). We now assume that the lemma holds true for some \(\alpha\). Let \(Z\) be a non-modified vector field. Using that \(Z\rho(Z^{\alpha}g)=\rho(ZZ^{\alpha}g)+c_{Z}\rho(Z^{\alpha}g)\), we only need to show that \(Z\rho(Z^{\alpha}g)\) has the correct form. 
Using the induction hypothesis and writing \(Y=Z-\varphi S\) to denote the associated modified vector field, we have \[Z\rho(Z^{\alpha}g)=\sum_{d=0}^{|\alpha|}\sum_{|\beta|\leq|\alpha|}\rho\Big{(}Z\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}\] \[\qquad\qquad\qquad+\sum_{j=1}^{|\alpha|}\sum_{d=1}^{|\alpha|+1}\sum_{|\beta|\leq|\alpha|}\frac{1}{e^{2jt}}\rho\Big{(}Z\Big{[}P_{d\beta}^{\alpha j}(\varphi)Y^{\beta}f\Big{]}\Big{)}+c_{Z}\rho(Z^{\alpha}(g)).\] The last term already has the right form. Writing \(Z=Y+\varphi S\) in the term \(\rho\Big{(}Z\Big{[}P_{d\beta}^{\alpha j}(\varphi)Y^{\beta}f\Big{]}\Big{)}\), one easily sees that these terms also have the desired form. For the missing term, we write \[\rho\Big{(}Z\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)} =\rho\Big{(}(Y+\varphi S)\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}\] \[=\rho\Big{(}Y\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}+\rho\Big{(}\varphi S\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}.\] The first term of the right-hand side has the correct form. For the second term, we write \[\varphi S=\frac{\varphi}{e^{2t}}(e^{t}\partial_{x}+e^{t}\partial_{v}-\varphi^{\prime}S+\varphi^{\prime}S-2e^{t}\partial_{v})=\frac{\varphi}{e^{2t}}Y^{\prime}+\frac{\varphi\varphi^{\prime}}{e^{2t}}S-2\frac{\varphi}{e^{t}}\partial_{v},\] so that \[\rho\Big{(}\varphi S\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}=\frac{1}{e^{2t}}\rho\Big{(}(\varphi Y^{\prime}+\varphi\varphi^{\prime}S)\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}+\frac{2}{e^{t}}\rho\Big{(}\partial_{v}\varphi\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)},\] where we have integrated by parts the last term. The first term on the right-hand side has the correct form. For the second term, we again write \(\partial_{v}\varphi=\frac{1}{e^{t}}(e^{t}\partial_{x}+e^{t}\partial_{v}-\varphi^{\prime}S+\varphi^{\prime}S-e^{t}\partial_{x})\varphi\), so that \[\frac{2}{e^{t}}\rho\Big{(}\partial_{v}\varphi\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)} =\frac{2}{e^{2t}}\rho\Big{(}Y^{\prime}(\varphi)\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}+\frac{2}{e^{2t}}\rho\Big{(}\varphi^{\prime}S(\varphi)\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)}\] \[\qquad-\frac{2}{e^{t}}\rho\Big{(}\partial_{x}\varphi\Big{[}Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f\Big{]}\Big{)},\] where all terms now have the correct form. **Lemma 5.2.4**.: _We have_ \[Y^{\alpha}\nabla\phi=Z^{\alpha}\nabla\phi+\frac{1}{e^{2t}}\sum_{d=1}^{|\alpha|}\sum_{|\beta|\leq|\alpha|}P_{d\beta}^{\alpha}(\varphi)Z^{\beta}\nabla\phi,\] _where \(P_{d\beta}^{\alpha}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\alpha|-1\) and \(k+|\beta|\leq|\alpha|\)._ Proof.: For \(|\alpha|=1\), we have \[Y\nabla\phi=(Z-\varphi S)\nabla\phi=Z\nabla\phi-\frac{1}{e^{2t}}\varphi(e^{t}\partial_{x})\nabla\phi.\] An inductive argument shows the general case. ### The bootstrap assumptions In this section, we consider distribution functions in the energy space defined in terms of the modified vector fields.
For \(N\geq 7\), we set the energy \[\mathcal{E}_{N}^{m}[f]:=\sum_{|\alpha|\leq N}\sum_{Y^{\alpha}\in\lambda_{m}^{|\alpha|}}\|Y^{\alpha}f\|_{L^{1}_{x,v}}.\] Let \(T\geq 0\) be the largest time such that, for all \(t\in[0,T]\), the following bootstrap assumptions hold: * (B1) \[\mathcal{E}_{N}^{m}[f(t)]\leq 2\epsilon.\] * For every multi-index \(\alpha\) with \(|\alpha|\leq N-4\) and every \(Y^{\alpha}\in\lambda_{m}^{|\alpha|}\), we have \[|Y^{\alpha}\varphi(t,x,v)|\leq\epsilon^{\frac{1}{2}}(1+t).\] * For every multi-index \(\alpha\) with \(|\alpha|\leq N-5\) and every \(Y^{\alpha}\in\lambda_{m}^{|\alpha|}\), we have \[|Y^{\alpha}\nabla\varphi(t,x,v)|\leq\epsilon^{\frac{1}{2}}.\] * For every multi-index \(\alpha\) with \(|\alpha|\leq N-3\) and every \(Z^{\alpha}\in\Lambda_{m}^{|\alpha|}\), we have \[|\nabla_{x}Z^{\alpha}\phi(t,x)|\leq\frac{\epsilon^{\frac{1}{2}}}{e^{t}}.\] _Remark 4_.: * The modified vector fields satisfy that \(Y^{i}=Z^{i}\) at time \(t=0\). For this reason, the energy norm of the initial data is equal to \[\mathcal{E}_{N}[f_{0}]=\sum_{|\alpha|\leq N}\sum_{Z^{\alpha}\in\lambda^{|\alpha|}}\|Z^{\alpha}f_{0}\|_{L^{1}_{x,v}}\leq\epsilon.\] * The bootstrap argument set in this subsection can also be used to show global existence for the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\) in dimension greater than two. The bootstrap assumptions can be improved in higher dimensions in the same way as in the two-dimensional case. In the rest of the section, we will only consider \(n=2\). ### Weighted Sobolev inequality with the modified vector fields Using the bootstrap assumptions on \(\varphi\), we prove a weighted Sobolev inequality in terms of the modified vector fields. **Proposition 5.4.1**.: _For every sufficiently regular distribution function \(f\), the induced spatial density satisfies_ \[\rho(|Y^{\alpha}f|)(t,x)\lesssim\frac{1}{(e^{t}+|x|)^{2}}\sum_{|\beta|\leq|\alpha|+2}\sum_{Y^{\beta}\in\lambda_{m}^{|\beta|}}\|Y^{\beta}f\|_{L^{1}_{x,v}}\] _for every \(t\geq 0\), every \(x\in\mathbb{R}^{2}\), and every multi-index \(|\alpha|\leq N-2\)._ Proof.: Similarly as in the proof of Proposition 3.1.2, we define a real-valued function \(\tilde{\psi}:B_{2}(0,1/2)\to\mathbb{R}\) given by \(\tilde{\psi}(y)=\rho(|Y^{\alpha}f|)(t,x+(e^{t}+|x|)y)\).
Using a 1D Sobolev inequality with \(\delta=\frac{1}{8}\), we have \[\rho(|Y^{\alpha}f|)(t,x)\lesssim\int_{|y_{1}|\leq\delta^{1/2}}(|\partial_{y_{ 1}}\tilde{\psi}|+|\tilde{\psi}|)(y_{1},0)dy_{1},\] where as before \[\partial_{y_{1}}\tilde{\psi}(y) =(e^{t}+|x|)\partial_{x^{1}}\rho(|Y^{\alpha}f|)(t,x+(e^{t}+|x|)y)\] \[=e^{t}\int_{v}\partial_{x^{1}}(|Y^{\alpha}f|)(t,x+(e^{t}+|x|)y,v) dv+|x|\int_{v}\partial_{x^{1}}(|Y^{\alpha}f|)(t,x+(e^{t}+|x|)y,v)dv.\] Now, \[e^{t}\int_{v}\partial_{x^{1}}(|Y^{\alpha}f|)dv =\int_{v}(e^{t}\partial_{x^{1}}+e^{t}\partial_{v^{1}}-\sum_{i=1}^{n }\varphi_{1}^{i}S_{i}+\sum_{i=1}^{n}\varphi_{1}^{i}S_{i})(|Y^{\alpha}f|)dv\] \[=\int_{v}Y_{1}(|Y^{\alpha}f|)dv+\int_{v}\sum_{i=1}^{n}\varphi_{1}^ {i}S_{i}(|Y^{\alpha}f|)dv.\] The first term on the right-hand side is simply estimated by \[\Big{|}\int_{v}Y_{1}(|Y^{\alpha}f|)dv\Big{|}\leq\int_{v}|Y_{1}Y^{\alpha}f|dv.\] For the second term, we again make the modified vector fields appear \[\int_{v}\varphi_{1}^{i}S_{i}(|Y^{\alpha}f|)dv =\int_{v}\frac{\varphi_{1}^{i}}{e^{2t}}\Big{(}e^{t}\partial_{x^{i }}+e^{t}\partial_{v^{i}}-\sum_{k=1}^{n}\varphi_{i}^{k}S_{k}+\sum_{k=1}^{n} \varphi_{i}^{k}S_{k}-2e^{t}\partial_{v^{i}}\Big{)}(|Y^{\alpha}f|)dv\] \[=\int_{v}\frac{\varphi_{1}^{i}}{e^{2t}}Y_{i}(|Y^{\alpha}f|)dv+ \int_{v}\frac{\varphi_{1}^{i}}{e^{2t}}\Big{(}\sum_{k=1}^{n}\varphi_{i}^{k}S_{ k}-2e^{t}\partial_{v^{i}}\Big{)}(|Y^{\alpha}f|)dv.\] The first term on the right-hand side can then be estimated as above, using that \(\frac{\varphi_{1}^{i}}{e^{2t}}\) is uniformly bounded from the bootstrap assumptions. For the remainder terms, we first note that, in view of the bootstrap assumptions, the terms of the form \[\Big{|}\int_{v}\frac{\varphi_{1}^{i}}{e^{2t}}\varphi_{i}^{k}S_{k}(|Y^{\alpha} f|)dv\Big{|}\] can be estimated by \[\Big{|}\int_{v}S_{k}(|Y^{\alpha}f|)dv\Big{|}\leq\int_{v}|S_{k}Y^{\alpha}f|dv.\] For the last term, we integrate by parts in the velocity variable \[-\int_{v}\frac{\varphi_{1}^{i}}{e^{t}}\partial_{v^{i}}(|Y^{\alpha }f|)dv =\frac{1}{e^{t}}\int_{v}\partial_{v^{i}}\varphi_{1}^{i}|Y^{\alpha }f|dv\] \[=\frac{1}{e^{2t}}\int_{v}\Big{(}e^{t}\partial_{x^{i}}+e^{t} \partial_{v^{i}}-\sum_{k=1}^{n}\varphi_{i}^{k}S_{k}+\sum_{k=1}^{n}\varphi_{i}^ {k}S_{k}-e^{t}\partial_{x^{i}}\Big{)}\varphi_{1}^{i}|Y^{\alpha}f|dv\] \[=\frac{1}{e^{2t}}\int_{v}Y_{i}(\varphi_{1}^{i})|Y^{\alpha}f|dv+ \frac{1}{e^{2t}}\sum_{k=1}^{n}\int_{v}\varphi_{i}^{k}S_{k}(\varphi_{1}^{i})|Y^ {\alpha}f|dv\] \[\qquad-\frac{1}{e^{t}}\int_{v}\partial_{x^{i}}(\varphi_{1}^{i})| Y^{\alpha}f|dv.\] The first term only grows like \((1+t)\) according to the bootstrap assumptions, so this growth can be absorbed thanks to the exponential factor in front. The second and third terms can also be absorbed using the exponential factor in front. Putting everything together, we obtain \[\rho(|Y^{\alpha}f|)(t,x)\lesssim\int_{|y_{1}|\leq\delta^{\frac{1}{2}}}\int_{v }(|YY^{\alpha}f|+|Y^{\alpha}f|)(t,x+(e^{t}+|x|)(y_{1},0),v)dvdy_{1}.\] The remainder of the proof follows as in the proof of Proposition 3.1.2, repeating the previous arguments for each of the variables and applying the usual change of coordinates. 

### Estimates for \(\|Y^{\alpha}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\) 

In this subsection, we prove the core estimates needed to close the bootstrap argument set up above. We begin by proving decay estimates for \(\|\nabla_{x}Z^{\gamma}(\phi)Y^{\alpha}f\|_{L^{1}_{x,v}}\). 
**Lemma 5.5.1**.: _For all multi-indices \(\gamma\) and \(\alpha\), with \(|\gamma|\leq N\) and \(|\alpha|\leq N-2\), we have_ \[\|\nabla_{x}Z^{\gamma}(\phi)Y^{\alpha}f\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon }{e^{t}}\sum_{|\beta|\leq|\gamma|}\|\rho(Z^{\beta}f)\|_{L^{1}_{x}}.\] Proof.: The proof is exactly the same as in the higher-dimensional case, where we have used the representation formula for \(\nabla_{x}Z^{\gamma}\phi\). The argument uses the weighted Sobolev inequality with modified vector fields. **Lemma 5.5.2**.: _For every sufficiently small \(\sigma>0\), there exist constants \(C_{\sigma}\) and \(\epsilon_{\sigma}\) such that if \(\epsilon\leq\epsilon_{\sigma}\), then, for all multi-indices \(\alpha\) and \(\beta\), with \(|\alpha|\leq N-1\), \(|\beta|\leq N\), and \(|\alpha|+|\beta|\leq N+1\), we have_ \[\|Y^{\alpha}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\leq C_{\sigma}e^{t\sigma}\epsilon.\] _Moreover, for all multi-indices \(\alpha\) and \(\beta\), with \(|\alpha|\leq N-2\), \(|\beta|\leq N\), and \(|\alpha|+|\beta|\leq N\), and all \(1\leq i\leq 2\), we have_ \[\|Y^{\alpha}(\partial_{x^{i}}\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\leq C_{ \sigma}\epsilon.\] Proof.: Let us denote \[\mathcal{F}(t):=\sum_{|\alpha|\leq N-1}\sum_{\begin{subarray}{c}|\beta|\leq N \\ |\alpha|+|\beta|\leq N+1\end{subarray}}\|Y^{\alpha}(\varphi)(t)Y^{\beta}(f)(t) \|_{L^{1}_{x,v}},\] \[\mathcal{G}(t):=\sum_{|\alpha|\leq N-2}\sum_{|\alpha|+|\beta|\leq N}\sum_{i=1 }^{2}\|Y^{\alpha}(\partial_{x^{i}}\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}}.\] By the bootstrap assumptions, if \(|\alpha|\leq N-4\), then \[\|Y^{\alpha}(\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}}\lesssim\epsilon^{ \frac{1}{2}}(1+t)\|Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim e^{t\sigma_{0}} \epsilon^{\frac{3}{2}},\] where \(\sigma_{0}\in(0,1)\) is a small constant that is to be fixed later. Similarly, if \(|\alpha|\leq N-5\), we have \[\|Y^{\alpha}(\partial_{x^{i}}\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}} \lesssim\epsilon^{\frac{3}{2}}.\] If \(|\alpha|>N-4\), then we have \(|\beta|\leq N-3\) since \(N\geq 7\). In this case, we estimate the terms \(\|Y^{\alpha}(\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}}\) through the method of characteristics \[\|Y^{\alpha}(\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}}\leq\int_{0}^{t}\|{ \bf T}_{\phi}(Y^{\alpha}(\varphi)Y^{\beta}(f))\|_{L^{1}_{x,v}}(s)ds.\] We decompose the term \({\bf T}_{\phi}(Y^{\alpha}(\varphi)Y^{\beta}(f))\) into three different contributions defined by \[{\bf T}_{\phi}(Y^{\alpha}(\varphi)Y^{\beta}(f))=Y^{\alpha}(\varphi){\bf T}_{ \phi}(Y^{\beta}(f))+[{\bf T}_{\phi},Y^{\alpha}](\varphi)Y^{\beta}(f)+Y^{\alpha }{\bf T}_{\phi}(\varphi)Y^{\beta}(f)=:N_{1}+N_{2}+N_{3}.\] **Estimate of \(N_{1}\).** By the commutation formula (36), we have \[N_{1}=\sum_{d=0}^{|\beta|+1}\sum_{i=1}^{2}\sum_{|\beta^{\prime}|,|\gamma|\leq| \beta|}P_{d\gamma\beta^{\prime}}^{\beta i}(\varphi)\partial_{x^{i}}Z^{\gamma}( \phi)Y^{\beta^{\prime}}(f)Y^{\alpha}(\varphi).\] For \(|\beta|\leq N-3\), the signatures of the multilinear forms \(P_{d\gamma\beta^{\prime}}^{\beta i}(\varphi)\) are less than \(N-4\). By the bootstrap assumptions, we have \[|P_{d\gamma\beta^{\prime}}^{\beta i}(\varphi)|\lesssim(1+t)^{N}\lesssim e^{t \sigma_{0}},\] \[|\partial_{x^{i}}Z^{\gamma}(\phi)|\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^ {t}}.\] 
As a result, we obtain that \[\|N_{1}\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0 })}}\mathcal{F}(t).\] **Estimate of \(N_{2}\).** By the commutation formula (36), we have \[N_{2}=\sum_{d=0}^{|\alpha|+1}\sum_{i=1}^{2}\sum_{|\beta^{\prime}|,|\gamma| \leq|\alpha|}P_{d\gamma\beta^{\prime}}^{\alpha i}(\varphi)\partial_{x^{i}}Z^{ \gamma}(\phi)Y^{\beta^{\prime}}(\varphi)Y^{\beta}(f),\] where the multilinear forms \(P_{d\gamma\beta^{\prime}}^{\alpha i}\) have signatures less than \(k\), with \(k\leq|\alpha|-1\) and \(k+|\gamma|+|\beta^{\prime}|\leq|\alpha|+1\leq N\). If \(|\gamma|\leq N-3\), then \[|\partial_{x^{i}}Z^{\gamma}(\phi)|\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t }}.\] The term \(P_{d\gamma\beta^{\prime}}^{\alpha i}(\varphi)Y^{\beta^{\prime}}(\varphi)\) is a multi-linear form with at most one factor \(Y^{\alpha^{\prime}}(\varphi)\) with \(N-4<|\alpha^{\prime}|\leq|\alpha|\), while the remaining terms can be uniformly bounded by \((1+t)^{N}\lesssim e^{t\sigma_{0}}\). Therefore, we obtain \[\|P_{d\gamma\beta^{\prime}}^{\alpha i}(\varphi)\partial_{x^{i}}Z^{\gamma}( \phi)Y^{\beta^{\prime}}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim\frac{ \epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0})}}\mathcal{F}(t).\] If \(|\gamma|>N-3\), then, by the bootstrap assumptions \[|P_{d\gamma\beta^{\prime}}^{\alpha i}(\varphi)Y^{\beta^{\prime}}(\varphi)| \lesssim(1+t)^{N}.\] By Lemma 5.5.1, we have \[\|\nabla_{x}Z^{\gamma}(\phi)Y^{\beta}f\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon }{e^{t}}\sum_{|\eta|\leq|\gamma|}\|Z^{\eta}f\|_{L^{1}_{x,v}},\] since \(|\gamma|\leq|\alpha|\leq N-1.\) By Lemma 5.2.2, we have \[\|Z^{\eta}f\|_{L^{1}_{x,v}}\leq\sum_{d^{\prime}=0}^{|\eta|}\sum_{|\eta^{\prime }|\leq|\eta|}\|P_{d^{\prime}\eta^{\prime}}^{\eta}(\varphi)Y^{\eta^{\prime}}(f) \|_{L^{1}_{x,v}}\lesssim(1+t)^{N}\mathcal{F}(t),\] so we obtain \[\|P_{d\gamma\beta^{\prime}}^{\alpha i}(\varphi)\partial_{x^{i}}Z^{\gamma}( \phi)Y^{\beta^{\prime}}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim\epsilon \frac{(1+t)^{2N}}{e^{t}}\mathcal{F}(t)\lesssim\frac{\epsilon}{e^{t(1-\sigma_{0 })}}\mathcal{F}(t).\] Putting the previous estimates together, we have \[\|N_{2}\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0})}} \mathcal{F}(t).\] **Estimate of \(N_{3}\).** Let us recall the equation that defines the modification of the vector fields, given by \[\mathbf{T}_{\phi}(\varphi)=e^{t}\sum_{i=1}^{2}\sum_{|\eta|\leq 1}c_{Z,i}\partial_{x ^{i}}Z^{\eta}\phi.\] By Lemma 5.2.4, we have \[N_{3}=e^{t}\sum_{i=1}^{2}\sum_{|\eta|\leq|\alpha|+1}c_{\eta,i} \partial_{x^{i}}Z^{\eta}(\phi)Y^{\beta}(f)+\sum_{d=1}^{|\alpha|}\sum_{i=1}^{2 }\sum_{|\eta|\leq|\alpha|+1}P^{\alpha}_{d\eta}(\varphi)\partial_{x^{i}}Z^{ \eta}(\phi)Y^{\beta}(f)=:I^{A}_{3}+I^{B}_{3},\] where \(P^{\alpha}_{d\eta}(\varphi)\) are multi-linear forms of degree \(d\) with signatures less than \(k\) satisfying \(k\leq|\alpha|\leq N-1\) and \(k+|\eta|\leq|\alpha|+1.\) If \(|\eta|\leq N-3,\) we have \[|\partial_{x^{i}}Z^{\eta}(\phi)|\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t}},\] \[\|P^{\alpha}_{d\eta}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim(1+t)^{N} \mathcal{F}(t),\] so we have \[\|P^{\alpha}_{d\eta}(\varphi)\partial_{x^{i}}Z^{\eta}(\phi)Y^{ \beta}(f)\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_ {0})}}\mathcal{F}(t).\] If \(|\eta|>N-3,\) we have \[|P^{\alpha}_{d\eta}(\varphi)|\lesssim(1+t)^{N},\] so we obtain the estimate \[\|N_{3}\|_{L^{1}_{x,v}}\lesssim e^{t}\sum_{|\eta|\leq|\alpha|+1}\| 
\partial_{x^{i}}Z^{\eta}(\phi)Y^{\beta}(f)\|_{L^{1}_{x,v}}+\frac{\epsilon^{ \frac{1}{2}}}{e^{t(1-\sigma_{0})}}\mathcal{F}(t).\] By Lemma 5.5.1, we have \[\|\partial_{x^{i}}Z^{\eta}(\phi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim\sum_{| \eta^{\prime}|\leq|\eta|}\frac{\epsilon}{e^{t}}\|\rho(Z^{\eta^{\prime}}f)\|_{ L^{1}_{x}}\lesssim\frac{\epsilon^{2}}{e^{t}}+\frac{\epsilon}{e^{t}}\sum_{1\leq| \eta^{\prime}|\leq|\eta|}\|\rho(Z^{\eta^{\prime}}f)\|_{L^{1}_{x}}.\] For \(|\eta^{\prime}|\geq 1,\) we can write \(Z^{\eta^{\prime}}f=Z^{\eta^{\prime\prime}}(Zf)\) where \(0\leq|\eta^{\prime\prime}|=|\eta^{\prime}|-1\leq N-1.\) Applying Lemma 5.2.3 to \(Zf,\) we have \[\rho(Z^{\eta^{\prime}}f) =\sum_{d=0}^{|\eta^{\prime\prime}|}\sum_{|\beta^{\prime}|\leq| \eta^{\prime\prime}|}\rho(Q^{\eta^{\prime\prime}}_{d\beta^{\prime}}(\partial_ {x}\varphi)Y^{\beta^{\prime}}(Zf))+\sum_{j=1}^{|\eta^{\prime\prime}|}\sum_{d =1}^{|\eta^{\prime\prime}|+1}\sum_{|\beta^{\prime}|\leq|\eta|}\frac{1}{e^{2jt} }\rho(P^{\eta^{\prime\prime}j}_{d\beta^{\prime}}(\varphi)Y^{\beta^{\prime}}( Zf))\] \[=:P_{1}+P_{2},\] where \(Q^{\eta^{\prime\prime}}_{d\beta^{\prime}}(\partial_{x}\varphi)\) are multilinear forms with respect to \(\partial_{x}\varphi\) of degree \(d\) and signature less than \(k^{\prime}\) such that \(k^{\prime}\leq|\eta^{\prime\prime}|-1\leq N-2\) and \(k^{\prime}+d+|\beta^{\prime}|\leq|\eta^{\prime\prime}|,\) and \(P^{\eta^{\prime\prime}j}_{d\beta^{\prime}}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\eta^{\prime\prime}|\) and \(k+|\beta^{\prime}|\leq|\eta^{\prime\prime}|\leq N-1\). For the term \(P_{1}\), we have \[\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}(\partial_{x}\varphi )Y^{\beta^{\prime}}Zf) =\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}(\partial_{x} \varphi)Y^{\beta^{\prime}}(Yf+c_{Y}\varphi Sf))\] \[=\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}(\partial_{x} \varphi)Y^{\beta^{\prime}}Yf)+\sum_{|\beta^{\prime\prime}|\leq|\beta^{\prime}| }c_{Y\beta^{\prime\prime}}\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}( \partial_{x}\varphi)Y^{\beta^{\prime\prime}}(\varphi)Y^{\beta^{\prime}-\beta^ {\prime\prime}}Sf).\] Since \(k^{\prime}\leq N-2\) and \(k^{\prime}+d+|\beta^{\prime}|+1\leq|\eta^{\prime\prime}|+1<N+1\), we have \[\|\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}(\partial_{x}\varphi)Y^{\beta ^{\prime}}Yf)\|_{L^{1}_{x}}\lesssim\mathcal{G}(t).\] For the second contribution of the term \(P_{1}\), we have either \(k^{\prime}+d\leq N-4\) or \(|\beta^{\prime\prime}|\leq N-4\), so by the bootstrap assumptions \[\|\rho(Q_{d\beta^{\prime}}^{\eta^{\prime\prime}}(\partial_{x}\varphi)Y^{\beta ^{\prime\prime}}(\varphi)Y^{\beta^{\prime}-\beta^{\prime\prime}}Sf)\|_{L^{1}_ {x}}\lesssim\mathcal{F}(t)+(1+t)\mathcal{G}(t).\] Therefore, the term \(P_{1}\) satisfies \[\|P_{1}\|_{L^{1}_{x}}\lesssim\mathcal{F}(t)+e^{t\sigma_{0}}\mathcal{G}(t).\] Using the identity \(Z=Y+\varphi S\), the term \(P_{2}\) can be estimated as \[\|P_{2}\|_{L^{1}_{x}}\lesssim\frac{(1+t)^{N}}{e^{t}}\mathcal{F}(t).\] Putting the previous bounds together, we obtain \[\|N_{3}\|_{L^{1}_{x,v}}\lesssim\epsilon^{2}+\epsilon\mathcal{F}(t)+\epsilon e ^{t\sigma_{0}}\mathcal{G}(t)+\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0}) }}\mathcal{F}(t).\] In the case when \(Y^{\alpha}=Y^{\alpha^{\prime}}\partial_{x^{l}}\), then, the term \(N_{3}\) is given by \[N_{3}=e^{t}\sum_{i=1}^{2}\sum_{|\eta|\leq|\alpha|}c_{\eta,i,l}\partial_{x^{i}} \partial_{x^{l}}Z^{\eta}(\phi)Y^{\beta}(f)+\sum_{d=1}^{|\alpha|-1}\sum_{i=1}^{2 
}\sum_{|\eta|\leq|\alpha|}P_{d\eta}^{\alpha il}(\varphi)\partial_{x^{i}}\partial_{x ^{l}}Z^{\eta}(\phi)Y^{\beta}(f).\] Using the vector field \(e^{t}\partial_{x^{l}}\in\Lambda\), the estimate of \(N_{3}\) is improved to \[\|N_{3}\|_{L^{1}_{x,v}}\lesssim\sum_{|\eta|\leq|\alpha|+1}\|\partial_{x^{i}}Z^ {\eta}(\phi)Y^{\beta}(f)\|_{L^{1}_{x,v}}+\frac{\epsilon^{\frac{1}{2}}}{e^{t(1- \sigma_{0})}}\mathcal{F}(t).\] As a result, we obtain the improved estimate \[\|N_{3}\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^{2}}{e^{t}}+\frac{\epsilon}{e^{t (1-\sigma_{0})}}\mathcal{G}(t)+\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0 })}}\mathcal{F}(t).\] Summarizing, for every \(|\alpha|>N-4\), we have \[\|\mathbf{T}_{\phi}(Y^{\alpha}(\varphi)Y^{\beta}f)\|_{L^{1}_{x,v}}\lesssim \epsilon^{2}+\epsilon\mathcal{F}(t)+\epsilon e^{t\sigma_{0}}\mathcal{G}(t)+ \frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0})}}\mathcal{F}(t).\] For every \(|\alpha|>N-5\), we have \[\|\mathbf{T}_{\phi}(Y^{\alpha}(\partial_{x}\varphi)Y^{\beta}f)\|_{L^{1}_{x,v} }\lesssim\frac{\epsilon^{2}}{e^{t}}+\frac{\epsilon}{e^{t(1-\sigma_{0})}} \mathcal{G}(t)+\frac{\epsilon^{\frac{1}{2}}}{e^{t(1-\sigma_{0})}}\mathcal{F}( t).\] Thus, by the method of characteristics we obtain \[\|Y^{\alpha}(\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v}} \leq\int_{0}^{t}\|\mathbf{T}_{\phi}(Y^{\alpha}(\varphi)Y^{\beta}(f ))\|_{L^{1}_{x,v}}(s)ds\] \[\lesssim\epsilon^{2}t+\epsilon^{\frac{1}{2}}\int_{0}^{t}\mathcal{ F}(s)ds+\epsilon\int_{0}^{t}e^{s\sigma_{0}}\mathcal{G}(s)ds,\] and \[\|Y^{\alpha}(\partial_{x}\varphi)(t)Y^{\beta}(f)(t)\|_{L^{1}_{x,v }} \leq\int_{0}^{t}\|\mathbf{T}_{\phi}(Y^{\alpha}(\partial_{x}\varphi )Y^{\beta}(f))\|_{L^{1}_{x,v}}(s)ds\] \[\lesssim\epsilon^{2}+\epsilon\int_{0}^{t}\frac{1}{e^{s(1-\sigma_ {0})}}\mathcal{G}(s)ds+\epsilon^{\frac{1}{2}}\int_{0}^{t}\frac{1}{e^{s(1- \sigma_{0})}}\mathcal{F}(s)ds.\] Therefore, we have \[\mathcal{F}(t) \lesssim\epsilon^{\frac{3}{2}}e^{t\sigma_{0}}+\epsilon^{\frac{1 }{2}}\int_{0}^{t}\mathcal{F}(s)ds+\epsilon\int_{0}^{t}e^{s\sigma_{0}}\mathcal{ G}(s)ds,\] \[\mathcal{G}(t) \lesssim\epsilon+\epsilon\int_{0}^{t}\frac{1}{e^{s(1-\sigma_{0} )}}\mathcal{G}(s)ds+\epsilon^{\frac{1}{2}}\int_{0}^{t}\frac{1}{e^{s(1-\sigma_ {0})}}\mathcal{F}(s)ds.\] Applying Gronwall's lemma to the estimate for \(\mathcal{F}(t)\), we have \[\mathcal{F}(t)\lesssim\Big{(}\epsilon^{\frac{3}{2}}e^{t\sigma_{0}}+\epsilon \int_{0}^{t}e^{s\sigma_{0}}\mathcal{G}(s)ds\Big{)}e^{tC\epsilon^{\frac{1}{2}}}.\] Applying this estimate to the bound of \(\mathcal{G}(t)\), we have \[\mathcal{G}(t)\lesssim\epsilon+\epsilon\int_{0}^{t}\frac{1}{e^{s( 1-\sigma_{0})}}\mathcal{G}(s)ds+\epsilon^{2}\int_{0}^{t}\frac{1}{e^{s(1-2 \sigma_{0}-C\epsilon^{\frac{1}{2}})}}ds\] \[\qquad\qquad\qquad\qquad+\epsilon^{\frac{3}{2}}\int_{0}^{t}\frac {1}{e^{s(1-\sigma_{0}-C\epsilon^{\frac{1}{2}})}}\int_{0}^{s}e^{\tau\sigma_{0} }\mathcal{G}(\tau)d\tau ds,\] where the last term satisfies \[\epsilon^{\frac{3}{2}}\int_{0}^{t}\frac{1}{e^{s(1-\sigma_{0}-C \epsilon^{\frac{1}{2}})}}\int_{0}^{s}e^{\tau\sigma_{0}}\mathcal{G}(\tau)d\tau ds =\epsilon^{\frac{3}{2}}\int_{0}^{t}e^{\tau\sigma_{0}}\mathcal{G }(\tau)\int_{\tau}^{t}\frac{ds}{e^{s(1-\sigma_{0}-C\epsilon^{\frac{1}{2}})}}d\tau\] \[\leq\frac{\epsilon^{\frac{3}{2}}}{1-\sigma_{0}-C\epsilon^{\frac{ 1}{2}}}\int_{0}^{t}\frac{\mathcal{G}(\tau)}{e^{\tau(1-2\sigma_{0}-C\epsilon^{ \frac{1}{2}})}}d\tau.\] Choosing \(\sigma_{0}\) and \(\epsilon_{\sigma}\) such that \(2\sigma_{0}+C\epsilon^{\frac{1}{2}}\leq\min\{\frac{1}{2},\sigma\}\), we have 
\[\mathcal{G}(t)\lesssim\epsilon,\qquad\mathcal{F}(t)\lesssim\epsilon e^{t \sigma}.\] 

### Improving the bootstrap assumptions 

In this subsection, we improve the bootstrap assumptions (B1)-(B4) by applying the estimates for the terms \(\|Y^{\alpha}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\). **Lemma 5.6.1**.: _Let \(f_{0}\) be an initial distribution function satisfying \(\mathcal{E}^{m}_{N}[f_{0}]\leq\epsilon\). If \(\epsilon>0\) is sufficiently small, then, for all \(t\in[0,T]\), we have_ \[\mathcal{E}^{m}_{N}[f(t)]\leq\frac{3}{2}\epsilon.\] Proof.: By Lemma 5.2.1, for every multi-index \(|\alpha|\leq N\), we have \[[\mathbf{T}_{\phi},Y^{\alpha}]f=\sum_{d=0}^{|\alpha|+1}\sum_{i=1}^{2}\sum_{| \beta|,|\gamma|\leq|\alpha|}P_{d\gamma\beta}^{\alpha i}(\varphi)\partial_{x^{i }}Z^{\gamma}(\phi)Y^{\beta}f,\] where \(P_{d\gamma\beta}^{\alpha i}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\alpha|-1\) and \(k+|\gamma|+|\beta|\leq|\alpha|+1\). When \(|\gamma|\leq N-3\), we have \[|\partial_{x^{i}}Z^{\gamma}(\phi)|\leq\frac{\epsilon^{\frac{1}{2}}}{e^{t}},\] by the bootstrap assumptions. By Lemma 5.5.2, we have \[\|P_{d\gamma\beta}^{\alpha,i}(\varphi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim(1+ t)^{N+1}e^{t\sigma}\epsilon,\] since \(k+|\beta|\leq N+1\) and \(k\leq N-1\). By taking \(\sigma>0\) small enough, we have \[\|P_{d\gamma\beta}^{\alpha,i}(\varphi)\partial_{x^{i}}Z^{\gamma}(\phi)Y^{ \beta}(f)\|\lesssim\frac{\epsilon^{\frac{3}{2}}}{e^{t\sigma^{\prime}}},\] for some \(\sigma^{\prime}>0\). If \(|\gamma|>N-3\), we have \[|P_{d\gamma\beta}^{\alpha,i}(\varphi)|\lesssim(1+t)^{N+1},\] since \(k,|\beta|\leq N-4\) due to \(N\geq 7\). By Lemma 5.5.1, we have \[\|\partial_{x^{i}}Z^{\gamma}(\phi)Y^{\beta}(f)\|_{L^{1}_{x,v}}\lesssim\frac{ \epsilon}{e^{t}}\sum_{|\eta|\leq|\gamma|}\|Z^{\eta}(f)\|_{L^{1}_{x,v}}.\] By Lemma 5.2.2, we have \[Z^{\eta}(f)=\sum_{d=0}^{|\eta|}\sum_{|\eta^{\prime}|\leq|\eta|}P_{d\eta^{ \prime}}^{\eta}(\varphi)Y^{\eta^{\prime}}(f),\] where \(P_{d\eta^{\prime}}^{\eta}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) with \(k\leq|\eta|-1\leq N-1\) and \(k+|\eta^{\prime}|\leq|\eta|\leq N.\) By Lemma 5.5.2, we have \[\|Z^{\eta}(f)\|_{L^{1}_{x,v}}\lesssim(1+t)^{N}e^{t\sigma}\epsilon,\] which implies the existence of \(\sigma^{\prime}>0\) such that \[\|P_{d\gamma\beta}^{\alpha,i}(\varphi)\partial_{x^{i}}Z^{\gamma}(\phi)Y^{ \beta}(f)\|\lesssim\frac{\epsilon^{2}}{e^{t\sigma^{\prime}}}.\] Thus, there exists \(\sigma^{\prime}>0\) such that \[\|\mathbf{T}_{\phi}Y^{\alpha}(f)\|_{L^{1}_{x,v}}\lesssim\frac{\epsilon^{\frac{ 3}{2}}}{e^{t\sigma^{\prime}}}.\] As a result, we obtain \[\mathcal{E}_{N}^{m}[f(t)]\leq\mathcal{E}_{N}^{m}[f_{0}]+\sum_{|\alpha|\leq N} \int_{0}^{t}\|\mathbf{T}_{\phi}Y^{\alpha}(f)\|_{L^{1}_{x,v}}\leq\epsilon+C \epsilon^{\frac{3}{2}}\int_{0}^{\infty}\frac{ds}{e^{s\sigma^{\prime}}}\leq \frac{3}{2}\epsilon,\] when \(\epsilon>0\) is small enough. **Lemma 5.6.2**.: _For every multi-index \(|\alpha|\leq N-3\), we have_ \[|\nabla_{x}Z^{\alpha}\phi(t,x)|\leq\frac{\epsilon}{e^{t}}.\] Proof.: The proof follows the same strategy as the proof of Lemma 4.2.2. 
Using the Green function for the Poisson equation, we estimate the gradient \(\nabla_{x}Z^{\gamma}\phi\) by \[|\nabla_{x}Z^{\gamma}\phi|(t,x)\lesssim\sum_{|\gamma^{\prime}|\leq|\gamma|} \int_{\mathbb{R}^{n}}\frac{1}{|y|^{n-1}}\rho(|Z^{\gamma^{\prime}}f|)(x-y)dy.\] By Lemma 5.2.3, we have \[\rho(Z^{\alpha}f)=\sum_{d=0}^{|\alpha|}\sum_{|\beta|\leq|\alpha|}\rho(Q_{d \beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f)+\sum_{j=1}^{|\alpha|}\sum_{d= 1}^{|\alpha|+1}\sum_{|\beta|\leq|\alpha|}\frac{1}{e^{2jt}}\rho(P_{d\beta}^{ \alpha j}(\varphi)Y^{\beta}f),\] where \(Q_{d\beta}^{\alpha}(\partial_{x}\varphi)\) are multilinear forms with respect to \(\partial_{x}\varphi\) of degree \(d\) and signature less than \(k^{\prime}\) such that \(k^{\prime}\leq|\alpha|-1\leq N-4\) and \(k^{\prime}+d+|\beta|\leq|\alpha|\leq N-3\), and \(P_{d\beta}^{\alpha j}(\varphi)\) are multilinear forms of degree \(d\) and signature less than \(k\) such that \(k\leq|\alpha|\leq N-3\) and \(k+|\beta|\leq|\alpha|\leq N-3.\) Applying the weighted Sobolev inequality to every term in the above equation, we have \[|\rho(Q_{d\beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f)(x-y)| \lesssim\frac{1}{(e^{t}+|x-y|)^{2}}\sum_{|\eta|\leq 2}\|Y^{\eta}[Q_{d \beta}^{\alpha}(\partial_{x}\varphi)Y^{\beta}f]\|_{L^{1}_{x,v}},\] \[|\rho(P_{d\beta}^{\alpha j}(\varphi)Y^{\beta}(f))(x-y)| \lesssim\frac{1}{(e^{t}+|x-y|)^{2}}\sum_{|\eta|\leq 2}\|Y^{\eta}[P_{d \beta}^{\alpha j}(\varphi)Y^{\beta}f]\|_{L^{1}_{x,v}}.\] Since \(N\geq 7\), there is at most one term \(Y^{\eta^{\prime}}(\varphi)\) with \(|\eta^{\prime}|>N-4\). By the bootstrap assumption and Lemma 5.5.2, we obtain \[|\rho(Z^{\alpha}f)(x-y)|\lesssim\frac{\epsilon}{(e^{t}+|x-y|)^{2}}+\frac{ \epsilon(1+t)^{N}}{(e^{t}+|x-y|)^{2}e^{t}}\lesssim\frac{\epsilon}{(e^{t}+|x- y|)^{2}}.\] By Lemma 4.2.1, we have \[|\nabla_{x}Z^{\gamma}\phi|(t,x)\lesssim\frac{\epsilon}{e^{t}}.\] **Lemma 5.6.3**.: _For every multi-index \(\alpha\) with \(|\alpha|\leq N-4\), we have_ \[|Y^{\alpha}\varphi(t,x,v)|\leq\epsilon(1+t).\] _Moreover, for every multi-index \(\alpha\) with \(|\alpha|\leq N-5\), we have_ \[|Y^{\alpha}\nabla\varphi(t,x,v)|\leq\epsilon.\] Proof.: Integrating along the characteristics, we have \[|Y^{\alpha}\varphi(t,x,v)|\leq\int_{0}^{t}\|\mathbf{T}_{\phi}Y^{\alpha}( \varphi)(s)\|_{L^{\infty}_{x,v}}ds.\] We estimate the two terms of the decomposition \[\mathbf{T}_{\phi}Y^{\alpha}(\varphi)=Y^{\alpha}\mathbf{T}_{\phi}(\varphi)+[ \mathbf{T}_{\phi},Y^{\alpha}](\varphi).\] Using the equation that defines the coefficient \(\varphi\), we have \[Y^{\alpha}\mathbf{T}_{\phi}(\varphi)=e^{t}\sum_{|\eta|\leq|\alpha|+1}c_{\eta,i} \partial_{x^{i}}Z^{\eta}(\phi)+\sum_{d=1}^{|\alpha|}\sum_{|\eta|\leq|\alpha|+1} P_{d\eta}^{\alpha}(\varphi)\partial_{x^{i}}Z^{\eta}(\phi),\] where \(P_{d\eta}^{\alpha}(\varphi)\) are multi-linear forms of degree \(d\) with signatures less than \(k\) such that \(k\leq|\alpha|\leq N-4\) and \(k+|\eta|\leq|\alpha|+1\leq N-3\). 
By the bootstrap assumptions and the improved estimates for \(\partial_{x^{i}}Z^{\eta}(\phi)\), we have \[|Y^{\alpha}\mathbf{T}_{\phi}(\varphi)|(t)\lesssim\epsilon+\epsilon\frac{(1+t)^{N+1}}{ e^{t}}\lesssim\epsilon.\] The commutator \([\mathbf{T}_{\phi},Y^{\alpha}](\varphi)\) is treated using Lemma 5.2.1, from which \[[\mathbf{T}_{\phi},Y^{\alpha}]=\sum_{d=0}^{|\alpha|+1}\sum_{i=1}^{2}\sum_{| \beta|,|\gamma|\leq|\alpha|}P_{d\gamma\beta}^{\alpha i}(\varphi)\partial_{x^ {i}}Z^{\gamma}(\phi)Y^{\beta},\] where the multilinear forms \(P_{d\gamma\beta}^{\alpha i}(\varphi)\) have degree \(d\) and signatures less than \(k\) with \(k\leq|\alpha|-1\leq N-5\) and \(k+|\gamma|+|\beta|\leq|\alpha|+1\leq N-3\). By the bootstrap assumptions, we have \[|[\mathbf{T}_{\phi},Y^{\alpha}](\varphi)|\lesssim\epsilon\frac{1+(1+t)^{N+1}} {e^{t}}\lesssim\epsilon.\] Putting the previous estimates together, we have \[|Y^{\alpha}\varphi(t,x,v)|\lesssim\int_{0}^{t}\epsilon ds\lesssim\epsilon(1+t).\] Replacing the differential operator \(Y^{\alpha}\) by \(Y^{\alpha}\partial_{x}\) in the previous estimates, the term \(\partial_{x^{i}}Z^{\eta}(\phi)\) is replaced by \(\partial_{x^{i}}\partial_{x^{j}}Z^{\eta}(\phi)\), which provides additional decay, since \(\partial_{x^{i}}\partial_{x^{j}}Z^{\eta}(\phi)=e^{-t}\partial_{x^{i}}\big{(}e^{t }\partial_{x^{j}}Z^{\eta}\phi\big{)}\) and \(e^{t}\partial_{x^{j}}\in\Lambda\). As a result, we obtain \[|Y^{\alpha}\nabla_{x}\varphi(t,x,v)|\lesssim\int_{0}^{t}\frac{\epsilon}{e^{s} }ds\lesssim\epsilon.\] In summary, we have improved the bootstrap assumptions (B1)-(B4), and therefore the proof of Theorem 1.2 is complete. 

## 6. The trapped set of the characteristic flow 

In this section, we study the trapped set \(\Gamma_{+}\) of the characteristic flow induced by the small data solutions of (2) that we studied in the previous sections. We give an explicit characterization of \(\Gamma_{+}\), which coincides with the stable manifold at the origin \(W^{s}(0,0)\). 

### Properties of the trapped set 

Let \(f\) be a small data solution to the Vlasov-Poisson system with the potential \(\frac{-|x|^{2}}{2}\), according to the assumptions in Theorem 1.1 or Theorem 1.2. Let us describe the trapped set of the particle system determined by the characteristic flow \[\frac{d}{dt}X(t,x,v)=V(t,x,v),\qquad\frac{d}{dt}V(t,x,v)=X(t,x,v)-\mu\nabla_{x }\phi(t,X(t,x,v)). \tag{37}\] We have shown that for every small data solution of the system, the force field \(\nabla_{x}\phi\) decays exponentially in time. In particular, the origin \(\{x=0,v=0\}\) is formally a fixed point of (37) when \(t\to\infty\). We define the set \[W^{s}(0,0):=\Big{\{}(x,v)\in\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}:(X(t,x,v ),V(t,x,v))\to(0,0)\text{ as }t\to\infty\Big{\}}.\] **Proposition 6.1.1**.: _The set \(W^{s}(0,0)\) is an \(n\)-dimensional invariant manifold of class \(C^{N-n-1}\). Moreover, the set \(W^{s}(0,0)\) is characterized as_ \[W^{s}(0,0)=\Big{\{}(x,v):x+v=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla _{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}. \tag{38}\] _We call \(W^{s}(0,0)\) the stable manifold of the origin._ Proof.: **Characterization of \(W^{s}(0,0)\).** Integrating the characteristic flow (37), we have \[X(t,x,v)+V(t,x,v)+e^{t}\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu \nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}=e^{t}(x+v), \tag{39}\] \[X(t,x,v)-V(t,x,v)-e^{-t}\int_{0}^{t}e^{t^{\prime}}\mu\nabla_{x} \phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}=e^{-t}(x-v). 
\tag{40}\] Thus, the characteristic flow \((X(t,x,v),V(t,x,v))\) satisfies \[X(t,x,v)=\frac{e^{t}}{2}\Big{(}x+v-\int_{0}^{t}\frac{1}{e^{t^{ \prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{)}\\ +\frac{1}{2e^{t}}\Big{(}x-v+\int_{0}^{t}e^{t^{\prime}}\mu\nabla_{ x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{)}, \tag{41}\] \[V(t,x,v)=\frac{e^{t}}{2}\Big{(}x+v-\int_{0}^{t}\frac{1}{e^{t^{ \prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{)}\\ -\frac{1}{2e^{t}}\Big{(}x-v+\int_{0}^{t}e^{t^{ \prime}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{)}. \tag{42}\] If the dimension \(n\geq 2\), then for every \(t_{1}\geq t_{2}\), we have \[\Big{|}\int_{0}^{t_{1}}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi (t^{\prime},X(t^{\prime}))dt^{\prime}-\int_{0}^{t_{2}}\frac{1}{e^{t^{\prime}}} \mu\nabla_{x}\phi(t^{\prime},X(t^{\prime}))dt^{\prime}\Big{|} \lesssim\Big{|}\int_{t_{2}}^{t_{1}}\frac{1}{e^{t^{\prime}}} \nabla_{x}\phi(t^{\prime},X(t^{\prime}))dt^{\prime}\Big{|}\] \[\lesssim\epsilon^{\frac{1}{2}}\int_{t_{2}}^{t_{1}}\frac{dt^{ \prime}}{e^{2t^{\prime}}}\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{2t_{2}}}.\] Thus, the limit \[\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{ \prime}))dt^{\prime}, \tag{43}\] is a well-defined real value such that \[\Big{|}\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t^{\prime},X (t^{\prime}))dt^{\prime}\Big{|}\lesssim\epsilon^{\frac{1}{2}}. \tag{44}\] Furthermore, for every \(t\geq 0\) we have \[\Big{|}\int_{0}^{t}e^{t^{\prime}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime}))dt ^{\prime}\Big{|}\lesssim\epsilon^{\frac{1}{2}}\int_{0}^{t}dt^{\prime}\lesssim \epsilon^{\frac{1}{2}}t,\] where we have used the decay in time of the force field \(\nabla_{x}\phi\). By the representation formulae (41) and (42), we have \[W^{s}(0,0)=\Big{\{}(x,v):e^{t}\Big{(}x+v-\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu \nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{)}\to 0\text{ as }t\to\infty \Big{\}},\] so in particular \[W^{s}(0,0)\subset\Big{\{}(x,v):x+v=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu \nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}.\] Furthermore, if \(x+v=\int_{0}^{\infty}e^{-t^{\prime}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\), then \[e^{t}\Big{|}x+v-\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\nabla_{ x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{|} =e^{t}\Big{|}\int_{t}^{\infty}\frac{1}{e^{t^{\prime}}}\nabla_{x} \phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{|}\] \[\lesssim\epsilon^{\frac{1}{2}}e^{t}\int_{t}^{\infty}\frac{dt^{ \prime}}{e^{2t^{\prime}}}\lesssim\frac{\epsilon^{\frac{1}{2}}}{e^{t}}.\] Hence, we have \[\Big{\{}(x,v):x+v=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi( t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}\subset W^{s}(0,0),\] so the equality (38) holds. **Nonemptiness of \(W^{s}(0,0)\).** By the smallness (44) of the limits (43), the sets \[A_{i}=\Big{\{}(x^{i},v^{i})\in\mathbb{R}_{x^{i}}\times\mathbb{R}_{v^{i}}:x^{i} +v^{i}>\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\partial_{x^{i}}\phi(t^{ \prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}\] and \[B_{i}=\Big{\{}(x^{i},v^{i})\in\mathbb{R}_{x^{i}}\times\mathbb{R}_{v^{i}}:x^{i }+v^{i}<\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\partial_{x^{i}}\phi(t^{ \prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}\] are clearly non-empty for every \(i\in\{1,2,\ldots,n\}\). 
By the intermediate value theorem, the sets \[W^{s}_{i}(0,0)=\Big{\{}(x^{i},v^{i})\in\mathbb{R}_{x^{i}}\times\mathbb{R}_{v^ {i}}:x^{i}+v^{i}=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\partial_{x^{i}} \phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}\Big{\}}\] are non-empty for every \(i\in\{1,2,\ldots,n\}\). **Invariance of \(W^{s}(0,0)\).** If \((x,v)\in W^{s}(0,0)\), then we have \[X(t,x,v)+V(t,x,v)=e^{t}\int_{t}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x} \phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime},\] by using the representation formula (39) and the characterization (38) of \(W^{s}(0,0)\). By a change of variables, we have \[e^{t}\int_{t}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi (t^{\prime},X(t^{\prime},x,v))dt^{\prime} =e^{t}\int_{0}^{\infty}\frac{1}{e^{t+t^{\prime}}}\mu\nabla_{x}\phi (t+t^{\prime},X(t+t^{\prime},x,v))dt^{\prime}\] \[=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t+t^ {\prime},X(t+t^{\prime},x,v))dt^{\prime}.\] Thus \((X(t,x,v),V(t,x,v))\in W^{s}(0,0)\) since \[X(t,x,v)+V(t,x,v)=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}\phi(t+ t^{\prime},X(t+t^{\prime},x,v))dt^{\prime}.\] In other words, \(W^{s}(0,0)\) is invariant. **Manifold structure of \(W^{s}(0,0)\).** We define the maps \(\Psi:\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\to\mathbb{R}^{n}\) and \(\Phi:\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\to\mathbb{R}^{n}\), given by \[\Psi(x,v):=x+v-\Phi(x,v),\qquad\Phi(x,v):=\int_{0}^{\infty}\frac{1}{e^{t^{ \prime}}}\mu\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v))dt^{\prime}.\] We have proved in (43)-(44) that \(\Phi\) is a well-defined map such that \(|\Phi(x,v)|\leq\epsilon^{\frac{1}{2}}\). In particular, the map \(\Psi\) is also well-defined. In the following, we show that \(\Psi\) and \(\Phi\) are maps in the class \(C^{N-n-1}\). We obtain the proposition by proving \(\det[\partial_{x^{j}}\Psi_{i}](x,v)\neq 0\) for every \((x,v)\in\{(x,v):\Psi(x,v)=0\}\), and then applying the implicit function theorem. _Claim 1_.: For every \((t,x,v)\in[0,\infty)\times\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\) and every \(i\in\{1,2,\dots,n\}\), we have \[|\partial_{x^{i}}X(t,x,v)|\leq(1+2\epsilon^{\frac{1}{2}})e^{t},\qquad|\partial _{v^{i}}X(t,x,v)|\leq(1+2\epsilon^{\frac{1}{2}})e^{t}. \tag{45}\] Proof.: By the formula (41), the derivatives \(\partial_{x^{i}}X(t)\), \(\partial_{v^{i}}X(t)\) satisfy \[\partial_{x^{i}}X(t)=\cosh t +\frac{1}{2e^{t}}\int_{0}^{t}e^{t^{\prime}}\mu\nabla_{x}(\partial _{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime})dt^{\prime}\] \[-\frac{e^{t}}{2}\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x} (\partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime}) dt^{\prime}, \tag{46}\] \[\partial_{v^{i}}X(t)=\sinh t +\frac{1}{2e^{t}}\int_{0}^{t}e^{t^{\prime}}\mu\nabla_{x}(\partial _{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{v^{i}}X(t^{\prime})dt^{\prime}\] \[-\frac{e^{t}}{2}\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x} (\partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{v^{i}}X(t^{\prime}) dt^{\prime}. \tag{47}\] In particular, \(\partial_{x^{i}}X(0,x,v)=1\) and \(\partial_{v^{i}}X(0,x,v)=0\) satisfy (45) for every \((x,v)\in\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\). The proof of the estimate for \(\partial_{x^{i}}X\) follows by a continuity argument. Let \[T:=\sup\Big{\{}t\geq 0:|\partial_{x^{i}}X(s,x,v)|\leq(1+2\epsilon^{\frac{1}{2}} )e^{s}\text{ for every }s\in[0,t]\Big{\}}. 
\tag{48}\] Using the bootstrap assumption in (46), we have \[|\partial_{x^{i}}X(t)| \lesssim e^{t}+(1+2\epsilon^{\frac{1}{2}})\frac{\epsilon^{\frac{ 1}{2}}t}{2e^{t}}+(1+2\epsilon^{\frac{1}{2}})\frac{\epsilon^{\frac{1}{2}}e^{t} }{2}\int_{0}^{t}\frac{dt^{\prime}}{e^{2t^{\prime}}}\] \[\lesssim(1+\epsilon^{\frac{1}{2}}+2\epsilon)e^{t}\lesssim\Big{(} 1+\frac{3}{2}\epsilon^{\frac{1}{2}}\Big{)}e^{t}.\] Therefore, the supremum (48) is infinite, and we obtain the desired estimate for \(\partial_{x^{i}}X\). The same argument proves the estimate for \(\partial_{v^{i}}X\). By Claim 1, for every \(t_{1}\geq t_{2}\) we have \[\Big{|}\int_{0}^{t_{1}}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}( \partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime}) dt^{\prime}-\int_{0}^{t_{2}}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}( \partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime})dt^ {\prime}\Big{|}\] \[\lesssim\Big{|}\int_{t_{2}}^{t_{1}}\frac{1}{e^{t^{\prime}}}\nabla_ {x}(\partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime}) dt^{\prime}\Big{|}\] \[\lesssim\epsilon^{\frac{1}{2}}(1+2\epsilon^{\frac{1}{2}})\int_{t _{2}}^{t_{1}}\frac{dt^{\prime}}{e^{2t^{\prime}}}\lesssim\epsilon^{\frac{1}{2}}\frac{1}{e ^{2t_{2}}}.\] Thus, the limit \[\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}(\partial_{x^{i}}\phi)(t^{ \prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime})dt^{\prime}, \tag{49}\] is a well-defined real value such that \[\Big{|}\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}(\partial_{x^{i}} \phi)(t^{\prime},X(t^{\prime}))\partial_{x^{i}}X(t^{\prime})dt^{\prime}\Big{|} \lesssim\epsilon^{\frac{1}{2}}. \tag{50}\] Thus, the integral \[\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\nabla_{x}(\partial_{x^{i}}\phi)(t^{ \prime},X(t^{\prime},x,v))\partial_{x^{i}}X(t^{\prime},x,v)dt^{\prime}\] converges uniformly with respect to \((x,v)\in\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\). Since the derivative \(\partial_{x^{i}}(\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v)))=\nabla_{x}( \partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime},x,v))\partial_{x^{i}}X(t^{ \prime},x,v)\) is continuous for every \((t,x,v)\in[0,\infty)\times\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\), the derivative \[\partial_{x^{i}}\Phi(x,v)=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\nabla_{ x}(\partial_{x^{i}}\phi)(t^{\prime},X(t^{\prime},x,v))\partial_{x^{i}}X(t^{ \prime},x,v)dt^{\prime} \tag{51}\] is well-defined. Furthermore, the estimate \(|\partial_{x^{i}}\Phi(x,v)|\lesssim\epsilon^{\frac{1}{2}}\) holds. The same argument shows that \(\partial_{v^{i}}\Phi(x,v)\) is well-defined and \(|\partial_{v^{i}}\Phi(x,v)|\lesssim\epsilon^{\frac{1}{2}}\). We have proved that \(\Phi\) and \(\Psi\) are maps of class \(C^{1}\). Next, we proceed to show that \(\Phi\) and \(\Psi\) are actually maps of class \(C^{N-n-1}\). In the following claim, we will use the multivariate Faà di Bruno formula [13, Theorem 2.1] to estimate the partial derivatives of \(\nabla_{x}\phi(t,X(t))\). For this purpose, we introduce a linear order in \(\mathbb{N}_{0}^{2n}\). If \(\mu=(\mu_{1},\dots,\mu_{2n})\) and \(\nu=(\nu_{1},\dots,\nu_{2n})\) belong to \(\mathbb{N}_{0}^{2n}\), we write \(\mu\prec\nu\) provided one of the following holds: 
1. \(|\mu|<|\nu|\). 
2. \(|\mu|=|\nu|\) and \(\mu_{1}<\nu_{1}\). 
3. \(|\mu|=|\nu|\), \(\mu_{1}=\nu_{1}\),..., \(\mu_{k}=\nu_{k}\), and \(\mu_{k+1}<\nu_{k+1}\) for some \(1\leq k\leq 2n-1\). 
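For illustration, take \(n=1\), so that multi-indices lie in \(\mathbb{N}_{0}^{2}\). The first few multi-indices in increasing order are \[(0,0)\prec(0,1)\prec(1,0)\prec(0,2)\prec(1,1)\prec(2,0)\prec\cdots,\] since shorter multi-indices always come first by (1), and multi-indices of equal length are compared lexicographically by (2) and (3). 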
_Claim 2_.: For every \((t,x,v)\in[0,\infty)\times\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\) and every \(2\leq|\alpha|\leq N-n-1\), we have \[|\partial_{x,v}^{\alpha}X(t,x,v)|\leq(1+2\epsilon^{\frac{1}{2}})e^{t}. \tag{52}\] Proof.: By the formula (41), the derivative \(\partial_{x,v}^{\alpha}X(t)\) satisfies \[\partial_{x,v}^{\alpha}X(t)=\frac{1}{2e^{t}}\int_{0}^{t}e^{t^{\prime}}\mu \partial_{x,v}^{\alpha}(\nabla_{x}\phi(t^{\prime},X(t^{\prime})))dt^{\prime}- \frac{e^{t}}{2}\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^{\alpha} (\nabla_{x}\phi(t^{\prime},X(t^{\prime})))dt^{\prime}. \tag{53}\] In particular, \(\partial_{x,v}^{\alpha}X(0,x,v)=0\) satisfies (52) for every \((x,v)\in\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\). Suppose that (52) holds for every derivative \(\partial_{x,v}^{\beta}X\) with \(|\beta|<|\alpha|\). If \(|\alpha|=2\), the estimate (52) holds for every \(\partial_{x,v}^{\beta}X\) with \(|\beta|<2\) by Claim 1. The proof of the estimate (52) follows by a continuity argument. Let \[T:=\sup\Big{\{}t\geq 0:|\partial_{x,v}^{\beta}X(s,x,v)|\leq(1+2\epsilon^{\frac{1}{2} })e^{s}\text{ for every }s\in[0,t]\text{ and every }|\beta|\leq|\alpha|\Big{\}}. \tag{54}\] By the multivariate Faà di Bruno formula [13, Theorem 2.1], we have \[\partial_{x,v}^{\alpha}(\nabla_{x}\phi(t,X(t)))=\sum_{1\leq|\lambda|\leq| \alpha|}\nabla_{x}\partial_{x}^{\lambda}\phi(t,X(t))\sum_{s=1}^{|\alpha|}\sum _{p_{s}(\alpha,\lambda)}(\alpha!)\prod_{j=1}^{s}\frac{\prod_{i=1}^{2n}(\partial_ {x,v}^{l_{j}}X^{i})^{k_{j}^{i}}}{k_{j}!(l_{j}!)^{|k_{j}|}}, \tag{55}\] where \[p_{s}(\alpha,\lambda)=\Big{\{}(k_{1},\ldots,k_{s};l_{1},\ldots,l_{s })\in(\mathbb{N}_{0}^{2n})^{2s}:|k_{i}|>0,\quad 0\prec l_{1}\prec\cdots\prec l_{s},\] \[\sum_{i=1}^{s}k_{i}=\lambda,\quad\sum_{i=1}^{s}|k_{i}|l_{i}=\alpha \Big{\}}.\] Using the bootstrap assumption to estimate the derivative \(\partial_{x,v}^{\alpha}(\nabla_{x}\phi(t,X(t)))\), we have \[|\partial_{x,v}^{\alpha}(\nabla_{x}\phi(t,X(t)))| \lesssim\epsilon^{\frac{1}{2}}\sum_{1\leq|\lambda|\leq|\alpha|}(1+ 2\epsilon^{\frac{1}{2}})^{|\lambda|}e^{-t(1+|\lambda|)}e^{t\sum_{j=1}^{s}|k_{ j}|}\] \[\lesssim\epsilon^{\frac{1}{2}}e^{-t}\sum_{1\leq|\lambda|\leq| \alpha|}(1+2\epsilon^{\frac{1}{2}})^{|\lambda|}\lesssim\epsilon^{\frac{1}{2} }e^{-t}, \tag{56}\] where we have used the decay in time of the force field. Applying (56) in the representation formula (53), we obtain \[|\partial_{x,v}^{\alpha}X(t)|\lesssim\frac{t}{e^{t}}\epsilon^{\frac{1}{2}}+e^ {t}\epsilon^{\frac{1}{2}}\lesssim\Big{(}1+\frac{3}{2}\epsilon^{\frac{1}{2}} \Big{)}e^{t}.\] Therefore, the supremum (54) is infinite, and we obtain the desired estimate for \(\partial_{x,v}^{\alpha}X(t)\). 
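To make the structure of formula (55) concrete in the simplest setting: for a scalar function \(g\) and a single variable \(y\), the case \(|\alpha|=2\) reduces to the familiar identity \[\partial_{y}^{2}\big{(}g(X(y))\big{)}=g^{\prime}(X(y))\,\partial_{y}^{2}X(y)+g^{ \prime\prime}(X(y))\,\big{(}\partial_{y}X(y)\big{)}^{2},\] where the first term corresponds to \(|\lambda|=1\) (one factor involving a second-order derivative of \(X\)) and the second to \(|\lambda|=2\) (two factors involving first-order derivatives of \(X\)); the estimate (56) bounds each such product by combining the decay of \(\nabla_{x}\partial_{x}^{\lambda}\phi\) with the growth of the derivatives of \(X\). 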
By Claim 2, for every \(t_{1}\geq t_{2}\) we have \[\Big{|}\int_{0}^{t_{1}}\frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^ {\alpha}(\nabla_{x}\phi(t^{\prime},X(t^{\prime})))dt^{\prime}-\int_{0}^{t_{2}} \frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^{\alpha}(\nabla_{x}\phi(t^{ \prime},X(t^{\prime})))dt^{\prime}\Big{|}\] \[\lesssim\Big{|}\int_{t_{2}}^{t_{1}}\frac{1}{e^{t^{\prime}}} \partial_{x,v}^{\alpha}(\nabla_{x}\phi(t^{\prime},X(t^{\prime})))dt^{\prime} \Big{|}\] \[\lesssim\epsilon^{\frac{1}{2}}\int_{t_{2}}^{t_{1}}\frac{dt^{ \prime}}{e^{2t}}\lesssim\epsilon^{\frac{1}{2}}\frac{1}{e^{2t_{2}}}.\] Thus, the limit \[\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^{\alpha}(\nabla_{x} \phi(t^{\prime},X(t^{\prime})))dt^{\prime}, \tag{57}\] is a well-defined real value such that \[\Big{|}\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^{\alpha}( \nabla_{x}\phi(t^{\prime},X(t^{\prime})))dt^{\prime}\Big{|}\lesssim\epsilon^{ \frac{1}{2}}. \tag{58}\] Thus, the integral \[\int_{0}^{t}\frac{1}{e^{t^{\prime}}}\mu\partial_{x,v}^{\alpha}(\nabla_{x}\phi( t^{\prime},X(t^{\prime},x,v)))dt^{\prime}\] converges uniformly with respect to \((x,v)\in\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\). Using the continuity of the derivative \(\partial_{x,v}^{\alpha}(\nabla_{x}\phi(t,X(t,x,v)))\) for every \((t,x,v)\in[0,\infty)\times\mathbb{R}_{x}^{n}\times\mathbb{R}_{v}^{n}\), then \[\partial_{x,v}^{\alpha}\Phi(x,v)=\int_{0}^{\infty}\frac{1}{e^{t^{\prime}}}\mu \partial_{x,v}^{\alpha}(\nabla_{x}\phi(t^{\prime},X(t^{\prime},x,v)))dt^{\prime} \tag{59}\] is well-defined. Furthermore, the estimate \(|\partial_{x,v}^{\alpha}\Phi(x,v)|\lesssim\epsilon^{\frac{1}{2}}\) holds. We have proved that \(\Phi\) and \(\Psi\) are maps of class \(C^{N-n-1}\). As a result, for every \((x,v)\in\{(x,v):\Psi(x,v)=0\}\) we have \[\det[\partial_{x^{j}}\Psi_{i}](x,v)=\det[\delta_{ij}-\partial_{x^{j}}\Phi_{i}](x, v)>0,\] since \(|\partial_{x^{i}}\Phi(x,v)|\lesssim\epsilon^{\frac{1}{2}}\). By the implicit function theorem, we conclude that \(W^{s}(0,0)\) is an \(n\)-dimensional manifold of class \(C^{N-n-1}\). **Corollary 6.1.2**.: _The trapped set \(\Gamma_{+}\) of the characteristic flow (37) is equal to the stable manifold of the origin \(W^{s}(0,0)\)._ Proof.: By the representation formulae (41) and (42), if \(x+v\neq\int_{0}^{\infty}e^{-t^{\prime}}\mu\nabla_{x}\phi(t^{\prime},X(t^{ \prime},x,v))dt^{\prime}\), then \(|X(t,x,v)|\to\infty\) and \(|V(t,x,v)|\to\infty\). In other words, every \((x,v)\in\mathbb{R}^{n}_{x}\times\mathbb{R}^{n}_{v}\setminus W^{s}(0,0)\) escapes to infinity. In contrast, every \((x,v)\in W^{s}(0,0)\) is trapped by definition. Proof of Theorem 1.3.: Apply Proposition 6.1.1 and Corollary 6.1.2.
2302.01483
SPADE: Self-supervised Pretraining for Acoustic DisEntanglement
Self-supervised representation learning approaches have grown in popularity due to the ability to train models on large amounts of unlabeled data and have demonstrated success in diverse fields such as natural language processing, computer vision, and speech. Previous self-supervised work in the speech domain has disentangled multiple attributes of speech such as linguistic content, speaker identity, and rhythm. In this work, we introduce a self-supervised approach to disentangle room acoustics from speech and use the acoustic representation on the downstream task of device arbitration. Our results demonstrate that our proposed approach significantly improves performance over a baseline when labeled training data is scarce, indicating that our pretraining scheme learns to encode room acoustic information while remaining invariant to other attributes of the speech signal.
John Harvill, Jarred Barber, Arun Nair, Ramin Pishehvar
2023-02-03T01:36:38Z
http://arxiv.org/abs/2302.01483v1
# Spade: Self-supervised Pretraining for Acoustic Disentanglement ###### Abstract Self-supervised representation learning approaches have grown in popularity due to the ability to train models on large amounts of unlabeled data and have demonstrated success in diverse fields such as natural language processing, computer vision, and speech. Previous self-supervised work in the speech domain has disentangled multiple attributes of speech such as linguistic content, speaker identity, and rhythm. In this work, we introduce a self-supervised approach to disentangle room acoustics from speech and use the acoustic representation on the downstream task of device arbitration. Our results demonstrate that our proposed approach significantly improves performance over a baseline when labeled training data is scarce, indicating that our pretraining scheme learns to encode room acoustic information while remaining invariant to other attributes of the speech signal. John Harvill\({}^{1*}\), Jarred Barber\({}^{2\dagger}\), Arun Nair\({}^{2}\), Ramin Pishehvar\({}^{2}\)\({}^{1}\)University of Illinois Urbana-Champaign, USA \({}^{2}\)Amazon Alexa AI, USA keyword spotting, source localization, self-supervised pretraining, disentanglement, acoustics ## 1 Introduction Disentanglement of speech into its multiple components is a fundamental problem in signal processing with applications in voice conversion [1, 2, 3, 4], automatic speech recognition [5, 6, 7, 8], speaker recognition [9], and privacy preservation in speech [10]. The goal of disentanglement is to separate different attributes of the speech signal such that the final signal representation will be invariant to attributes not relevant to the downstream task. Previous work has focused on invariance towards acoustic content1 so that target attributes like speaker identity or linguistic content will be emphasized. In this paper, we specifically want to preserve acoustic content and extract representations that are invariant to all other attributes in a self-supervised fashion. These representations are then used for the task of device arbitration [11]. Footnote 1: In this paper, “acoustic content”, or “acoustics” refers to the information in an audio signal related to the Room Impulse Response (RIR) at a particular location in a room when the source audio is played from a different, fixed location. The device arbitration task has arisen recently due to the ubiquity of smart voice assistant-enabled devices, which we refer to as "voice assistants" (VA) or "devices" interchangeably. Many households now have multiple VAs in the same room, leading to ambiguity with respect to which device should interact with the user. When the user wishes to begin interaction with a VA, they must first utter a wakeword ("Alexa", "Hey Google", etc.) which is then recorded by all devices in the room. For the most natural user experience, only the intended device from the user should wake up and continue to interact with the user. This leads to the device arbitration problem: _given \(N\) recordings of a source audio, where each recording comes from a VA, determine which VA is the intended one_. Note that device arbitration is closely related to the well-studied source localization problem [12]. Time Difference Of Arrival (TDOA) [13] is an effective technique that solves source localization for audio signals but unfortunately requires large arrays of microphones not present on VAs. 
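To make the TDOA idea concrete, the following is a minimal sketch (ours, not taken from [13]; all function and variable names are illustrative) of delay estimation between two clock-synchronized channels via the peak of their cross-correlation:

```python
import numpy as np

def estimate_tdoa(x1, x2, fs):
    """Estimate the time difference of arrival (in seconds) between two
    clock-synchronized recordings of the same source, via the lag that
    maximizes their cross-correlation. Assumes both signals have length >= 2."""
    n = len(x1) + len(x2) - 1
    # Cross-correlate in the frequency domain for efficiency.
    cc = np.fft.irfft(np.fft.rfft(x1, n) * np.conj(np.fft.rfft(x2, n)), n)
    # Reorder so the array runs over lags -(len(x2)-1), ..., len(x1)-1.
    cc = np.concatenate((cc[-(len(x2) - 1):], cc[:len(x1)]))
    lag = np.argmax(np.abs(cc)) - (len(x2) - 1)
    return lag / fs
```

With a known microphone spacing, such pairwise delays constrain the source direction, which is why classical localization needs an array of several microphones; without a shared clock across devices, as in our setting, the estimated lag is meaningless.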
There are several other constraints imposed by modern VAs that make device arbitration a non-trivial problem: (1) The positions of each VA are unknown and could change over time (e.g., moving a device from the living room to the kitchen). (2) The acoustic environment of the VAs is unknown. (3) Clock synchronization between devices is not always available, making TDOA between devices infeasible. Given these constraints, the device arbitration task must be solved by relying only on the room acoustic information present in each audio recording. Motivated by these constraints, we propose SPADE: Self-supervised Pretraining for Acoustic DisEntanglement. SPADE is a pretraining technique that disentangles acoustic information from speech signals by using multiple views of a source audio, without the need for labels. We find that, when used in combination with previous work on device arbitration [11], SPADE leads to improved performance when less labeled training data is present. Given that labeled data is difficult to collect for this task, SPADE is an invaluable technique for improving performance on device arbitration at zero additional inference cost. The remainder of our paper is organized as follows: In Section 2 we discuss prior work on device arbitration and related tasks. In Section 3, we discuss the data used in our experiments and the simulation process from which it is generated. In Section 4, we detail our proposed pretraining approach. In Section 5, we discuss our experimental setup, and in Section 6 we discuss results. The paper ends with conclusions and suggestions for future work in Section 7. 

## 2 Prior Work 

While device arbitration is a relatively new problem, it is closely related to source localization [12, 13]. The goal of source localization is to determine the position of the object emitting sound, i.e. the source. Current techniques rely on large arrays of microphones, which are not available in our problem setup. Previous work [11] also demonstrated that directly predicting source distances from each device and making arbitration decisions based on those distances resulted in worse performance. The goal in arbitration is to select the attended device, i.e. the device the user is speaking to, based on distance and direction. Common techniques compare signal attributes like Signal-to-Noise Ratio (SNR), estimated distance between source and microphone, cross-correlation, etc. [14]. The goal of device arbitration differs from that of channel selection [14, 15]: the optimal device is simply defined as the one closest to the user, or as the attended device (the one in the look direction of the user, or indicated by other relevant acoustic or visual cues), and not as the one that leads to better performance on another downstream task. The differences in problem setup and objective between device arbitration and source localization show that device arbitration should be studied separately; it has its own set of unique challenges that make it an interesting problem. As a baseline, we use the machine learning-based device arbitration approach proposed by Barber et al. [11]. The authors proposed an end-to-end approach to train a neural network to perform device arbitration. Their model consists of a small convolutional feature extractor that runs locally on each device, and a larger arbitration network that runs in the cloud to make the final decision. Results demonstrated significantly improved performance across a variety of room conditions over a simple energy-based approach. 
## 3 Dataset Simulation Currently, there is no large-scale dataset for device arbitration with known ground truth labels, so we run our experiments with simulated data following the three main steps described in Barber et al. [11]: * **Sample scenario:** Sample a room from a variety of acoustic settings (room length/width, noise sources) as well as device/speaker positions within the room. Given device/speaker positions, generate the arbitration label based on smallest Euclidean distance from device to speaker. * **Generate RIR:** Given the sampled scenario, generate a Room Impulse Response (RIR) for each device in the scenario using an acoustic simulator. * **Generate audio:** Convolve speech utterances with generated RIRs and mix with noise for each device. After this step, we have an artificial dataset of device arbitration audio and corresponding ground truth labels. We use the Image Source Method (ISM) [16] for data simulation and source audio from [11], but have updated the scene sampling hyperparameters to those in Table 1. ## 4 Method Our arbitration model is based on that proposed by Barber et al. [11] and is composed of two components: the feature encoder and the classifier that makes the arbitration decision. Prior to end-to-end training of the encoder and classifier, we pretrain the encoder using two schemes: contrastive and reconstructive pretraining. Our preprocessing pipeline and pretraining schemes are discussed in the following subsections. ### Audio Preprocessing Audio is first transformed to log-filterbank energy (LFBE) features, where the spectrogram is computed using a 25ms frame size and a frame skip of 10ms. The Mel transform is applied to the spectrogram with 64 Mel bands, followed by the log transform. LFBE features are mean and variance normalized before downstream processing by neural models. ### Encoder Architecture The encoder architecture is a residual convolutional model composed of 18 convolutional layers, with batch normalization and the ReLU activation function applied to each layer. The encoder model is the same for all approaches discussed in this paper (pretraining, baseline). This network produces a sequential feature representation of the input with much smaller temporal resolution than the input LFBE features. During pretraining we use a small Transformer [17] network to map the sequence of vectors output by the encoder to a single vector. This network is discarded after pretraining but is implicit as part of the encoder in the following pretraining discussions and Figs. 1 and 2. ### Contrastive Pretraining Contrastive loss functions have been shown to create high-quality representations across a variety of domains like speech and natural language processing [18, 19]. This family of loss functions operates by assuming data are similar or dissimilar with respect to a particular attribute and encouraging embeddings of the data to reflect these relationships via distance in an embedding space. In a device arbitration scene, audio from the same device has the same room acoustic properties, while audio from different devices will have different acoustic properties.2 Since we want to encode room acoustic information, we can encourage embeddings of audio from the same device to be similar and simultaneously encourage embeddings of audio from different devices to be orthogonal (see Fig. 1). 
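As a concrete illustration of this scheme, below is a minimal PyTorch-style sketch of the contrastive objective, which is formalized in Eqs. (1)-(3) in the following paragraphs. The `encoder` stands in for the convolutional encoder plus Transformer pooling head of Section 4.2; tensor shapes and names are assumptions of ours, and details such as the random jitter on the split index are omitted.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(encoder, x, t_split):
    """Contrastive objective for one arbitration scenario.

    x: (N, T) tensor holding the N zero-padded device recordings
       (or their LFBE features); t_split: split index near T/2."""
    za = F.normalize(encoder(x[:, :t_split]), dim=-1)  # first halves,  (N, D)
    zb = F.normalize(encoder(x[:, t_split:]), dim=-1)  # second halves, (N, D)
    eye = torch.eye(x.shape[0], device=x.device)
    # L1: cross-half similarities should match the identity matrix, so the
    # two halves of the *same* recording align and different ones do not.
    l1 = (za @ zb.T - eye).abs().sum()
    # L2: same-half embeddings of *different* devices should be orthogonal.
    off = 1.0 - eye
    l2 = ((za @ za.T) * off).abs().sum() + ((zb @ zb.T) * off).abs().sum()
    return l1 + l2
```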
\begin{table} \begin{tabular}{|c|c|} \hline Parameter & Distribution \\ \hline Room length/width (m) & Uniform(3.0, 10.0) \\ \hline Room height (m) & Uniform(2.5, 6.0) \\ \hline Reverberation time (s) & Beta(1.1, 3.0) \\ \hline Number of devices & ShiftedPoisson(m=3,l=2,h=15) \\ \hline Number of noise sources & ShiftedPoisson(m=2,l=1,h=5) \\ \hline Speech level (dB SPL) & Uniform(55.0, 70.0) \\ \hline Noise level (dB SPL) & Uniform(50.0, 70.0) \\ \hline \end{tabular} \end{table} Table 1: Hyperparameters for scene sampling. For the ShiftedPoisson distribution we denote mean with “m”, low with “l” and high with “h.” 

Figure 1: Contrastive pretraining scheme: Acoustic encoder has shared weights. 

Since speech content will be the same across all \(N\) recordings from a given room, the embedding should be invariant to speech content and only encode acoustic information. Let us denote the audio recorded by device \(a\) as \(x^{a}\). For this pretraining approach, we create a fixed-length embedding \(z^{a}\) of \(x^{a}\) by passing through the encoder3, i.e. \(z^{a}=\text{encoder}(\text{LFBE}(x^{a}))\). Then \(z^{a}\) is normalized to have magnitude one. We can denote a continuous slice of \(x^{a}\) from time \(t_{1}\) to \(t_{2}\) as \(x^{a}_{t_{1}:t_{2}}\) where \(t_{2}>t_{1}\) and its corresponding fixed-length embedding as \(z^{a}_{t_{1}:t_{2}}\). Given \(N\) recordings \(x^{1},x^{2},...,x^{N}\) that are zero-padded to the same length \(T\) and a splitting index \(t_{split}\) such that \(0<\frac{T}{2}-\epsilon<t_{split}<\frac{T}{2}+\epsilon<T\), where \(\epsilon\) is a small random jitter, we arrive at our loss function \(\mathcal{L}_{C}\) for one arbitration scenario: Footnote 3: The Transformer network discussed in Section 4.2 is implied here to follow the encoder to create the fixed-length representation. \[\mathcal{L}_{1}=\sum_{i=1}^{N}\sum_{j=1}^{N}|\langle z^{i}_{0:t_{split}},z^{ j}_{t_{split}:T}\rangle-\delta_{ij}| \tag{1}\] \[\mathcal{L}_{2}=\sum_{i=1}^{N}\sum_{j\neq i}|\langle z^{i}_{0:t_{split}},z^{ j}_{0:t_{split}}\rangle|+|\langle z^{i}_{t_{split}:T},z^{j}_{t_{split}:T}\rangle| \tag{2}\] \[\mathcal{L}_{C}=\mathcal{L}_{1}+\mathcal{L}_{2} \tag{3}\] where \(\delta_{ij}\) is the Kronecker delta function. Note that \(\mathcal{L}_{1}\) encourages the two halves of the same audio recording to map to the same embedding. The \(\mathcal{L}_{2}\) term provides stronger supervision for invariance to speech content by encouraging different recordings of the same respective halves of the audio to be orthogonal. 

### Reconstructive Pretraining 

Disentanglement of different attributes in speech has been accomplished previously using autoencoding with an information bottleneck [1, 2]. We take a similar approach to disentangle room acoustic information from speech in a self-supervised fashion, making the assumption that the room acoustic properties are constant4 for the duration of the wakeword audio (\(\sim\)2s). Footnote 4: This stationarity assumption may not be true in all cases (people/pets moving around), but it is reasonable given that audio is only recorded over a two-second interval. Our model consists of a speech encoder, an acoustic encoder, and a reconstruction decoder. The speech encoder \(S(\cdot)\) and reconstruction decoder \(R(\cdot)\) are Transformers [17] and each produce a sequence of vectors. We design the acoustic encoder \(A(\cdot)\) to produce a fixed-dimensional embedding due to the stationarity assumption of the acoustics and the need to create an information bottleneck. 
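A minimal sketch of this encoder-decoder layout is given below; the envelope conditioning and the loss are made precise in Eqs. (4)-(5) that follow. Here `S`, `A`, and `R` are stand-ins for the speech encoder, acoustic encoder, and decoder, tensor shapes and helper names are assumptions of ours, and the roll is a simple stand-in for the random selection of a different recording.

```python
import torch

def reconstruction_loss(S, A, R, lfbe):
    """Reconstruction objective for one scenario, up to normalization.

    lfbe: (N, T, Q) tensor of per-device LFBE features (Q Mel bands)."""
    N, T, _ = lfbe.shape
    env = lfbe.mean(dim=-1, keepdim=True)               # (N, T, 1) envelope
    speech = S(lfbe)                                    # (N, T, d) sequence
    acoustic = A(lfbe).unsqueeze(1).expand(-1, T, -1)   # bottleneck, copied in time
    # Condition the decoder on the speech embedding of a *different* recording.
    inputs = torch.cat([speech.roll(1, dims=0), acoustic, env], dim=-1)
    return ((lfbe - R(inputs)) ** 2).mean()
```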
Given that \(N\) recordings of a source audio all contain identical speech content, the speech representation \(s^{i}=S(x^{i})\) should be the same for each recording, i.e., \(s^{1}\approx s^{2}\approx...\approx s^{N}\). To encourage this, we reconstruct one audio recording's LFBE features using its acoustic embedding and the speech embedding from another audio recording in the room. The information bottleneck enforced by creating a small fixed-dimensional embedding encourages the acoustic encoder only to represent information not common to all signals, i.e., room acoustics. Given that the \(N\) recordings are not time-aligned, we provide alignment information to the decoder by extracting the envelope \(env(x^{i})\) of the LFBE features for \(x^{i}\), which is the mean of the feature vector at each timestep: \[env(x^{i})_{k}=\frac{1}{F}\sum_{q=1}^{F}\text{LFBE}(x^{i})_{kq} \tag{4}\] where \(k\) denotes the time axis, \(q\) denotes the feature axis of the LFBE feature matrix, and \(F=64\) is the number of Mel bands (written \(F\) to avoid a clash with the number of recordings \(N\)). Note that the envelope extractor is not a learnable module but rather a simple mean operation at each timestep to produce a low-dimensional representation of the LFBE features (feature vector \(\rightarrow\) scalar). We can formally write our loss function \(\mathcal{L}_{R}\) for one arbitration scenario as: \[\mathcal{L}_{R}=\frac{1}{N}\sum_{i=1}^{N}||\text{LFBE}(x^{i})-R(S(x^{\neq i}),A(x^{i}),env(x^{i}))||_{2}^{2} \tag{5}\] where \(x^{\neq i}\) denotes a randomly-selected audio recording from \(\{x^{1},x^{2},...,x^{N}\}\) other than \(x^{i}\). We form the input to \(R(\cdot)\) by copying \(A(x^{i})\) along the time axis and concatenating it to \(S(x^{\neq i})\) and \(env(x^{i})\). See Fig. 2 for a diagram detailing this process. Figure 2: Reconstruction pretraining scheme: Speech encoder, envelope, and reconstruction network have same time axis dimension as the LFBE time axis. Acoustic encoder creates fixed-length embedding that is copied along time axis of speech encoder and envelope output. Diagonally-shaded units are from non-learnable modules. ### Arbitration Classifier Architecture The arbitration classifier was implemented previously [11] as a Multilayer Perceptron (MLP) network, but we found further improvement using a self-attention network like the Transformer [17]. For each device \(i\), the encoder outputs a sequence of hidden states \(h_{1}^{i},h_{2}^{i},...,h_{K}^{i}\). For \(N\) devices, the hidden states are concatenated along the time axis to form the sequence: \[H=h_{1}^{1},h_{2}^{1},...,h_{K}^{1},h_{1}^{2},h_{2}^{2},...h_{K}^{2},...,h_{1}^{N},h_{2}^{N},...h_{K}^{N} \tag{6}\] which is then passed through a network of self-attention layers5 to produce the sequence: Footnote 5: Positional encodings are added to the input. \[G=g_{1}^{1},g_{2}^{1},...,g_{K}^{1},g_{1}^{2},g_{2}^{2},...g_{K}^{2},...,g_{1}^{N},g_{2}^{N},...g_{K}^{N} \tag{7}\] Each sequence \(g_{1}^{i},g_{2}^{i},...,g_{K}^{i}\) is then passed through a second network of self-attention to create a summary \(G_{i}\) over time. Each \(G_{i}\) is then passed through a two-layer feedforward neural network, outputting a scalar logit for device \(i\). The logits are then passed through a softmax layer to produce the arbitration probabilities. The entire classification network is optimized using the cross-entropy loss between the arbitration probabilities and the ground truth label distribution.
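To make the data flow of the reconstructive objective concrete, the following NumPy sketch assembles the decoder input of Eq. (5). The real \(S\), \(A\), and \(R\) are learned networks; the stand-ins below are placeholders chosen only to make the tensor shapes explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, D = 100, 64, 16                  # time steps, Mel bands, bottleneck dim

def S(lfbe):                           # stand-in speech encoder: (T, F) -> (T, D)
    return lfbe[:, :D]

def A(lfbe):                           # stand-in acoustic encoder: (T, F) -> (D,)
    return lfbe.mean(axis=0)[:D]       # fixed-length bottleneck embedding

def R(dec_in):                         # stand-in decoder: (T, 2D + 1) -> (T, F)
    return np.tile(dec_in.mean(axis=1, keepdims=True), (1, F))

lfbe = [rng.normal(size=(T, F)) for _ in range(3)]   # N = 3 recordings

# Eq. (4): the envelope is the per-frame mean over the feature axis.
env = lfbe[0].mean(axis=1, keepdims=True)            # (T, 1)

# The acoustic embedding of recording 0, copied along the time axis, is
# concatenated with the speech embedding of *another* recording and the
# envelope, as described for Eq. (5).
acoustic = np.tile(A(lfbe[0])[None, :], (T, 1))      # (T, D)
dec_in = np.concatenate([S(lfbe[1]), acoustic, env], axis=1)
loss = np.mean((lfbe[0] - R(dec_in)) ** 2)           # one term of L_R
print(loss)
```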
### Baseline The contributions of this paper are the self-supervised pretraining approaches, so as a baseline, we use the encoder and classifier networks discussed previously but do not pretrain the encoder. ## 5 Experiments The training dataset consists of 300k arbitration scenarios. To demonstrate the effectiveness of pretraining for learning representations useful for device arbitration, we create datasets of exponentially decreasing size. For dataset \(i\), the training set size is \(s_{i}=\lfloor S/4^{i}\rfloor\), where \(S=\)300k (full training set size). We choose \(i\in\{0,1,2,3\}\) such that the smallest training set consists of \(\sim\) 4.7k scenarios. ### Experimental Procedure For each experiment, we choose the final arbitration model based on the checkpoint with the lowest validation loss and evaluate on a held-out test set. We have four experiment setups: * **Baseline:** Train the encoder-classifier model end-to-end on each of the training data subsets of size \(s_{i}\) for \(i\in\{0,1,2,3\}\). * **Contrastive:** Pretrain the (acoustic) encoder using the contrastive approach on all available training data (300k scenarios, no labels involved). Pick the best validation checkpoint as initialization for the encoder and then finetune the encoder-classifier model end-to-end on each training data subset of size \(s_{i}\) for \(i\in\{0,1,2,3\}\). * **Reconstructive:** Same as the contrastive setup except that we use reconstructive pretraining. * **Combo:** Pretrain using both contrastive and reconstructive pretraining. The loss function becomes \(\mathcal{L}=\lambda\mathcal{L}_{R}+(1-\lambda)\mathcal{L}_{C}\) where we set \(\lambda=0.5\). Finetune the encoder-classifier model as in the previous setups. ## 6 Results Results are presented here as _relative error rate_ with respect to the performance of the 4.7k baseline setting. Denoting the accuracy of the target method \(m\) as \(\text{acc}_{m}\) and the accuracy of the 4.7k baseline setting as \(\text{acc}_{\text{base}}\), we compute relative error rate as: \[\text{err}_{\text{rel}}=\frac{1-\text{acc}_{m}}{1-\text{acc}_{\text{base}}} \tag{8}\] Our results are presented in Figure 3. The most important trend is that the pretraining approaches outperform the baseline by a larger margin when the training dataset is small. This demonstrates that our proposed pretraining schemes preserve acoustic information, learn features relevant to the device arbitration problem, and are most beneficial when labeled training data is scarce. Initialization of the encoder with a pretrained checkpoint helps combat overfitting during the device arbitration training process. We also find that the combination of contrastive and reconstructive pretraining does not lead to any noticeable improvement over either approach in isolation, indicating that both approaches may encode similar information. ## 7 Conclusion In this paper we propose contrastive and reconstructive pretraining, two forms of self-supervised representation learning, that disentangle acoustic content from speech. Unlike previous work that has aimed to create representations that are invariant to acoustic content, we aim to encode acoustic content only and demonstrate its usefulness through the device arbitration problem. We find that both of our proposed pretraining approaches lead to improvement over the baseline, and that improvement is more significant when the labeled training dataset is small.
This provides empirical evidence that our pretraining objectives lead to representations of acoustic content that can be useful for the device arbitration task even in the absence of a large training corpus. Given that self-supervised techniques require no human annotations, it may be possible to apply our proposed approaches to other research problems. For example, our disentangled acoustic representations may be used for other acoustic tasks like room acoustic property estimation or acoustic adaptation for home theater. While not studied in this paper, the speech content representations learned from reconstructive pretraining may be valuable for acoustic-invariant applications like speaker recognition or ASR since they are designed to encode all information except acoustics. Figure 3: Relative Error Rate with respect to worst case.
2304.02204
Identifying topologically critical band from pinch-point singularities in spectroscopy
In this paper, we investigate the relationship between pinch point singularities observed in energy- and momentum-resolved spectroscopy and topologically non-trivial gapless points. We show that these singularities are a universal signature, and that the Berry flux encoded must be $n\pi$ for an $n-$fold pinch point under suitable symmetry protection. Our results apply to most systems and are independent of their microscopic details. Hence they provide a new way to identify topological phases without requiring detailed knowledge of the microscopic model. Our work can be readily applied in spectroscopy experiments on various platforms.
Han Yan
2023-04-05T03:31:50Z
http://arxiv.org/abs/2304.02204v4
# Identifying topologically critical band from pinch-point singularities in spectroscopy ###### Abstract In this paper, we investigate the relationship between pinch point singularities observed in energy- and momentum-resolved spectroscopy and topologically non-trivial gapless points. We show that these singularities are a universal signature, and that the Berry flux encoded must be \(n\pi\) for an \(n-\)fold pinch point under suitable symmetry protection. Our results apply to most systems and are independent of their microscopic details. Hence they provide a new way to identify topological phases without requiring detailed knowledge of the microscopic model. Our work can be readily applied in spectroscopy experiments on various platforms. ## I Introduction Topologically critical gapless points in the electron and magnon band structures are significant and frequently-occurring features of quantum matter [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Usually, to unambiguously determine if a gapless point is accidental or topological, one needs to reconstruct the Hamiltonian by reading the band dispersion relations from the energy- and momentum-resolved spectroscopy, while also acquiring a significant amount of microscopic information including the lattice structure, symmetries, _etc_. Such spectroscopy techniques include angle-resolved photoemission spectroscopy (ARPES) [14] and scanning tunneling microscopy (STM) for electron bands [15, 16], inelastic neutron scattering (INS) for magnon bands [17], and polariton photoluminescence (PP) for photonic lattices [18]. In this work, we discuss a different way to unambiguously identify topologically non-trivial gapless points from spectroscopy alone, without knowing much else about the system. This approach utilizes often-ignored information: the spectroscopic intensity distribution on the bands. Due to the winding of the wavefunction around a topologically non-trivial gapless point, the spectroscopy intensity on the two bands can show a universal, characteristic singular pattern which we call an \(n-\)fold pinch point [Fig. 1][19]. Furthermore, if the system admits a suitable symmetry, the pinch point is _guaranteed_ to be topologically critical and encodes a Berry curvature of \(n\pi\). Experimentalists have actually observed this universal pattern in various materials [Table 1], although the hidden connection has not been discussed much. The simplest case, the \(1-\)fold pinch point, is actually a Dirac cone, and has been shown in ARPES experiments [20, 21, 22, 24] on several graphene-based materials [47, 6, 48]. The \(2-\)fold pinch points appear in bilayer graphene [33], FeSe [41, 42], and various frustrated lattice materials [34, 35, 36, 37, 38, 39, 40, 43, 46, 49, 50, 51, 52]. The \(n-\)fold pinch point's implication of an underlying Gauss's law has been a focus in classical and quantum spin liquids [49, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62], but the connection to Berry curvature has rarely been mentioned. The universal patterns of pinch points have high application value. Their advantage relies on the fact that they do not require one to know much about the microscopic details of the material or to reconstruct the full Hamiltonian. Also, this is particularly useful for two touching bands with quadratic or higher-order gap closings, where the dispersion alone is not a very distinguishing feature, unlike for Dirac cones. Another application is to twisted bilayer systems [63; 64; 65], where the full Hamiltonian for all bands is practically impossible to reconstruct. \begin{table} \begin{tabular}{l l l l l} \hline \hline & Pinch point & Material & Experiment & Ref.
\\ \hline \(\star\) & 1-fold & graphene & ARPES & [20, 21, 22, 23] \\ \(\star\) & 1-fold & graphene & ARPES & [24] \\ \(\star\) & 1-fold & CoTiO\({}_{3}\) & INS & [25, 26] \\ \(\star\) & 1-fold, gapped & YMn\({}_{6}\)Sn\({}_{6}\) & INS & [27] \\ \(\star\) & 1-fold, gapped & CrI\({}_{3}\) & INS & [28, 29] \\ \(\star\) & 1-fold, gapped & CrBr\({}_{3}\) & INS & [30] \\ \(\star\) & 1-fold, gapped & CrSeTe\({}_{3}\) & INS & [31] \\ \(\star\) & 1-fold, gapped & Fe\({}_{3}\)Sn\({}_{2}\) & ARPES & [32] \\ \(\star\) & 2-fold & graphene & STM & [33] \\ \(\star\) & 2-fold & \(\mathrm{Nd}_{2}\)Zr\({}_{2}\)O\({}_{7}\) & INS & [34, 35, 36, 37] \\ & 2-fold & Ca\({}_{10}\)Cr\({}_{7}\)O\({}_{28}\) & INS & [38, 39, 40] \\ \(\star\) & 2-fold, gapped & FeSe & ARPES & [41, 42] \\ \(\star\) & 2-fold, gapped & CoSn & ARPES, STM & [43, 44] \\ \(\star\) & 2-fold, gapped & Lu\({}_{2}\)V\({}_{2}\)O\({}_{7}\) & INS & [45] \\ & 2-fold, gapped & Cu(1,3-bdc) & INS & [46] \\ \(\star\) & 2-fold splits & photonic orbital & PP & [18] \\ \(\star\) & or opens gap & graphene & & \\ \hline \hline \end{tabular} \end{table} Table 1: A survey of known experiments exhibiting the pinch points [Figs. 1, 3]. For items labeled with a star (\(\star\)), the pinch points can be seen directly from or inferred from the references. For items without the star, there are no direct observations of the pinch points due to technological limits. But we predict that the pinch points can be observed in principle. The gapped pinch point case means the two bands have a small gap opening as shown in Fig. 3(c). “2-fold splits” means the 2-fold pinch point splits into two copies of the 1-fold pinch point, as shown in Fig. 3(b). See first paragraph of main text for abbreviations for experimental methods. Dirac cone as 1-fold pinch point.--We start by introducing the "pinch point". In this work we work on 2D systems, but the generalization to 3D is fairly straightforward. The simplest case -- the \(1-\)fold pinch point -- is imprinted on the most common ingredient of topological band systems: the Weyl/Dirac cone [3; 26; 66]. Consider a local region in the reciprocal momentum space, where two bands have a degenerate point at \(\mathbf{q}=\mathbf{0}\) [cf. Fig. 1(a)]. In the neighborhood of \(\mathbf{q}=\mathbf{0}\), they are also gapped from other bands, so we can focus on the two-band subsystem only. Spectroscopy measures the band structure as well as the intensity distribution of a certain correlation function on each band. We denote the upper and lower bands' dispersion relations as \(\omega_{+}(\mathbf{q})\) and \(\omega_{-}(\mathbf{q})\). The energy and momentum-resolved spectroscopy of the two bands are (assuming infinitely fine resolution) \[\begin{split}\mathcal{S}_{+}(\omega,\mathbf{q})&=\delta(\omega-\omega_{+}(\mathbf{q}))S_{+}(\mathbf{q}),\\ \mathcal{S}_{-}(\omega,\mathbf{q})&=\delta(\omega-\omega_{-}(\mathbf{q}))S_{-}(\mathbf{q}).\end{split} \tag{1}\] Here, we separated the dispersion \(\delta(\omega-\omega_{\pm}(\mathbf{q}))\) and the intensity distribution \(S_{\pm}(\mathbf{q})\) for future convenience. The intensity \(S_{\pm}\) usually measures the amplitude of the wavefunction in a particular basis.
For example, in the case of graphene, it measures \(\langle(c_{A}^{\dagger}+c_{B}^{\dagger})(c_{A}+c_{B})\rangle\) of the band wavefunction, where \(A,B\) are the two sublattice indices, so the basis measured is \(c_{A}+c_{B}\). The wavefunction \((1/\sqrt{2},1/\sqrt{2})^{T}\) has maximal intensity, while \((1/\sqrt{2},-1/\sqrt{2})^{T}\) has zero. The \(1-\)fold pinch point refers to the spectroscopic intensity \(S_{\pm}(\mathbf{q})\) distribution on the Dirac cone as illustrated in Fig. 1(b2). The intensity distribution only depends on the angle around the gapless point. On one band, it reaches zero on one side, and maximum on the other. The intensity varies smoothly except at the gapless point, where it becomes singular (i.e. not continuous). The other band has a similar pattern of intensity distribution, but the strong and weak regions switch sides. This pattern has in fact been observed in various experiments, as we summarized in Table 1. Figure 1: Schematic illustration of \(n-\)fold pinch points. (a) Band touching scenarios discussed in this paper, highlighted in red. (b-1) 1-fold pinch points imprinted on the two bands. (b-2) the spectroscopy density distribution on the lower band. (b-3) The corresponding configuration of the wavefunction, which winds \(\pi\) around the pinch point. It encodes a Berry curvature of \(\pm\pi\) at the gapless point. (c,d,e) 2,3,4-fold pinch points illustrated same way as (b). \(n-\)fold pinch point.--The pattern of the \(1-\)fold pinch point can be generalized to the \(n-\)fold pinch point. The cases of \(n=1,2,3,4\) are illustrated in Fig. 1. The upper-row panels show the entire energy- and momentum-resolved spectroscopy, and the middle-row panels show the intensity distribution \(S_{\pm}(\mathbf{q})\) without the dispersion. The crucial ingredient of the \(n-\)fold pinch point is the singularity in the intensity distribution \(S_{\pm}(\mathbf{q})\). Near \(\mathbf{q}=\mathbf{0}\), the intensity only depends on the angle \(\theta\) around \(\mathbf{0}\). An \(n-\)fold pinch point has \(n\) dark wings where the intensity is low, and \(n\) bright wings where the intensity is high. The most symmetric form of the intensity distribution, obtained by choosing a suitable angle as \(\theta=0\), is \[\begin{split} S_{+}(\mathbf{q})&=A\cos^{2}(n\theta/2),\\ S_{-}(\mathbf{q})&=A\sin^{2}(n\theta/2).\end{split} \tag{2}\] Here, \(A\) is just a scalar signifying the overall intensity. \(S_{\pm}(\mathbf{q})\) at \(\mathbf{q}=\mathbf{0}\) is singular, since one obtains different values of \(S_{\pm}(\mathbf{0})\) by approaching it from different directions. For a specific lattice model, the intensity distribution can be mildly distorted up to an isomorphic mapping. The more general form is \[\begin{split} S_{+}(\mathbf{q})&=A\cos^{2}(n\Theta(\theta)/2),\\ S_{-}(\mathbf{q})&=A\sin^{2}(n\Theta(\theta)/2),\end{split} \tag{3}\] where \(\Theta(\theta)\) is a smooth, monotonically increasing function as a bijection from \([0,2\pi)\) to itself, \[\Theta(0)=0,\;\Theta(2\pi)=2\pi,\;\Theta^{\prime}(\theta)>0. \tag{4}\] The main message of this paper is that such a gapless \(n-\)fold pinch point is guaranteed to be topologically non-trivial. By "non-trivial" we mean the degenerate point is not accidental, and generally carries a non-zero Berry curvature, except for some extremely fine-tuned cases. A much stronger result is that if the system additionally admits a suitable symmetry (for example, time reversal symmetry for spinless electron systems), then the Berry flux encoded must be \(n\pi\).
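This claim can be sanity-checked numerically. The sketch below (ours, not part of the original analysis) takes a real two-component wavefunction whose intensity follows the symmetric form of Eq. (2) and verifies that it rotates by a total angle of \(n\pi\) around a loop enclosing the pinch point.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 2001)

for n in (1, 2, 3, 4):
    # Real eigenvector consistent with S_-(q) = A sin^2(n theta / 2)
    # when the intensity is measured along the first basis vector.
    psi1 = np.sin(n * theta / 2.0)
    psi2 = np.cos(n * theta / 2.0)

    intensity = psi1 ** 2                      # reproduces Eq. (2) with A = 1

    # Total rotation of the real unit vector (psi1, psi2) around the loop.
    angle = np.unwrap(np.arctan2(psi2, psi1))
    total_rotation = angle[-1] - angle[0]
    print(n, total_rotation / np.pi)           # prints -n: a rotation of n*pi, up to sign
```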
A remark is in order before we proceed to the proof. In this work we assume that there are no "extrinsic" factors from the coupling between the system and the probing particles (photons or neutrons), which may also exhibit pinch point patterns. For ARPES, if the electron is in an anisotropic orbit, the photon-electron coupling will pick up an angle-dependent projector depending on the polarization of photons, which may yield pinch point patterns. A very detailed discussion can be found in Ref. [67]. One needs to choose the photon polarization (or use unpolarized photons) properly to avoid such effects. The same principle applies to INS - for example, polarized neutrons coupling to spin-1/2 dimers in a fixed direction can also produce similar projectors. However, these extrinsic, "fake" pinch points are often not present in actual experiments, or at least avoidable in principle. In all examples given in Table 1, one does not need to worry about them. _Pinch points are topologically critical. --_ We now prove that under suitable symmetries, the \(2-\)fold pinch-point singularity is associated with a gapless point with Berry flux \(\pm 2\pi\) encoded. Here we take the symmetry to be time reversal (\(\mathcal{T}\)) for the spinless electrons, which forces the Hamiltonian to be real. The proof can be easily generalized to \(n-\)fold pinch-points and other proper symmetries (see end of section). The physical picture is the following. The \(n-\)fold pinch point pattern (Fig. 1(b1-e1)) indicates that the spectroscopy intensity follows a squared-sine function. This requires that the wavefunction of the corresponding band, which is a two component real vector, rotates in a sine/cosine manner on a loop around the gapless point, and accumulates a quantized total rotation angle \(n\pi\) (Fig. 1(b3-e3)) at the end. This is exactly the origin of the Berry flux. Now, onto the formal proof, we first set up the model with a few simplifications without affecting its topological features. In the vicinity of the pinch-point, we consider the relevant subsystem with two degrees of freedom \(\psi^{1},\;\psi^{2}\), and their corresponding two-level Hamiltonian. The upper and lower bands correspond to two wavefunctions written as two orthonormal, unit vectors of complex entries \[\mathbf{\psi}_{+}(\mathbf{q})=\begin{pmatrix}\psi^{1}_{+}\\ \psi^{2}_{+}\end{pmatrix},\qquad\mathbf{\psi}_{-}(\mathbf{q})=\begin{pmatrix}\psi^{1}_{-}\\ \psi^{2}_{-}\end{pmatrix}. \tag{5}\] The system is then described by a Hermitian Hamiltonian \[\mathcal{H}(\mathbf{q})=\mathbf{S}\begin{pmatrix}\omega_{+}(\mathbf{q})&0\\ 0&\omega_{-}(\mathbf{q})\end{pmatrix}\mathbf{S}^{\dagger}, \tag{6}\] where \(\mathbf{S}=(\mathbf{\psi}_{+},\mathbf{\psi}_{-})\) is the eigenvector matrix, and \(\omega_{\pm}(\mathbf{q})\) are the two bands' dispersion relations. Figure 2: Representative continuous winding configurations of \(\mathbf{\psi}_{-}\) consistent with a 2-fold pinch point. (a) \(\mathbf{\psi}_{-}\) configurations with winding number \(\pm 1\), which are also smoothly varying. (b) \(\mathbf{\psi}_{-}\) configurations with winding number \(0\), which are not smoothly varying. (c) Plot of \(\psi^{1}_{-}\) as a function of angle \(\theta\), showing non-smoothness at \(\theta=\pi/2,\;3\pi/2\). Under the \(\mathcal{T}\) symmetry, the eigenvectors and Hamiltonian are real. Because the spectroscopy intensity follows a squared-sine function, it requires \[|\psi_{-}^{1}(\mathbf{q})|\propto|q^{x}|/q=|\sin\theta|. \tag{7}\]
This puts a strong constraint on the possible eigenvector configurations. Some of them are listed in Fig. 2(a,b). We also require the Hamiltonian to be _smooth_, i.e., _continuous_ and _differentiable_ to any order. This is generically true for systems with short-range interactions, and fails only for systems with certain fine-tuned long-range interactions. Hence the requirement applies to most physically realistic systems [68, 69]. It plays a key role in eliminating the physically unrealistic cases shown in Fig. 2(b), because in those cases, the eigenvector has to go through points where a non-smooth change of its component(s) is bound to happen, even though there is no gap closing at those points. One such non-smooth component is illustrated in Fig. 2(c). A more detailed, technical analysis is presented in the appendix. Therefore, for \(\psi_{-}^{1}\) to vary smoothly, it has to take the form \(\psi_{-}^{1}=\sin\theta\) over the entire circle, up to an overall minus sign. After a similar analysis on \(\psi_{-}^{2}\) and \(\mathbf{\psi}_{+}\), we can conclude that the viable eigenvectors in \(\mathbf{q}-\)space are those in Fig. 2(a), which are all topologically equivalent in the eyes of the Berry flux at the gapless point. We may pick the eigenvectors to be \[\mathbf{\psi}_{+}(\mathbf{q}) =(-\cos\theta,\sin\theta)=\frac{1}{q}(-q^{y},q^{x}), \tag{8}\] \[\mathbf{\psi}_{-}(\mathbf{q}) =(\sin\theta,\cos\theta)=\frac{1}{q}(q^{x},q^{y}).\] The eigenvectors and eigenvalues completely determine the Hamiltonian (cf. Eq. (6)). Written as an effective magnetic field coupled to Pauli matrices, it is \[\mathcal{H}(\mathbf{q})= \frac{\omega_{+}(\mathbf{q})+\omega_{-}(\mathbf{q})}{2}\mathbbm{1} \tag{9}\] \[+\Delta(\mathbf{q})\frac{(q^{x})^{2}-(q^{y})^{2}}{2q^{2}}\sigma_{z}-\Delta(\mathbf{q})\frac{q^{x}q^{y}}{q^{2}}\sigma_{x}\] \[\equiv \frac{\omega_{+}(\mathbf{q})+\omega_{-}(\mathbf{q})}{2}\mathbbm{1}+\mathbf{\sigma}\cdot\mathbf{B}(\mathbf{q}),\] where \(\Delta(\mathbf{q})=\omega_{+}(\mathbf{q})-\omega_{-}(\mathbf{q})\) is the energy gap. From the Hamiltonian we can read off the effective magnetic field \(\mathbf{B}(\mathbf{q})\) to be \[\mathbf{B}(\mathbf{q})=\Delta(\mathbf{q})\left(-\frac{q^{x}q^{y}}{q^{2}},0,\frac{(q^{x})^{2}-(q^{y})^{2}}{2q^{2}}\right). \tag{10}\] The crucial property is that, on a loop around the pinch point, the two components \((B^{x},B^{z})\) as a 2D vector field form a vortex of winding number 2. In Fig. 3(a), the normalized \((B^{x},B^{z},B^{y})/B(\mathbf{q})\) is plotted. Here we swapped \(B^{y}\) and \(B^{z}\) for better visualization. This is known to encode a Berry curvature of \(\pm 2\pi\) [13]. Another way to see this is to directly compute the Berry flux enclosed by a loop \(\mathbf{q}=q(\cos\theta,\sin\theta)\) around the gapless point, defined as \[C_{\pm}=\int_{0}^{2\pi}\mathrm{d}\theta\;i\mathbf{\psi}_{\pm}^{\dagger}\cdot\partial_{\theta}\mathbf{\psi}_{\pm}. \tag{11}\] We conclude our proof here. This proof can be intuitively generalized to a general \(n-\)fold pinch point. In Fig. 1, we plot the smoothly varying \(\mathbf{\psi}_{-}\) for different cases. Note that for odd \(n\), \(\mathbf{\psi}_{-}\) needs an anti-periodic boundary condition instead. Without the symmetry protection, the different components of an eigenvector can have different complex phases, so the proof above no longer applies, and the Berry flux at the gapless point is generally not quantized. One example of such a scenario is to add different phases to \(\psi^{1}\) and \(\psi^{2}\) in Eq. (8), which can yield a finite Berry flux contribution when plugged into Eq. (11), or even render the total Berry flux zero.
In this case, \(\mathbf{B}(\mathbf{q})/B\) still travels back and forth twice from the north pole to the south pole on the unit sphere for a path of \(\mathbf{q}\) around the pinch point. The path, however, need not consist of two great circles, but can be some general curve. The solid angle enclosed by the path, which is the Berry flux, is then not quantized. Finally, the picture above also shows that other symmetry protections can work, if the normalized effective magnetic field is restricted to move on a fixed great circle between the two poles. Splitting and gapping the pinch point.--How the topologically critical gapless points transform under different perturbations is a well-studied topic. In this section we revisit some of these transformations, with a focus on the corresponding pinch point phenomenology. Fig. 3(b) shows a 2-fold pinch point splitting into two 1-fold pinch points (Dirac cones). Correspondingly, the Berry curvature of \(2\pi\) is also split between the two 1-fold pinch points, each carrying Berry flux \(\pi\), assuming the proper symmetry protection. In this process the overall Berry flux is conserved. Using our model for demonstration, this can be done by introducing a constant perturbation \(\delta B^{z}>0\) to the Hamiltonian in (9) and taking \(\Delta(\mathbf{q})=cq^{2}\), \[\mathbf{B}_{\text{z-tuned}}(\mathbf{q})=\left(-cq^{x}q^{y},0,\frac{c}{2}((q^{x})^{2}-(q^{y})^{2})+\delta B^{z}\right). \tag{12}\] The original gapless point at \(\mathbf{q}=\mathbf{0}\) is then split into two linearly dispersive gapless points at \(\mathbf{q}=(0,\pm\sqrt{2\delta B^{z}/c})\). The original winding-number-2 vortex of \(\mathbf{B}(\mathbf{q})\) splits into two vortices, each with winding number 1 [Fig. 3(b)], consistent with the Berry flux conservation. The critical gapless point can also be gapped, and this induces well-defined, opposite non-zero Berry curvature on the two bands locally. We can consider perturbing the effective magnetic field in the following way, \[\mathbf{B}_{\text{y-tuned}}(\mathbf{q})=\left(-cq^{x}q^{y},\delta B^{y},\frac{c}{2}((q^{x})^{2}-(q^{y})^{2})\right). \tag{13}\] As a consequence, the normalized \(\mathbf{B}_{\text{y-tuned}}/B_{\text{y-tuned}}\) forms half a skyrmion, or a meron, as illustrated in Fig. 3(c). Since the skyrmion is of winding number 2, the two bands get local Berry curvature \(\pm 2\pi\). The \(\pm\) sign depends on the sign of \(\delta B^{y}\), and cannot be distinguished from the spectroscopy pattern. The pinch point singularity disappears as the gap opens. The spectroscopic intensity on the two bands becomes smooth at the center, but gradually recovers the pinch point pattern when zoomed out. An example of this will be discussed in detail in a separate work studying the Kagome model [70]. Similar examples can also be found in Refs. [71; 72]. Discussion.--The main message of this work is that the \(n-\)fold pinch points observed in energy-momentum resolved spectroscopy indicate that the gapless point is topologically critical, and encodes Berry flux \(n\pi\) if there is a suitable symmetry protection. We have proven this for time reversal symmetry, and also provided a survey of experiments (Table 1) that observe the universal phenomenon. This result does not rely on further microscopic details of the system, hence has great potential in experimental application across several platforms including ARPES, STM, and INS.
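As a quick numerical check of the splitting scenario (ours, not from the references), the following sketch computes the winding number of \((B^{x},B^{z})\) from Eq. (12) around each split gapless point, and around a large loop enclosing both.

```python
import numpy as np

c, dBz = 1.0, 0.1

def B(qx, qy):
    """(B^x, B^z) components of the perturbed field in Eq. (12)."""
    Bx = -c * qx * qy
    Bz = 0.5 * c * (qx**2 - qy**2) + dBz
    return Bx, Bz

def winding(center, radius, m=4001):
    """Winding number of (B^x, B^z) along a circle in q-space."""
    t = np.linspace(0.0, 2.0 * np.pi, m)
    qx = center[0] + radius * np.cos(t)
    qy = center[1] + radius * np.sin(t)
    Bx, Bz = B(qx, qy)
    ang = np.unwrap(np.arctan2(Bz, Bx))
    return (ang[-1] - ang[0]) / (2.0 * np.pi)

q0 = np.sqrt(2.0 * dBz / c)            # split points at (0, +/- q0)
print(winding((0.0, +q0), 0.05))       # ~ 1
print(winding((0.0, -q0), 0.05))       # ~ 1
print(winding((0.0, 0.0), 10.0 * q0))  # ~ 2: total winding is conserved
```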
Another useful lesson is that the converse is _often but not always_ true: a topologically critical gapless point _may_ appear as an \(n-\)fold pinch point. Some of these cases have been discussed in Refs. [73; 26; 66], although the connection to pinch points was not mentioned. It is not always true, because the winding of a certain wavefunction may not be captured by the correlation function measured by spectroscopy. The Honeycomb and Kagome lattice magnon models studied in Refs. [74; 58; 75] are such examples, in which some critical gapless points appear as 1-fold and 2-fold pinch points, but some are completely dark in the structure factor due to extrinsic, accidental cancellation of the form factor. The bigger picture we gain from this work is that the spectroscopy contains a huge amount of information, much of which is still waiting to be exploited. For example, instead of points, other topologically critical loci of band degeneracy should also manifest universal, characteristic patterns. This idea also applies to interacting or non-Hermitian Hamiltonians. Developing their zoology will be an important and useful piece of phenomenological study. ## Acknowledgment We especially thank Nic Shannon for inspiring discussions at the beginning stage of this work, and Owen Benton for enlightening discussions on Berry flux. We also thank Andreas Thomasen for his helpful review of the paper. H.Y. is supported by the Theory of Quantum Matter Unit at Okinawa Institute of Science and Technology, the Japan Society for the Promotion of Science (JSPS) Research Fellowships for Young Scientists, and the National Science Foundation Division of Materials Research under the Award DMR-191751 at different stages of this project.
2307.14136
Solitons to Mean Curvature Flow in the hyperbolic 3-space
We consider {translators} (i.e., initial condition of translating solitons) to mean curvature flow (MCF) in the hyperbolic $3$-space $\mathbb H^3$, providing existence and classification results. More specifically, we show the existence and uniqueness of two distinct one-parameter families of complete rotational translators in $\mathbb H^3$, one containing catenoid-type translators, and the other parabolic cylindrical ones. We establish a tangency principle for translators in $\mathbb H^3$ and apply it to prove that properly immersed translators to MCF in $\mathbb H^3$ are not cylindrically bounded. As a further application of the tangency principle, we prove that any horoconvex translator which is complete or transversal to the $x_3$-axis is necessarily an open set of a horizontal horosphere. In addition, we classify all translators in $\mathbb H^3$ which have constant mean curvature. We also consider rotators (i.e., initial condition of rotating solitons) to MCF in $\mathbb H^3$ and, after classifying the rotators of constant mean curvature, we show that there exists a one-parameter family of complete rotators which are all helicoidal, bringing to the hyperbolic context a distinguished result by Halldorsson, set in $\mathbb R^3$.
R. F. de Lima, A. K. Ramos, J. P. dos Santos
2023-07-26T12:03:39Z
http://arxiv.org/abs/2307.14136v2
# Solitons to mean curvature flow ###### Abstract. We consider translators to mean curvature flow in hyperbolic \(3\)-space \(\mathbb{H}^{3}\), providing existence and some classification results. More specifically, we show the existence and uniqueness of a one-parameter family of complete rotational catenoid-type translators, as well as of a one-parameter family of translators which are parabolic cylinders. We establish a tangency principle for translators in \(\mathbb{H}^{3}\) and apply it to prove that properly immersed translators to mean curvature flow in \(\mathbb{H}^{3}\) are not cylindrically bounded. In addition, we classify all translators in \(\mathbb{H}^{3}\) of constant mean curvature. Finally, we construct a one-parameter family of complete helicoidal rotating solitons (rotators) to mean curvature flow in \(\mathbb{H}^{3}\). _2020 Mathematics Subject Classification:_ 53E10 (primary), 53E99 (secondary). _Key words and phrases:_ soliton - mean curvature flow - hyperbolic space - invariant surfaces. (A1) Departamento de Matematica - UFRN (A2) Departamento de Matematica Pura e Aplicada - UFRGS (A3) Departamento de Matematica - UnB. The second and third authors were partially supported by the National Council for Scientific and Technological Development - CNPq. ## 2. Preliminaries Throughout this paper, we work in the upper half-space model of hyperbolic \(3\)-space, \(\mathbb{H}^{3}=(\mathbb{R}^{3}_{+}\,,\,ds^{2})\), where \(\mathbb{R}^{3}_{+}:=\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\,;\,x_{3}>0\}\) is endowed with the conformal metric \[ds^{2}=\frac{dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}}{x_{3}^{2}}\,.\] Given an oriented surface \(\varSigma\subset\mathbb{H}^{3}\) with unit normal \(\bar{\eta}=(\bar{\eta}_{1},\bar{\eta}_{2},\bar{\eta}_{3})\) with respect to the induced Euclidean metric, the field \(\eta:=x_{3}\bar{\eta}\) defines a unit normal of \(\varSigma\) with respect to the hyperbolic metric \(ds^{2}.\) With these orientations, if we denote by \(\overline{H}\) (resp. \(H\)) the mean curvature of \(\varSigma\) with respect to the Euclidean metric (resp. hyperbolic metric) of \(\mathbb{R}^{3}_{+},\) we have that \(\overline{H}\) and \(H\) satisfy the following relation (cf. [13, Lemma 10.1.1]): \[H(p)=x_{3}\overline{H}(p)+\bar{\eta}_{3}(p)\;\;\forall p=(x_{1},x_{2},x_{3}) \in\varSigma. \tag{1}\] ### Mean curvature flow We say that a family of oriented surfaces \(\varSigma_{t}=X_{t}(M)\) of a Riemannian \(3\)-manifold \(\overline{M}\)_evolves under mean curvature flow_ if the corresponding one-parameter family of immersions \[X_{t}\colon M\to\overline{M},\;\;t\in[0,\delta),\;\;0<\delta\leq+\infty,\] satisfies the following condition: \[\frac{\partial X_{t}}{\partial t}^{\perp}(p)=H_{t}(p)\eta_{t}(p)\;\;\forall p \in M, \tag{2}\] where \(\eta_{t}\) is the unit normal to \(X_{t},\)\(H_{t}\) is the mean curvature of \(X_{t}\) with respect to \(\eta_{t},\) and \(\frac{\partial X_{t}}{\partial t}^{\perp}\) denotes the normal component of \(\frac{\partial X_{t}}{\partial t},\) that is, \[\frac{\partial X_{t}}{\partial t}^{\perp}=\left\langle\frac{\partial X_{t}}{ \partial t},\eta_{t}\right\rangle\eta_{t}\,.\] In particular, the equality (2) is equivalent to \[\left\langle\frac{\partial X_{t}}{\partial t},\eta_{t}\right\rangle=H_{t}.\] We call such a family \(X_{t}:M\to\overline{M},\)\(t\in[0,\delta)\) a _mean curvature flow_ (MCF, for short) in \(\overline{M}\) with initial data \(X_{0}.\) In this setting, we say that \(\varSigma_{t}=X_{t}(M)\) is a _soliton_ or a _self-similar solution_ to MCF if there exists a one-parameter subgroup \(\mathcal{G}:=\{\Gamma_{t}\,;\,t\in\mathbb{R}\}\) of the group of isometries of \(\overline{M},\) such that \(\Gamma_{0}\) is the identity map of \(\overline{M}\) and \[\varSigma_{t}=\Gamma_{t}(\varSigma)\;\;\forall t\in\mathbb{R}\] is a MCF.
More specifically, we shall call such a family \(\varSigma_{t}\) a \(\mathcal{G}\)-_soliton_. Let \(\xi\) be the Killing field determined by the subgroup \(\mathcal{G},\) that is, for any \(p\in\overline{M},\) \[\xi(p):=\frac{\partial\Gamma_{t}}{\partial t}(p)\;\;\;\text{at}\;\;t=0.\] It can be proved (see, e.g., [11]) that the surface \(\varSigma=X_{0}(M)\) with unit normal \(\eta\) is the initial condition of a \(\mathcal{G}\)-_soliton_ generated by \(\xi\) in \(\overline{M}\) if and only if the equality \[H=\left\langle\xi,\eta\right\rangle \tag{3}\] holds everywhere on \(\varSigma.\) So, in the class of solitons, equation (2) is in fact a prescribed mean curvature problem. ## 3. Translators to MCF in \(\mathbb{H}^{3}\) Consider in hyperbolic space \(\mathbb{H}^{3}\) the group \(\mathcal{G}=\{\Gamma_{t}\,;\,t\in\mathbb{R}\}\subset\text{Iso}(\mathbb{H}^{3})\) of hyperbolic translations defined by \[\Gamma_{t}(p)=e^{t}p,\;\;p\in\mathbb{H}^{3}.\] In this setting, an initial condition of a \(\mathcal{G}\)-soliton will be called a _translating soliton_ or simply a _translator_. Using the abuse of notation \[p=(x_{1},x_{2},x_{3})\in\mathbb{H}^{3}\leftrightarrow x_{1}\partial_{x_{1}}+x _{2}\partial_{x_{2}}+x_{3}\partial_{x_{3}}\in T_{p}\mathbb{H}^{3},\] the Killing field associated to \(\mathcal{G}\) is \(\xi(p)=p,\,p\in\mathbb{H}^{3}\). Thus, it follows from (3) that a surface \(\varSigma\subset\mathbb{H}^{3}\) is a translator to MCF if and only if \[H(p)=\langle p,\eta(p)\rangle\ \,\forall p\in\varSigma. \tag{4}\] **Example 3.1**.: Let \(\Pi\) be a totally geodesic vertical plane of \(\mathbb{H}^{3}\) which contains \((0,0,1)\). Since \(H\) vanishes on \(\Pi,\) it is clear that (4) holds for \(\varSigma=\Pi.\) Thus, \(\Pi\) is a stationary translator to MCF in \(\mathbb{H}^{3}.\) In fact, equation (3) implies that a minimal surface \(\varSigma\subset\mathbb{H}^{3}\) is a (stationary) translator to MCF if and only if it is invariant under the group \(\mathcal{G}\) of hyperbolic isometries as above. A complete classification of such surfaces is given by the following description. **Theorem 3.2**.: _There exists a one-parameter family \(\varSigma_{\theta}\), \(\theta\in(0,\pi],\) of properly embedded minimal surfaces in \(\mathbb{H}^{3}\) with the following properties:_ * \(\varSigma_{\theta}\) _is invariant under the one-parameter group_ \(\{\Gamma_{t}\}_{t\in\mathbb{R}}\) _of hyperbolic translations_ \[p\in\mathbb{H}^{3}\mapsto e^{t}p\in\mathbb{H}^{3},\] _and so it is a stationary translator to MCF in_ \(\mathbb{H}^{3}.\)__ * \(\partial_{\infty}\varSigma_{\theta}\cap\mathbb{R}^{2}\) _is the union of two half lines making an angle_ \(\theta.\)__ * \(\varSigma_{\pi}\) _is a vertical plane._ _Furthermore, if \(\varSigma\) is a properly embedded minimal surface of \(\,\mathbb{H}^{3}\) which is invariant under the group \(\Gamma_{t},\) then \(\varSigma=\varSigma_{\theta}\) for some \(\theta\in(0,\pi].\)_ The proof of Theorem 3.2, for convenience, will be presented separately in Section 5. Concerning the case of translators with nonzero constant mean curvature, we start with the next example. 
**Example 3.3**.: Let \(\mathscr{H}_{h}\) be the horosphere of \(\mathbb{H}^{3}=(\mathbb{R}^{3}_{+}\,,\,ds^{2})\) at height \(h>0,\) i.e., \[\mathscr{H}_{h}=\{(x_{1},x_{2},h)\in\mathbb{H}^{3}\,;\,x_{1},x_{2}\in\mathbb{R}\}.\] At any point \(p=(x_{1},x_{2},h)\in\mathscr{H}_{h},\) we have that \(H(p)=1\) and \(\eta(p)=he_{3},\) so that \[\langle p,\eta(p)\rangle=\frac{1}{h^{2}}h^{2}=1=H(p)\ \ \forall p\in\mathscr{H}_{h}.\] Hence, \(\mathscr{H}_{h}\) is a translator to MCF in \(\mathbb{H}^{3}.\) In our next result we show that horospheres are the only translators to MCF which have nonzero constant mean curvature. In the proof, we shall use the following evolution formula for the mean curvature \(H_{t}\) (notation as in Section 2) of a mean curvature flow \(X_{t}:M\to\overline{M}\): \[\frac{\partial H_{t}}{\partial t}=\Delta H_{t}+H_{t}(\|A_{t}\|^{2}+\overline{\operatorname{Ric}}(\eta_{t},\eta_{t})), \tag{5}\] where \(\overline{\operatorname{Ric}}\) denotes the Ricci tensor of \(\overline{M}\) (see [8, Theorem 3.2-(v)]). **Theorem 3.4**.: _Let \(\varSigma\) be a connected translator to MCF in \(\mathbb{H}^{3}\) which has nonzero constant mean curvature. Then, \(\varSigma\) is an open subset of a horosphere._ Proof.: After a change of orientation, we may assume without loss of generality that the mean curvature \(H\) of \(\varSigma\) is positive. Let \(X_{t}:M\to\mathbb{H}^{3},\,t\geq 0,\) be the MCF such that \(X_{0}(M)=\varSigma\) and \[X_{t}(p)=e^{t}X_{0}(p),\ \ p\in M.\] Since \(X_{t}(M)\) differs from \(X_{0}(M)\) by an ambient isometry, \(H_{t}=H>0\) is constant in space and time, thus \(\partial H_{t}/\partial t=\Delta H_{t}=0.\) Also, in \(\mathbb{H}^{3}\), \(\overline{\text{Ric}}(\eta_{t},\eta_{t})=-2.\) Then, formula (5) yields \(\|A_{t}\|^{2}=2\) for all \(t\geq 0.\) Taking \(t=0\), we conclude that the principal curvatures \(k_{1},k_{2}\) of \(\Sigma\) satisfy: \[\left\{\begin{array}{ccccc}k_{1}&+&k_{2}&=&2H,\\ k_{1}^{2}&+&k_{2}^{2}&=&2,\end{array}\right.\] from which it follows that \(H\in(0,1]\) and, after reindexing, \[k_{1}=H+\sqrt{1-H^{2}},\quad k_{2}=H-\sqrt{1-H^{2}}.\] Since \(H\) is constant, both \(k_{1}\) and \(k_{2}\) are constant, so \(\Sigma\) is isoparametric. The isoparametric surfaces of \(\mathbb{H}^{3}\) are classified (see [3, Theorem 3.14]), and this classification together with the fact that \(H\in(0,1]\) implies that \(\Sigma\) is either an open subset of a horosphere or of an equidistant surface to a totally geodesic plane. However, \(k_{1}^{2}+k_{2}^{2}=2\) only holds when \(\Sigma\) is contained in a horosphere, which finishes the proof of the theorem. **Remark 3.5**.: Since (5) holds for any \(\mathcal{G}\)-soliton, the proof of Theorem 3.4 applies to show that any initial condition of a \(\mathcal{G}\)-soliton in \(\mathbb{H}^{3}\) with nonzero constant mean curvature is necessarily an open subset of a horosphere. ### Rotational translators In this section, we focus on translators to MCF in \(\mathbb{H}^{3}\) which are invariant under rotations about the \(x_{3}\)-axis. With this purpose, we first consider vertical rotational graphs. More precisely, let \(\phi\) be a positive smooth function on an open interval \(I\subset(0,+\infty)\), and let \(\Sigma\) be the surface of \(\mathbb{H}^{3}\) obtained by rotating the graph of \(\phi\) about the \(x_{3}\)-axis.
Then, \(\Sigma\) admits a parameterization of the form \[X(u,v)=(v\cos u,v\sin u,\phi(v)),\ \ (u,v)\in U:=\mathbb{R}\times I\subset\mathbb{R}^{2}.\] We shall call \(\Sigma=X(U)\) the _rotational vertical graph determined by_ \(\phi.\) For a rotational graph \(\Sigma\) as above, a direct computation gives that \[\bar{\eta}:=(\bar{\eta}_{1},\bar{\eta}_{2},\bar{\eta}_{3})=\varrho(-\phi^{\prime}\cos u,-\phi^{\prime}\sin u,1),\quad\varrho:=\frac{1}{\sqrt{1+(\phi^{\prime})^{2}}},\] is a unit normal with respect to the induced Euclidean metric, and that the corresponding Euclidean mean curvature is \[\overline{H}=\frac{\varrho}{2}\left(\frac{\phi^{\prime\prime}}{1+(\phi^{\prime})^{2}}+\frac{\phi^{\prime}}{v}\right).\] Thus, from (1), the mean curvature \(H\) of \(\Sigma\) in \(\mathbb{H}^{3}\) with respect to \(\eta:=\phi\bar{\eta}\) is \[H=\phi\overline{H}+\bar{\eta}_{3}=\varrho\left(\frac{\phi}{2}\left(\frac{\phi^{\prime\prime}}{1+(\phi^{\prime})^{2}}+\frac{\phi^{\prime}}{v}\right)+1\right). \tag{6}\] It is also straightforward to see that the equality \[\langle X,\eta\rangle=\frac{\varrho}{\phi}(\phi-v\phi^{\prime}) \tag{7}\] holds everywhere on \(\Sigma\). From (6) and (7), we conclude that equation (4) for the vertical graph \(\Sigma\) is equivalent to the second order ODE: \[\phi^{\prime\prime}=-\phi^{\prime}(1+(\phi^{\prime})^{2})\left(\frac{2v}{\phi^{2}}+\frac{1}{v}\right). \tag{8}\] Defining \(\Omega:=(0,+\infty)\times(0,+\infty)\times\mathbb{R},\) and \[\Psi(x,y,z)=-z(1+z^{2})\left(\frac{2x}{y^{2}}+\frac{1}{x}\right),\ \ (x,y,z)\in\Omega,\] we get from (8) the following **Lemma 3.6**.: _A vertical rotational graph determined by a smooth function \(\phi\) is a translator to MCF in \(\,\mathbb{H}^{3}\) if and only if \(\phi\) is a solution to the second order ODE:_ \[y^{\prime\prime}=\Psi(x,y,y^{\prime}). \tag{9}\] Next, we establish some properties of the solutions to (9). **Lemma 3.7**.: _For any \(x_{0},y_{0}>0\) and any \(\lambda\in\mathbb{R},\) the initial value problem_ \[\left\{\begin{array}{l}y^{\prime\prime}=\Psi(x,y,y^{\prime})\\ y(x_{0})=y_{0}\\ y^{\prime}(x_{0})=\lambda\end{array}\right. \tag{10}\] _has a unique smooth solution \(\phi\) on \([x_{0},+\infty)\) which has the following properties:_ * \(\phi\) _is constant if_ \(\lambda=0\)_._ * \(\phi\) _is increasing, concave and bounded above by a positive constant if_ \(\lambda>0\)_._ * \(\phi\) _is decreasing, convex and bounded below by a positive constant if_ \(\lambda<0\)_._ Proof.: Since \(\Psi\) is \(C^{\infty}\) in \(\Omega,\) the standard results on solutions for ODE's ensure the existence and uniqueness of a \(C^{\infty}\) solution \(\phi\) defined in a maximal interval \(I_{\max}:=[x_{0},x_{\max}),\)\(x_{\max}\leq+\infty,\) in the sense that the equality \[\phi^{\prime\prime}=\Psi(x,\phi,\phi^{\prime})=-\phi^{\prime}(1+(\phi^{\prime})^{2})\left(\frac{2x}{\phi^{2}}+\frac{1}{x}\right) \tag{11}\] holds in \(I_{\max}.\) If \(\lambda=0,\) it is clear from (11) that the solution \(\phi\) is constant, in which case \(x_{\max}=+\infty.\) This proves (i). Assume now that \(\lambda>0.\) Then, \(\phi\) is increasing near \(x_{0}.\) Also, from property (i) and the uniqueness of solutions, \(\phi\) has no critical points. Hence, \(\phi\) is increasing in \(I_{\max}.\) In addition, equality (11) gives that \(\phi\) is concave in \(I_{\max},\) which yields \(x_{\max}=+\infty.\) Let us prove that \(\phi\) is bounded above.
To do so, set \[F(x,\phi,\phi^{\prime}):=-(1+(\phi^{\prime})^{2})\left(\frac{2x}{\phi^{2}}+\frac{1}{x}\right)\] and observe that (11), together with the equality \((\log(\phi^{\prime}))^{\prime}=\phi^{\prime\prime}/\phi^{\prime},\) yields \[\phi^{\prime}(x)=\lambda\exp\left(\int_{x_{0}}^{x}F(t,\phi(t),\phi^{\prime}(t))dt\right). \tag{12}\] Clearly, \(F(x,\phi(x),\phi^{\prime}(x))<-1/x\) for all \(x\in I_{\max}.\) Thus, \[0<\phi^{\prime}(x)\leq\lambda\exp\left(\int_{x_{0}}^{x}-\frac{1}{t}dt\right)=\frac{\lambda x_{0}}{x}\,,\] which implies that \[\lim_{x\to+\infty}\phi^{\prime}(x)=0. \tag{13}\] Now, notice that the equality \[\lim_{x\to+\infty}\frac{x}{\phi(x)}=+\infty\] holds regardless of \(\phi\) being bounded or unbounded. Indeed, in the first case, the equality is trivial, and in the latter case, it follows from (13) and the l'Hopital rule. In particular, there exists \(x_{1}>x_{0}\) such that \(x^{2}/\phi^{2}(x)>1/2\)\(\forall x\geq x_{1}\), which yields \[\frac{2x}{\phi^{2}(x)}+\frac{1}{x}>\frac{2}{x}\quad\forall x\in I_{1}:=[x_{1},+\infty).\] From this last inequality, we have that \(F(x,\phi(x),\phi^{\prime}(x))<-2/x\) for all \(x\in I_{1}\). Set \[\log(\varLambda)=\int_{x_{0}}^{x_{1}}F(t,\phi(t),\phi^{\prime}(t))dt.\] Then, considering (12) once more, we obtain, \(\forall x\in I_{1}\) \[\phi^{\prime}(x) = \lambda\exp\left(\int_{x_{0}}^{x_{1}}F(t,\phi(t),\phi^{\prime}(t))dt+\int_{x_{1}}^{x}F(t,\phi(t),\phi^{\prime}(t))dt\right)\] \[\leq \lambda\varLambda\exp\left(\int_{x_{1}}^{x}-\frac{2}{t}dt\right)\] \[= \frac{\lambda\varLambda x_{1}^{2}}{x^{2}}\,.\] By integrating both sides on \([x_{1},x]\subset I_{1}\), we finally get \[\phi(x)-\phi(x_{1})\leq\lambda\varLambda x_{1}^{2}\left(\frac{1}{x_{1}}-\frac{1}{x}\right)<\lambda\varLambda x_{1}\quad\forall x\in I_{1},\] which implies that \(\phi\) is bounded above. This proves (ii). To prove (iii), we can argue as in the proof of (ii) to conclude that \(\phi\) is decreasing and convex in \(I_{\max}\) if \(\lambda<0\). We claim that \[\lim_{x\to x_{\max}}\phi(x)>0,\] which, by the definition of \(\Psi\), implies that \(x_{\max}=+\infty\). Assume, by contradiction, that \(\lim_{x\to x_{\max}}\phi(x)=0\). Then, one has \[\lim_{x\to x_{\max}}\phi^{\prime\prime}(x)=+\infty. \tag{14}\] Indeed, equality (14) follows directly from (11) if \(\lim_{x\to x_{\max}}\phi^{\prime}(x)\neq 0\). If, instead, \(\lim_{x\to x_{\max}}\phi^{\prime}(x)=0\), then \[\lim_{x\to x_{\max}}\frac{\phi^{\prime}(x)}{\phi^{2}(x)}=\lim_{x\to x_{\max}}\frac{\phi^{\prime\prime}(x)}{2\phi(x)\phi^{\prime}(x)}.\] So, if \(\phi^{\prime\prime}\) were bounded, the above limit would be infinite. But then, from (11), we would have \(\lim_{x\to x_{\max}}\phi^{\prime\prime}(x)=+\infty\), which would be a contradiction. Hence, (14) holds.
Now, we compute \(\phi^{\prime\prime\prime}\) from equality (11), obtaining \[\phi^{\prime\prime\prime} = -\phi^{\prime\prime}(1+(\phi^{\prime})^{2})\left(\frac{2x}{\phi^{2}}+\frac{1}{x}\right)-2(\phi^{\prime})^{2}\phi^{\prime\prime}\left(\frac{2x}{\phi^{2}}+\frac{1}{x}\right)\] \[-\phi^{\prime}(1+(\phi^{\prime})^{2})\left(\frac{2}{\phi^{2}}-\frac{4x\phi^{\prime}}{\phi^{3}}-\frac{1}{x^{2}}\right).\] Therefore, setting \[\chi:=-\phi^{\prime\prime}(1+(\phi^{\prime})^{2})\frac{1}{x}-2(\phi^{\prime})^{2}\phi^{\prime\prime}\frac{1}{x}+\phi^{\prime}(1+(\phi^{\prime})^{2})\frac{1}{x^{2}},\] we have \(\chi<0\) in \(I_{\max}\) and \[\phi^{\prime\prime\prime} = \frac{2x(1+(\phi^{\prime})^{2})}{\phi^{2}}\left(\frac{2(\phi^{\prime})^{2}}{\phi}-\phi^{\prime\prime}\right)-\frac{2\phi^{\prime}(1+(\phi^{\prime})^{2})}{\phi^{2}}-\frac{4x(\phi^{\prime})^{2}\phi^{\prime\prime}}{\phi^{2}}+\chi \tag{15}\] \[= \frac{2x(\phi^{\prime})^{2}(1+(\phi^{\prime})^{2})}{\phi^{3}}\left(2-\frac{\phi\phi^{\prime\prime}}{(\phi^{\prime})^{2}}\right)-\frac{2\phi^{\prime}}{\phi^{2}}(1+(\phi^{\prime})^{2}+2x\phi^{\prime}\phi^{\prime\prime})+\chi.\] However, from (11), one has \[\lim_{x\to x_{\max}}\frac{\phi(x)\phi^{\prime\prime}(x)}{(\phi^{\prime}(x))^{2}} = \lim_{x\to x_{\max}}-\frac{1+(\phi^{\prime}(x))^{2}}{\phi^{\prime}(x)}\left(\frac{2x}{\phi(x)}+\frac{\phi(x)}{x}\right)\] \[\geq \lim_{x\to x_{\max}}-\frac{2x}{\phi(x)\phi^{\prime}(x)}=+\infty,\] and \[\lim_{x\to x_{\max}}[\phi^{\prime}(x)\phi^{\prime\prime}(x)] = \lim_{x\to x_{\max}}\left[-(\phi^{\prime}(x))^{2}(1+(\phi^{\prime}(x))^{2})\left(\frac{2x}{\phi^{2}(x)}+\frac{1}{x}\right)\right]\] \[\leq \lim_{x\to x_{\max}}-\frac{2x(\phi^{\prime}(x))^{2}}{\phi^{2}(x)}=-\infty.\] In the last limit, we used the fact that \[\lim_{x\to x_{\max}}\frac{\phi^{\prime}(x)}{\phi(x)}=-\infty,\] which is immediate if \(\lim_{x\to x_{\max}}\phi^{\prime}(x)<0.\) Otherwise, it follows easily from the l'Hopital rule. From the above limits and (15), we conclude that \(\phi^{\prime\prime\prime}(x)<0\) for all sufficiently large \(x\in I_{\max}\), which contradicts (14). This finishes the proof of (iii), and so of the lemma. Lemmas 3.6 and 3.7 already imply the existence of rotational translators. However, to improve the description of these examples, we next consider rotational surfaces which are also horizontal graphs. More precisely, given a rotational surface \(\varSigma\subset\mathbb{H}^{3}\) with axis \(\ell:=\{(0,0)\}\times(0,+\infty)\), let us consider \(\gamma=\varSigma\cap\{x_{1}=0\}\) as the profile curve of \(\varSigma\) and assume that the tangent plane of \(\varSigma\) at a given point \(p\in\gamma\) is not orthogonal to \(\ell\). If we let \(d\) denote the Euclidean distance function from \(\gamma\) to \(\ell\) on \(\mathbb{R}^{3}_{+}\) and let \(v\) parameterize \(\gamma\), then, in a neighborhood of \(p\), \(\varSigma\) can be parameterized as \[X(u,v):=(u,\sqrt{d^{2}(v)-u^{2}},v),\ \ (u,v)\in U\subset\mathbb{R}\times(0,+\infty).\] We shall call \(X(U)\) the _horizontal rotational graph determined by \(d\)_.
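Before analyzing horizontal graphs, the dichotomy of Lemma 3.7 is easy to visualize by integrating the initial value problem (10) directly. The following SciPy-based sketch (ours, for illustration only; no numerics enter the proofs) does so.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, Y):
    """Vertical-graph translator ODE (8): y'' = -y'(1+y'^2)(2x/y^2 + 1/x)."""
    y, yp = Y
    return [yp, -yp * (1.0 + yp**2) * (2.0 * x / y**2 + 1.0 / x)]

x0, y0 = 1.0, 1.0
for lam in (0.5, 0.0, -0.5):
    sol = solve_ivp(rhs, (x0, 200.0), [y0, lam], rtol=1e-10, atol=1e-12)
    y_end, yp_end = sol.y[0, -1], sol.y[1, -1]
    # lam > 0: increasing, concave, bounded above (second item of Lemma 3.7);
    # lam = 0: constant (first item);
    # lam < 0: decreasing, convex, bounded below by a positive constant (third item).
    print(f"lambda={lam:+.1f}  y(200)={y_end:.6f}  y'(200)={yp_end:.2e}")
```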
**Lemma 3.8**.: _A horizontal rotational graph determined by a smooth function \(d\) is a translator to MCF in \(\mathbb{H}^{3}\) if and only if the function \(d\) is a solution to the ODE:_ \[y^{\prime\prime}=\left(\frac{2y^{2}}{x^{2}}+1\right)\frac{1+(y^{\prime})^{2}}{ y}.\] _In particular, such a solution \(d\) is strictly convex._ Proof.: Writing \(\varphi(u,v):=\sqrt{d^{2}(v)-u^{2}}\), we have that a Euclidean unit normal to \(\Sigma\) is \[\bar{\eta}:=(\bar{\eta}_{1},\bar{\eta}_{2},\bar{\eta}_{3})=\varrho(-\varphi_{u},1,-\varphi_{v}),\quad\varrho:=\frac{1}{\sqrt{1+\varphi_{u}^{2}+\varphi_{v}^{2}}},\] and the corresponding Euclidean mean curvature is \[\overline{H}(X(u,v))=\frac{\varrho^{3}(u,v)}{2}\varLambda(u,v),\] where \(\varLambda\) is the function \[\varLambda:=\varphi_{uu}(1+\varphi_{v}^{2})-2\varphi_{uv}\varphi_{u}\varphi_{v }+\varphi_{vv}(1+\varphi_{u}^{2}).\] Hence, the hyperbolic mean curvature \(H\) of \(\Sigma\) is \[H=v\overline{H}+\bar{\eta}_{3}=\varrho\left(\frac{v\varrho^{2}}{2}\varLambda- \varphi_{v}\right), \tag{16}\] and its hyperbolic unit normal is \(\eta:=v\bar{\eta}\), so that \[\langle X,\eta\rangle=\frac{\varrho}{v}(\varphi-u\varphi_{u}-v\varphi_{v}). \tag{17}\] From (16) and (17), after noticing that \(\varphi_{u}=\frac{-u}{\varphi}\), we have that the translating soliton equation \(\langle X,\eta\rangle=H\) for \(\Sigma\) is equivalent to \[\varLambda=\frac{2d^{2}}{v^{2}\varphi\varrho^{2}}. \tag{18}\] After taking all first and second order partial derivatives of \(\varphi\) and applying to \(\varLambda\), we get from a direct and long calculation that \[\varLambda=\frac{d^{2}}{\varphi^{3}}(dd^{\prime\prime}-(d^{\prime})^{2}-1). \tag{19}\] Finally, observing that \[\frac{\varphi^{2}}{\varrho^{2}}=\varphi^{2}(1+\varphi_{u}^{2}+\varphi_{v}^{2} )=\varphi^{2}\frac{u^{2}+(dd^{\prime})^{2}+\varphi^{2}}{\varphi^{2}}=d^{2}(1+ (d^{\prime})^{2}),\] it follows from (18) and (19) that \[d^{\prime\prime}=\left(\frac{2d^{2}}{v^{2}}+1\right)\frac{1+(d^{\prime})^{2}} {d}\,,\] as we wished to prove. Now, we are in position to prove the existence of properly embedded annular translators to MCF in \(\mathbb{H}^{3}\), which we shall call _translating catenoids_, see Figure 1. **Theorem 3.9**.: _There exists a one-parameter family \(\mathscr{F}:=\{\varSigma_{r}\,;\,r>0\}\) of noncongruent, properly embedded rotational annular translators in \(\mathbb{H}^{3}\). For each \(r>0\), the surface \(\varSigma_{r}\in\mathscr{F}\) satisfies:_ 1. \(\varSigma_{r}\) _is contained in a slab determined by two horospheres_ \(\mathscr{H}_{r^{-}}\) _and_ \(\mathscr{H}_{r^{+}}\)_. In particular, the asymptotic boundary of_ \(\varSigma_{r}\) _is the point at infinity of the horosphere_ \(\mathscr{H}\) _at height_ \(1\)_._ 2. \(\varSigma_{r}\) _is the union of two vertical graphs_ \(\varSigma_{r}^{-}\) _and_ \(\varSigma_{r}^{+}\) _over the complement of the Euclidean_ \(r\)_-disk_ \(\,\mathcal{D}_{r}\) _centered at the rotation axis in the horosphere_ \(\mathscr{H}\) 3. _The graphs_ \(\varSigma_{r}^{-}\) _and_ \(\varSigma_{r}^{+}\) _lie in distinct connected components of_ \(\,\mathbb{H}^{3}-\mathscr{H}\) _with common boundary the_ \(r\)_-circle that bounds_ \(\mathcal{D}_{r}\) _in_ \(\mathscr{H},\) _being_ \(\varSigma_{r}^{-}\) _asymptotic to_ \(\mathscr{H}_{r^{-}}\) _and_ \(\varSigma_{r}^{+}\) _asymptotic to_ \(\mathscr{H}_{r^{+}}\)_._ _In addition, the limiting behaviour of \(\varSigma_{r}\) is as follows:_ 1. 
_As_ \(r\to 0,\)__\(\varSigma_{r}\) _converges_1 _to a double copy of_ \(\mathscr{H}.\)__ Footnote 1: The convergence is on the \(C^{2,\alpha}\)-norm, on compact sets outside \((0,0,1).\)__ 2. _As_ \(r\to+\infty,\)__\(\varSigma_{r}\) _escapes to infinity, and both_ \(\mathscr{H}_{r^{-}}\) _and_ \(\mathscr{H}_{r^{+}}\) _converge to_ \(\mathscr{H}.\)__ Proof.: Given \(r>0,\) let \(d_{r}:(1-\delta,1+\delta)\to(0,+\infty)\) be the local solution to the following initial value problem: \[\left\{\begin{array}{l}y^{\prime\prime}=\left(\frac{2y^{2}}{x^{2}}+1\right) \frac{1+(y^{\prime})^{2}}{y}\,,\\ y(1)=r,\\ y^{\prime}(1)=0.\end{array}\right. \tag{20}\] By Lemma 3.8, the rotational horizontal graph \(\Sigma_{r}\) determined by \(d_{r}\) is a translator to MCF in \(\mathbb{H}^{3}.\) Since \(d_{r}\) is strictly convex, \(x=1\) is a strict local minimum of \(d_{r}\) and \(\Sigma_{r}-\mathscr{H}\) is the union of two disjoint rotational vertical graphs \(\Sigma_{r}^{-}\) and \(\Sigma_{r}^{+}\) over an open set contained in \(\mathscr{H}-\mathcal{D}_{r}.\) Let us index \(\Sigma_{r}^{+}\) as being the component contained in the horoball \(\{x_{3}>1\}\). Then, Lemma 3.6 applies to \(\Sigma_{r}^{+}\), which corresponds to an increasing solution of (10) (i.e., one for which the initial condition \(\lambda\) is positive). By Lemma 3.7, such a solution is defined in an interval \([x_{0},+\infty)\) and is bounded above. Therefore, \(\Sigma_{r}^{+}\) can be continued indefinitely, being asymptotic to a horosphere \(\mathscr{H}_{r^{+}}\) of \(\mathbb{H}^{3}.\) In particular, \(\Sigma_{r}^{+}\) is a graph over \(\mathscr{H}-\mathcal{D}_{r}.\) Analogously, Lemmas 3.6 and 3.7 give that \(\Sigma_{r}^{-}\) can be continued indefinitely and is asymptotic to a horosphere \(\mathscr{H}_{r^{-}}\) of \(\mathbb{H}^{3}.\) Since \(\Sigma_{r}=\operatorname{closure}(\Sigma_{r}^{-})\cup\operatorname{closure} (\Sigma_{r}^{+}),\) we have that \(\Sigma_{r}\) is an annular properly embedded translator to MCF in \(\mathbb{H}^{3}.\) This proves assertions (i)-(iii). To prove assertions (iv) and (v), consider the following parameterization of the graph \(G_{r}\) of the solution \(d_{r}\) of (20): \[\alpha_{r}(s):=(0,d_{r}(s),s),\,\,\,s\in(r_{-},r_{+}).\] We get from a direct computation that, with the induced Euclidean metric, the curvature of \(\alpha_{r}\) at \(s=1\) is \(k_{r}(1)=d_{r}^{\prime\prime}(1)=2r+1/r.\) So, we have (see Fig. 2): \[\lim_{r\to 0}k_{r}(1)=\lim_{r\to+\infty}k_{r}(1)=+\infty. \tag{21}\] Now, set \(r_{*}\) for either \(0\) or \(+\infty.\) For each \(r>0\), \(G_{r}\) intersects both \(\{x_{3}>1\}\) and \(\{x_{3}<1\}\). In particular, (21) allows us to choose points \(p_{r}^{-}:=\alpha_{r}(s_{r}^{-})\in G_{r}\cap\{x_{3}<1\}\) and \(p_{r}^{+}:=\alpha_{r}(s_{r}^{+})\in G_{r}\cap\{x_{3}>1\}\) such that \[\lim_{r\to r_{*}}s_{r}^{-}=\lim_{r\to r_{*}}s_{r}^{+}=1,\] Figure 2. In the proof of Theorem 3.9, the curvature of the horizontal graph \(G_{r}\) at the point of minimal distance to the axis goes to infinity as \(r\) goes to either \(0\) or \(+\infty.\) with unit tangent vectors (with respect to the Euclidean metric) satisfying \[\lim_{r\to r_{*}}\frac{\alpha^{\prime}_{r}(s^{+}_{r})}{\|\alpha^{\prime}_{r}(s^{+ }_{r})\|}=-\lim_{r\to r_{*}}\frac{\alpha^{\prime}_{r}(s^{-}_{r})}{\|\alpha^{ \prime}_{r}(s^{-}_{r})\|}=\partial_{x_{2}}, \tag{22}\] or, equivalently, \(\lim_{r\to r_{*}}d^{\prime}_{r}(s^{+}_{r})=-\lim_{r\to r_{*}}d^{\prime}_{r}(s^ {-}_{r})=+\infty\). 
Consider the surfaces \(S^{-}_{r}\) and \(S^{+}_{r}\) obtained from the solutions of (10) with the following initial conditions: \[\left\{\begin{array}{l}y(d_{r}(s^{-}_{r}))=s^{-}_{r},\\ y^{\prime}(d_{r}(s^{-}_{r}))=\frac{1}{d^{\prime}_{r}(s^{-}_{r})}\end{array}\right.\quad\text{ and }\quad\left\{\begin{array}{l}y(d_{r}(s^{+}_{r}))=s^{+}_{r},\\ y^{\prime}(d_{r}(s^{+}_{r}))=\frac{1}{d^{\prime}_{r}(s^{+}_{r})}.\end{array}\right.\] Hence, by uniqueness, \(S^{-}_{r}=\Sigma^{-}_{r}\) and \(S^{+}_{r}=\Sigma^{+}_{r}\). But then, in the case \(r_{*}=0\), the continuity of the family of solutions of (10) with respect to initial data, together with (22), implies that both \(\Sigma^{-}_{r}\) and \(\Sigma^{+}_{r}\) converge, on compact sets, to the horosphere \(\mathscr{H}\), proving (iv). Furthermore, when \(r_{*}=+\infty\), (v) follows from (22) and items (ii), (iii) in Lemma 3.7. This concludes our proof. Let \(\Sigma\) be a connected rotational translator in \(\mathbb{H}^{3}\) with (possibly empty) boundary. Then, Lemmas 3.6 and 3.8, together with the uniqueness of solutions of ODEs with given initial conditions, imply that the profile curve of \(\Sigma\) coincides, up to its boundary, with the profile curve of some translating catenoid \(\Sigma_{r}\) obtained in Theorem 3.9. Therefore, we have the following uniqueness result. **Theorem 3.10**.: _Any connected rotational translator of \(\mathbb{H}^{3}\) is an open subset of some member of the family \(\mathscr{F}\) presented in Theorem 3.9._ A distinguished property of translators to MCF in \(\mathbb{R}^{3}\) is that they are critical points of a weighted area functional and, therefore, they become minimal surfaces when changing the ambient metric in a suitable manner [10]. In particular, the tangency principle applies to them, which allows one to use translators as barriers (cf. [14]). On the other hand, it is unknown to us if translators to MCF in \(\mathbb{H}^{3}\) can be made minimal in a similar fashion. Nevertheless, as we establish in the next result, the tangency principle holds for translators in \(\mathbb{H}^{3}\), and this will be applied, together with Theorem 3.9, to prove that complete translators in \(\mathbb{H}^{3}\) are never cylindrically bounded. **Theorem 3.11** (**tangency principle for translators)**.: _Let \(\Sigma_{1}\) and \(\Sigma_{2}\) be two translators to MCF in \(\mathbb{H}^{3}\) which are tangent at a point \(p\in\operatorname{int}\Sigma_{1}\cap\operatorname{int}\Sigma_{2}\). If \(\Sigma_{1}\) lies on one side of \(\Sigma_{2}\) in a neighborhood of \(p\) in \(\mathbb{H}^{3},\) then \(\Sigma_{1}\) and \(\Sigma_{2}\) coincide in a neighborhood of \(p\) in \(\Sigma_{1}\cap\Sigma_{2}.\) Moreover, if \(\Sigma_{1}\) and \(\Sigma_{2}\) are both complete and connected, then \(\Sigma_{1}=\Sigma_{2}.\)_ Proof.: Let \(\Sigma_{1}\) and \(\Sigma_{2}\) be two translators to MCF in \(\mathbb{H}^{3}\), tangent at a point \(p\in\Sigma_{1}\cap\Sigma_{2}\), and such that \(\Sigma_{1}\) stays locally on one side of \(\Sigma_{2}\). If \(T_{p}\Sigma_{1}\) is not vertical, there exist a domain \(\Omega\subset\mathbb{R}^{2}\) and positive functions \(u_{1},\,u_{2}\colon\Omega\to\mathbb{R}\) such that neighborhoods \(U\subset\Sigma_{1}\) and \(V\subset\Sigma_{2}\) containing \(p\) are respectively parameterized by \[U=\{(x,y,u_{1}(x,y))\mid(x,y)\in\Omega\},\quad V=\{(x,y,u_{2}(x,y))\mid(x,y)\in\Omega\}.\] Furthermore, after reindexing we may assume that \(u_{1}\geq u_{2}\) in \(\Omega\).
Let \(\Sigma_{1}\) and \(\Sigma_{2}\) be oriented with respect to vector fields \(\eta_{1}\) and \(\eta_{2}\) so that \(\eta_{1}(p)=\eta_{2}(p)\) points upwards. Thus, if \(Q\) is the quasilinear elliptic operator \[Q(u)=u_{xx}(1+u_{y}^{2})+u_{yy}(1+u_{x}^{2})-2u_{xy}u_{x}u_{y}, \tag{23}\] it follows from (1) that the mean curvature functions \(H_{1}\), \(H_{2}\) of \(U\) and \(V\) satisfy \[H_{i}=u_{i}\frac{Q(u_{i})}{2(1+(u_{i})_{x}^{2}+(u_{i})_{y}^{2})^{\frac{3}{2}}}+\frac{1}{(1+(u_{i})_{x}^{2}+(u_{i})_{y}^{2})^{\frac{1}{2}}},\quad i\in\{1,2\}.\] Then, after setting \(B(x,y,u,Du)=2(1+u_{x}^{2}+u_{y}^{2})(xu_{x}+yu_{y})\), where \(Du\) denotes the (Euclidean) gradient of \(u\), it follows from (4) that \[(u_{i})^{2}Q(u_{i})+B(x,y,u_{i},Du_{i})=0,\quad i\in\{1,2\}. \tag{24}\] But the operator \(u^{2}Q(u)+B(x,y,u,Du)\) in (24) satisfies the hypothesis of the tangency principle for quasilinear operators [16, Theorem 2.2.2], thus \(U=V\). The case where \(T_{p}\varSigma_{1}\) is vertical can be treated analogously: after a rotation about the \(x_{3}\)-axis (which preserves the property of being a translator to MCF), locally, both \(\varSigma_{1}\) and \(\varSigma_{2}\) can be parameterized as horizontal graphs \[\{(x,u_{1}(x,z),z)\mid(x,z)\in\widehat{\Omega}\}\,\,\,\text{and}\,\,\,\{(x,u_{2}(x,z),z)\mid(x,z)\in\widehat{\Omega}\}\] for some domain \(\widehat{\Omega}\subset\mathbb{R}^{2}_{+}\), and both \(u_{1}\), \(u_{2}\) satisfy \[z^{2}Q(u)+\widehat{B}(x,z,u,Du)=0\] for \(\widehat{B}(x,z,u,Du)=2(xu_{x}-u)(1+u_{x}^{2}+u_{z}^{2})\) and \(Q\) as in (23). Once again, we obtain from [16, Theorem 2.2.2] that \(\varSigma_{1}\) and \(\varSigma_{2}\) coincide in a neighborhood of \(p\). At this point, we have shown that if \(\varSigma_{1}\) and \(\varSigma_{2}\) are tangent at a point \(p\), they must coincide in neighborhoods which are either horizontal or vertical graphs for \(\varSigma_{1}\) and \(\varSigma_{2}\). The proof for the case where \(\varSigma_{1}\) and \(\varSigma_{2}\) are complete and connected now follows from covering \(\varSigma_{1}\) and \(\varSigma_{2}\) with such (overlapping) neighborhoods. **Remark 3.12**.: Theorem 3.11 contrasts with the tangency principle for the constant mean curvature case (see, for instance, [13, Theorem 3.2.4]): two distinct geodesic spheres in \(\mathbb{R}^{3}\) with the same mean curvature can be tangent to each other without violating the tangency principle. In the setting of translators, the tangency principle does not require any assumptions on the orientation of \(\varSigma_{1}\) and \(\varSigma_{2}\) because, from (4), if \(\varSigma_{1}\) and \(\varSigma_{2}\) are translators to MCF which are tangent at a point \(p\), then necessarily their mean curvature vectors \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) must agree at \(p\), which defines a coinciding, _standard_ (local) orientation for both \(\varSigma_{1}\) and \(\varSigma_{2}\). Recall that a circular cone in \(\mathbb{R}^{3}_{+}:=\mathbb{R}^{2}\times(0,+\infty)\) with vertex at \(p\in\mathbb{R}^{2}\) and axis \(\gamma_{p}:=\{p\}\times(0,+\infty)\) constitutes a _cylinder_ \(\mathscr{C}\) in \(\mathbb{H}^{3}\), that is, the set of points of \(\mathbb{H}^{3}\) at a fixed distance to the vertical geodesic \(\gamma_{p}.\) The convex side of \(\mathscr{C}\) is the component of \(\mathbb{H}^{3}-\mathscr{C}\) which contains \(\gamma_{p}\).
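Before the translating catenoids are put to work as barriers in the next result, it may help to see the family \(\mathscr{F}\) of Theorem 3.9 concretely. The following is a minimal numerical sketch (ours, not part of the paper) of the profile ODE (20), assuming SciPy is available; the blow-up threshold, the tolerances, and the function names are illustrative choices.

```python
from scipy.integrate import solve_ivp

def profile_ode(x, y):
    d, dp = y  # d = radius at height x along the rotation axis, dp = d'
    return [dp, (2.0 * d**2 / x**2 + 1.0) * (1.0 + dp**2) / d]

def blow_up(x, y):
    # Stop once the radius is very large; the height where this happens
    # approximates r^- (downward branch) or r^+ (upward branch).
    return y[0] - 1.0e3

blow_up.terminal = True

def slab_heights(r):
    """Estimate the heights r^- < 1 < r^+ of the horospheres bounding Sigma_r."""
    up = solve_ivp(profile_ode, (1.0, 50.0), [r, 0.0], events=blow_up, rtol=1e-9)
    down = solve_ivp(profile_ode, (1.0, 1e-6), [r, 0.0], events=blow_up, rtol=1e-9)
    return down.t[-1], up.t[-1]

for r in (0.1, 1.0, 10.0):
    lo, hi = slab_heights(r)
    print(f"r = {r:5.1f}: slab approximately ({lo:.4f}, {hi:.4f})")
```

Consistently with items (iv) and (v) of Theorem 3.9, the estimated slab heights should approach \(1\) both as \(r\to 0\) and as \(r\to+\infty\).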
**Corollary 3.13**.: _There is no properly immersed translator to MCF in \(\mathbb{H}^{3}\) which is contained in the convex side of a cylinder with vertex at \(p=(0,0).\) In particular, there is no closed (i.e., compact without boundary) translator to MCF in \(\mathbb{H}^{3}.\)_ Proof.: Suppose, by contradiction, that there exists a properly immersed translator \(\varSigma\) to MCF in \(\mathbb{H}^{3}\) which is contained in the convex side \(\Omega\) of a cylinder \(\mathscr{C}\) with vertex at \(p=(0,0).\) Clearly, the property of being a translator is invariant by the translations \(\Gamma_{t}(p):=e^{t}p\), \(t\in\mathbb{R}.\) Therefore, we can assume without loss of generality that \(\varSigma\) intersects the horosphere \(\mathscr{H}\) of height \(1\). Under the above conditions, we have from item (v) of Theorem 3.9 that there exists \(R>0\) such that, for any \(r>R\), the translating catenoid \(\varSigma_{r}\) of the family \(\mathscr{F}\) is disjoint from \(\mathscr{C}\), and so from \(\varSigma.\) On the other hand, for a sufficiently small \(r>0\), \(\Sigma_{r}\) and \(\Sigma\) have nonempty intersection. Taking into account the asymptotic behavior of \(\Sigma_{r},\) together with the hypothesis that \(\Sigma\) is contained in \(\Omega\), as \(r\) decreases from \(R\) to zero, a standard argument shows that there will be a first value \(r_{*}\) such that \(\Sigma_{r_{*}}\) is the element of \(\mathscr{F}\) that first establishes a contact with \(\Sigma\) at a point \(p\in\Sigma\cap\Sigma_{r_{*}},\) as in Figure 3. Then, \(\Sigma\) and \(\Sigma_{r_{*}}\) are tangent at \(p\) with \(\Sigma\) on one side of \(\Sigma_{r_{*}},\) and the tangency principle (Theorem 3.11) applies to show that \(\Sigma=\Sigma_{r_{*}},\) which is a contradiction, since \(\Sigma\) is contained in \(\Omega\) and \(\Sigma_{r_{*}}\) is not. ### Parabolic translators Having considered rotational translators in the previous section, we now look at translators which are invariant by a \(1\)-parameter group of _parabolic_ isometries of \(\mathbb{H}^{3},\) i.e., isometries of \(\mathbb{H}^{3}\) that fix parallel families of horospheres. Horizontal cylinders over curves on vertical totally geodesic planes of \(\mathbb{H}^{3}\) (to be called _parabolic cylinders_) are the simplest examples of surfaces which are invariant by parabolic translations. When these generating curves are graphs on the whole of \(\mathbb{R},\) such a surface can be parameterized by a map \(X\colon\mathbb{R}^{2}\to\mathbb{R}^{3}_{+}\) defined by \[X(u,v)=(u,v,\phi(v)),\ (u,v)\in\mathbb{R}^{2},\] where \(\phi\) is a smooth positive function on \(\mathbb{R}.\) We shall call \(\Sigma:=X(\mathbb{R}^{2})\) the _parabolic cylinder determined by \(\phi.\)_ Defining \(\varrho(v):=(1+(\phi^{\prime}(v))^{2})^{-1/2},\) we have that \[\bar{\eta}:=\varrho(0,-\phi^{\prime},1)\] is a unit normal to \(\Sigma\) with respect to the induced Euclidean metric of \(\mathbb{R}^{3}_{+}.\) With this orientation, the Euclidean mean curvature \(\overline{H}\) of \(\Sigma\) is \[\overline{H}=\frac{\varrho^{3}\phi^{\prime\prime}}{2}\,.\]
Figure 3. In the proof of Corollary 3.13, there exists a smallest \(r_{*}>0\) such that \(\Sigma_{r_{*}}\) intersects \(\Sigma\) tangentially, with \(\Sigma\) in one of the two regions of \(\mathbb{H}^{3}\) defined by \(\Sigma_{r_{*}}.\)

From this last equality and (1), we have that the hyperbolic mean curvature \(H\) of \(\varSigma\) with respect to the orientation \(\eta:=\phi\bar{\eta}\) is \[H=\varrho\left(\frac{\varrho^{2}\phi\phi^{\prime\prime}}{2}+1\right).\] Since \(\langle\eta,X\rangle=\varrho(\phi-v\phi^{\prime})/\phi,\) we also have that the identity (4) for the parabolic cylinder \(\varSigma=X(\mathbb{R}^{2})\) is equivalent to the following second order ODE: \[\phi^{\prime\prime}=-\phi^{\prime}(1+(\phi^{\prime})^{2})\frac{2v}{\phi^{2}}.\] The above considerations yield **Lemma 3.14**.: _A parabolic cylinder determined by a smooth function \(\phi\) is a translator to MCF in \(\,\mathbb{H}^{3}\) if and only if \(\phi\) is a solution to the second order ODE:_ \[y^{\prime\prime}=-y^{\prime}(1+(y^{\prime})^{2})\frac{2x}{y^{2}}. \tag{25}\] The solutions of (25) are all increasing on \(\mathbb{R}\) and their graphs are "S-shaped", as attested by the following **Lemma 3.15**.: _Given \(\lambda\geq 0,\) the initial value problem_ \[\left\{\begin{array}{l}y^{\prime\prime}=-y^{\prime}(1+(y^{\prime})^{2})\frac{2x}{y^{2}}\\ y(0)=1\\ y^{\prime}(0)=\lambda\end{array}\right. \tag{26}\] _has a unique smooth solution \(\phi:\mathbb{R}\to\mathbb{R}\) which has the following properties:_ * \(\phi\) _is constant if_ \(\lambda=0.\)__ * \(\phi\) _is increasing, convex in_ \((-\infty,0),\) _and concave in_ \((0,+\infty)\) _if_ \(\lambda>0.\)__ * \(\phi\) _is bounded above and below by positive constants._ Proof.: Assertion (i) is immediate. So, assume \(\lambda>0.\) Proceeding as in the proof of Lemma 3.7, we get from the equality \[\phi^{\prime\prime}=-\phi^{\prime}(1+(\phi^{\prime})^{2})\frac{2x}{\phi^{2}} \tag{27}\] that \(\phi\) necessarily satisfies: \[\phi^{\prime}(x)=\lambda\exp\left(\int_{0}^{x}F(t,\phi(t),\phi^{\prime}(t))\,dt\right),\] where \(F\) is given by \[F(x,y,z):=-\frac{2x}{y^{2}}(1+z^{2}).\] Hence, \(\phi\) is increasing. This, together with (27), implies that \(\phi\) is defined in a maximal interval \(I_{\max}:=(x_{\min},+\infty)\) with \(-\infty\leq x_{\min}<0.\) It also follows from (27) that \(\phi\) is convex in \((x_{\min},0),\) and concave in \((0,+\infty).\) Next, we prove that the solution \(\phi\) is bounded above. Since \(\phi\) is concave in \((0,+\infty),\) we have that \(\lambda=\phi^{\prime}(0)>\phi^{\prime}(x)\) for all \(x>0.\) Then, integration on both sides of this last inequality yields \[\phi(x)\leq\lambda x+1\ \ \forall x\geq 0,\] which implies that \[\lim_{x\rightarrow+\infty}\frac{x}{\phi(x)}\geq\lim_{x\rightarrow+\infty}\frac{x}{\lambda x+1}=\frac{1}{\lambda}>0. \tag{28}\] Now, choose a small \(\epsilon>0\) such that \(C:=\lambda^{-1}-\epsilon\) is positive. It follows from (28) that, for a sufficiently large \(x_{1}>0,\) one has \(x/\phi(x)>C\) for all \(x\geq x_{1},\) so that \[\frac{2x}{\phi^{2}(x)}\geq\frac{2C^{2}}{x}\ \ \forall x\geq x_{1},\] from which we obtain \[F(x,\phi(x),\phi^{\prime}(x))\leq-\frac{2C^{2}}{x}\ \ \forall x\geq x_{1}. \tag{29}\] Now, for any given \(x_{0}>0,\) consider the following initial value problem: \[\left\{\begin{array}{l}y^{\prime\prime}=-y^{\prime}(1+(y^{\prime})^{2})\frac{2x}{y^{2}}\\ y(x_{0})=\phi(x_{0})\\ y^{\prime}(x_{0})=\phi^{\prime}(x_{0}).\end{array}\right.
\tag{30}\] By uniqueness, \(\phi\) is a solution to (30), and once again we may write \[\phi^{\prime}(x)=\lambda_{0}\exp\left(\int_{x_{0}}^{x}F(t,\phi(t),\phi^{\prime}(t))\,dt\right)\ \ \forall x\geq x_{0}, \tag{31}\] where \(\lambda_{0}:=\phi^{\prime}(x_{0})>0.\) Thus, defining \(\lambda_{1}=\phi^{\prime}(x_{1}),\) we obtain \[\phi^{\prime}(x)=\lambda_{1}\exp\left(\int_{x_{1}}^{x}F(t,\phi(t),\phi^{\prime}(t))\,dt\right)\ \ \forall x\geq x_{1} \tag{32}\] and it follows from (29) and (32) that \[\phi^{\prime}(x)\leq\lambda_{1}\left(\frac{x_{1}}{x}\right)^{2C^{2}}\ \ \forall x\geq x_{1},\] so that \(\phi^{\prime}(x)\to 0\) as \(x\rightarrow+\infty.\) Therefore, we have \[\lim_{x\rightarrow+\infty}\frac{x}{\phi(x)}=+\infty.\] In particular, there exists \(x_{2}\geq x_{1}\) such that the inequality \[\frac{2x}{\phi^{2}(x)}\geq\frac{2}{x}\] holds for all \(x\geq x_{2},\) which gives that \(F(x,\phi(x),\phi^{\prime}(x))<-2/x\) for all \(x\geq x_{2}.\) Therefore, applying (31) for \(x_{2},\) \[\phi^{\prime}(x)=\lambda_{2}\exp\left(\int_{x_{2}}^{x}F(t,\phi(t),\phi^{\prime}(t))\,dt\right),\ \ \lambda_{2}=\phi^{\prime}(x_{2})\] and we may proceed just as in the proof of Lemma 3.7 to conclude that \[\phi(x)-\phi(x_{2})\leq\lambda_{2}x_{2}^{2}\left(\frac{1}{x_{2}}-\frac{1}{x}\right)<\lambda_{2}x_{2}\ \ \ \forall x\geq x_{2},\] which implies that \(\phi\) is bounded above. Next, we show that \(\phi\) is bounded below by a positive constant. With this purpose, assume by contradiction that \[\lim_{x\rightarrow{x_{\min}}}\phi(x)=0. \tag{33}\] Reasoning as in the proof of item (iii) of Lemma 3.7, we obtain from this assumption that \[\lim_{x\to x_{\min}}\phi^{\prime\prime}(x)=+\infty, \tag{34}\] which, in turn, implies that \[\lim_{x\to x_{\min}}\frac{\phi^{\prime}(x)}{\phi(x)}=+\infty. \tag{35}\] Equality (34) gives that \(\phi^{\prime\prime}\) is necessarily decreasing in a neighborhood of any sufficiently small \(x\in(x_{\min},0).\) However, by computing \(\phi^{\prime\prime\prime}\) from (27), we get \[\phi^{\prime\prime\prime}=-\phi^{\prime\prime}(1+(\phi^{\prime})^{2})\frac{2x}{\phi^{2}}-4(\phi^{\prime})^{2}\phi^{\prime\prime}\frac{x}{\phi^{2}}-\phi^{\prime}(1+(\phi^{\prime})^{2})\left(\frac{2}{\phi^{2}}-\frac{4x\phi^{\prime}}{\phi^{3}}\right). \tag{36}\] Then, considering the equality \(-\phi^{\prime}(1+(\phi^{\prime})^{2})=\phi^{2}\phi^{\prime\prime}/(2x),\) which we get from (27), and applying it to the last summand of (36), we obtain \[\phi^{\prime\prime\prime}=\phi^{\prime\prime}\left[-\frac{2x}{\phi^{2}}(1+3(\phi^{\prime})^{2})+\frac{1}{x}-\frac{2\phi^{\prime}}{\phi}\right]=\phi^{\prime\prime}\left[-\frac{2x}{\phi^{2}}-2\frac{\phi^{\prime}}{\phi}\left(1+3x\frac{\phi^{\prime}}{\phi}\right)+\frac{1}{x}\right].\] This last equality, together with equations (33)-(35), clearly yields \[\lim_{x\to x_{\min}}\phi^{\prime\prime\prime}(x)=+\infty,\] which contradicts (34). Therefore, we have \[\lim_{x\to x_{\min}}\phi(x)>0,\] from which we conclude that \(\phi\) is bounded from below by a positive constant. In particular, we must have \(x_{\min}=-\infty.\) This finishes the proof. Lemmas 3.14 and 3.15 immediately give the following result (see Fig. 4). **Theorem 3.16**.: _There exists a one-parameter family \(\mathscr{F}:=\{\Sigma_{\lambda}\,;\,\lambda\in[0,+\infty)\}\) of noncongruent, complete translators in \(\mathbb{H}^{3}\) (to be called hyperbolic grim reapers) which are horizontal parabolic cylinders generated by the solutions of (26).
As a consequence, \(\Sigma_{0}\) is the horosphere \(\mathscr{H}\subset\mathbb{H}^{3}\) at height one, and for \(\lambda>0,\) each \(\Sigma_{\lambda}\in\mathscr{F}\) is an entire graph over \(\mathbb{R}^{2}\) which is contained in a slab determined by two horospheres \(\mathscr{H}_{-}\) and \(\mathscr{H}_{+}.\) Furthermore, there exist open sets \(\Sigma_{\lambda}^{-}\) and \(\Sigma_{\lambda}^{+}\) of \(\Sigma_{\lambda}\) such that \(\Sigma_{\lambda}^{-}\) is asymptotic to \(\mathscr{H}_{-}\), \(\Sigma_{\lambda}^{+}\) is asymptotic to \(\mathscr{H}_{+},\) and \(\Sigma_{\lambda}=\operatorname{closure}\left(\Sigma_{\lambda}^{-}\right)\cup\operatorname{closure}\left(\Sigma_{\lambda}^{+}\right)\)._ **Remark 3.17**.: The symmetry in (25) allows us to extend the family \(\mathscr{F}\) in Theorem 3.16 for values \(\lambda<0\) by simply defining \(\widetilde{\phi}(x)=\phi(-x)\) for a given solution \(\phi\) to (26) with positive initial data for \(\phi^{\prime}\). However, the respective grim reaper generated by \(\widetilde{\phi}\) corresponds to a rotation of \(\pi\) around the \(x_{3}\)-axis, being therefore congruent to an element of \(\mathscr{F}\). Analogously to the rotational case, the uniqueness of solutions of ODEs with given initial conditions yields the following result. **Theorem 3.18**.: _Any connected translator of \(\mathbb{H}^{3}\) which is a parabolic cylinder is, up to an ambient isometry (see Remark 3.17), an open subset of some member of the family \(\mathscr{F}\) presented in Theorem 3.16._ If \(\Gamma\subset\mathbb{R}^{2}\) is the graph of the function \(t\in(-\pi/2,\pi/2)\mapsto-\log(\cos t),\) then the cylinder \(\Sigma=\Gamma\times\mathbb{R}\subset\mathbb{R}^{3}\) is a translator to MCF contained in a slab \(\mathcal{S}\) of \(\mathbb{R}^{3},\) known as the _grim reaper cylinder_. This nomenclature is due to the fact that the curve \(\Gamma\) provides a solution to the curve shortening flow, called _the grim reaper_, which is given by the translation of \(\Gamma\) in \(\mathbb{R}^{2}\) in the \(\vec{e}_{2}\)-direction. By the avoidance principle, such a solution "kills" any other solution in the region \((-\pi/2,\pi/2)\times\mathbb{R}\) (see [2, Chapter 2]). Similarly, two surfaces (one of them compact) in \(\mathbb{R}^{3}\) moving under MCF which are initially disjoint remain so until one of them collapses. Hence, as \(\Sigma\) translates under MCF, it "kills" all solutions to (2) in \(\mathcal{S}\) with compact initial condition. An analogous process occurs in our case: any surface of the family \(\mathscr{F}\) in Theorem 3.16 has this "killing" property. Indeed, by [12, Theorem 4], the avoidance principle applies to surfaces moving under MCF in \(\mathbb{H}^{3}.\) For this reason, we named the elements of \(\mathscr{F}\) hyperbolic grim reapers. **Remark 3.19**.: At the completion of this manuscript, we became acquainted with the preprint [15], in which the authors consider solitons to MCF generated by conformal fields in \(\mathbb{H}^{n},\) called _conformal solitons_. There, they obtained rotational and cylindrical conformal solitons whose initial conditions are named winglike catenoids and grim reaper cylinder, respectively. However, such solitons are not related to the ones considered here, since their generating fields are not Killing.
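The qualitative behaviour of the hyperbolic grim reapers can also be probed numerically. The sketch below (ours, not part of the paper; it assumes SciPy, and the value of \(\lambda\) and the tolerances are arbitrary choices) integrates the initial value problem (26) in both directions and estimates the heights of the bounding horospheres \(\mathscr{H}_{-}\) and \(\mathscr{H}_{+}\) of Theorem 3.16.

```python
from scipy.integrate import solve_ivp

def grim_reaper_ode(x, y):
    phi, dphi = y  # equation (25): phi'' = -phi' (1 + (phi')^2) 2x / phi^2
    return [dphi, -dphi * (1.0 + dphi**2) * 2.0 * x / phi**2]

lam = 1.0  # initial slope phi'(0); lam = 0 gives the horosphere H itself
fwd = solve_ivp(grim_reaper_ode, (0.0, 50.0), [1.0, lam], rtol=1e-10, atol=1e-12)
bwd = solve_ivp(grim_reaper_ode, (0.0, -50.0), [1.0, lam], rtol=1e-10, atol=1e-12)

# By Lemma 3.15, phi is increasing and bounded between positive constants,
# so the endpoint values approximate the heights of H_+ and H_-.
print("height of H_+ is approximately", fwd.y[0, -1])
print("height of H_- is approximately", bwd.y[0, -1])
```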
We close this section with the following **Conjecture 3.20**.: _A translator of \(\mathbb{H}^{3}\) which is an entire graph over \(\mathbb{R}^{2}\) is, up to an ambient isometry, one of the members of the family \(\mathscr{F}\) presented in Theorem 3.16._ ## 4. Rotators to MCF in \(\mathbb{H}^{3}\) Let us consider now the one-parameter group \(\mathcal{G}\subset\operatorname{Iso}(\mathbb{H}^{3})\) of rotations \(\Gamma_{t}\) of \(\mathbb{H}^{3}=(\mathbb{R}^{3}_{+},ds^{2})\) about the \(x_{3}\)-axis. Considering the decomposition \(\mathbb{R}^{3}_{+}=\mathbb{R}^{2}\times(0,+\infty),\) we have that \[\Gamma_{t}=\left[\begin{array}{cc}e^{tJ}&\\ &1\end{array}\right],\quad J=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}.\] In this setting, an initial condition of a \(\mathcal{G}\)-soliton will be called a _rotating soliton_ or simply a _rotator_. The (horizontal) Killing field associated to \(\mathcal{G}\) is \(\xi(p)=J\pi(p),\) \(p\in\mathbb{H}^{3},\) where \(\pi\) denotes the projection over \(\{(0,0,1)\}^{\perp}\subset\mathbb{R}^{3},\) i.e., \(\pi(x_{1},x_{2},x_{3})=x_{1}\partial_{x_{1}}+x_{2}\partial_{x_{2}}\). Hence, a surface \(\Sigma\) of hyperbolic space \(\mathbb{H}^{3}\) is a rotator to MCF if and only if \[H(p)=\left\langle J\pi(p),\eta(p)\right\rangle\ \forall p\in\Sigma. \tag{37}\] Since no horosphere is a rotator in \(\mathbb{H}^{3},\) the considerations of Remark 3.5 yield **Proposition 4.1**.: _There is no rotator of nonzero constant mean curvature in \(\mathbb{H}^{3}.\)_ We shall seek rotators to MCF in \(\mathbb{H}^{3}\) in the class of _helicoidal surfaces_, which are described as follows. Choose a smooth curve with trace contained in the horosphere \(\mathscr{H}:=\mathbb{R}^{2}\times\{1\}\) of height \(1:\) \[s\in\mathbb{R}\mapsto(\alpha(s),1)\in\mathscr{H},\] where \(\alpha\colon\mathbb{R}\to\mathbb{R}^{2}\) is a regular curve parameterized by arc length. Given a constant \(h>0,\) we call a parameterized surface \(\Sigma=X(\mathbb{R}^{2})\subset\mathbb{H}^{3}\) a _helicoidal surface_ generated by \(\alpha\) with _pitch_ \(h,\) if the parameterization \(X:\mathbb{R}^{2}\to\mathbb{H}^{3}\) can be written as \[X(u,v)=e^{hv}(e^{vJ}\alpha(u),1),\ \ (u,v)\in\mathbb{R}^{2}.
\tag{38}\] Considering a parameterization of \(\alpha\) by arc length, \(\alpha(s)=(u(s),v(s),0)\), \(s\in\mathbb{R},\) and writing \[T(s)=u^{\prime}(s)\partial_{x}+v^{\prime}(s)\partial_{y},\quad N(s)=-v^{\prime}(s)\partial_{x}+u^{\prime}(s)\partial_{y},\] the curvature of \(\alpha\) is given by \[k(s)=\langle\alpha^{\prime\prime}(s),N(s)\rangle_{e}=-u^{\prime\prime}(s)v^{\prime}(s)+v^{\prime\prime}(s)u^{\prime}(s),\] where \(\langle\,,\,\rangle_{e}\) stands for the Euclidean metric of \(\mathbb{R}^{2}.\) Furthermore, by the well known Frenet-Serret equations, one has \[T^{\prime}=kN,\quad N^{\prime}=-kT.\] In this setting, if we define the functions \[\uptau:=\langle\alpha,T\rangle_{e}\quad\text{and}\quad\mu:=\langle\alpha,N\rangle_{e}, \tag{39}\] we get from a direct computation that \[\bar{\eta}=\varrho(e^{vJ}N,-(\uptau+h\mu)/h),\ \varrho:=h(h^{2}+(\uptau+h\mu)^{2})^{-1/2}, \tag{40}\] is an Euclidean unit normal to the helicoidal surface \(\Sigma,\) and that its Euclidean mean curvature in this orientation is \[\overline{H}=e^{-hv}\varrho\frac{k((h^{2}+1)r^{2}+h^{2})-(h\uptau-\mu)}{2(h^{2}+(\uptau+h\mu)^{2})},\] where \(r^{2}:=\uptau^{2}+\mu^{2}.\) From this equality and (1), we have that the hyperbolic mean curvature \(H\) of \(\Sigma\) is \[H=\frac{\varrho}{h}\left(h\frac{k((h^{2}+1)r^{2}+h^{2})-(h\uptau-\mu)}{2(h^{2}+(\uptau+h\mu)^{2})}-(\uptau+h\mu)\right). \tag{41}\] These considerations yield the following existence result, which brings [7, Theorem 3.1] to \(\mathbb{H}^{3}.\) **Theorem 4.2**.: _For any smooth function \(\Psi\colon\mathbb{R}^{2}\to\mathbb{R}\) and any constant \(h>0,\) there exists a one-parameter family of complete helicoidal surfaces of pitch \(h\) in \(\mathbb{H}^{3}\), each of which has mean curvature function \(H\) satisfying_ \[H(X(u,v))=\Psi(\uptau(u),\mu(u)),\] _where \(X\) is the parameterization given in (38) and \(\uptau\) and \(\mu\) are as in (39)._ Proof.: Considering equality (41) for the given function \(H=H(\uptau,\mu)\) and solving for \(k,\) we have that \(k=k(\uptau,\mu)\) is a smooth function of \((\uptau,\mu)\in\mathbb{R}^{2}.\) Then, by [7, Lemma 3.2], there exists a one-parameter family of plane curves \(\alpha:\mathbb{R}\to\mathbb{R}^{2},\) each of them with curvature \(k.\) Therefore, for such an \(\alpha,\) and for a given \(h>0,\) the helicoidal surface of \(\mathbb{H}^{3}\) with pitch \(h\) whose generating curve is \(\alpha\) has mean curvature function \(H=H(\uptau,\mu),\) as we wished to prove. Now, we verify the conditions under which a helicoidal surface \(\varSigma=X(\mathbb{R}^{2})\) of \(\mathbb{H}^{3}\) is a rotator to MCF. By (38)-(40), \[\langle J\pi(X),\eta(X)\rangle = \langle J(e^{hv}e^{vJ}\alpha),e^{hv}\bar{\eta}(X)\rangle=\langle Je^{vJ}\alpha,\bar{\eta}(X)\rangle_{e}\] \[= \langle e^{vJ}J\alpha,\varrho e^{vJ}N\rangle_{e}=\varrho\langle J\alpha,N\rangle_{e}\] \[= -\varrho\langle\alpha,JN\rangle_{e}=\varrho\langle\alpha,T\rangle_{e}\] \[= \varrho\uptau,\] which, together with (37), implies the following result. **Lemma 4.3**.: _A helicoidal surface \(\varSigma=X(\mathbb{R}^{2})\) of pitch \(h>0\) parameterized as in (38) is a rotator to MCF in \(\mathbb{H}^{3}\) if and only if its mean curvature function \(H=H(\uptau,\mu)\) satisfies_ \[H=\frac{h\uptau}{\sqrt{h^{2}+(h\mu+\uptau)^{2}}}.
\tag{42}\] In what follows, we prove the main result of this section, which provides the existence of complete rotators in \(\mathbb{H}^{3}\) by means of helicoidal surfaces, and completely describe the topology of the corresponding generating curves (see Figure 5). Figure 5. A helicoidal rotator (right) in \(\mathbb{H}^{3}\) and its generating curve (left). **Theorem 4.4**.: _For any \(h>0,\) there exists a one-parameter family of complete rotators to MCF in \(\mathbb{H}^{3}\) whose elements are all helicoidal surfaces of pitch \(h.\) For each such surface, the trace of the generating curve \(\alpha\colon\mathbb{R}\to\mathbb{R}^{2}\) consists of two unbounded properly embedded arms centered at the point of \(\alpha\) which is closest to the origin \(o\in\mathbb{R}^{2},\) with each arm spiraling around \(o.\)_ Proof.: The existence part of the statement follows directly from Lemma 4.3 and Theorem 4.2. So, it remains to prove that the generating curve \(\alpha\) of any such helicoidal surface has the asserted geometric properties. Keeping the above notation, we first observe that, from equalities (41) and (42), the curvature \(k\) of \(\alpha\) satisfies: \[k=\frac{2(h^{2}+(\tau+h\mu)^{2})((h+1)\tau+h\mu)+h(h\tau-\mu)}{h((h^{2}+1)r^{2}+h^{2})}. \tag{43}\] Also, from (39) and the Frenet-Serret equations, one has \[\tau^{\prime}=1+k\mu\quad\text{and}\quad\mu^{\prime}=-k\tau, \tag{44}\] which, together with (43), yields the ODE system (see Fig. 6) \[\left\{\begin{array}{rcl}\tau^{\prime}&=&1+\frac{2(h^{2}+(\tau+h\mu)^{2})((h+1)\tau\mu+h\mu^{2})+h^{2}\tau\mu-h\mu^{2}}{h((h^{2}+1)r^{2}+h^{2})},\\ \mu^{\prime}&=&-\frac{2(h^{2}+(\tau+h\mu)^{2})((h+1)\tau^{2}+h\tau\mu)+h^{2}\tau^{2}-h\tau\mu}{h((h^{2}+1)r^{2}+h^{2})}.\end{array}\right. \tag{45}\]

Figure 6. Phase portrait of system (45) for \(h=1.\)

Now, we establish the properties of \(\alpha=\tau T+\mu N\) through the following claims. **Claim 4.5**.: _The ODE system (45) has no constant solutions, and all solutions are defined on \(\mathbb{R}.\)_ Proof of Claim 4.5.: Assume, by contradiction, that there exists a constant solution \(\psi(s)=(\tau_{0},\mu_{0}),\) \(s\in\mathbb{R}.\) Since \(\tau^{\prime}=\mu^{\prime}=0,\) we have from (44) that \(k_{0}:=k(\tau_{0},\mu_{0})\) satisfies \(k_{0}\mu_{0}=-1\) and \(k_{0}\tau_{0}=0\), which yields \(\tau_{0}=0\) and \(\mu_{0}\neq 0.\) However, from the first equation in (45), one has \[\tau^{\prime}=1+\frac{2h^{2}(1+\mu_{0}^{2})\mu_{0}^{2}-\mu_{0}^{2}}{(h^{2}+1)\mu_{0}^{2}+h^{2}}=\frac{h^{2}(\mu_{0}^{2}+1)+2h^{2}(1+\mu_{0}^{2})\mu_{0}^{2}}{(h^{2}+1)\mu_{0}^{2}+h^{2}}>0,\] which is a contradiction. Therefore, (45) has no constant solutions. From this fact, and since \(k=k(\tau,\mu)\) is defined on \(\mathbb{R}^{2},\) we conclude that any solution of (45) is defined on \(\mathbb{R}.\) **Claim 4.6**.: _Suppose that every integral curve \(\psi(s):=(\tau(s),\mu(s))\) of (45) is such that the limit \(\lim_{s\to+\infty}\tau(s)\) (resp. \(\lim_{s\to+\infty}\mu(s)\)) exists. Then, \(\lim_{s\to-\infty}\tau(s)\) (resp. \(\lim_{s\to-\infty}\mu(s)\)) also exists. Furthermore, if there exists some \(L\in[-\infty,+\infty]\) with the property that every integral curve satisfies_ \[\lim_{s\to+\infty}\tau(s)=L\;\;(\text{resp.}\;\;\lim_{s\to+\infty}\mu(s)=L),\] _then it also holds that every integral curve satisfies_ \[\lim_{s\to-\infty}\tau(s)=-L\;\;(\text{resp.}\;\;\lim_{s\to-\infty}\mu(s)=-L).\] Proof of Claim 4.6.: Let \(\psi(s):=(\tau(s),\mu(s))\) be an integral curve of the system (45).
Then, it is easily checked that \(\widetilde{\psi}(s):=-\psi(-s)\) is also an integral curve of that system. Setting \(\widetilde{\psi}=(\tilde{\tau},\tilde{\mu}),\) we have that \(\tilde{\tau}(s)=-\tau(-s)\) and \(\tilde{\mu}(s)=-\mu(-s).\) By hypothesis, \(\lim_{s\to+\infty}\tilde{\tau}(s)\) (resp. \(\lim_{s\to+\infty}\tilde{\mu}(s)\)) exists, and the first part of the claim follows from observing that \(\lim_{s\to-\infty}\tau(s)=-\lim_{s\to+\infty}\tilde{\tau}(s)\) (resp. \(\lim_{s\to-\infty}\mu(s)=-\lim_{s\to+\infty}\tilde{\mu}(s)\)). The remainder of the proof is argued analogously and will be omitted. **Claim 4.7**.: _The function \(\tau\) has precisely one zero \(s_{0}\) and \(\tau\) is negative in \((-\infty,s_{0})\) and positive in \((s_{0},+\infty).\) As a consequence, the function \(r^{2}=\tau^{2}+\mu^{2}\) has a global minimum and satisfies \(\lim_{s\to\pm\infty}r^{2}=+\infty.\)_ Proof of Claim 4.7.: First, observe that the equalities (44) yield \[(r^{2})^{\prime}=2(\tau\tau^{\prime}+\mu\mu^{\prime})=2(\tau(1+k\mu)+\mu(-k\tau))=2\tau,\] which implies that the zeroes of \(\tau\) are the critical points of \(r^{2}\). Also, as seen in the first part of the proof of Claim 4.5, if \(\tau(s_{0})=0\) for some \(s_{0},\) then \(\tau^{\prime}(s_{0})>0,\) which gives that \(\tau\) has at most one zero \(s_{0},\) in which case \(\tau\) is negative in \((-\infty,s_{0}),\) and positive in \((s_{0},+\infty).\) Next, we argue by contradiction and assume that \(\tau\) has no zeroes. We will also assume that \(\tau>0\) on \(\mathbb{R},\) since the complementary case \(\tau<0\) can be treated analogously. Under this assumption, the function \(r^{2}\) is strictly increasing. So, there exists \(\delta\geq 0\) such that \[\lim_{s\to-\infty}r^{2}(s)=\delta.\] In particular, since \(\tau=\frac{(r^{2})^{\prime}}{2},\) we also have that \[\lim_{s\to-\infty}\tau(s)=0, \tag{46}\] which implies that \(\mu^{2}\to\delta\) as \(s\to-\infty.\) However, the first equation in (45) yields \(\lim_{s\to-\infty}\tau^{\prime}(s)>0,\) which contradicts (46), proving that \(\tau\) has exactly one zero and that \(r^{2}\) has only one critical point. Consequently, both the limits of \(r^{2}\) as \(s\to\pm\infty\) exist in \([0,+\infty].\) To finish the proof of the claim, just note that if either \(\lim_{s\to-\infty}r^{2}=\delta\) or \(\lim_{s\to+\infty}r^{2}=\delta\) for some \(\delta>0\), the same arguments as before lead to a contradiction, thus \(\lim_{s\to\pm\infty}r^{2}(s)=+\infty\). **Claim 4.8**.: _The limits of \(\uptau\) and \(\mu\) as \(s\to\pm\infty\) exist (possibly being infinite)._ Proof of Claim 4.8.: First, we show that \(k\) has at most one zero in \(\mathbb{R}\). Assume that \(k(s_{0})=0\) for some \(s_{0}\in\mathbb{R}\). We have from (43) that, at \(s_{0}\), \[2(h^{2}+(\uptau+h\mu)^{2})((h+1)\uptau+h\mu)+h(h\uptau-\mu)=0. \tag{47}\] Also, by (44), \(\uptau^{\prime}(s_{0})=1\) and \(\mu^{\prime}(s_{0})=0\). This, together with (47), gives that, at \(s_{0}\), \[k^{\prime}=\frac{2(h+1)(h^{2}+(\uptau+h\mu)^{2})+4((h+1)\uptau+h\mu)(\uptau+h\mu)+h^{2}}{h((h^{2}+1)r^{2}+h^{2})}. \tag{48}\] If \(\uptau(s_{0})\mu(s_{0})\geq 0\), we have from (48) that \(k^{\prime}(s_{0})>0\). Assume then \(\uptau(s_{0})\mu(s_{0})<0\) and notice that, by (47), one has \[\operatorname{sign}((h+1)\uptau(s_{0})+h\mu(s_{0}))=\operatorname{sign}(\mu(s_{0})-h\uptau(s_{0})). \tag{49}\] If \(\uptau(s_{0})<0<\mu(s_{0})\), then both signs in (49) are positive. In addition, \[\uptau(s_{0})+h\mu(s_{0})=(h+1)\uptau(s_{0})+h\mu(s_{0})-h\uptau(s_{0})>0,\] and then (48) yields \(k^{\prime}(s_{0})>0\).
Analogously, \(\mu(s_{0})<0<\uptau(s_{0})\) implies \(k^{\prime}(s_{0})>0\). It follows from the above that \(k\) has at most one zero \(s_{0}\in\mathbb{R}\) and, if so, \(k\) is negative in \((-\infty,s_{0})\) and positive in \((s_{0},+\infty)\). Since, by Claim 4.7, \(\uptau\) has exactly one zero, we have that \(\mu^{\prime}=-k\uptau\) has at most two zeros, which implies that \(\mu\) has at most two critical points. In particular, the limits \(\lim_{s\to\pm\infty}\mu(s)\) exist. To finish the proof of the claim, let us assume, by contradiction, that the limit of \(\uptau\) as \(s\to+\infty\) does not exist. In this case, for some \(\uptau_{0}>0\), there exists a strictly increasing sequence \((s_{n})_{n\in\mathbb{N}}\) diverging to \(+\infty\) and such that (see Fig. 7) \[\uptau(s_{n})=\uptau_{0}\quad\text{and}\quad\uptau^{\prime}(s_{n})\uptau^{\prime}(s_{n+1})<0\quad\forall n\in\mathbb{N}.\] Claim 4.7 implies that \(\lim r^{2}(s_{n})=+\infty\), so we must have \(\lim\mu^{2}(s_{n})=+\infty\). In this case, our previous arguments show that either \(\lim\mu(s_{n})=+\infty\) or \(\lim\mu(s_{n})=-\infty\). In any case, we have from (43) that \[\lim_{n\to+\infty}(k(s_{n})\mu(s_{n}))=\lim_{n\to+\infty}\frac{2h^{3}\mu(s_{n})^{4}}{h(h^{2}+1)\mu(s_{n})^{2}}=+\infty.\] In particular, for any sufficiently large \(n\in\mathbb{N}\), \(\tau^{\prime}(s_{n})=1+k(s_{n})\mu(s_{n})>0\). This, however, contradicts the fact that the signs of \((\tau^{\prime}(s_{n}))_{n\in\mathbb{N}}\) alternate. Therefore, \(\lim_{s\to+\infty}\tau(s)\) exists. Since \((\tau,\mu)\) is an arbitrary integral curve of (45), Claim 4.6 implies that \(\lim_{s\to-\infty}\tau(s)\) also exists, thereby finishing the proof of the claim. **Claim 4.9**.: \(\lim_{s\to\pm\infty}\tau(s)=\pm\infty\) _and \(\lim_{s\to\pm\infty}\mu(s)=\mp\infty.\)_ Proof of Claim 4.9.: By Claim 4.8, all the limits above exist and, arguing by contradiction, we first treat the case when \(\lim_{s\to+\infty}\mu(s)=L\in\mathbb{R}\). Under this assumption, we have from Claims 4.7 and 4.8 that \(\lim_{s\to+\infty}\tau(s)=+\infty\). Then, it follows from the second equation in (45) that \(\lim_{s\to+\infty}\mu^{\prime}(s)=-\infty\), which contradicts the assumed fact \(L\in\mathbb{R}\). Suppose now that \(\lim_{s\to+\infty}\mu(s)=+\infty\). From this assumption and Claim 4.7, we have that \(h^{2}+(\tau(s)+h\mu(s))^{2}>1/2\) for all sufficiently large \(s>0.\) Applying this last inequality to (43) yields \[k(s)>\frac{(h+1)\tau(s)+h\mu(s)+h(h\tau(s)-\mu(s))}{h((h^{2}+1)r^{2}+h^{2})}=\frac{(h^{2}+h+1)\tau(s)}{h((h^{2}+1)r^{2}+h^{2})}>0.\] However, for such values of \(s\), \(\mu^{\prime}(s)=-k(s)\tau(s)<0\), which is a contradiction. Therefore, \(\lim_{s\to+\infty}\mu(s)=-\infty\). Since \((\tau,\mu)\) is an arbitrary integral curve, Claim 4.6 applies to show that \(\lim_{s\to-\infty}\mu(s)=+\infty\). To finish the proof, assume, by contradiction, that \(\lim_{s\to+\infty}\tau(s)\) is finite. Then, since \(\lim_{s\to+\infty}\mu(s)=-\infty\), we have from (43) that \(\lim_{s\to+\infty}k(s)=-\infty.\) From this, we have \(\lim_{s\to+\infty}\tau^{\prime}(s)=\lim_{s\to+\infty}(1+k(s)\mu(s))=+\infty\), which is a contradiction. Therefore, Claim 4.7 gives that \(\lim_{s\to+\infty}\tau(s)=+\infty\). Once again, \(\lim_{s\to-\infty}\tau(s)=-\infty\) follows from Claim 4.6.
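The divergence pattern established in Claims 4.7-4.9 can be observed by integrating the system (45) numerically, in the spirit of the phase portrait of Figure 6. The following is a minimal sketch (ours, not part of the paper; it assumes SciPy, and the pitch, initial data, and integration time are arbitrary choices) that implements (43) and (44) directly.

```python
from scipy.integrate import solve_ivp

def curvature(tau, mu, h):
    # equation (43)
    r2 = tau**2 + mu**2
    num = 2*(h**2 + (tau + h*mu)**2)*((h + 1)*tau + h*mu) + h*(h*tau - mu)
    return num / (h*((h**2 + 1)*r2 + h**2))

def system(s, y, h):
    # equations (44): tau' = 1 + k mu,  mu' = -k tau
    tau, mu = y
    k = curvature(tau, mu, h)
    return [1.0 + k*mu, -k*tau]

h = 1.0
sol = solve_ivp(system, (0.0, 30.0), [0.0, 1.0], args=(h,), max_step=0.01)
tau, mu = sol.y
# Consistently with Claims 4.7 and 4.9, r^2 = tau^2 + mu^2 eventually
# increases along the forward orbit, with tau large positive and mu
# large negative at the end of the integration window.
print("tau(end) =", tau[-1], " mu(end) =", mu[-1])
```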
**Claim 4.10**.: _The function \(\nu:=-\tau/\mu\) is bounded outside of a compact interval._ Proof of Claim 4.10.: It follows from Claim 4.9 that \(\nu\) is well defined and positive at all points outside of a compact interval of \(\mathbb{R}.\) Assume by contradiction that there exists a sequence \((s_{n})_{n\in\mathbb{N}}\) in \(\mathbb{R}\) which diverges to infinity, and such that \(\lim\nu(s_{n})=+\infty\), i.e., \(\lim(-\mu(s_{n})/\tau(s_{n}))=0\). From (43), one has \[k\tau=\frac{2(h^{2}+(\tau+h\mu)^{2})(h+1+h\mu/\tau)+h^{2}-h\mu/\tau}{h((h^{2}+1)(1+\mu^{2}/\tau^{2})+h^{2}/\tau^{2})}\,.\] Hence, after passing to a subsequence, we can assume that \(k(s_{n})\tau(s_{n})\) is positive and bounded away from zero for all \(n\in\mathbb{N}.\) However, \[+\infty=\lim\nu(s_{n})=\lim\frac{\tau(s_{n})}{-\mu(s_{n})}=\lim\frac{\tau^{\prime}(s_{n})}{-\mu^{\prime}(s_{n})}=\lim\left(\frac{1}{k(s_{n})\tau(s_{n})}+\frac{\mu(s_{n})}{\tau(s_{n})}\right)<+\infty,\] which is a contradiction. Analogously, we derive a contradiction by assuming that there exists \(s_{n}\to-\infty\) satisfying \(\lim\nu(s_{n})=+\infty.\) This proves Claim 4.10. In what follows, we shall denote by \(\omega=\omega(s)\) the angle function of \(\alpha\), that is, \[\alpha=r(\cos\omega,\sin\omega).\] It then follows from (44) that the equality \[T=\frac{\tau}{r^{2}}\alpha+\omega^{\prime}J\alpha \tag{50}\] holds at all points where \(r\neq 0.\) **Claim 4.11**.: \(\omega(s)\to+\infty\) _as \(s\to\pm\infty.\)_ Proof of Claim 4.11.: Considering (50) and the equality \((r^{2})^{\prime}=2\uptau,\) we have that \[r^{\prime}=\frac{\uptau}{r}\quad\text{and}\quad\omega^{\prime}=-\frac{\mu}{r^{2}}\cdot\] So, given a differentiable function \(\varphi=\varphi(r),\) \(r\in(0,+\infty),\) one has \[\frac{d\varphi}{d\omega}=\frac{d\varphi}{dr}\frac{dr}{ds}\frac{ds}{d\omega}=-r\varphi^{\prime}(r)\frac{\uptau}{\mu}. \tag{51}\] Now, define \(\varphi(r)=\log(\log r).\) Then, \(\varphi(r)\to+\infty\) as \(r\to+\infty\) and \[r\varphi^{\prime}(r)=\frac{1}{\log r}\to 0\,\,\,\text{as}\,\,\,r\to+\infty. \tag{52}\] Since, by Claim 4.10, \(-\uptau/\mu\) is bounded outside of a compact interval, it follows from Claim 4.7 and (51)-(52) that \(d\varphi/d\omega\to 0\) as \(s\to\pm\infty.\) Consequently, \(d\omega/d\varphi\to+\infty\) as \(s\to\pm\infty,\) which proves Claim 4.11. It follows from the above that the trace of \(\alpha\) has one point \(p_{0}\) closest to the origin \(o\) (Claim 4.7), and consists of two properly embedded arms centered at \(p_{0}\) (Claim 4.9) which proceed to infinity by spiraling around \(o\) (Claim 4.11). This finishes our proof. Let \(\varSigma=X(\mathbb{R}^{2})\) be a helicoidal surface of pitch \(h\) in \(\mathbb{H}^{3}\) as given in (38). Consider the subgroup \(\mathcal{G}=\{\Gamma_{t}\,;\,t\in\mathbb{R}\}\subset\text{Iso}(\mathbb{H}^{3})\) of downward translations of constant speed \(h,\) i.e., \(\Gamma_{t}(p)=e^{-ht}p,\) and notice that the Killing field on \(\mathbb{H}^{3}\) determined by \(\mathcal{G}\) is \(\xi(p)=-hp.\) Now, recall that the unit normal to \(\varSigma\) is \(\eta=e^{hv}\bar{\eta},\) with \(\bar{\eta}\) as in (40). From this, we have: \[\langle\xi(X),\eta\rangle=-h\varrho\left(\mu-\frac{\uptau+h\mu}{h}\right)=\varrho\uptau,\] so that \(\varSigma\) is a \(\mathcal{G}\)-soliton if and only if its mean curvature function is given by \[H=\varrho\uptau=\frac{h\uptau}{\sqrt{h^{2}+(h\mu+\uptau)^{2}}}.\] From this last equality and Lemma 4.3, we have the following result.
**Proposition 4.12**.: _Let \(\varSigma=X(\mathbb{R}^{2})\) be a helicoidal surface of pitch \(h\) in \(\mathbb{H}^{3}.\) Then, the following assertions are equivalent:_ * \(\varSigma\) _is a rotator to MCF._ * \(\varSigma\) _is a translator to MCF with respect to the Killing field_ \(\xi(p)=-hp.\) ## 5. The Classification of Minimal Translators In this section, we prove Theorem 3.2, which is a classification of complete, properly immersed minimal surfaces of \(\mathbb{H}^{3}\) invariant under the \(1\)-parameter group \(\{\Gamma_{t}\}_{t\in\mathbb{R}}\) of hyperbolic isometries of \(\mathbb{H}^{3}\) defined (in the half-space model) by \[(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}_{+}\mapsto\Gamma_{t}(x_{1},x_{2},x_{3})=(e^{t}x_{1},e^{t}x_{2},e^{t}x_{3}).\] We let \(\overline{\mathbb{H}^{3}}\) be the topological space given by the compactification of \(\mathbb{H}^{3}\) with respect to the so-called _cone topology_ (as defined in [5]) and let \(S^{2}(\infty)\) denote the asymptotic boundary of \(\mathbb{H}^{3}\). In the upper half-space model of \(\mathbb{H}^{3}\), \(S^{2}(\infty)\) is identified with the one point compactification of \(\mathbb{R}^{2}=\{x_{3}=0\}\): \[S^{2}(\infty)=\mathbb{R}^{2}\cup\{\infty\}.\] Along the proof, given a surface \(\varSigma\subset\mathbb{H}^{3}\), we will write \(\partial_{\infty}\varSigma\) for the asymptotic boundary of \(\varSigma\), that is, \(\partial_{\infty}\varSigma:=\overline{\varSigma}\cap S^{2}(\infty)\), where \(\overline{\varSigma}\) is the closure of \(\varSigma\) in \(\overline{\mathbb{H}^{3}}\). Proof of Theorem 3.2.: Consider a curve \(\alpha\) in the horosphere \(\mathscr{H}:=\{x_{3}=1\}\) and assume that \(\varSigma=\{e^{t}\alpha\mid t\in\mathbb{R}\}\) is a complete, properly immersed minimal surface invariant under the action of \(\{\Gamma_{t}\}_{t\in\mathbb{R}}\) and generated by \(\alpha\). For the remainder of the proof, we will assume that \(\alpha\) is parameterized by arc length over a maximal interval \(I\). **Claim 5.1**.: _Let \(\theta\in[0,\pi)\). If \(\alpha\) intersects the line \(L_{\theta}=\{(r\cos(\theta),r\sin(\theta),1)\mid r\in\mathbb{R}\}\) in two (or more) points, then \(\alpha=L_{\theta}\). In particular, \(\alpha\) is properly embedded, \(I=\mathbb{R}\) and, if \((0,0,1)\in\alpha\), \(\varSigma\) is a vertical plane._ Proof.: After a rotation, it suffices to prove the claim for \(\theta=0\). Assume that there are two distinct points \(p_{1}=(r_{1},0,1),\,p_{2}=(r_{2},0,1)\in\alpha\) and, arguing by contradiction, assume that \(\alpha\neq L_{0}\). Consider the compact arc \(a\) that \(\{p_{1},p_{2}\}\) bounds in \(\alpha\). Since \(\alpha\neq L_{0}\), there exists a point \(\widehat{p}\) in the interior of \(a\) where the second coordinate function \(x_{2}\) has a local maximum or a local minimum, and we may rotate \(\alpha\) once again to assume it is a local maximum, attained at \((\widehat{x}_{1},\widehat{x}_{2},1)\) with \(\widehat{x}_{2}>0\). Let \(P\) be the tilted plane of \(\mathbb{H}^{3}\) that contains the line \(\{(r,\widehat{x}_{2},1)\mid r\in\mathbb{R}\}\) and whose asymptotic boundary contains \((0,0,0)\). Then, \(P\) is an equidistant surface to the totally geodesic plane \(\{x_{2}=0\}\) and its mean curvature vector points upwards. Since \(\varSigma\) locally stays in the mean convex side of \(P\) and intersects \(P\) tangentially along the line \(\{e^{t}\widehat{p}\mid t\in\mathbb{R}\}\), we obtain a contradiction with the mean curvature comparison principle.
Assuming that \(\varSigma\) is not a vertical plane, we may parameterize \(\alpha\) as \[\alpha(s)=(r(s)\cos(\theta(s)),r(s)\sin(\theta(s)),1), \tag{53}\] where \(s\) is the arc length of \(\alpha\) and \(r(s)>0\) for all \(s\in\mathbb{R}\). Claim 5.1 implies that, after a rotation in \(\mathbb{H}^{3}\) (and possibly reparameterizing with the opposite orientation), the function \(\theta\) must satisfy \(\theta^{\prime}(s)\geq 0\) and \[\lim_{s\rightarrow-\infty}\theta(s)=0,\quad\lim_{s\rightarrow+\infty}\theta(s)=\theta_{+}>0. \tag{54}\] In fact, \(\theta_{+}\in(0,\pi]\). Indeed, arguing by contradiction, assume that \(\theta_{+}>\pi\) and choose \(\theta^{*}\in(\pi,\theta_{+})\). Then, Claim 5.1 implies that \(\alpha\) intersects \(L=L_{\theta^{*}-\pi}\) at most in one point, so the fact that \((0,0,1)\not\in\alpha\) implies that either \(\theta(s)\in(0,\theta^{*})\) for all \(s\in I\) or \(\theta(s)\in(\theta^{*}-\pi,\theta_{+})\) for all \(s\in I\), both situations in contradiction with (54). **Claim 5.2**.: \(\partial_{\infty}\varSigma\cap\mathbb{R}^{2}\) _is a \(\theta_{+}\)-hinge (that is, the union of two half-lines issuing from a common vertex and making an angle of \(\theta_{+}\)) with vertex at the origin \(\mathbf{0}:=(0,0,0)\)._ Proof.: Using the notation of (53), we may parameterize \(\varSigma\) as \[\varSigma=\{(e^{t}r(s)\cos(\theta(s)),e^{t}r(s)\sin(\theta(s)),e^{t})\mid t,s\in\mathbb{R}\}.\] Our next step is to show that \(\ell_{0}\cup\ell_{\theta_{+}}\cup\{\mathbf{0}\}\subset\partial_{\infty}\varSigma\), where, for \(\theta\in[0,2\pi)\), \(\ell_{\theta}=\{(r\cos(\theta),r\sin(\theta),0)\mid r>0\}\). Note that \(\lim_{s\to\pm\infty}r(s)=+\infty\), as \(\alpha\) is properly embedded and noncompact. Let \((s_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathbb{R}\) such that \(s_{n}\to+\infty\), so that \(\lim_{n\to\infty}r(s_{n})=+\infty\). For a given \(r>0\), let \(t_{n}=\log(r/r(s_{n}))\) and let \[p_{n}=e^{t_{n}}\alpha(s_{n})=\left(r\cos(\theta(s_{n})),r\sin(\theta(s_{n})),\frac{r}{r(s_{n})}\right)\in\Sigma.\] Since \(r>0\) and \(r(s_{n})\to+\infty\), \(\lim_{n\to\infty}\frac{r}{r(s_{n})}=0\). Furthermore, (54) implies that \(\lim_{n\to\infty}\theta(s_{n})=\theta_{+}\), and it follows that \[\lim_{n\to\infty}p_{n}=(r\cos(\theta_{+}),r\sin(\theta_{+}),0)\in\partial_{\infty}\Sigma.\] Since \(r\) is arbitrary, this gives \(\ell_{\theta_{+}}\subset\partial_{\infty}\Sigma\). Analogously, we may prove that \(\{\mathbf{0}\}\cup\ell_{0}\subset\partial_{\infty}\Sigma\). Next, we prove that \(\ell_{0}\cup\ell_{\theta_{+}}\cup\{\mathbf{0}\}\supset(\partial_{\infty}\Sigma\cap\mathbb{R}^{2}).\) Choose \(\overline{p}\in\partial_{\infty}\Sigma\cap\mathbb{R}^{2}\) and, assuming that \(\overline{p}\neq\mathbf{0}\), write \(\overline{p}=(r\cos(\theta),r\sin(\theta),0)\) for some \(r>0\) and \(\theta\in[0,2\pi)\). Let \((p_{n})_{n\in\mathbb{N}}\) be a sequence in \(\Sigma\) such that \(p_{n}\to\overline{p}\), so there exist uniquely defined \(s_{n},t_{n}\in\mathbb{R}\) such that \[p_{n}=e^{t_{n}}\alpha(s_{n})=\left(e^{t_{n}}r(s_{n})\cos(\theta(s_{n})),e^{t_{n}}r(s_{n})\sin(\theta(s_{n})),e^{t_{n}}\right).\] The fact that \(p_{n}\to\overline{p}\) implies that \(e^{t_{n}}\to 0\). Moreover, \(\lim_{n\to\infty}e^{t_{n}}r(s_{n})=r\), thus \(r(s_{n})\to+\infty\), and it follows that either \(s_{n}\to+\infty\) (in which case \(\theta(s_{n})\to\theta_{+}\)) or \(s_{n}\to-\infty\) (and \(\theta(s_{n})\to 0\)). In both situations we obtain \(\overline{p}\in\ell_{0}\cup\ell_{\theta_{+}}\), which proves the claim.
At this point, we have shown that a properly immersed minimal surface \(\Sigma\subset\mathbb{H}^{3}\) which is invariant under the group \(\{\Gamma_{t}\}_{t\in\mathbb{R}}\) is in fact properly embedded and its asymptotic boundary \(\partial_{\infty}\Sigma\cap\mathbb{R}^{2}\) is a \(\theta_{+}\)-hinge with vertex at \(\mathbf{0}\). The existence and uniqueness of such surfaces were proven in [6] (unpublished) and also presented in [17, Proposition A.1], which finishes the proof of Theorem 3.2.
2305.01424
Uncertain Machine Ethical Decisions Using Hypothetical Retrospection
We propose the use of the hypothetical retrospection argumentation procedure, developed by Sven Ove Hansson to improve existing approaches to machine ethical reasoning by accounting for probability and uncertainty from a position of Philosophy that resonates with humans. Actions are represented with a branching set of potential outcomes, each with a state, utility, and either a numeric or poetic probability estimate. Actions are chosen based on comparisons between sets of arguments favouring actions from the perspective of their branches, even those branches that led to an undesirable outcome. This use of arguments allows a variety of philosophical theories for ethical reasoning to be used, potentially in flexible combination with each other. We implement the procedure, applying consequentialist and deontological ethical theories, independently and concurrently, to an autonomous library system use case. We introduce a preliminary framework that seems to meet the varied requirements of a machine ethics system: versatility under multiple theories and a resonance with humans that enables transparency and explainability.
Simon Kolker, Louise Dennis, Ramon Fraga Pereira, Mengwei Xu
2023-05-02T13:54:04Z
http://arxiv.org/abs/2305.01424v2
# Uncertain Machine Ethical Decisions Using Hypothetical Retrospection ###### Abstract We propose the use of the hypothetical retrospection argumentation procedure, developed by Sven Ove Hansson to improve existing approaches to machine ethical reasoning by accounting for probability and uncertainty from a position of Philosophy that resonates with humans. Actions are represented with a branching set of potential outcomes, each with a state, utility, and either a numeric or poetic probability estimate. Actions are chosen based on comparisons between sets of arguments favouring actions from the perspective of their branches, even those branches that led to an undesirable outcome. This use of arguments allows a variety of philosophical theories for ethical reasoning to be used, potentially in flexible combination with each other. We implement the procedure, applying consequentialist and deontological ethical theories, independently and concurrently, to an autonomous library system use case. We introduce a preliminary framework that seems to meet the varied requirements of a machine ethics system: versatility under multiple theories and a resonance with humans that enables transparency and explainability. Keywords: Machine Ethics, Uncertainty, Argumentation, Moral theory ## 1 Introduction Autonomous machines are an increasingly prevalent feature of the modern world. From spam filters [28] and fraud detectors [3], to drivers [32], medical practitioners [43] and soldiers [40], machines are being developed to automate tasks. Any decision affecting real people has the potential for ethical impact. Therefore, machines are increasingly recognised as ethical agents. Moor [34] categorises such agents as either _implicitly_ or _explicitly ethical_. Implicit ethical agents are built and situated by humans to have a neutral or positive effect, like an ATM machine; they do not utilise concepts of right and wrong in their internal decision making. As autonomous systems make more decisions with more responsibility, they need to reason about ethics _explicitly_. Allen et al. identify two strategies for designing explicitly ethical systems [4]: _bottom-up_ approaches train systems to make ethical decisions with learning techniques based on data from human decision making; _top-down_ approaches encode principles and theories of moral behaviour (often drawn from philosophy) into rules for a selection algorithm, generally using techniques from the field of symbolic Artificial Intelligence (AI). In this paper, we propose and implement a top-down, explicitly ethical approach. When an action is taken in the real world, its exact results are typically uncertain. As such, a top-down machine ethics system needs a mechanism for handling uncertainty over outcomes. There are mechanisms for handling uncertainty in AI, including Bayesian methods, Dempster-Shafer theory, fuzzy logics and others [36]. Nevertheless, it is currently unclear how they might integrate with machine ethics; there may be unanticipated philosophical implications. Instead, we opted to operationalise and implement Sven Ove Hansson's hypothetical retrospection procedure [26]. Originating in Philosophy, the procedure was designed to guide ethical reasoning under uncertainty. It favours no specific ethical theory, but systematises the foresight argument pattern, extending an assessor's perspective to judge decisions by the circumstances in which they were made. Therefore, arguments can be grounded in a variety of ethical theories.
Over the past ten years, the field of machine ethics has implemented many such theories [41], yet there is no consensus over which is most effective. Philosophy too has not agreed which is morally correct, leaving implementers to choose from the perspective of stakeholder requirements and preferences. Thus, a mechanism for handling uncertainty that adapts to different ethical theories is desirable. We outline the procedure via an example from Hansson [26]. Suppose an agent is given the choice between an apple and flipping a coin. If the coin lands heads, they win a free holiday to Hawaii. If the coin lands tails, they get nothing. Selecting the coin is clearly a valid choice. How might this decision be justified? Under hypothetical retrospection, we list each possible outcome: choosing the apple; choosing to toss the coin and winning the Hawaii holiday; choosing to toss the coin and losing. Next, we _hypothetically retrospect_ from each outcome's endpoint. Intuitively, the objective is to find an action whose outcomes do not lead the agent to _regret_ the ethical implications of their action.1 First, consider the coin's outcomes: after winning the holiday, there cannot be regret since the Hawaii holiday is the best outcome; after losing the coin flip, the agent has nothing, which is the worst outcome, but there is no regret since the agent justifies that they had a good chance of winning Hawaii, which is far better than an apple. Now, consider choosing the apple. Here, the agent regrets that they missed a chance of a holiday worth far more than an apple. We saw that choosing the coin did not lead to such regret. Therefore, the procedure advises we pick the coin, matching our intuition. Footnote 1: We recognise there is little ethical impact in this decision, besides maximising utility. It serves as an abstract example where one decision openly defeats another. This paper operationalises the hypothetical retrospection procedure, and the foresight argument pattern it is based on. We implement and evaluate it with moral theories from Philosophy. We consider Deontology, under which certain actions are strictly forbidden [2], and a theory of consequentialism, which specifies that an action is good if its consequences maximise good for the greatest number of people [35]. We illustrate our approach with the novel scenario of an autonomous library system. We demonstrate the system's potential for explainability and versatility, while discussing issues and future work. In Section 2, we will cover related work in the area and highlight this paper's contribution. In Section 3, we will cover background on symbolic argumentation and uncertainty in Ethical Philosophy. In Section 4, we will recap Hansson's description of hypothetical retrospection; in Section 5, we overview the implementation, including notation, the representation of probability and the argumentation model. Section 6 describes our test case of the autonomous library system, its formalism, and our results. Finally, in Section 7, we will identify the system's potential benefits and the shortfalls left for future work. ## 2 Related Work This is not the first attempt at building a top-down explicitly ethical machine. Tolmeijer et al. present an exhaustive survey of implementations as of 2020, but find the effect of uncertainty is rarely addressed [41]. Dennis et al. developed a framework suggesting how an autonomous system should act in unforeseen circumstances, with no positive outcomes.
However, it does not address uncertainty over the likelihood of outcomes [20]. Probabilistic reasoning, such as Bayesian networks [39] and Markov models [19], has been applied to machine ethics, mostly with regard to maximising expected utility [17]. There are a number of criticisms of this approach which we will touch on in Section 3. Killough et al. go further, architecting agents sensitive to utility risk and reward, with an ability to dynamically adjust risk-tolerance for the environment [30]. This paper is interested in a framework that incorporates a variety of philosophical ethical theories and allows for the combination of multiple theories, such as Deontology [2], Contractualism [8] and Virtue Ethics [27]. Different philosophical theories can advise on different courses of action, not only in tricky dilemma situations but sometimes even in situations where the moral choice seems intuitively obvious. There has been some work within machine ethics on comparing and combining different theories. For instance, Sholla et al. weight different principles and then use fuzzy logic to decide between their recommendations [38]. Ecoffet and Lehman [23] use a voting procedure in which different ethical theories vote on recommendations, but they struggle with the difficulty of comparing Utilitarian theories that return a score for actions with deontological theories that tend to return a judgement that the action is either permissible or impermissible. Our framework enables a flexible approach in which the construction of an argument can treat all ethical theories equally, or allow one to have precedence over another. The HERA project [31] is of interest here: while it does not combine ethical theories, it provides a single framework in which many theories can be formalised and operationalised, allowing their recommendations to be compared. Cointe et al. [18] do something similar with an Answer-Set Programming approach, though focused, in this case, on enabling the agent to make moral judgements about others. These systems could, potentially, be integrated into our argumentation framework to supply judgements on the rightness of an action and its consequences from the perspective of a particular moral theory. Atkinson and Bench-Capon have developed a framework for ethical argumentation [9]. As in our work, assessments of actions' outcomes are modelled as arguments. However, Atkinson and Bench-Capon's work remains concerned with epistemic conflicts between arguments (i.e. disputes over the truth of arguments' circumstances) and annotates attacks and defences within the argumentation framework with values, aligning it with the philosophical theory of Virtue Ethics. Our work pivots away, focusing purely on the ethical conflicts between arguments. We can assume epistemic truth because arguments are based only on potential, purely hypothetical, versions of events, each created from a single, shared set of information. This allows us to address moral conflict directly. It also lets us build uncertainty into the argumentation mechanism, instead of delegating it to a detail of argument attacks. ## 3 Background The effect of uncertainty on machine ethics has been relatively unexplored largely due to the lack of research on how uncertainty impacts ethics in general. As Altham explains, there seems to be a gap in moral theory for uncertain situations [5].
He postulates this could be due to a belief among philosophers that no special principles are required; Moral Philosophy decides the virtues and it is up to Decision Theory to decide how they should be maximised under uncertainty. Hansson shows that Utilitarian theories are straightforward in this regard [26]. These theories judge decisions based on numeric utilities assigned to their consequences. Expected utility Utilitarianism uses probabilities as weights to discount the utility of improbable outcomes. Hansson critiques this adaptation for the same reason as actual Utilitarianism: its assumption that outcomes can be appraised in terms of a single number (or at least appraised both easily and accurately) often produces unintuitive outcomes. In the Coin-Apple scenario from Section 1, although it is evident that a trip to Hawaii holds more value than an apple, the extent of the difference in value remains uncertain. Adding more apples, such as 100, 1000, or 1001, does not necessarily make the deal any more appealing. In other words, apples and holidays are not proportionally comparable. There is no method of assigning relative utilities to all possible states. Brundage briefly surveys other critiques against consequentialist theories. First, they fail to account for personal social commitments, i.e. to friends and family. Second, they do not consider individual differences and rights, tending to favour the majority over any minority. Lastly, they place excessive demands on individuals to contribute to others [14]. Traditional Deontological systems [2] are made of principles which should never be violated. Hansson shows that any form of probabilistic absolutism, where an action is not permitted if there is any chance of a rule violation, would be too restrictive. Therefore, an approach involving probability thresholds is often suggested. Here, an action is only forbidden when the probability that it violates a law exceeds some limit. The exact value of this limit is open for debate. It is tempting to suggest the limit should have some relation to the action's potential benefits, but this could soon reduce to some elaborate form of Utilitarianism, firmly at odds with the essence of the original theory. Notably, most humans do not consciously rely on a single philosophical moral theory to make their decisions [13]. Nor do we think it is our place to choose a single theory to apply to machine ethics. As such, one of Hansson's key contributions is providing an argumentation procedure that can frame multiple, possibly conflicting theories rationally. To model this, we look to the study of abstract argumentation. Dung creates a framework of logically generated, non-monotonic arguments [22]. They can discredit each other with attacks, modelled as a binary relation between the arguments. Dung goes on to specify properties of a well-founded framework; he gives procedures for believing arguments based on their membership of framework extensions. This paper will take only the simple structure of Dung's framework. We leave it to Hansson's philosophy to define attacks and select arguments.

## 4 Hypothetical Retrospection

Hypothetical retrospection systematises ethical decision making with uncertain outcomes such that its judgements resonate with humans. In this section, we overview Hansson's description of the procedure from [26], before we operationalise it in Section 5. Much of moral philosophy can be interpreted as an attempt to extend a decision maker's perspective.
In promoting empathy, we invoke a perspective-extending argument pattern to consider others' perceptions of our actions. For cases of uncertainty, Hansson argues it is helpful to extend our perspective with future perceptions of our actions. This means viewing, or hypothetically retrospecting on, a choice from the endpoint of its major foreseeable outcomes. As a result, the hypothetical outcomes, or the _potential branches of future development_, can be used to build resonant arguments about what to do in the present. Although Hansson proposes moral arguments that go beyond utility, duty or rights based calculations, the procedure is compatible with many theories of Ethics. Hansson frames finding each action's branches of future development as a search problem. Theoretically, a decision's effects may be infinitely complex and far-reaching. The major search principle, therefore, is to find the most probable future developments which are the most difficult to defend morally. This will increase the chance of considering unethical scenarios. Branches should be developed to an endpoint sufficiently far to capture all morally relevant information. Intermediate information must be captured too: rule violations occurring before the point of retrospection still need to be considered. Additionally, and for the sake of comparison, branches should be described with the same type of information where possible2. Hansson sees no reason not to create alternate branches based on the uncertainty of the decision maker's own future choices, given humans' inability to control their future actions. Whether an autonomous system has uncertainty over its future actions depends on the nature of the agent and its application architecture.

Footnote 2: The way in which consequences are discussed here may seem to exclude non-consequentialist theories. Hansson emphasizes that this is not the case. In his approach, consequences are broadly defined and their _information_ includes agency, virtue, intentions, and any other information necessary for moral appraisal.

Our implementation assesses actions assuming their potential branches are provided. In future work, a planning algorithm could be adapted to the requirements above. For instance, the Probabilistic Planning Domain Definition Language (PPDDL) [42] is able to formalise different stochastic planning settings, e.g., Markov Decision Processes (MDP) [25], Stochastic Shortest Path problems (SSP) [12], and Fully Observable Non-Deterministic planning [16]. This was superseded recently by the Relational Dynamic Influence Diagram Language (RDDL) [37], which has been adopted by the International Probabilistic Planning Competition (IPPC)3 and is thus the target input language for many planning implementations.

Footnote 3: [https://ataitler.github.io/IPPC2023/](https://ataitler.github.io/IPPC2023/)

Using their potential branches, actions can be assessed with a selection of ethical theories. Hansson stresses we are not to assess actions in isolation; assessments are purely comparative. This is because decisions are not made in isolation. Given a choice between actions A and B, choosing A is choosing A-instead-of-B. Building action assessments from comparisons ensures all morally relevant information is taken into account. Actions are compared by hypothetically retrospecting from the endpoint of each action's potential branches of future development. We search for an action which never leads an agent to morally regret its choice in retrospect.
Hansson argues against the term _regret_ since it is considered a psychological reaction; humans often feel regret for actions they did not commit, or that they could not have known were wrong. By regret, therefore, we mean that the decision making was logically flawed under retrospection. As a result, we use the term _negative retrospection_ to reflect this more technical definition. By hypothetically retrospecting between actions' branches, we search for an action which does not lead to negative retrospection, or has full acceptability among its branches. If no such action exists, one should be selected that maximises acceptability in its most probable branches. Therefore, hypothetical retrospection's decisions are based on relevant ethical information, using moral arguments that resonate with humans.

## 5 Implementation

### Formalism

We define an ethical decision problem as a tuple \(\langle A,B,S,U,F,I,m\rangle\), composed of an ethical environment and a set of available actions, each with a set of potential branches of future development. An environment's ethically relevant properties are represented by the set \(S\) of Boolean variables; the set \(I\) defines the initial truth assignment to \(S\), before actions are taken. For example, in the Coin-Apple scenario there are three state variables in \(S\): \(s_{1}\) represents whether or not we have an apple, \(s_{2}\) whether or not we have gambled, and \(s_{3}\) whether or not we won a trip to Hawaii. In the initial state \(I\), all these variables are false. Ethical information for consequentialist and deontological theories is formalised with the sets \(U\) and \(F\). To capture the issue from Section 1, where different event outcomes have an immeasurably greater or lower utility, we have introduced the notion of utility classes.

Definition 1 (Utility Class): A utility class is an unordered set of individual utility assignments represented as tuples of \(\langle s_{k},\phi,v\rangle\), where \(s_{k}\) denotes a state variable in \(S\) and \(v\in\mathbb{R}\) represents the variable's utility when assigned Boolean value \(\phi\).

The ordered set \(U\) contains utility classes in descending order of importance. Where \(i<j\), all the positive utilities in \(u_{i}\) are considered greater than any utility in \(u_{j}\); all the negative utilities in \(u_{i}\) are considered less than any utility in \(u_{j}\). To reiterate, the absolute utilities in lower-indexed classes are considered immeasurably greater. In the Coin-Apple example, there are two utility classes in \(U\). The first contains the utility assignment \(\langle s_{3},True,1\rangle\), representing a utility of \(1\) for getting the Hawaii holiday. The second class has utilities immeasurably lower. It contains one assignment, \(\langle s_{1},True,1\rangle\), representing a utility of \(1\) for getting the apple. The set \(F\) describes the states forbidden by a given deontological theory. This is not the same as defining a negative utility in \(U\), since utilities can be outweighed by a greater positive utility. In deterministic decision making environments, forbidden states cannot be outweighed. They could represent, for instance, that someone was deceived, that a law (e.g., trespass) was broken, and so on - any action or outcome that cannot be justified. The formalism assumes that the high-level rules have been translated into domain-level rules, applicable to the state variables in \(S\).
Definition 2 (Forbidden State): A Forbidden State is a tuple \(\langle s,\phi\rangle\) where \(s\in S\) is a state variable forbidden from being assigned the Boolean value \(\phi\).

In the Coin-Apple scenario, \(F\) could contain a forbidden state, \(\langle s_{2},True\rangle\), representing a rule against gambling. With an environment of ethical values, we define the set \(A\) of available actions and the set \(B\) of all potential branches of future development. We define a mapping, \(m\), that associates every action with its potential branches of future development. Each branch \(b\in m(a)\) is an ordered sequence of _events_ that could occur after action \(a\).

Definition 3 (Event): An event is a tuple of \(\langle s,\phi,p\rangle\) where \(s\in S\), \(\phi\) is the new Boolean value of \(s\), and \(p\) is the probability that the event occurs.

An event therefore represents the change in value of one state variable in \(S\). A branch is a sequence of events that can occur after the action is taken. For the Coin-Apple example, there are two available actions in \(A\). Action \(a_{1}\) represents choosing the apple. It maps to one branch \(b_{1}\in m(a_{1})\), containing one event, \(\langle s_{1},True,1\rangle\)--if we choose to have an apple, we gain an apple; we have not gambled nor won a holiday to Hawaii. Action \(a_{2}\) represents flipping the coin. It maps to two branches, \(b_{2},b_{3}\in m(a_{2})\). The branch \(b_{2}\) contains one event, \(\langle s_{2},True,1\rangle\)--we gambled, but we have no apple and no holiday to Hawaii. The branch \(b_{3}\) is the sequence of events \(\langle s_{2},True,1\rangle\) then \(\langle s_{3},True,0.5\rangle\)--first we gambled, then we won a holiday to Hawaii. The Coin-Apple problem is shown in Figure 1.

Figure 1: Diagram for Coin-Apple scenario. Event nodes represent a True assignment to a state variable. Actions map to a set of branches, represented by rows of event nodes. The probability of the conjunction of a branch's events is given under the branch probability.

We now define the ethical decision problem and a permissible action. The definition of acceptability depends on the ethical theories under consideration (see Section 5.3).

Definition 4 (Ethical Decision Problem): An ethical decision problem is a tuple of \(\langle A,B,S,U,F,I,m\rangle\) where \(A\) stands for a set of available actions, \(B\) the set of all potential branches of future development, \(S\) the set of Boolean state variables, \(U\) an ordered set of utility classes, \(F\) a set of forbidden state assignments, \(I\) the initial assignment of Boolean values to the variables in \(S\), representing the initial state, and \(m:A\rightarrow\mathcal{P}(B)\) (where \(\mathcal{P}\) is the powerset function) is a mapping of actions to potential branches of development.

Definition 5 (Permissible Action): Given an ethical decision problem, defined as a tuple of \(\langle A,B,S,U,F,I,m\rangle\), a permissible action is an action, \(a\in A\), such that for all potential branches of future development \(b\in m(a)\), there is acceptability over their events in state space \(S\). If no such actions exist, action \(a\) is permissible if it maximises the cumulative probability of its acceptable branches.

### Probability Representation

In many scenarios, while a person may have an intuition that some events are more probable than others, their exact probabilities are unknown. This is most common when interacting with humans and complex systems.
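One way to support both intuitive and exact estimates, sketched below as our own minimal illustration (all names are ours, not the authors' released code), is to normalise estimative words to numeric probability intervals; the specific words and intervals follow Kent's scheme, introduced next:

```python
# Estimative probabilities: map Kent's "poetic" words to numeric intervals
# and resolve an event's probability from either an exact float or a word.
# Intervals follow Table 1 (midpoint plus or minus the "give or take" margin).

KENT_RANGES = {
    "almost certain":       (0.87, 0.99),   # 93%, give or take 6%
    "probable":             (0.63, 0.87),   # 75%, give or take 12%
    "chances about even":   (0.40, 0.60),   # 50%, give or take 10%
    "probably not":         (0.20, 0.40),   # 30%, give or take 10%
    "almost certainly not": (0.02, 0.12),   # 7%, give or take 5%
}

def resolve_probability(p):
    """Return a (low, high) probability interval for exact or poetic input."""
    if isinstance(p, (int, float)):          # exact probability
        return (float(p), float(p))
    return KENT_RANGES[p.lower()]            # estimative (poetic) word

# The winning event from the Coin-Apple branch b3, written both ways:
exact_event = ("s3", True, 0.5)
poetic_event = ("s3", True, "chances about even")
print(resolve_probability(exact_event[2]))   # (0.5, 0.5)
print(resolve_probability(poetic_event[2]))  # (0.4, 0.6)
```

Downstream computations can then propagate either the interval's midpoint or the full interval, depending on how conservative the comparison should be.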
Our implementation supports the use of estimative as well as exact probability estimates. Kent found that intelligence reports tend to use _poetic_ words like _probable_ or _unlikely_ [29]. The issue is that people have different interpretations of their meaning. Kent defined a relation from poetic words to mathematical probability ranges, as given in Table 1 from [29].

Table 1: Mathematical to poetic relation from Kent's estimative probability [29]. The rows between certainty and impossibility form "the general area of possibility".

| Probability | Poetic word |
| --- | --- |
| 100% | Certainty |
| 93%, give or take 6% | Almost certain |
| 75%, give or take 12% | Probable |
| 50%, give or take 10% | Chances about even |
| 30%, give or take 10% | Probably not |
| 7%, give or take 5% | Almost certainly not |
| 0% | Impossibility |

### Argumentation Model

Hansson does not give steps for comparing actions' potential branches of future development in [26]. For our implementation, we chose to build comparative moral assessments with a simple argumentation network, based partially on the work of Atkinson et al. [10]. Here, arguments are generated logically from an _argument scheme_. For an action \(a\in A\), selected in initial state \(I\), resulting in the branch \(b\in m(a)\) with probability \(p\), the following argument is generated:

_"From the initial state \(I\), it was acceptable to perform action \(a\), resulting in consequences \(b\) with probability \(p\)."_

For notation, this is written \(Argument(b)\). We view this as a default argument that any action is acceptable. In our running example, the retrospective argument below is generated for \(b_{3}\), tossing the coin and winning the Hawaii holiday.

_"From the initial state \(I\), where \(s_{1}=s_{2}=s_{3}=False\), it was acceptable to perform the action \(a_{2}\), resulting in consequences with \(s_{2}=s_{3}=True\) with probability 0.5."_

To determine an argument's validity, we search for attacks from other actions' arguments. Incoming attacks imply negative retrospection for not choosing an attacking action. To formalise Hansson's retrospection, we generate attacks by posing critical questions on arguments' claims [10]. For the branches \(b_{1}\in m(a_{1})\), \(b_{2}\in m(a_{2})\) and any generic moral principle, the following critical questions are asked for \(Argument(b_{1})\) to attack \(Argument(b_{2})\).

**CQ1**: _Did \(b_{2}\) violate a moral principle that \(b_{1}\) did not_?

**CQ2**: _Did \(a_{2}\) hold a greater probability of breaking the moral principle than \(a_{1}\)_?

\(Argument(b_{1})\) only attacks \(Argument(b_{2})\) if both of these questions are answered positively. They represent negative retrospection for missing the chance to avoid violating a principle. The critical questions are asked both ways between all arguments supporting different actions, for every moral principle under consideration. The time and space complexity of answering the questions will differ for different theories. The desired ethical theories have to be encoded into the critical questions relative to a domain. For Utilitarianism and a generic deontological do-no-harm principle, the critical questions are embedded as follows:

* Utilitarian CQ1: _Did \(b_{2}\) bring greater utility value than \(b_{1}\)_?
* Utilitarian CQ2: _Did \(a_{2}\) expect greater utility value than \(a_{1}\)_?
* Do-no-harm CQ1: _Did \(b_{2}\) cause harm where \(b_{1}\) did not_?
* Do-no-harm CQ2: _Did \(a_{2}\) expect a greater probability of causing harm than \(a_{1}\)_?

After searching for attacks on all branches, an action should be selected with complete acceptability. If no such action exists, an action should be selected with maximal acceptability, i.e. summing the probability of each non-attacked argument and selecting an action with a maximal sum.

### Algorithm

We outline our implementation in Algorithm 1. Given an ethical decision problem, all actions are compared by their potential branches of future development (lines 2-4). There is a hypothetical retrospective argument made from the perspective of each branch in favour of its action. Attacks are generated between arguments by asking two critical questions based on an ethical theory. For our implementation we use a Utilitarian and a Deontological theory (lines 5-6), detailed later in Algorithms 2 and 3. Attacked branches are marked as such (lines 7-13). An action's acceptability defaults to 1 and is reduced by the cumulative probability of its attacked branches. The action with maximum acceptability is selected (lines 17-25).

```
 1: array attacked <- [False, ..., False] of size length(B)
 2: for each a_i, a_j in {(a_i, a_j) | a_i, a_j in A and a_i != a_j} do
 3:   for each b_k in m(a_i) do
 4:     for each b_l in m(a_j) do
 5:       uTarget <- Target in Utilitarian CQs(b_k in m(a_i), b_l in m(a_j), U)
 6:       dTarget <- Target in Deontological CQs(b_k in m(a_i), b_l in m(a_j), F)
 7:       if dTarget == uTarget and dTarget is not None then
 8:         attacked[uTarget] <- True
 9:       else if dTarget != uTarget and dTarget is None then
10:         attacked[uTarget] <- True
11:       else if dTarget != uTarget and uTarget is None then
12:         attacked[dTarget] <- True
13:       end if
14:     end for
15:   end for
16: end for
17: array acceptability <- [1, ..., 1] of size length(A)
18: for each a_i in A do
19:   for each b_k in m(a_i) do
20:     if attacked[k] then
21:       acceptability[i] <- acceptability[i] - Probability(b_k)
22:     end if
23:   end for
24: end for
25: return arg max_i(acceptability[i])
```

**Algorithm 1** Argues over actions' potential branches of future development. Returns the index of the action with maximum acceptability.

Algorithm 2 embeds the theory of Utilitarianism into the critical questions. As explained in Section 5.1, branches are made from a list of events which each change a Boolean state variable with some probability. Variable utilities are defined by a set of utility classes, with assignments in lower-indexed classes immeasurably greater. Algorithm 2 compares two potential branches and returns the index of a branch if it is defeated by the other branch through the critical questions. It is invoked by Algorithm 1 on line 5. Algorithm 2 counts from the lowest-indexed (most important) utility class upwards to find the first class where branch utilities are unequal. If found, critical question 1 is answered positively. The branch with the greater utility becomes the _attacker_, the other the _defender_ (lines 1-9). If utilities are equal through all classes, there are no attacks (lines 10-12).
Otherwise, the defender branch attempts to use the foresight argument to defend itself: for each lower-indexed class, if the defender's action has greater expected utility, the defence is successful and there is no attack (lines 13-17). If the attacker's action has greater or equal expected utility across all such classes, the defence fails and critical question 2 is positive. Thus, the defender branch is attacked (line 18).

```
Require: Action Branches b_k in m(a_i), b_l in m(a_j), Utility Classes U
Ensure: Index of Attacked Branch x
 1: for c <- 0 to length(U) do
 2:   value[i] <- Utility of b_k in U[c]
 3:   value[j] <- Utility of b_l in U[c]
 4:   if value[i] is not value[j] then
 5:     attacker <- arg max_x(value[x])
 6:     defender <- arg min_x(value[x])
 7:     break
 8:   end if
 9: end for
10: if attacker is None then
11:   return None
12: end if
13: for lower <- 0 to c do
14:   if Expected Utility of a_attacker in U[lower] < Expected Utility of a_defender in U[lower] then
15:     return None
16:   end if
17: end for
18: return defender
```

**Algorithm 2** For two potential branches of future development, finds the target with lower utility in the utility classes and no greater utility expectation to defend itself.

Algorithm 3 shows Deontology embedded into the critical questions, similar to Algorithm 2. Algorithm 3 iterates across the set of forbidden assignments and checks the events in either branch for a violation (lines 1-3). See Section 5.1 for forbidden assignments. If one branch has a violation that the other does not, then critical question 1 is positive (lines 4 and 9). To defend itself, the violating branch's action must have a greater probability of not making the assignment. If this is not true, critical question 2 is positive and the index of the violating branch is returned (lines 4-13). If no branch is attacked, neither index is returned (line 15). Our implementation has no planning element to search for actions' branches, as discussed in Section 4; this is left for future work. Instead, we pass an ethical decision problem to an implementation of Algorithm 1 and a permissible action is output. We implement a web app with Flask and Python 3.8.9 to graph retrospection and alter utilities and deontological laws. The source code is available on GitHub at [https://github.com/sameysimon/HypotheticalRetrospectionMachine](https://github.com/sameysimon/HypotheticalRetrospectionMachine).

## 6 Autonomous Library Test Case

To demonstrate our implementation, we present an uncertain ethical decision problem and discuss our implementation's selected action given five sets of ethical considerations. Suppose a student logs onto their University's autonomous library to revise for a test the next morning. All the other students started revision a month ago. As the student constructs various search terms for a recommendation, the system recognises that all other students have taken out the same book, implying it is very useful. Should the autonomous library use this data to recommend the book, allowing the student to revise quicker on the night before the test? If other students find out, they may feel unfairly treated; students who wait for a reference would get the same credit as those who find it themselves.
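Before formalising this scenario, it may help to see Algorithm 2's core step in executable form. The following Python sketch is our own illustration (function names and the branch/class data layout are assumptions, not the released implementation): it performs the lexicographic comparison over utility classes and applies the expected-utility defence.

```python
# A sketch of Algorithm 2's lexicographic comparison over utility classes.
# Branches are lists of (variable, value, probability) events; each utility
# class is a dict mapping (variable, value) -> utility, and `classes` orders
# them from most to least important.

def branch_utility(branch, u_class):
    """Sum the utilities that a branch's assignments earn in one class."""
    return sum(u_class.get((var, val), 0.0) for var, val, _ in branch)

def branch_probability(branch):
    """Probability of a branch: the conjunction of its events (cf. Figure 1)."""
    p = 1.0
    for _, _, event_p in branch:
        p *= event_p
    return p

def expected_utility(branches, u_class):
    """Probability-weighted utility of an action across its branches."""
    return sum(branch_probability(b) * branch_utility(b, u_class)
               for b in branches)

def utilitarian_target(b_i, branches_i, b_j, branches_j, classes):
    """Return 'i' or 'j' for the attacked branch, or None for no attack."""
    for c, u_class in enumerate(classes):
        v_i, v_j = branch_utility(b_i, u_class), branch_utility(b_j, u_class)
        if v_i != v_j:
            # CQ1 positive: the lower-utility branch is the tentative defender.
            if v_i < v_j:
                defender, d_brs, a_brs = "i", branches_i, branches_j
            else:
                defender, d_brs, a_brs = "j", branches_j, branches_i
            # CQ2: the defender escapes if its action has strictly greater
            # expected utility in a class at least as important as U[c].
            for u in classes[: c + 1]:
                if expected_utility(d_brs, u) > expected_utility(a_brs, u):
                    return None
            return defender
    return None  # utilities equal through all classes: no attack
```

Algorithm 3's deontological variant follows the same shape, substituting forbidden assignments and violation probabilities for utilities.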
We model the scenario as an ethical decision problem, \(\langle A,B,S,U,F,I,m\rangle\), with two actions in \(A\) mapping to ten branches in \(B\), acting across four state variables in \(S\). For action \(a_{1}\), to _recommend_ the book, student data is compromised, the truth of which is represented by the Boolean variable \(s_{1}\). Given a recommendation, there is a 0.6 chance the book is used, represented by \(s_{2}\). If they have the book, there is a 0.7 chance they will pass, \(s_{3}\); otherwise, without the book there is a 0.3 chance they will pass, \(s_{3}\). Finally, there is a 0.05 chance other students will find out their data was compromised, \(s_{4}\). If the system ignores the book, with action \(a_{2}\), there is a 0.3 chance the student will pass, again represented as \(s_{3}\)4. Figure 2 is a decision tree labelled with probabilities and branch notation. An argument is generated from each branch's endpoint, representing positive retrospection. Using the argument scheme from Section 5.3, \(Argument(b_{1})\) is the following:

_"From the initial state, \(I\), where \(s_{1}=s_{2}=s_{3}=s_{4}=False\), it was acceptable to perform the action, \(a_{1}\), resulting in consequences with \(s_{1}=s_{2}=s_{3}=True\) and \(s_{4}=False\), with probability 0.399."_

Footnote 4: There is discourse on whether a decision to act should be judged the same as a decision not to act [24]. We consider ignoring the book an action, an act of discrimination for example, which is assessed the same as the act to recommend.

The argument claims it was acceptable to recommend the book, resulting in a data protection violation (\(s_{1}\)), the student reading the book (\(s_{2}\)) and passing the test (\(s_{3}\)), with the data breach kept a secret (\(s_{4}=False\)), at a probability of 0.399.

### Consequentialism with One Assignment

First we test our implementation considering the ethical theory of consequentialism. We set \(U\) to have one utility class with one utility assignment, \(\langle passesTest,1,True\rangle\). The only value is the student passing. Intuitively, the action maximising the probability of passing should be chosen; hypothetical retrospection agrees. The argumentation graph in Figure 3 shows the retrospection. Every branch has acceptability, except \(b_{10}\in m(a_{2})\), where the student fails after the system chooses _ignore_, with 0 utility and probability 0.7 (the 0.3 chance of passing being _'probably not'_ in Kent's words). This branch has a lower utility than the four _recommend_ branches where the student passes: \(b_{1},b_{2},b_{5},b_{6}\in m(a_{1})\). These branches cause \(Argument(b_{10})\) to answer critical question 1 positively when it is attacked by their arguments. Since _recommend_ has a greater utility expectation, or a greater probability of the student passing, \(Argument(b_{10})\) cannot defend itself in critical question 2. Thus, there is no reason to select _ignore_; from the perspective of \(b_{10}\)'s endpoint there is negative retrospection. There are no other attacks. Therefore, by hypothetical retrospection, action \(a_{1}\), _recommend_, should be selected.

Figure 2: Decision tree of possible events in the Autonomous Library problem. Triangles represent actions and boxes variable assignments; \(\neg\) represents a \(False\) assignment.

### Consequentialism with Two Equal Assignments

Now we consider two utility assignments of the same class: \(\langle passesTest,1,True\rangle\) and \(\langle othersFindOut,-1,True\rangle\).
This invokes the risk of others finding out their data was used, with others finding out judged as bad as the student passing is good. The retrospection is shown in Figure 4. Again, only branch \(b_{10}\in m(a_{2})\) has negative retrospection, when the student fails after the system chooses to _ignore_ the book. This time only two of _recommend_'s branches have greater utility, \(b_{1},b_{5}\in m(a_{1})\). Action _recommend_ still has a greater utility expectation, so _ignore_ cannot be defended in critical question 2. Therefore, _recommend_ is selected.

Figure 3: Graph of retrospection between hypothetical branches of development with only the utility of the student passing in consideration. Incoming edges on an argument represent negative retrospection for not selecting the attacking argument's action.

Figure 4: Graph of retrospection between hypothetical branches of development with the cost of others finding out data was compromised equaling the utility of the student passing.

### Consequentialism with Unequal Assignments

The utility of students discovering the data compromise can be lowered such that _recommend_'s expected utility is lower than _ignore_'s, for example with the assignment \(\langle othersFindOut,-5,True\rangle\). Now, attacks fire the other way, as displayed in Figure 5. When _recommend_ is chosen and other students find out, as in \(b_{2},b_{4},b_{6},b_{8}\in m(a_{1})\), the utility is lower than in _ignore_'s branches. This answers critical question 1 positively for attacks on these branches' arguments. There is no defence, since _ignore_ has a greater utility expectation, so critical question 2 is positive. _Recommend_ can lead to the highest utility branches with \(b_{1}\) and \(b_{5}\),
We build on our first test in Section 6.1 which selected _recommend_ with utility assignment \(\langle passesTest,1,True\rangle\). Adding forbidden state \(\langle dataProtectionViolation,True\rangle\) to \(F\) results in the retrospection shown by Figure 6. Every argument from _ignore_ attacks every argument from _recommend_ since _ignore_ avoids violating the law. Under our previous Consequentialism, _recommend_ is still chosen with the same attacks on \(Argument(b_{10})\) as before. This conflict represents a moral dilemma, where no choice is normatively inferior to another [26]. The aim is to maximise acceptability amongst the most probable branches. Since all arguments from _recommend_ are attacked, there is 0 acceptability for that action; one Figure 5: Graph of retrospection between hypothetical branches of development with the cost of others finding data was compromised outweighing the utility of passing. argument from _ignore_ is attacked with 0.7 probability meaning _ignore_ is selected with the maximum acceptability of 0.3. ## 7 Discussion Our goal here is to extend the typical approach to machine ethics, which is the assessment of a single action from the perspective of a single ethical theory, often without any account of probability or uncertainty. We have formalised Hansson's hypothetical retrospection procedure, systematising moral assessments as comparisons between consequences. This forms richer judgements beyond the evaluation of utilities. Furthermore, our moral assessments are comparisons between retrospective justifications of hypothetical consequences. One might ask how this differs from directly analysing the properties of consequences? For machines, it gives a procedure for selecting actions and providing justifications. For humans, it offers a resonance that allows us to make clearer judgements [26]. It also allows us, in the future, to build on existing work for evaluating actions from the perspective of individual ethical theories and combining those judgements into arguments. Essentially our proposal extends, rather than replaces, existing mechanisms for evaluating actions against a single ethical theory. The retrospective procedure formalised by the critical questions resembles real life discussion: a claim against an argument and a chance to refute. Say someone takes action \(a_{2}\) in preference to \(a_{1}\) and a principle is broken. Retrospective argumentation through the critical questions produces a dialogue similar to the following: 1. You should have chosen \(a_{1}\) because it didn't break this moral principle. 2. No, because there is a greater probability of breaking some other principle with \(a_{1}\). If I was given the decision again, I would make the same choice. Real life discussion may not be so civil, but if facts were agreed upon, this is the logical dialogue. Resonance with real life has utility for agent transparency and explainability, important for ethical AI [11] and stakeholder buy-in. Figure 6: Graph of retrospection between potential branches of development with one consequentialist assignment and one Deontological law. Consequentialist attacks are dashed blue; Deontological attacks are solid black. The implementation is theory-neutral, allowing multiple principles and theories to be considered at once, more analogous to human decision-making. Implementational work remains, not least the integration into a planning system to generate branches, but also evaluation against a wider range of ethical theories (e.g. 
Virtue Ethics) to see how easily they answer the critical questions. We also wish to develop the evaluation of actions' consequences along branches, not just at the branches' end - for instance, if someone is made unhappy as a consequence of some action, but then we compensate them by the end of the branch, can we ignore that we caused them (albeit temporary) unhappiness? Implementations of hypothetical retrospection could be integrated into more general agent reasoning, either as modules on top of an existing autonomous system, possibly similar to Arkin's governor architecture [7]. Cardoso et al. have, for instance, considered how such ethical governors might integrate with BDI agents [15]. Alternatively, hypothetical retrospection could be implemented as a general decision-making process in which, for instance, the extent to which an action enables an agent to achieve or maintain goals could be included together with the arguments based upon ethical theories. Systems of this kind, in which all reasoning is encompassed within the ethical reasoning system, can be seen in, for instance, the GenEth system [6], where "maintain readiness" is treated as an ethical duty, or the HERA system [31], where in [21] the system defaults to Utilitarianism to decide among actions all of which are considered equally valid according to some ethical theory. Our current implementation has a fairly simple approach to the integration of ethical theories. Some theories are directly incompatible, potentially leading to "worst of both worlds" solutions. Additionally, the use of utility classes needs careful handling. When utilities are of a greater class, they are prioritised, no matter how remote their probabilities. Extending the Coin-Apple scenario, suppose an agent is offered a free apple every day - as opposed to some number of apples all at once, or suppose the chance of winning the Hawaii holiday is extremely low, or both. The justification for sacrificing a lifetime supply of apples for a small chance of a holiday is considerably weaker than sacrificing one apple for a 50/50 chance of a holiday. Expected utility clearly has a part to play, even if the calculation of such utilities is non-trivial. The difficulty in estimating utilities, and the fact that utilities may depend upon unknown factors such as a person's financial situation, mean there is uncertainty in the evaluation of state utilities which our framework currently does not address. There will be some computational complexity in searching and representing actions' potential branches of future development. In Section 4, we note Hansson's principles for optimising the search, but it remains to be seen if this can be practically implemented to keep planning tractable for common problems. Nevertheless, we believe the hypothetical retrospection framework practically handles many of the issues in machine ethics - particularly the handling of uncertainty and the lack of any real agreement on the best moral theory.

## Acknowledgements

We would like to thank the University of Manchester for funding, and EPSRC, under the project Computational Agent Responsibility (EP/W01081X/1).

## Open Data Statement

This work is licensed under a Creative Commons Attribution 4.0 International License. The tools/examples shown in this paper and instructions on reproducibility are openly available on GitHub at: [https://github.com/sameysimon/HypotheticalRetrospectionMachine](https://github.com/sameysimon/HypotheticalRetrospectionMachine)
2305.16158
A New Era of Mobility: Exploring Digital Twin Applications in Autonomous Vehicular Systems
Digital Twins (DTs) are virtual representations of physical objects or processes that can collect information from the real environment to represent, validate, and replicate the physical twin's present and future behavior. DTs are becoming increasingly prevalent in a variety of fields, including manufacturing, automobiles, medicine, smart cities, and other related areas. In this paper, we present a systematic review of DTs in the autonomous vehicular industry. We address DTs and their essential characteristics, emphasizing accurate data collection, real-time analytics, and efficient simulation capabilities, while highlighting their role in enhancing performance and reliability. Next, we explore the technical challenges and central technologies of DTs. We present a comparative analysis of the different methodologies that have been used for autonomous vehicles in smart cities. Finally, we address the application challenges and limitations of DTs in the autonomous vehicular industry.
S M Mostaq Hossain, Sohag Kumar Saha, Shampa Banik, Trapa Banik
2023-05-09T06:39:57Z
http://arxiv.org/abs/2305.16158v1
# A New Era of Mobility: Exploring Digital Twin Applications in Autonomous Vehicular Systems

###### Abstract

Digital Twins (DTs) are virtual representations of physical objects or processes that can collect information from the real environment to represent, validate, and replicate the physical twin's present and future behavior. DTs are becoming increasingly prevalent in a variety of fields, including manufacturing, automobiles, medicine, smart cities, and other related areas. In this paper, we present a systematic review of DTs in the autonomous vehicular industry. We address DTs and their essential characteristics, emphasizing accurate data collection, real-time analytics, and efficient simulation capabilities, while highlighting their role in enhancing performance and reliability. Next, we explore the technical challenges and central technologies of DTs. We present a comparative analysis of the different methodologies that have been used for autonomous vehicles in smart cities. Finally, we address the application challenges and limitations of DTs in the autonomous vehicular industry.

digital twin; vehicular network; smart vehicles; autonomous driving; literature review; cyber-physical systems.

## I Introduction

Over the past decade, autonomous driving (AD) has grown fast, transforming the transportation system in terms of safety and efficiency [1]. As AVs proliferate, safety and dependability are paramount in AV system development. The latest research implies that AVs could considerably improve vehicle safety [2], but this won't be achievable until a fleet of AVs has been tested over billions of kilometers in all weather situations. Meeting AV safety targets [3] by running a fleet of AVs and a development infrastructure that relies on physical testing data would take decades and tens of billions of dollars. Digital Twin (DT) technology has various uses, from real-time remote monitoring and control in industry, to risk assessment in transportation, to smart scheduling in smart cities, and has therefore received a lot of attention recently. Following Z. Hu et al. [4], Figure 1 shows the major DT development milestones. In a high-fidelity virtual environment, simulation-based digital twins can speed up AV verification and reduce development expenses [5]. AD software development must likewise adapt to take advantage of digital twins. Digital twins can simulate various environmental and traffic situations, avoiding the need for extensive physical testing; virtual, controllable testing of autonomous vehicles could save several orders of magnitude in development time and cost. We may have to wait months for significant snow to test AVs on the road, whereas a digital twin can build a road, simulate a major snowstorm, and generate a lot of high-quality testing data [6]. Our initial deployment of a digital twin system that combines physical and digital twin testing has been successful, but its flexibility allows for further improvement [7]. Vehicle dynamics simulators, including cruise control system simulators, have been widely used in the automobile industry for testing [8], and aerospace has long used simulation. In the development of AD software, simulators have been used extensively to test and evaluate the decision-making module and path-planning module under various scenarios by providing perception data (such as the position and moving states of the ego vehicle and other traffic participants) [9].
This strategy is easy and scalable; however, it does not represent reality well, which causes problems. Simulation tests cannot compare to physical tests of the full AV software pipeline, which includes sensing, localisation, decision-making, path planning, and vehicle control. Physical factors, such as weather and lighting, also constrain physical investigations. First, these simulators use virtual town maps instead of maps of real-world roads [10], and road conditions are not simulated. A digital twin map is needed to evaluate AD functions, like exiting or entering highway ramps, that depend on road geometry and traffic legislation. Simulators' car and pedestrian animations are pre-programmed, so simulations cannot replicate real traffic's intricate interactions; behavior at junctions and aggressive driving cannot be judged. AV software testing also suffers from low-fidelity sensor data: depth mapping and ray casts replicate lidar sensors but don't account for reflection and diffusion, so simulated returns differ from real ones. Unlike AV software development, automobile hardware development accelerates with "digital twin" physical simulation tools like MATLAB and Modelica. Figure 2 shows how our two-tiered framework creates a connected vehicle digital twin system [11], with the virtual layer above the physical one. This system's communication module is vital; cellular data transport powers this experiment. The physical layer of the digital twin framework can represent all physical entities and their interactions, such as automobiles and their parts, drivers and passengers, road infrastructure, weather, other road users, etc., defined on a global coordinate system and evolving over time. Sensors and actuators are key elements of this layer. The sensors may detect and aggregate vehicle speed, driver gaze, and traffic light status at various resolutions, and the communication module analyzes data online. The item or process, sensors, actuators, and computing resources are shown in Figure 2 [13]. The digital world comprises databases, data processing infrastructures, machine learning, and the digital twin. Wi-Fi and Bluetooth protocols and interfaces link them, and visualization is required to monitor the architecture. The key contributions of this work are as follows:

* We investigate recent research on the DT concept, specifically for autonomous vehicular systems.
* The state-of-the-art research methodologies are discussed based on the current literature on DTs.
* The comparison of different methods, their role in technical development, and their limitations are stated.

The remaining sections of this paper are organized in the following manner: Section II provides a technological overview, Section III presents a brief literature review, Section IV outlines the methodologies used, and Section V presents a comparative analysis. Sections VI and VII consist of the Discussion and Conclusion, respectively.

## II An overview of digital twin technology for smart vehicles

Digital twins are used in the automotive industry to create digital copies of vehicles. Data on car use and performance enables more personalized service and maintenance. Digital copies can be model copies or networked systems [14]. Engineers investigate AI before sending a car to the assembly line. Simulation models can predict breakdowns and wear. Instead of road testing and maintenance, autonomous vehicle digital twins could save unforeseen costs. Digital twin technology may imitate and improve many aspects of a smart electric vehicle's system, which has far-reaching effects [15].
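To make the service-and-maintenance loop concrete, the following minimal Python sketch shows the generic sense-sync-predict cycle a vehicle digital twin implies. It is our own illustration; every name, number, and the simple linear wear model are assumptions, not a system from the surveyed literature.

```python
# Minimal digital-twin update loop for one vehicle component (illustrative).
# A physical sensor stream updates the twin's state; the twin runs a simple
# wear model to flag predictive maintenance before a physical failure occurs.

from dataclasses import dataclass, field

@dataclass
class BrakeTwin:
    pad_thickness_mm: float = 12.0          # virtual copy of physical state
    wear_rate_mm_per_km: float = 1.5e-4     # prior calibrated from fleet data (assumed)
    history: list = field(default_factory=list)

    def sync(self, measured_thickness_mm: float, odometer_km: float) -> None:
        """Overwrite the virtual state with the latest physical measurement."""
        self.pad_thickness_mm = measured_thickness_mm
        self.history.append((odometer_km, measured_thickness_mm))
        # Re-calibrate the wear model from the two most recent measurements.
        if len(self.history) >= 2:
            (k0, t0), (k1, t1) = self.history[-2], self.history[-1]
            if k1 > k0:
                self.wear_rate_mm_per_km = max((t0 - t1) / (k1 - k0), 0.0)

    def km_to_service(self, min_thickness_mm: float = 3.0) -> float:
        """Predict remaining distance before the pad reaches its service limit."""
        if self.wear_rate_mm_per_km <= 0:
            return float("inf")
        return (self.pad_thickness_mm - min_thickness_mm) / self.wear_rate_mm_per_km

twin = BrakeTwin()
twin.sync(measured_thickness_mm=11.2, odometer_km=5_000)
twin.sync(measured_thickness_mm=10.1, odometer_km=12_000)
print(f"schedule service in ~{twin.km_to_service():,.0f} km")
```

Real systems replace the linear wear model with calibrated physics or learned models, but the sync-then-predict structure is the same.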
## III Literature Review

In this section, we present a brief overview of the DT concept for the autonomous vehicular industry. The following discussion includes the literature overview regarding our goal. In [16], B. Yu et al. share real-world experiences with digital twins, a practical method for developing autonomous driving (AD) systems that creates a complete, accurate, and reliable model of the physical environment to reduce the need for physical testing.

Fig. 1: History of digital twin technology

Their main contributions are:

* They identify the limitations of conventional approaches to AD simulation and show how digital twins can be used to overcome them.
* They synthesize their practical development experience into three overarching concepts for the AD digital twin system's design.
* They describe the AD digital twin system's structure and its components, including how real-world mapping data is collected, sensor data is mimicked, and traffic actors are synthesized.

The following are the paper's [17] unique contributions in comparison to previous recent research on the validation of coordination strategies:

* Under the digital twin paradigm, a working prototype has been created. Time lag and precision of localization are just two of the worrying metrics tracked.
* The system includes HMI devices. In 3D, the Hololens controls automobiles. First-person perspective driving simulators allow drivers full control.

The study [18] examined digital twin technology's origins and deployment phases. This research highlights digital twin technologies like predictive mobility, autonomous motion control, driver assistance systems, vehicle health management systems, battery management systems, intelligent charging, vehicle power electronic converters, and electric power drive systems. Barriers to adoption and important supporting technologies are also identified, which will aid future eco-friendly and sustainable transportation endeavors. The contributions of the paper [19] are as follows:

* In densely populated locations with considerable traffic, automobiles are increasingly being used as services due to commercial expansion. This study suggested using DT to enable CaaS.
* The suggested design includes the city, a middleware to connect all entities, DT models that run over the middleware, and applications like car-sharing.
* The case study showed how the proposed concept might be implemented and highlighted several key areas for development in future efforts.

The paper [20] presented a Petri-net-based DT to simulate the electric car development process from start to finish.
Real-time data sharing between the physical system and its digital shadow can help make better, faster decisions. The two systems can directly calculate and implement actions to contain these faults, calculate new time plans, and inform the user of optimistic, most likely, and pessimistic scenarios for task delays. Their future research will improve search algorithms for non-computable solutions and optimize physical-digital subsystem communication and interaction. The study [21] proposes a digital twin structure for connected vehicles communicating via vehicle-to-cloud (V2C). The vehicle's driver-vehicle interface (DVI) displays the cloud server's advisory speed, letting the driver control the vehicle. The ADAS uses the digital twin paradigm. The suggested digital twin structure is tested in real-world traffic on three passenger vehicles in a cooperative ramp merging case study.

Fig. 2: General framework of the digital twin system for connected vehicles [11]

In paper [22], the authors' plan is to create an unsupervised prognosis and control platform tailored to electric propulsion drive system (EPDS) performance estimation as a final product. A number of subsidiary tasks and goals must be formulated in order to accomplish this primary aim:

* Construct the DT of the energy system by creating physical models of its constituent parts (motors, generators, gearboxes, bearings, etc.) and their corresponding reduced models (testbed).
* Create a working prototype of the Virtual Sensors idea, which is built on the existing DT concept.
* Make a system based on artificial intelligence that lets the virtual sensors be used to control the EPDS.
* Demonstrate the understanding of these ideas and how they apply to achieve the aforementioned end goal, using autonomous vehicles as a case study.

## IV Methodologies of the selected pieces of literature

Several methodologies are discussed in the selected pieces of literature; the main ones are described below.

### _Three Principles architecture_

The authors have created a digital twin system using a game engine, which includes graphics and physics engines for 3D modeling, image rendering, and physical simulation, allowing for the representation of structural, physical, and behavioral information in a virtual world [16]. In order to implement the digital twin properties, three additional building blocks are added on top of the game engine: 1) the traffic controller is the logical twin, 2) the sensor models are the physical twin, and 3) the 3D digital twin map is the structural twin.

### _Multi-vehicle Experiment Platform_

Tsinghua is developing a CAV-focused digital twin system. Even without real cars, this technology will allow multi-vehicle tests. This study [18] advises using real and virtual vehicles together to get the desired outcome. A sand-table testbed helps small-scale cars run smoothly, while a game engine creates the cyberspace through full-element modeling, so the cyberspace can show the sand table's real-time state. This research also proposes a cloud vehicle to stand in for the smaller cars.

### _Car-as-a-Service Concept with DT_

Pana offers Car-as-a-Service (CaaS) employing sensors, actuators, and radio devices [23]. Smart cities should use connected automobiles for passenger service, the authors say.
GNSS, high-precision distance estimation, radio connectivity, environmental sensors for measuring temperature, humidity, pressure, etc., motion sensors like an Inertial Measurement Unit (IMU) for measuring traffic flow, vehicle heading and roll, and road quality, a centralized cloud processing unit, and social network analytics are the backbone of CaaS [19]. CaaS may entail carpooling: users rent and share cars, a service that cuts traffic and saves occasional drivers money. Archer [24] cites studies showing that five to fifteen privately owned cars are replaced for every shared car added to the fleet, assuming car-sharing programs reduce car ownership. Ferrero [25] investigates car-sharing: technical and modeling studies have examined car-sharing businesses, though this service is hard to sell. European, US, Japanese, Chinese, and Australian car-sharing schemes are widespread, and Mattia [26] expects 12 million users by 2021. User settings [27] and driver monitoring will increase comfort and safety in this car-sharing concept. The authors of [28] found that a Drive Monitoring Assistance System (DMAS) must monitor drivers' attention levels for safe driving, as distraction and fatigue cause most traffic accidents.

### _Petri nets and variations_

Arc-extended timed Petri nets will be used for modeling, simulating, and tracking development [20]. Formally, an Ordinary Petri Net (OPN) is a five-tuple \(PN=\langle P,T,I,O,m_{0}\rangle\) with these elements: a finite set of places \(P=\{p_{1},p_{2},\dots,p_{n_{p}}\}\) and a finite set of transitions \(T=\{t_{1},t_{2},\dots,t_{n_{t}}\}\), which together form the vertices \(V\) and have an empty intersection (\(P\cap T=\emptyset\)); the input and output functions \(I\) and \(O\); and \(m_{0}\), the initial token distribution (the PN's initial marking). Places in a Petri net represent resources and governing conditions, connected to transitions by arcs [29]. T-timed PNs implement transition time delays. They behave like Ordinary PNs but can predict event durations, making them better for simulation. Time-delayed PNs, or T-timed PNs, are defined as \(TPN=\langle P,T,I,O,m_{0},D\rangle\), where \(D\) is a function assigning positive real-valued delays to transitions. Using arc extensions to activate or deactivate PN sections when certain criteria are satisfied is critical for control. Standard, inhibitor, and activator arcs (illustrated as dashed vectors; [30, 31]) are used across the literature. Arc extensions boost the baseline model's simulation capacity by allowing more complex ideas with fewer node connections.

### _Vehicle-to-Cloud Based Advanced Driver Assistance Systems_

The authors create a two-layer digital twin for connected cars [21]: the cyber layer sits on top of the physical layer, and a communication module links the two levels of the framework; in this study the communication is cellular. The physical layer of the digital twin system, specified on a world coordinate system throughout time, may contain cars, components, drivers, passengers, roadway infrastructure, meteorology, other road users, etc. Sensors and actuators are key elements of this layer. Vehicle speed, driver gaze, and traffic signal status can be detected by the sensors. Cyberspace processes data from the communication module, and through the same module, cyberworld entities and processes act back on the physical world. People can drive ADAS-equipped connected cars. Cyberworld actuation guidance directs the automatic controller or human driver of connected cars to make cooperative or intelligent maneuvers, enhancing safety, mobility, and sustainability. The digital twin's cyber domain performs the computation for this two-layer system. Physical items and processes have important digital copies. Physical world data is cleaned, integrated, and time-synchronized.
Pre-processed data can be kept in the database for digital traceability or sent to the data mining & knowledge discovery module for machine learning. Data mining and knowledge discovery create the physical world model; vehicle, driving, and traffic simulators can be modeled. Data refreshes cyberworld knowledge, and modeling/simulation aids prediction and decision-making, with the results fed back to the physical actuators to improve system performance.

## V Comparison Analysis

The findings in Table 1 show the different methodologies mentioned in the selected papers. The third column defines the role of DT technologies for the corresponding methods and their use cases. The fourth column includes the evaluation of those methods. The fifth column, finally, includes the future directions of those pieces of literature.

## VI Discussion

### _Application Challenges and Limitations_

The difficulties that may develop, according to the authors, may vary depending on the scope and integration complexity of the application. Based on the research that was analyzed, five major problems with DT technology implementation were found to be universal across all fields. These problems adequately wrap up the investigation, answer supplementary question SQ1, and contribute to answering the primary research question as it stands now.

#### VI-A1 Data-related concerns (trust, privacy, cyber security, convergence and governance, acquisition and large-scale analysis)

If a behavior cannot be reduced to a set of numbers, it becomes far more challenging for designers to replicate it. Examples include ecological stability [32], socioeconomic inequality [33], and political instability [32]. These societal and environmental innovations will focus on preliminary SRL stages, where the potential influence on selected stakeholders, the larger society, and the environment is better understood. In addition, this difficulty is associated with levels 3 and 4 of the maturity spectrum, where the complexity of DT implementations is exacerbated by the need to enrich models in real time and with a bidirectional flow of information.

#### VI-A2 A deficiency in DT implementation standards, guidelines, and regulations

Lack of standards and of acknowledged interoperability, particularly in the manufacturing industry, is cited as a reason for the current restrictions on DT implementations by the authors of [34]. Adopting a widespread, concrete understanding of DTs and their importance requires articles that explain the benefits, define the ideas and structures of DTs, and review the state of the art of the technology. In addition, researchers can influence lower levels of the TRL by focusing on this specific topic through surveys and literature reviews, thereby increasing the dissemination of fundamental principles and concepts.

#### VI-A3 High implementation costs owing to more sensors and computing power

DT implementations are costly; therefore, their use is constrained by the availability of such resources, which is often lacking in impoverished nations [35]. Achieving level 3 on the maturity spectrum is difficult because of the rise in required sensors and the resulting increase in the complexity of data connectivity and processing (where the digital model needs to be enriched with real-time information). Practitioners are hampered by this difficulty in their pursuit of greater TRLs, where pilot systems are proven and DTs are integrated into commercial designs and widespread deployments.
#### Vi-A4 AI and big data for long-term and large-scale data analysis
Big data algorithms and Internet of Things technology are powerful allies that can provide significant support to successful implementations of DT [36], since DT systems generate and analyze a significant amount of data and these two technologies work hand in hand. In addition, the information flowing from the many levels of indicator systems creates a problem in the process of defining uniform rules and standards. This challenge targets levels 4 and 5 of the maturity spectrum; if it is met, it may make it possible to enable a bidirectional flow of information, control of the physical world from the digital model, and even autonomous operations and asset maintenance.

#### Vi-A5 Challenges associated with communication networks
The development of superior communication standards like 5G is essential. In [37], the authors discuss the importance of enabling real-time data connectivity and operational efficiency for the DT, as well as other benefits of using 5G technology in smart cities, such as the ability to connect many more sensors and devices at high speeds with ubiquitous connectivity, improved reliability and redundancy, and ultra-low power consumption.

## VII Conclusion
Digital twin technology opens up new opportunities for the development of sustainable electric vehicle technologies in terms of both cost-effectiveness and efficiency, from the design phase through operation. Because its sister technologies, such as the Internet of Things, decentralized wireless networking, and artificial intelligence, were less mature when digital twin technology was first conceived, the car industry has only recently begun to implement it. Nevertheless, a prime period for the development of digital twin technologies and other smart development approaches has now begun. The next decade will also mark a significant turning point in human history as a result of the extraordinary environmental difficulties that will be faced, so the fundamental objective of the research community should be to foster sustainable technology and the enablers of such technologies. With this goal in mind, the purpose of this review is to demonstrate the implementation of digital twin technology in the automobile industry, with smart electric vehicles serving as the background for the discussion. The article presents the history of digital twin technology, its development, and the many stages of its deployment, focusing on digital twin technologies that have been adapted for smart electric vehicle use cases, including predictive mobility, autonomous motion control, driver assistance systems, vehicle-as-a-service systems, Petri net models, and vehicle-to-cloud-based driver assistance systems.
2304.06062
Quantum Phases in the Honeycomb-Lattice $J_1$--$J_3$ Ferro-Antiferromagnetic Model
Using large-scale density-matrix renormalization group calculations and minimally augmented spin-wave theory, we demonstrate that the phase diagram of the quantum $S\!=\!\frac12$ $J_1$--$J_3$ ferro-antiferromagnetic model on the honeycomb lattice differs dramatically from the classical one. It hosts the double-zigzag and Ising-z phases as unexpected intermediaries between ferromagnetic and zigzag states that are also extended beyond their classical regions of stability. In broad agreement with quantum order-by-disorder arguments, these collinear phases replace the classical spiral state.
Shengtao Jiang, Steven R. White, A. L. Chernyshev
2023-04-12T18:00:00Z
http://arxiv.org/abs/2304.06062v4
# Quantum Phases in the Honeycomb-Lattice \(J_{1}\)-\(J_{3}\) Ferro-Antiferromagnetic Model

###### Abstract
Using large-scale density-matrix renormalization group calculations and minimally augmented spin-wave theory, we demonstrate that the phase diagram of the quantum \(S\!=\!\frac{1}{2}\)\(J_{1}\)-\(J_{3}\) ferro-antiferromagnetic model on the honeycomb lattice differs dramatically from the classical one. It hosts the double-zigzag and Ising-z phases as unexpected intermediaries between ferromagnetic and zigzag states that are also extended beyond their classical regions of stability. In broad agreement with quantum order-by-disorder arguments, these collinear phases replace the classical spiral state.

_Introduction._--Ever since Anderson's seminal work on the resonating valence-bond state, the significant role that can be played by quantum fluctuations in magnets with competing interactions has remained at the forefront of condensed matter physics, inspiring a multitude of quests for exotic states, models that can realize them, and real materials that can host them. The elusive spin-liquid states with strongly entangled spins are but one example; others include valence-bond phases with spatial symmetry breaking, multipolar spin states that are quantum analogues of liquid crystals, and an especially extensive class of unconventional magnetically _ordered_ phases that do not appear in the classical solutions of the underlying spin models.

Our phase diagram for the \(S\!=\!\frac{1}{2}\)\(J_{1}^{\Delta}\)-\(J_{3}\) model is given in Fig. 1(b). The solid lines are phase boundaries interpolating transition points obtained from the long-cylinder DMRG "scans" by varying \(J_{3}\) or \(\Delta\), as well as from more precise measurements.
The dashed lines are phase boundaries of the same phases obtained by MAGSWT, with both approaches described below. The qualitative agreement between these approaches is quite remarkable. Both methods produce the classically unstable dZZ and Iz phases, both expand the FM and ZZ phases beyond their classical ranges, and both eliminate the Sp phase. These findings are also in broad agreement with order-by-disorder arguments [20; 27], which generally favor collinear phases. We note that recent studies of related models also found the Sp phase to be absent [54; 55]. However, we disagree with these works on the existence and extent of the quantum phases. The \(U(1)\)-preserving Iz phase, with spins ordered Neel-like along the \(z\) axis, was first discovered in the \(XY\)\(J_{1}\)-\(J_{2}\) AFM-AFM model [50], where Iz order is stabilized solely by quantum effects with no exchange coupling favoring it. In our case, we find the \(z\)-axis component of the \(J_{3}\)-exchange in the \(J_{1}^{\Delta}\)-\(J_{3}\) model crucial for stabilizing the Iz phase in a wide range of parameters, see Fig. 1(b). In contrast to Ref. [55], we find only a very narrow Iz phase in the \(J_{1}^{\Delta}\)-\(J_{3}^{\Delta}\) model. Claims of spin-liquid phases in this model [54; 55] are also not supported. The dZZ phase has been recently reported experimentally [36] and found favored by the _bond-dependent_ extensions of the \(XY\)\(J_{1}^{\Delta}\)-\(J_{3}^{\Delta}\) model [38; 39]. Instead, we find the dZZ phase already in the Heisenberg limit of the principal \(J_{1}\)-\(J_{3}\) model (1), see Fig. 1(b).

_DMRG calculations._--DMRG calculations were performed on the \(L_{x}\)\(\times\)\(L_{y}\)-site honeycomb-lattice open cylinders of width \(L_{y}\) up to 16 (8 honeycomb cells), using the ITensor library [56]. The majority of the results were obtained on the so-called X-cylinders (XC) [49], in which the first-neighbor bond is horizontal, while both X- and Y-cylinders (YC) were used for more delicate phases [57]. We allow for a spontaneous breaking of the spin \(U(1)\) symmetry [58], enabling us to measure the local ordered moment \(\langle\mathbf{S}_{i}\rangle\) instead of the correlation function. Our main exploratory tool is the long-cylinder "scan," in which one parameter, \(J_{3}\) or \(\Delta\), is varied along the length of the cylinder with \(L_{x}\) up to 40. It provides 1D cuts through the 2D phase diagram [59; 60; 61; 62; 63], see Fig. 2, which give approximate phase boundaries. By narrowing the parameter ranges of the scans one can determine the boundaries with increased precision, distinguish first- and second-order transitions [15], and uncover hidden phases. In cases when the phase boundary is less obvious, we utilize fixed-parameter (non-scan) calculations on clusters up to 16\(\times\)16, with an aspect ratio that closely approximates the 2D thermodynamic limit [64]. In Fig. 2, we present two long-cylinder scans for the \(J_{1}^{\Delta}\)-\(J_{3}\) model (1), one in the Heisenberg limit, \(\Delta\!=\!1\), and the other in the \(XY\) limit, \(\Delta\!=\!0\), vs \(J_{3}\). In the Heisenberg limit, Fig. 2(a), the transition from FM to ZZ is very sharp and the FM phase seems to terminate right at the classical boundary of this state, \(J_{3}^{\rm cl}\!=\!0.25\).

Figure 2: Long-cylinder scans of the \(J_{1}^{\Delta}\)–\(J_{3}\) model (1) vs \(J_{3}\) in the (a) Heisenberg (\(\Delta\!=\!1\)) and (b) \(XY\) (\(\Delta\!=\!0\)) limits. The arrows show the local ordered moment \(\langle\mathbf{S}_{i}\rangle\). FM, ZZ, and Iz phases are indicated, and transitions are determined as described in the text. The honeycomb lattice is in the \(xy\) plane, while the spins shown in the figure are in the \(xz\) plane.
However, one would expect that the FM phase should retreat from this boundary, as the competing ZZ state is fluctuating in the Heisenberg limit, while the FM state is exact. The subsequent analysis reveals a hidden intermediate dZZ state, discussed next. We note that the scan calculation in Fig. 2(a) misses it not only due to the narrow region of the dZZ phase, but also because of the high symmetry of the model in the Heisenberg limit, which requires additional effort to avoid metastable states. Fig. 2(b) for the \(XY\) limit shows transitions from the FM to Iz and from Iz to ZZ vs \(J_{3}\). By using scans in narrower ranges of \(J_{3}\), we verify that the spiral-like spin patterns in the transition regions in Fig. 2(b) are proximity effects of the neighboring phases, not additional phases. The phase boundaries shown in Fig. 2(b) and used in the phase diagram in Fig. 1(b) are the crossing points of the order parameters vs \(J_{3}\) [65]. The error bars are the width of the transition region in the scans, where a discontinuous transition is assigned a width equal to the parameter change over one lattice spacing. In the Heisenberg limit, the three states, FM, dZZ, and ZZ, compete in the proximity of the classical FM boundary \(J_{3}\!=\!0.25\). Because of the high spin-symmetry of the model, and depending on the initial state, all three can be stabilized in the non-scan DMRG simulations, such as the one shown in Fig. 3(a) for \(J_{3}\!=\!0.24\) in the 16\(\times\)16 cluster. As is shown in Fig. 3(b), the energy of the dZZ is the lowest, with the FM and ZZ being metastable, suggesting that the transitions between the corresponding phases are first order. To identify their phase boundaries, we compare the energies of these three states as a function of \(J_{3}\) using extrapolations based on the spin-spin correlations extracted at \(J_{3}\!=\!0.24\) from the center of the cluster for each of the states. While the FM line is exact in this limit, the extrapolated energies for ZZ and dZZ are also very close to the ones given by a direct DMRG calculation at a different value of \(J_{3}\), justifying the analysis, see Fig. 3(b). The dZZ phase is found to be confined between \(J_{3}\!=\!0.2333\) and 0.2596. The lower spin-symmetry away from the Heisenberg limit helps to reveal the dZZ phase more readily, see Fig. 4(a) for a long-cylinder scan along the \(\Delta\) axis at fixed \(J_{3}\!=\!0.25\), confirming the presence of this phase in an extended region of the phase diagram in Fig. 1. A similar \(\Delta\)-scan for \(J_{3}\!=\!0.4\) in Fig. 4(b) complements the \(J_{3}\)-scans in establishing the boundaries of the Iz phase. By using a combination of narrower ranges of the scans and fixed-parameter non-scans, we find that the dZZ phase persists somewhat below \(\Delta\!=\!0.5\) while the Iz phase ends close to \(\Delta\!=\!0.4\), where the FM-to-ZZ transition appears to be direct, see Fig. 1 and [65]. Although we cannot completely rule out the Iz state for \(\Delta\!=\!0.4\), it must be extremely narrow if it exists.
_Minimally-augmented spin-wave theory.--_The standard SWT is successful at accounting for quantum effects in the ordered states [66], but cannot describe either the ordered phases that are not classically stable, or the shifts of the phase boundaries by quantum fluctuations. An analytical approach to address this problem, originally proposed for the classically unstable field-induced states in the transverse-field Ising and frustrated Heisenberg models [67; 68; 69], can be successfully applied here. The method consists of introducing a local field in the direction of the ordered moment \({\bf n}_{i}\) for the proposed (unstable) classical spin configurations, leading to a shift of the chemical potential in the bosonic SWT language \[\delta{\cal H}=\mu\sum_{i}\left(S-{\bf S}_{i}\cdot{\bf n}_{i}\right)=\mu\sum_{ i}a_{i}^{\dagger}a_{i}, \tag{2}\] while leaving the classical energy of the state unchanged. The _minimal_ value of \(\mu\) is chosen to ensure stability of the spectrum, i.e., that the squares of all eigenvalues of the SWT matrix are positive definite. Then, the energy of the proposed spin state, \({\cal E}\!=\!E_{cl}+\delta E\), with the \(1/S\)-correction to the groundstate energy \(\delta E\), is well-defined and can be compared with the energies of the competing states calculated to the same \(O(S)\) order. The power of the method, coined as the _minimally augmented SWT_ (MAGSWT), is not only in its simplicity, but in the form of Eq. (2), which guarantees that its contribution to the Hamiltonian is positive definite for \(\mu\!>\!0\). In turn, this implies that the so-obtained groundstate energy \({\cal E}\) is an _upper bound_ for the energy of the suggested spin state to the order \(O(S)\). This method allows one to consider the phase beyond its classical range of stability and to inspect states that are classically not competitive, but can lower their energy due to quantum fluctuations. The new phase boundaries are determined from the crossings of the energies \(\mathcal{E}\) for the competing phases as a function of the varied parameter(s). We note that MAGSWT may not be applied to an arbitrary classically-unstable state [69], with the absence of the linear-bosonic terms in the \(1/S\)-expansion for a given state being a sufficient criterion of its applicability.

Figure 3: (a) Ordered moments in the 16\(\times\)16 non-scan cluster for \(J_{3}\!=\!0.24\), showing the dZZ pattern. (b) Energies of the three competing phases vs \(J_{3}\); crosses are DMRG results and higher-energy states are metastable. Lines are extrapolated energies, \(\langle\psi_{i}|H(J_{3})|\psi_{i}\rangle\), where \(\psi_{i}\) are the three states at \(J_{3}\!=\!0.24\).

_MAGSWT results._--In the case of the \(XXZ\)\(J_{1}^{\Delta}\)-\(J_{3}\) model (1), all four competing phases of interest are collinear, which guarantees the absence of the linear-bosonic terms, while the non-collinear Sp state is not the subject of MAGSWT, as it corresponds to a minimum of the classical energy in its entire possible range of existence. The technical procedure of extracting minimal \(\mu\) vs \(J_{3}\) and \(\Delta\) for each phase is discussed in Ref. [65]. We note that the limiting \(XY\) and Heisenberg cases and select momenta are useful for obtaining analytical expressions for \(\mu(J_{3},\Delta)\), eliminating the need for a numerical scan of the momentum space for spectrum instabilities.
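Schematically, the minimal-\(\mu\) search can also be organized as a simple bisection. The sketch below is illustrative only: `squared_eigs` is a hypothetical stand-in for the model-specific routine returning the squared eigenvalues of the SWT matrix at momentum `k` for a given `mu` (not reproduced here), and the onset of stability is assumed to be monotone in `mu`.

```python
import numpy as np

def minimal_mu(squared_eigs, k_grid, mu_max=5.0, tol=1e-8):
    """Smallest mu >= 0 for which all squared SWT eigenvalues are
    non-negative over the momentum grid (spectrum stability).

    `squared_eigs(mu, k)` is a hypothetical stand-in for the
    model-specific Bogoliubov routine."""
    def stable(mu):
        return all(np.min(squared_eigs(mu, k)) >= 0.0 for k in k_grid)

    if stable(0.0):
        return 0.0                  # state already stable: standard SWT applies
    lo, hi = 0.0, mu_max            # assumes stability is monotone in mu
    assert stable(hi), "increase mu_max"
    while hi - lo > tol:            # bisect for the minimal stabilizing shift
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if stable(mid) else (mid, hi)
    return hi
```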
With that, the energy surfaces \(\mathcal{E}(J_{3},\Delta)\) are readily obtained for each phase and the MAGSWT phase boundaries are drawn from the intersections of such surfaces. The resulting phase boundaries are shown in Fig. 1(b) by the dashed lines. Most, if not all, of the features already discussed above are present. The noncollinear Sp phase is not effective at benefiting from quantum fluctuations, in agreement with the order-by-disorder arguments [20], and is wiped out. The classically-unstable dZZ and Iz phases are extensive and both FM and ZZ expand beyond their classical borders. A close quantitative agreement with the DMRG phase boundaries can also be observed, with most discrepancies concerning the borders of the less-fluctuating FM phase [65]. Otherwise, the entire picture for the \(J_{1}^{\Delta}\)-\(J_{3}\) model in Fig. 1(b) is in rather astonishing agreement with the numerical data.

_The \(J_{1}^{\Delta}\)-\(J_{3}^{\Delta}\) model._--The phase diagram of the full \(XXZ\) model (1) with equal anisotropies in both terms, obtained using the same methods as described above, is presented in Fig. 5. It repeats most of the trends of the partial \(XXZ\) model in Fig. 1(b), such as the absence of the Sp phase, expansion of the FM and ZZ, and the presence of the two unconventional phases, Iz and dZZ. In contrast to the recent studies [54; 55], our results do not support the proposed spin-liquid states in the Heisenberg [55] or strongly-anisotropic (\(\Delta\!=\!0.25\)), nearly-\(XY\) [54] limits. The \(J_{3}\)-width of the quantum Iz phase in the same \(XY\) limit (\(\Delta\!=\!0\)) is also an order of magnitude narrower in our case than the one suggested in [55], likely based on overly generous estimates. While the first of the quantum phases, dZZ, which was missed by previous works due to small cluster sizes or the approximate nature of their approaches [55], is nearly the same in the partial and full \(XXZ\) models in Fig. 1(b) and Fig. 5, respectively, the Iz phase is substantially more tenuous. In fact, the initial DMRG scans showed a direct FM-ZZ transition, with some possible narrow intermediate state. Dedicated non-scans in that region did uncover short-range correlations in both XC and YC clusters [65], not unlike the ones reported in Ref. [54]. However, these spin-liquid suspects either order upon increasing the cylinder width (XC), or indicate a sufficiently robust Iz order in the range of \(J_{3}\)=0.315-0.325 for \(\Delta\!=\!0.25\) and \(J_{3}\)=0.34-0.36 for \(\Delta\!=\!0\), see [65]. It is worth noting that MAGSWT in the \(XY\) limit of the full \(XXZ\) model shows the strongly fluctuating Iz phase to be a close, but ultimately insufficient, competitor, rendering it absent from its version of the phase diagram in Fig. 5.

_Summary._--In this letter, we have studied the emergence of quantum phases that are not classically stable within a simple model of great current interest. We have combined state-of-the-art DMRG and analytical approaches to obtain conclusive phase diagrams of this model. It is established beyond any reasonable doubt that the two unconventional quantum phases occupy a significant portion of this diagram, with the known phases also extending well beyond their classical regions and completely replacing the less-fluctuating noncollinear phase.
The results of the analytical MAGSWT approach are shown to be in close accord with the numerical DMRG data, providing additional insight into the energetics of the quantum stabilization of the non-classical phases and offering a systematic path for the exploration of similar models. The proposed phase diagrams have direct relevance to a group of novel materials and provide important guidance to the ongoing theoretical and experimental searches for unconventional quantum states.

_Acknowledgments._--The work of S. J. and S. R. W. was supported by the NSF through grant DMR-2110041. The work of A. L. C. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0021221.
2306.08636
Using Wikipedia Editor Information to Build High-performance Recommender Systems
Wikipedia has high-quality articles on a variety of topics and has been used in diverse research areas. In this study, a method is presented for using Wikipedia's editor information to build recommender systems in various domains that outperform content-based systems.
Katsuhiko Hayashi
2023-06-14T17:05:57Z
http://arxiv.org/abs/2306.08636v1
# Using Wikipedia Editor Information to Build High-performance Recommender Systems

###### Abstract
Wikipedia has high-quality articles on a variety of topics and has been used in diverse research areas. In this study, a method is presented for using Wikipedia's editor information to build recommender systems in various domains that outperform content-based systems.

**Keywords:** recommendation, collaborative filtering, Wikipedia, editor information, user preference

## Introduction
Wikipedia is an online encyclopedia that can be edited by anyone, and due to the ease of editing and the large number of editors, there are high-quality articles on a variety of topics (hereafter "entities"). Information obtained from Wikipedia has thus been used in various research fields. Among them, entity similarity estimation is a hot research topic that has many applications such as recommendation systems [13] and search engines [1]. To estimate similarities between entities, it is necessary to extract entity features. A commonly used source for feature extraction is Wikipedia's content information, such as text abstracts and hyperlinks (see Figure 1). Given that Wikipedia is an encyclopedia, one of its basic policies is that content must be written from a "neutral point of view", which means that articles are edited by carefully analyzing reliable information sources and eliminating as much editorial bias as possible1. Unlike critiques and reviews, Wikipedia's content is free of personal opinions and impressions. In other words, Wikipedia's content information is limited to superficial attribute information about entities and is unlikely to reflect editors' personal opinions and preferences. Therefore, a drawback of conventional content-based methods is that it is difficult to capture the complex similarities between entities inherent in human preferences.

Footnote 1: https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view

An alternative to content-based methods is collaborative filtering, which is based on user preference information and enables the capture of the complex similarities inherent among entities. It has been developed mainly in the field of recommendation research. Since building a recommender system on the basis of collaborative filtering requires user profile information such as purchase history, the domains in which it can be applied are limited. In this paper, we develop a new collaborative filtering method for estimating similarities between entities in Wikipedia. As shown in Figure 1, Wikipedia keeps a history record of who edited each article, and the proposed method utilizes that information on the editors of articles. Since editors generally edit articles on topics in which they are interested, we can assume from the viewpoint of collaborative filtering that entities corresponding to articles edited by the same editor are similar to each other. The proposed method is thus able to estimate complex similarities between entities that are difficult to capture with content-based methods. We experimentally investigated the effectiveness of the proposed method on three recommendation datasets for movies, music artists, and books. The results show that the proposed method has higher recommendation accuracy than content-based methods using text abstracts, hyperlinks, or categories2.

Figure 1: Examples of information found in English Wikipedia. Images were cited from https://en.wikipedia.org/wiki/Star_Wars and https://en.wikipedia.org/wiki/The_Matrix.
Footnote 2: This paper is an English version of the Japanese domestic conference paper [13].

## Similarity Estimation
We use the EASE [14] model for estimating similarity between entities (i.e., Wikipedia articles). Given \(N\) Wikipedia articles \((D_{1},D_{2},\cdots,D_{N})\), \(D_{i}\) can be represented as a feature vector \([f_{i1},f_{i2},\cdots,f_{iM}]^{\mathrm{T}}\) in which \(f_{ij}=1\) if a feature \(w_{j}\) appears in the Wikipedia article \(D_{i}\) and \(f_{ij}=0\) if not3.

Footnote 3: For \(f_{ij}\), we can also consider the number of times the feature \(w_{j}\) appears in the Wikipedia article \(D_{i}\).

We define a Wikipedia article matrix:

\[\mathbf{F}=\left(\begin{array}{cccc}f_{11}&f_{12}&\cdots&f_{1M}\\ f_{21}&f_{22}&\cdots&f_{2M}\\ \vdots&\vdots&\ddots&\vdots\\ f_{N1}&f_{N2}&\cdots&f_{NM}\end{array}\right)\in\{0,1\}^{N\times M}.\]

We can use word \(n\)-grams, hyperlinks, or editors as features. To estimate a similarity matrix \(\mathbf{B}\in\mathbb{R}^{N\times N}\) with EASE, we formulate the estimation problem as a linear regression:

\[\widehat{\mathbf{B}}=\operatorname*{arg\,min}_{\mathbf{B}}\left\{||\mathbf{F}^{\mathrm{T}}-\mathbf{F}^{\mathrm{T}}\mathbf{B}||_{F}^{2}+\lambda||\mathbf{B}||_{F}^{2}\right\} \tag{1}\]
\[\text{s.t.}\quad\text{diag }(\mathbf{B})=\mathbf{0}.\]

The objective is to obtain a weight (similarity) matrix \(\mathbf{B}\) that reconstructs the Wikipedia article matrix, viewed as \(\mathbf{F}^{\mathrm{T}}\), from \(\mathbf{F}^{\mathrm{T}}\mathbf{B}\) by minimizing the squared loss with L2 regularization (note that \(\mathbf{B}\) must be \(N\times N\) for this product to be defined). However, when \(\mathbf{B}=\mathbf{I}\), the minimization of Eq. (1) can be achieved in an obvious way, and so \(\text{diag }(\mathbf{B})=\mathbf{0}\) is imposed as a constraint condition. This constraint means that all diagonal components of \(\mathbf{B}\) must be zero.
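The constrained regression in Eq. (1) has a well-known closed-form solution, derived in the EASE paper [14]: writing \(\mathbf{P}=(\mathbf{F}\mathbf{F}^{\mathrm{T}}+\lambda\mathbf{I})^{-1}\), the off-diagonal weights are \(\widehat{B}_{ij}=-P_{ij}/P_{jj}\). Below is a minimal NumPy sketch of this solution; the feature matrix shown is a toy stand-in, not data from this study.

```python
import numpy as np

def ease_similarity(F, lam=100.0):
    """Closed-form solution of the EASE problem in Eq. (1).

    F is the N x M binary article-feature matrix; returns the N x N
    similarity matrix B with zero diagonal, using the Gram matrix over
    features so that B relates articles to articles."""
    G = F @ F.T + lam * np.eye(F.shape[0])   # Gram matrix plus L2 term
    P = np.linalg.inv(G)
    B = -P / np.diag(P)                      # B_ij = -P_ij / P_jj for i != j
    np.fill_diagonal(B, 0.0)                 # enforce the diag(B) = 0 constraint
    return B

# Toy stand-in: 4 articles, 5 editors (entry 1 = the editor edited the article).
F = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 1, 0]], dtype=float)
print(ease_similarity(F, lam=1.0).round(3))
```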
## Experiments
### Recommendation Datasets
We used three recommendation datasets for evaluation: (1) MovieLens-20M (ML-20M)4, (2) Last.fm hetrec2011 (Last.fm) [2], and (3) LibraryThing (LT)5.

Footnote 4: https://grouplens.org/datasets/movielens/20m/

Footnote 5: https://github.com/sisinflab/LinkedDatasets

In our experiments, we considered an implicit feedback setting, so users did not need to express their tastes explicitly; in this setting, users' preference scores are treated as binary values. From the Wikipedia articles that correspond to the three types of entities (movies, music artists, and books) of the three datasets, we extracted information about the editors of the articles in English Wikipedia, the editors of the articles in multilingual Wikipedia, the English text abstracts, the hyperlinks, and the categories. Table 1 summarizes the dataset statistics.

### Recommendation Evaluation
We evaluated our proposed method on the recommendation task; the procedure is as follows.

Evaluation Procedure. To evaluate the recommender systems, we split each recommendation dataset into history and answer data. The history data is "past preference" profile information of users. The answer data is "expected preference" profile information for the same users as the history data, but the entities preferred by a user in the answer data are completely different from those in the history data. For each recommendation dataset, the users were randomly divided into five groups, and for each group of users, the preference profiles were split into roughly 80% (history data) and 20% (answer data). The answer data were used for evaluating the recommendations predicted by a recommender system.

Results. We evaluated recommendation performance using two metrics, Recall@\(R\) and nDCG@\(R\), where \(R\) is the number of recommended entities (a minimal sketch of both metrics is given at the end of the paper). Figure 2 shows the results. We report the mean and standard deviation of each metric over the five data splits. Compared with using content-based information, using editor information resulted in better performance for all the datasets.

### Contributions and Findings
We summarize our contributions as follows:

* We presented a new method for using Wikipedia editor information to estimate entity (i.e., article) similarity.
* Our method largely outperforms conventional content-based ones on several recommendation datasets.
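For completeness, here is a minimal sketch of the two metrics used above, assuming binary relevance as in the implicit-feedback setting; the recommendation and answer lists in the example are hypothetical.

```python
import numpy as np

def recall_at_r(ranked, answers, R):
    """Fraction of a user's held-out (answer) entities found in the
    top-R recommendations, with binary relevance."""
    hits = len(set(ranked[:R]) & set(answers))
    return hits / min(R, len(answers))

def ndcg_at_r(ranked, answers, R):
    """Binary nDCG@R: DCG of the top-R list, normalized by the ideal DCG."""
    answers = set(answers)
    dcg = sum(1.0 / np.log2(i + 2) for i, e in enumerate(ranked[:R]) if e in answers)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(R, len(answers))))
    return dcg / idcg

# Hypothetical example: 5 recommended entities, 2 held-out answer entities.
print(recall_at_r([3, 7, 1, 9, 4], answers=[7, 9], R=3))  # 0.5
print(ndcg_at_r([3, 7, 1, 9, 4], answers=[7, 9], R=3))    # ~0.387
```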
2305.11637
Posetal Diagrams for Logically-Structured Semistrict Higher Categories
We now have a wide range of proof assistants available for compositional reasoning in monoidal or higher categories which are free on some generating signature. However, none of these allow us to represent categorical operations such as products, equalizers, and similar logical techniques. Here we show how the foundational mathematical formalism of one such proof assistant can be generalized, replacing the conventional notion of string diagram as a geometrical entity living inside an n-cube with a posetal variant that allows exotic branching structure. We show that these generalized diagrams have richer behaviour with respect to categorical limits, and give an algorithm for computing limits in this setting, with a view towards future application in proof assistants.
Chiara Sarti, Jamie Vicary
2023-05-19T12:36:46Z
http://arxiv.org/abs/2305.11637v4
# Posetal diagrams for logically-structured semistrict higher categories

###### Abstract.
We now have a wide range of proof assistants available for compositional reasoning in monoidal or higher categories which are free on some generating signature. However, none of these allow us to represent categorical operations such as products, equalizers, and similar logical techniques. Here we show how the foundational mathematical formalism of one such proof assistant can be generalized, replacing the conventional notion of string diagram as a geometrical entity living inside an \(n\)-cube with a posetal variant that allows exotic branching structure. We show that these generalized diagrams have richer behaviour with respect to categorical limits, and give an algorithm for computing limits in this setting, with a view towards future application in proof assistants.

[email protected], [email protected]

## 1. Introduction
The development of proof assistants for category theory and higher category theory has recently been an active area for the applied category theory community, in particular from a string diagrammatic perspective. Recent work has included the Cartographer tool of Sobocinski, Wilson and Zanasi applying hypergraph rewriting for symmetric monoidal diagrams [14]; the DisCoPy python library from de Felice et al. for string diagram manipulation [7]; the Rewalt tool by Hadzihasanovic and Kessler for rewriting with diagrammatic sets [8]; the wiggle.py tool due to Burton for 3d string diagram rendering [5]; the Quantomatic system developed by Dixon, Duncan, Kissinger and others for applying the ZX calculus in quantum information [6]; and the Globular [2, 3] and homotopy.io [10, 12] web-based systems for finitely-generated semistrict \(n\)-categories rendered as higher string diagrams. While these tools represent a wide range of perspectives and use-cases, they share a common goal of allowing the user to manipulate terms in a monoidal or higher categorical structure which is freely generated under composition from some signature, perhaps with additional algebraic elements (for example, Frobenius algebras in the case of Quantomatic). The geometrical essence of these proof assistants allows the user to avoid some of the bureaucracy associated with some algebraic approaches to higher categories. However, much of the power of category theory arises from methods that go beyond direct composition, such as products, equalizers, colimits, and other standard categorical structures. These cannot be represented with any of the current family of string diagrammatic proof assistants. Here we explore an alternative foundation for such proof assistants which may suggest a path towards new classes of tools, with the potential to combine the clarity and usability of string diagrammatic techniques with the power of algebraic categorical methods. We illustrate our approach with the _zigzag construction_, a simple formalism underlying the homotopy.io proof assistant, and conjecture that the posetal zigzag construction will yield a notion of higher category with richer categorical structure, but retaining a geometrical essence which could in principle be implemented in a similar tool to homotopy.io.
Establishing this conjecture will require considerable future work. In this paper we take the first steps, giving the first detailed investigation of posetal zigzags. We begin here with an informal illustration of posetal zigzags, generalizing the linear example of Figure 1 above. In Figure 2 we illustrate the posetal zigzag construction for the poset \(\{x\ |\ y\}\), with two disconnected objects. While the linear zigzags in Figure 1 had a linear sequence of time slices (since \(a\) must precede \(b\)), here a richer structure appears, with \(x\) and \(y\) now interpreted as "events" that can occur in either order. Now we have a dual distributive lattice \(D=\operatorname{FPos}(\{x\ |\ y\},\{\bot\to\top\})\) with elements \(\{\}\), \(\{x\}\), \(\{y\}\), \(\{x,y\}\) under the subset order, once again denoting in braces the preimage of \(\top\). Our regular and singular levels are indexed by pairs of related elements \([A,B]\) in \(D\), which we call _intervals_; by construction, \(A,B\) will be subsets of the original poset, with \(A\subseteq B\). For this example there are 9 such intervals, and we list them on the left of Figure 2, once again considering them as distinct "time slices". For example, \([\{x\},\{x,y\}]\) represents the time slice in which \(x\) has already occurred, and \(y\) is in the moment of occurring. More generally, one can interpret any interval \([A,B]\) as the time slice in which the events in \(A\) have already occurred, and the events in \(B\setminus A\) are occurring at that exact moment. On the right of Figure 2, we present these 9 intervals in a different way, as nodes on a diamond grid, interpreted as a diagram in an underlying process category C. Morphisms arise as interval refinements, and we observe the presence of 4 squares, which are required to commute. As we move from the bottom to the top node by various paths, we observe different sequences of time slices: moving clockwise we observe the sequence \([\{\},\{\}]\), \([\{\},\{x\}]\), \([\{x\},\{x\}]\), \([\{x\},\{x,y\}]\), \([\{x,y\},\{x,y\}]\), interpreted as \(x\) occurring before \(y\); moving anticlockwise we observe the sequence \([\{\},\{\}]\), \([\{\},\{y\}]\), \([\{y\},\{y\}]\), \([\{y\},\{x,y\}]\), \([\{x,y\},\{x,y\}]\), interpreted as \(y\) occurring before \(x\); and moving vertically one observes the sequence \([\{\},\{\}]\), \([\{\},\{x,y\}]\), \([\{x,y\},\{x,y\}]\), in which \(x\) and \(y\) occur simultaneously. The posetal zigzag diagram provides semantic data in the base category C for all of these possibilities. The fact that the 4 squares commute means that whatever sequence one chooses, the semantic effect in C is the same. The fact that the squares are pullbacks means that much of this data is in fact redundant; the central part, drawn in gray, can be considered as "filler data" that need not be explicitly stored. Figure 1 contained similar redundant filler data, with the singular level \(S_{a,b}\) required to arise as a pullback of \(S_{a}\) and \(S_{b}\).

Figure 2. The posetal zigzag over the poset \(\{x\ |\ y\}\).

This account is of course informal, and intended to give intuition ahead of the precise mathematical development that follows. In Section 2 we introduce the notion of interval for a poset, and give a notion of labelled diagram structure, with label data assigned to all intervals, but with no pullback conditions imposed on filler data.
In Section 3 we introduce the more refined posetal zigzag construction P(C), where the filler data is now obtained canonically via pullback, yielding a well-behaved conservative extension of the traditional linear zigzag formalism. In Section 4 we consider the construction of limits in posetal zigzag categories, giving an explicit construction procedure, and establishing our main result, Corollary 4.12, which states that if C has all finite limits, then so does P(C). In this way, we establish the posetal zigzag construction as a potential foundation for a new class of geometrical proof assistant, with additional expressive power. In this setting, higher string diagrams would retain their semistrict geometrical essence, yet also exhibit new posetal features such as branches, sinks and forks; \(n\)-dimensional diagrams are no longer inscribed neatly within the \(n\)-cube, as with traditional higher string diagrams. Depending on the poset structure, the sequencing of morphisms within the string diagram would be dynamic, with re-sequencing steps appearing as higher cells. While the future applications of these ideas of course remain speculative, we hope our work may lead to further development of geometrical proof assistants with exciting new capabilities.

### Acknowledgements
We thank Lukas Heidemann, Alex Rice and Calin Tataru for many insightful discussions. We also thank the anonymous reviewers for their invaluable feedback. The first author acknowledges funding from King's College Cambridge. The second author acknowledges funding from the Royal Society.

## 2. The Interval Construction
### From Posets to Labelled Intervals
Our formal development begins with a reconstruction of the categorical origins of the zigzag construction, which we generalise to poset shapes. We will initially focus our analysis on the combinatorial aspects of our theory, setting solid foundations for establishing the theoretical results of Section 4, and allowing the geometric content of our theory to emerge organically as a by-product of our categorical analysis. In our discussion, we study the maps of our diagrams as factored into two parts: one which may change the posetal shape but not the labels, and another which may change the labels, but fixes the shape. By adding the possibility of extra "filler data" in our diagrams, this relabelling map can be specified as a natural transformation between two functors. This sort of decomposition can be expressed naturally in the language of Grothendieck fibrations, unlocking powerful theoretical tools. Without getting too far ahead of our formal development, this Section will build up to a formal description of the construction we depicted in Figure 1 and Figure 2, starting from the following key idea:

**Definition 2.1** (Interval).: An _interval_ in a poset \(P\) is a pair \([a,a^{\prime}]\) of \(a,a^{\prime}\in P\) with \(a\leq a^{\prime}\). We denote the set of intervals in \(P\) by \([P]\) and order it by the precision relation \(\supseteq\):

\[[a,a^{\prime}]\supseteq[b,b^{\prime}]:\longleftrightarrow(a\leq b)\wedge(b^{\prime}\leq a^{\prime})\]

The strict counterpart of this relation is denoted \(\supset\).

**Proposition 2.2**.: _For every finite poset \(P\), \([P]\) is also a finite poset._

Proof.: Finiteness is evident. We must show that \([P]\) inherits transitivity, reflexivity and anti-symmetry from \(P\).
For transitivity, if we have \([a,a^{\prime}]\supseteq[b,b^{\prime}]\) and \([b,b^{\prime}]\supseteq[c,c^{\prime}]\), then we must have \(a\leq b\leq c\) and \(c^{\prime}\leq b^{\prime}\leq a^{\prime}\), so by transitivity of \(P\), we have \(a\leq c\) and \(c^{\prime}\leq a^{\prime}\), and hence \([a,a^{\prime}]\supseteq[c,c^{\prime}]\). Reflexivity is evident, and for anti-symmetry, if we have \([a,a^{\prime}]\supseteq[b,b^{\prime}]\) and \([b,b^{\prime}]\supseteq[a,a^{\prime}]\), then we must have \(a\leq b\leq a\) and \(b^{\prime}\leq a^{\prime}\leq b^{\prime}\), and hence \(a=b\) and \(a^{\prime}=b^{\prime}\) by anti-symmetry of \(P\).

**Example 2.3**.: The interval construction \([P]\) on the poset \(P=\{a<(b\mid c)<d<e\}\) is depicted as the left-most diagram below. The data of the intervals below the gray arrows is easily rendered as the posetal string diagram on the right. This requires us to ignore the witnesses from the higher intervals, which have no clear interpretation on the diagrammatic side.

With anti-symmetry in mind, we specialise our terminology as follows:

**Definition 2.4** (Degenerate Intervals).: We refer to an interval \([a,a^{\prime}]\) in \(P\) as _degenerate_, if \(a=a^{\prime}\), or _non-degenerate_, if \(a<a^{\prime}\).

In the account of this construction offered in [12], degenerate intervals are called regular heights, and non-degenerate intervals are called singular heights. Though the alternative choice of terminology is supported by good motivation, we avoid it in our discussion to spare the reader from unnecessary confusion.

**Definition 2.5** (Map of Intervals).: Let \(f:P\to Q\) be a monotone function between posets. The associated _map of interval posets_ \([f]:[P]\to[Q]\) is the function sending an interval \([a,a^{\prime}]\) in \(P\) to \([f(a),f(a^{\prime})]\).

**Proposition 2.6**.: _Let \(f:P\to Q\) be a monotone function between posets. Then \([f]\) is a well-defined monotone map, and moreover, \([-]\) defines an endofunctor on \(\mathrm{FPos}\)._

Proof.: Let \(f:P\to Q\) be a monotone map. Then \(a\leq a^{\prime}\) implies \(f(a)\leq f(a^{\prime})\), hence if \([a,a^{\prime}]\) is an interval in \(P\), \([f][a,a^{\prime}]\) is an interval in \(Q\). Moreover, if we have \([a,a^{\prime}]\supseteq[b,b^{\prime}]\), i.e. \(a\leq b\) and \(b^{\prime}\leq a^{\prime}\), then \(f(a)\leq f(b)\) and \(f(b^{\prime})\leq f(a^{\prime})\), and thus \([f][a,a^{\prime}]\supseteq[f][b,b^{\prime}]\), which shows \([f]\) is monotone. Finally, the assignment \(P\mapsto[P]\) and \(f\mapsto[f]\) respects identities and composites, and thus makes \([-]\) into an endofunctor on \(\mathrm{FPos}\).

**Lemma 2.7**.: _The functor \([-]\) preserves products._

Proof.: Let \(P\) and \(Q\) be finite posets. An interval in \(P\times Q\) is a pair of pairs \([(a,b),(a^{\prime},b^{\prime})]\) with \(a\leq a^{\prime}\) and \(b\leq b^{\prime}\), and thus defines two intervals \([a,a^{\prime}]\) and \([b,b^{\prime}]\).
Moreover, the assignment and its inverse are monotone:

\[[(a,b),(a^{\prime},b^{\prime})]\supseteq[(c,d),(c^{\prime},d^{ \prime})] \longleftrightarrow(a,b)\leq(c,d)\wedge(c^{\prime},d^{\prime}) \leq(a^{\prime},b^{\prime})\]
\[\longleftrightarrow a\leq c\wedge b\leq d\wedge c^{\prime}\leq a ^{\prime}\wedge d^{\prime}\leq b^{\prime}\]
\[\longleftrightarrow[a,a^{\prime}]\supseteq[c,c^{\prime}]\wedge[b,b ^{\prime}]\supseteq[d,d^{\prime}].\]

We now identify \(\operatorname{FPos}\) with a corresponding full subcategory of \(\operatorname{Cat}\), by identifying each poset \(P\) with the category having the elements of \(P\) as objects, and arrows \(a\to a^{\prime}\) whenever \(a\leq a^{\prime}\). The above data is combined as follows:

**Definition 2.8** (Labelled Interval Category).: Let \(\operatorname{C}\) be a category. The category \(\operatorname{\mathsf{L}(C)}\) of _intervals labelled in \(\operatorname{C}\)_ is defined as the Grothendieck construction of the functor

\[L_{\operatorname{C}}:\operatorname{FPos}^{\operatorname{op}} \longrightarrow\operatorname{Cat}\]
\[P \longmapsto\operatorname{Func}([P],\operatorname{C}),\]
\[f \longmapsto-\circ[f].\]

Explicitly, the objects of \(\operatorname{\mathsf{L}(C)}\) are pairs \((P,X)\), with \(P\in\operatorname{FPos}\), a shape poset, and \(X:[P]\to\operatorname{C}\), a labelling of \([P]\) in \(\operatorname{C}\). As for the morphisms, they are given by pairs \((f,\alpha)\), with \(f:P\to Q\), a change of shape map, and \(\alpha:X\to Y\circ[f]\) a relabelling natural transformation.

**Example 2.9**.: Let \(P\) be the poset \(\{a<(b\,|\,c)<d<e\}\) of Example 2.3 and \(\operatorname{C}\) be the thin category generated by \(\{x<(f\,|\,g\,|\,h)<(\alpha\,|\,\beta\,|\,\gamma)<\mu\}\). Then the pair \((P,X)\) defines an object of \(\operatorname{\mathsf{L}(C)}\), where \(X:[P]\to\operatorname{C}\) is the functor represented in the diagram below:

**Example 2.10**.: As shapes for cells in a higher category, labelled intervals can admit extremely undesirable behaviour: they need not even be connected. For instance, if \(P:=1+1\) is the discrete two-element poset, then \([P]\cong P\) and thus a labelled interval of shape \(P\) simply picks out two objects of \(\operatorname{C}\).

**Example 2.11**.: A map of labelled intervals can be decomposed as a change of shape monotone function and a relabelling natural transformation. In the diagram below, the monotone map is between the posets \(P:=\{\bot\to\top\}\) and \(Q:=\{\text{a}<(\text{b}\mid\text{c})<\text{d}\}\), and acts by \(\bot\mapsto a,\top\mapsto d\). The relabelling has components \(\operatorname{id}_{x}\), \(f\to g\) and \(\operatorname{id}_{y}\).

By virtue of its definition through a Grothendieck construction, we have an underlying shape functor \(U:\operatorname{\mathsf{L}(C)}\to\operatorname{FPos}\), which is a Grothendieck fibration. If we adopt the convention of writing \((x^{0},x^{1})\) for the components of an object \(x\in\operatorname{\mathsf{L}(C)}\), and similarly for morphisms, then the action of \(U\) is simply specified by taking first projections \(x\mapsto x^{0},f\mapsto f^{0}\). Given a functor \(F:\mathrm{C}\to\mathrm{D}\), we may define a post-composition-with-\(F\) mapping \(\mathsf{L}(F):\mathsf{L}(\mathrm{C})\to\mathsf{L}(\mathrm{D})\), which acts by \((P,X)\mapsto(P,F\circ X)\) and \((f,\alpha)\mapsto(f,F\alpha)\).
Then \(\mathsf{L}(F)\) is a functor, and moreover it allows us to make an assignment \(\mathsf{L}(-):\mathrm{C}\mapsto\mathsf{L}(\mathrm{C}),F\mapsto\mathsf{L}(F)\), which is an endofunctor on \(\mathrm{Cat}\), with the usual caveats about size concerns, which we trust our readers to judge unproblematic. In particular, this allows us to iterate the construction, as \(\mathsf{L}(\mathrm{C})\) can itself be taken as a category of labels, and we may thus define iterated versions of this construction by setting \(\mathsf{L}^{n+1}(\mathrm{C}):=\mathsf{L}(\mathsf{L}^{n}(\mathrm{C}))\).

It seems useful at this point, before drawing this Section to an end, to briefly summarise our progress towards the overarching goal of posetal diagrams. The interval construction introduced in Definition 2.1 captures the combinatorial aspect of our intended construction remarkably well. Unfortunately, as we have seen in Example 2.3, the interval construction introduces labels which are foreign to the diagrammatic calculus. In keeping with our aim of being able to reconstruct the combinatorial object from the geometric representation, we need to ensure that those labels do not carry information that cannot be inferred from the explicit datum of a diagram. We will see in the next Section that this canonicity requirement can be succinctly stated in terms of limit constructions.

## 3. Posetal Diagrams
### Posetal Diagrams as Local Functors
In the previous Section we presented the construction of the category of labelled intervals, arguing that it correctly captures the desired combinatorics of our theory. However, this approach requires too much filler data to be specified, preventing a clean diagrammatic presentation of our structures. Drawing intuition from Example 2.3, we wish to find technical conditions under which the missing filler data in the diagrams can be faithfully reconstructed. A close inspection of Example 2.3 and Example 2.9 shows that all the intervals whose data we wish to suppress satisfy the universal property of being a pullback in \([P]\). This would suggest we consider labellings \(X:[P]\to\mathrm{C}\) which preserve pullbacks. However, this condition is far too strong for our interest: \([P]\) is a thin category, and thus every arrow is monic. If \(X\) were to preserve all pullbacks, it could only involve monic arrows in \(\mathrm{C}\), which is exceedingly restrictive, especially in light of our wishes to iterate the construction. We will thus exercise some care in determining exactly which pullbacks in \([P]\) should be preserved:

**Definition 3.1** (Atomic Cospan, Local Functor).: Let \(\mathrm{J}\), \(\mathrm{C}\) be categories. A cospan \((a\stackrel{{ f}}{{\to}}x\stackrel{{ g}}{{\leftarrow}}b)\) in \(\mathrm{J}\) is _atomic_ if for any cospan \((a\stackrel{{ f^{\prime}}}{{\to}}y\stackrel{{ g^{ \prime}}}{{\leftarrow}}b)\) and map \(h:y\to x\) with \(h\circ f^{\prime}=f\) and \(h\circ g^{\prime}=g\), the map \(h\) is an isomorphism. We say a functor \(X:\mathrm{J}\to\mathrm{C}\) is _local_ if it preserves all pullbacks of atomic cospans.

**Example 3.2**.: If \(f\) is not an isomorphism, then \((a\stackrel{{ f}}{{\to}}x\stackrel{{ f}}{{\leftarrow}}a)\) is never atomic. Hence non-iso monic arrows need not be preserved by local functors.

**Example 3.3**.: The cospan \([b,e]\supset[e,e]\subset[c,e]\) in our Example 2.3 is also not atomic.
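Since these notions are finite and purely combinatorial, they can be checked mechanically on small examples. The following Python sketch, with an encoding of our own choosing (illustrative only), computes the interval poset of Definition 2.1 and tests atomicity per Definition 3.1 in the thin category \([P]\), for the down-set lattice of \(\{x\,|\,y\}\) from the introduction; its outputs agree with Example 3.2 and with the join characterization of atomic cospans established in Proposition 3.8 below.

```python
# P is the lattice {}, {x}, {y}, {x,y}, encoded as frozensets under inclusion.
P = [frozenset(), frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})]

def leq(a, b):
    return a <= b                               # the order of P (inclusion)

def intervals(P):
    # Definition 2.1: pairs [a, a'] with a <= a'.
    return [(a, a2) for a in P for a2 in P if leq(a, a2)]

def contains(I, J):
    # The precision relation I ⊇ J: I.lo <= J.lo and J.hi <= I.hi,
    # i.e. an arrow I -> J in the thin category [P].
    return leq(I[0], J[0]) and leq(J[1], I[1])

def atomic(u, v, x, IP):
    # In a thin category the cospan u -> x <- v is atomic exactly when
    # every competing cospan vertex y with an arrow y -> x equals x.
    assert contains(u, x) and contains(v, x)
    return all(y == x for y in IP
               if contains(u, y) and contains(v, y) and contains(y, x))

IP = intervals(P)
u = (frozenset(), frozenset({"x"}))             # [{}, {x}]
v = (frozenset(), frozenset({"y"}))             # [{}, {y}]
j = (frozenset(), frozenset())                  # [{}, {}], the join of u and v
print(atomic(u, v, j, IP))                      # True: the vertex is the join
print(atomic(u, u, j, IP))                      # False: cf. Example 3.2
```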
**Example 3.4**.: The labelling of the diamond \(\{a<(b\,|\,c)<d\}\) in Set depicted below is a local functor:

Though this technical condition seems cumbersome to check explicitly, a remarkable fact about our formalism is that for a rather large class of poset shapes we almost never actually need to do this. We will prove in the remainder of the paper that so long as the diagrams are constructed by taking finite limits of other local diagrams of the right shape, we will remain within this fragment of our framework. For our interest in using this combinatorial framework as the basis of a future proof assistant, this shows we can maintain strong invariants on our data structures, so that any actual implementation of our procedures could drastically reduce the data it needs to explicitly keep track of. Another, even more surprising property of our formalism is that this class of well-behaved poset shapes can be morally taken to be the whole of FPos, albeit viewed through a looking glass. This is because we may take our well-behaved posets to be finite distributive lattices, by which we mean posets \(P\) admitting finite meets and joins and satisfying the condition that for all \(a,b,c\in P\) we have \(a\wedge(b\lor c)=(a\wedge b)\vee(a\wedge c)\) and \(a\vee(b\wedge c)=(a\lor b)\wedge(a\lor c)\). Such posets assemble into a non-full subcategory FDLat of FPos by taking as maps monotone functions \(f:P\to Q\) preserving finite meets and joins. Note that, in particular, our lattices are always bounded, and moreover if \(f\) is a lattice homomorphism then we must have \(f(\bot)=\bot\) and \(f(\top)=\top\). Although in our formal development some individual results would hold with weaker regularity conditions, the category FDLat enjoys the property of being equivalent to FPos\({}^{\text{op}}\) by the Birkhoff Representation Theorem [16, page 262]. The sum of these wonderful properties suggests the following definition:

**Definition 3.5** (Labelled Posetal Diagram).: The category \(\mathsf{P}(\mathsf{C})\) of _posetal diagrams labelled in_ \(\mathsf{C}\) is defined to be the subcategory of labelled intervals \(\mathsf{L}(\mathsf{C})\) with objects pairs \((P,X)\) with \(P\) being a distributive lattice and \(X\) a local functor, and morphisms pairs \((f,\alpha)\), with \(f\) a lattice homomorphism.

### Intervals in Lattices
Having formally introduced our key notion of posetal diagrams in Section 3.1, we will now embark on a fine-grained analysis of the relation between logical properties of a poset \(P\) and the locality property of the labelled interval with shape \(P\). Our key result for this Section will be an explicit characterisation of atomic cospans in \([P]\) for distributive lattices. This will allow us to extract useful consequences about the preservation of locality under precomposition with interval maps associated with lattice homomorphisms. To do this, however, we first require some intermediate lemmas.

**Lemma 3.6**.: _Let \(P\) be a finite poset with binary meets and joins. Then, for any two intervals \([a,a^{\prime}]\) and \([b,b^{\prime}]\) in \([P]\), the meet \([a,a^{\prime}]\wedge[b,b^{\prime}]\) exists and is given by \([a\wedge b,a^{\prime}\lor b^{\prime}]\)._

Proof.: We have \(a\wedge b\leq a\) and \(a^{\prime}\leq a^{\prime}\lor b^{\prime}\), and similarly for \([b,b^{\prime}]\), hence \([a\wedge b,a^{\prime}\lor b^{\prime}]\) is an interval and \([a\wedge b,a^{\prime}\lor b^{\prime}]\supseteq[a,a^{\prime}],[b,b^{\prime}]\).
Moreover, for any other interval \([c,c^{\prime}]\) in \(P\) with \([c,c^{\prime}]\supseteq[a,a^{\prime}],[b,b^{\prime}]\), we must have \(c\leq a\) and \(c\leq b\), and thus \(c\leq a\wedge b\). The dual calculation shows \([c,c^{\prime}]\supseteq[a\wedge b,a^{\prime}\lor b^{\prime}]\).

**Lemma 3.7**.: _Let \(P\) be a finite poset with binary meets and joins. Then, for any two intervals \([a,a^{\prime}]\) and \([b,b^{\prime}]\) in \([P]\), the join \([a,a^{\prime}]\vee[b,b^{\prime}]\) exists iff \(a\lor b\leq a^{\prime}\wedge b^{\prime}\), in which case it is given by \([a\lor b,a^{\prime}\wedge b^{\prime}]\)._

Proof.: Assume \([a,a^{\prime}]\vee[b,b^{\prime}]\) exists and is equal to some interval \([c,c^{\prime}]\). Then in particular we have \(a\leq c\) and \(b\leq c\), and thus \(a\lor b\leq c\), and dually \(c^{\prime}\leq a^{\prime}\wedge b^{\prime}\). Since \(c\leq c^{\prime}\), we must have \(a\lor b\leq a^{\prime}\wedge b^{\prime}\). We establish the other direction by verifying the universal property of the interval \([a\lor b,a^{\prime}\wedge b^{\prime}]\), whenever it exists. We have \([a,a^{\prime}]\supseteq[a\lor b,a^{\prime}\wedge b^{\prime}]\), and similarly for \([b,b^{\prime}]\), and moreover for every interval \([c,c^{\prime}]\) satisfying the containments \([a,a^{\prime}]\supseteq[c,c^{\prime}]\subseteq[b,b^{\prime}]\), we must have \([a\lor b,a^{\prime}\wedge b^{\prime}]\supseteq[c,c^{\prime}]\) by the above verifications.

**Proposition 3.8**.: _Let \(P\) be a finite lattice. Then a cospan \([a,a^{\prime}]\supseteq[b,b^{\prime}]\subseteq[c,c^{\prime}]\) in \([P]\) is atomic iff \(a\lor c\leq a^{\prime}\wedge c^{\prime}\) and \([b,b^{\prime}]=[a\lor c,a^{\prime}\wedge c^{\prime}]\)._

Proof.: In the forward direction, we have that \(a\leq b\) and \(c\leq b\), hence \(a\lor c\leq b\) and dually \(b^{\prime}\leq a^{\prime}\wedge c^{\prime}\). In particular, \([a\lor c,a^{\prime}\wedge c^{\prime}]\supseteq[b,b^{\prime}]\), and thus we have \([a\lor c,a^{\prime}\wedge c^{\prime}]=[b,b^{\prime}]\) by atomicity and anti-symmetry. The backward direction is given by the universal property of \(a\lor c\) and \(a^{\prime}\wedge c^{\prime}\).

This yields an immediate but essential consequence for our forthcoming study of limits of posetal diagrams in Section 4.2:

**Corollary 3.9**.: _If \(f:P\to Q\) is a lattice homomorphism, \([f]\) preserves atomic cospans and their pullbacks._

Proof.: By Proposition 3.8, a cospan \([a,a^{\prime}]\supseteq[b,b^{\prime}]\subseteq[c,c^{\prime}]\) is atomic iff \([b,b^{\prime}]=[a\lor c,a^{\prime}\wedge c^{\prime}]\). But lattice homomorphisms preserve binary meets and joins, so \([f][b,b^{\prime}]=[f(a)\lor f(c),f(a^{\prime})\wedge f(c^{\prime})]\), hence the cospan \([f][a,a^{\prime}]\supseteq[f][b,b^{\prime}]\subseteq[f][c,c^{\prime}]\) is atomic. Moreover, \([P]\) is a thin category, so the pullback of our atomic cospan coincides with the product of \([a,a^{\prime}]\) and \([c,c^{\prime}]\). By Lemma 3.6, this is given by \([a\wedge c,a^{\prime}\lor c^{\prime}]\), and thus it is preserved by \([f]\).

## 4. Limits of Posetal Diagrams
### Limit Procedure
Having given the construction of the category \(\mathsf{L}(\mathrm{C})\) in our preceding Section, we will now describe a procedure to compute limits in \(\mathsf{L}(\mathrm{C})\) from limits in \(\mathrm{C}\) and \(\mathrm{FPos}\).
## 4. Limits of Posetal Diagrams ### Limit Procedure Having given the construction of the category \(\mathsf{L}(\mathrm{C})\) in our preceding Section, we will now describe a procedure to compute limits in \(\mathsf{L}(\mathrm{C})\) from limits in \(\mathrm{C}\) and \(\mathrm{FPos}\). By taking the category of labels \(\mathrm{C}\) to be a suitable \(n\)-fold iterate of the labelled interval construction \(\mathsf{L}^{n}(\mathrm{D})\), this procedure provides a recursive strategy for computing limits entirely in terms of those in the base category \(\mathrm{D}\). Before we describe the limit procedure, since we are interested in stating our results for categories which may fail to admit many limits, we need to fix some terminology. Recall that for functors \(F:\mathrm{J}\to\mathrm{C}\) and \(G:\mathrm{C}\to\mathrm{D}\), we say that \(G\) preserves \(F\)-limits if whenever \((L,\eta:\Delta_{L}\to F)\) is a limit for \(F\), \((GL,G\eta)\) is a limit for \(G\circ F\). We also say that \(G\) reflects \(F\)-limits if whenever we have a cone \((L,\eta)\) over \(F\) such that \((GL,G\eta)\) is a limit for \(G\circ F\), then \((L,\eta)\) is a limit for \(F\) [13, 3.3.1]. **Definition 4.1** (Pointwise Limit).: Let \((L,\eta)\) be a limit for a diagram \(F:\mathrm{J}\to\mathrm{Func}(\mathrm{C},\mathrm{D})\). We say \((L,\eta)\) is _pointwise_ if for each \(c\in\mathrm{C}\), it is preserved by the evaluation at \(c\) functor \(\mathrm{ev}_{c}:\mathrm{Func}(\mathrm{C},\mathrm{D})\to\mathrm{D}\). If \(\mathrm{D}\) has all \(\mathrm{J}\)-limits, every \(\mathrm{J}\)-limit in \(\mathrm{Func}(\mathrm{C},\mathrm{D})\) is pointwise, but this need not be the case otherwise, and our analysis will necessitate the distinction. **Construction 4.2** (Limit Procedure).: Given a category of labels \(\mathrm{C}\) and a finite diagram \(F:\mathrm{J}\to\mathsf{L}(\mathrm{C})\), we compute the limit of \(F\) or fail according to the following procedure:

1. We take the limit \((L,\rho)\) of the diagram \(U\circ F:\mathsf{J}\to\mathsf{L}(\mathrm{C})\to\mathrm{FPos}\),
2. We use \(\rho\) to produce from \(F\) a diagram \(G:\mathsf{J}\to\mathrm{Func}([L],\mathrm{C})\),
3. For each \(j\in\mathsf{J}\), we take \(Gj:=(Fj)^{1}\circ[\rho_{j}]\),
4. For each \(h\in\mathsf{J}(j,j^{\prime})\), we take \(Gh:Gj\to Gj^{\prime}\) to have components \[(Gh)_{[a,b]}:=(Fh)^{1}_{[\rho_{j}][a,b]}:(Fj)^{1}[\rho_{j}][a,b]\longrightarrow(Fj^{\prime})^{1}[\rho_{j^{\prime}}][a,b],\]
5. We define \(\varepsilon:(\Delta_{L},G)\to F\) to have components \(\varepsilon_{j}:=(\rho_{j},\mathrm{id}_{(Gj)^{1}})\),
6. For each interval \([a,b]\) in \(L\), we take the limit \((L_{[a,b]},\eta_{[a,b]})\) of \(\mathrm{ev}_{[a,b]}\circ G:\mathsf{J}\to\mathrm{C}\), if it exists, and fail if not,
7. For each pair \([a,b]\supseteq[c,d]\) of intervals in \(L\), we use naturality of \(\mathrm{ev}_{-}\) to get cones \((L_{[a,b]},\mathrm{ev}_{[a,b]\supseteq[c,d]}\circ\eta_{[a,b]})\) over \(\mathrm{ev}_{[c,d]}\circ G\),
8. We get cone maps \(L_{[a,b]\supseteq[c,d]}:L_{[a,b]}\to L_{[c,d]}\) via the universal property of \((L_{[c,d]},\eta_{[c,d]})\),
9. We assemble the above into a limit cone \((L,\eta)\) with \(L:[a,b]\mapsto L_{[a,b]}\), \(([a,b]\supseteq[c,d])\mapsto L_{[a,b]\supseteq[c,d]}\) and \((\eta_{j})_{[a,b]}:=(\eta_{[a,b]})_{j}\),
10. We return the cone \(((L,L),\varepsilon\circ\eta)\) as a limit for \(F\).

The end-to-end procedure is depicted in Figure 3. **Lemma 4.3**.: _For every finite diagram \(F:\mathsf{J}\to\mathsf{L}(\mathrm{C})\), Construction 4.2 is well-defined._ Proof.: The limit at Step (1) exists because \(\mathrm{FPos}\) is finitely complete [1, 12.6.1]. Since \((L,\rho)\) is a cone, for every \(h\in\mathsf{J}(j,j^{\prime})\), we have \((Fh)^{0}\circ\rho_{j}=\rho_{j^{\prime}}\), hence \(Gh\) is well-defined.
Moreover, since \(F\) and \([-]\) are functors, \(G\) respects identities, and since for \(h\in\mathsf{J}(j,j^{\prime})\) and \(h^{\prime}\in\mathsf{J}(j^{\prime},j^{\prime\prime})\) we have \((F(h^{\prime}\circ h))^{1}_{[\rho_{j}][a,b]}=(Fh^{\prime})^{1}_{[\rho_{j^{\prime}}][a,b]}\circ(Fh)^{1}_{[\rho_{j}][a,b]}\), \(G\) respects composites and thus is a functor. The naturality condition for \(\varepsilon\) in Step (5) follows by unwinding definitions. Functoriality of \(L\) in Step (9) follows by uniqueness of the cone maps, and finally, naturality of \(\eta\) holds along \(\mathsf{J}\) due to \((\eta_{-})_{[a,b]}:=\eta_{[a,b]}\) being a cone, and along \([L]\) due to \(L(-\supseteq-)\) being a map of cones. **Proposition 4.4**.: _Let \(f:P\to Q\) be a monotone function and \(\mathrm{C}\) a category. Then \(-\circ[f]:\mathrm{Func}([Q],\mathrm{C})\to\mathrm{Func}([P],\mathrm{C})\) preserves all pointwise limits which exist in \(\mathrm{Func}([Q],\mathrm{C})\)._ Proof.: Let \((L,\eta)\) be a pointwise limit for a diagram \(G:\mathrm{J}\to\mathrm{Func}([Q],\mathrm{C})\), and let \((K,\chi)\) be a cone over the diagram \(j\mapsto Gj\circ[f]\). We define a natural transformation \(\gamma:K\to L\circ[f]\) by taking, for each interval \([a,b]\) in \(P\), the component \(\gamma_{[a,b]}\) to be the unique map of cones \((K[a,b],\mathrm{ev}_{[a,b]}\chi)\to(L[f][a,b],\mathrm{ev}_{[f][a,b]}\eta)\), which is given by the universal property of the pointwise limit. Naturality and uniqueness of \(\gamma\) then both follow from uniqueness of the components. Figure 3. A diagrammatic presentation of Construction 4.2. **Theorem 4.5**.: _Let \(\mathrm{C}\) be a category. If \(F:\mathrm{J}\to\mathsf{L}(\mathrm{C})\) is a finite diagram, and Construction 4.2 succeeds for \(F\), its output \(((L,L),\varepsilon\circ\eta)\) is a limit for \(F\)._ Proof.: By Lemma 4.3, Construction 4.2 is well-defined. Let \(((K,K),\gamma)\) be a cone over \(F\). Since \(L\) is a limit for \(U\circ F\), we have a unique map \(k:K\to L\). Since \((L,\eta)\) is a pointwise limit, by Proposition 4.4, \(-\circ[k]\) preserves it, so we obtain a unique map of cones \(\chi:K\to L\circ[k]\), and we take \((k,\chi)\) as our mediating map. **Corollary 4.6**.: _Let \(\mathrm{J}\) be a finite category. If a category \(\mathrm{C}\) has \(\mathrm{J}\)-limits, then so does \(\mathsf{L}(\mathrm{C})\)._ Proof.: This is a known result about fibred categories, see e.g. [15, Thm. 1], which we can extract as a consequence of Theorem 4.5. Since \(\mathrm{C}\) has \(\mathrm{J}\)-limits, so will each functor category \(\mathrm{Func}([P],\mathrm{C})\), as they inherit the pointwise limits from \(\mathrm{C}\). Hence Construction 4.2 will succeed for all diagrams \(F:\mathrm{J}\to\mathsf{L}(\mathrm{C})\), and thus by Theorem 4.5 every such diagram has a limit. For a converse statement, we can prove that if \(\mathrm{C}\) has an initial object, then our Construction 4.2 is searching for a limit in the correct fibre: **Proposition 4.7**.: _Let \(\mathrm{C}\) be a category with an initial object \(0\in\mathrm{C}\). The fibration \(U:\mathsf{L}(\mathrm{C})\to\mathrm{FPos}\) preserves all existing limits._ Proof.: The functor \(U\) has a left adjoint \(F:\mathrm{FPos}\to\mathsf{L}(\mathrm{C})\), which acts as \(P\mapsto(P,\Delta_{0}:[P]\to\mathrm{C})\) and \(f\mapsto(f,!)\), where \(\Delta_{0}\) is the constant functor on \(0\in\mathrm{C}\). The following lemma allows us to safely invoke completion arguments: **Lemma 4.8**.: _Let \(F:\mathrm{C}\to\mathrm{D}\) be a functor. If \(F\) preserves, resp. reflects, all \(\mathrm{J}\)-limits which exist in \(\mathrm{C}\), resp. \(\mathrm{D}\), then \(\mathsf{L}(F):\mathsf{L}(\mathrm{C})\to\mathsf{L}(\mathrm{D})\) preserves, resp. reflects, all \(\mathrm{J}\)-limits produced by Construction 4.2._
Proof.: For preservation, let \(((L,L),\eta)\) be a limit for a diagram \(G:\mathrm{J}\to\mathsf{L}(\mathrm{C})\) obtained via Construction 4.2. Then \(\mathsf{L}(F)\) sends this limit to \(((L,F\circ L),\varepsilon)\), where \(\varepsilon_{j}:=(\eta_{j}^{0},F\eta_{j}^{1})\). Since \(F\) preserves \(\mathrm{J}\)-limits in \(\mathrm{C}\), this cone matches the output of Construction 4.2 for \(\mathsf{L}(F)\circ G\), and thus by Theorem 4.5 is limiting. For reflection, let \(((L,L),\eta)\) be a cone over a diagram \(G:\mathrm{J}\to\mathsf{L}(\mathrm{C})\) which is mapped under \(\mathsf{L}(F)\) to the limit cone \(((K,K),\varepsilon)\) obtained via Construction 4.2 on \(\mathsf{L}(F)\circ G\). Since \(\mathsf{L}(F)\) acts only on labels, we must have \(L=K\). Moreover, since the limit \(K\) is pointwise and \(F\) reflects \(\mathrm{J}\)-limits in \(\mathrm{D}\), \(((L,L),\eta)\) satisfies the specification of Construction 4.2, and thus is a limit for \(G\). ### Limits of Posetal Diagrams Having presented a procedure for computing limits in \(\mathsf{L}(\mathrm{C})\), we will conclude our technical exposition with a study of the corresponding limit procedure for the subcategory \(\mathsf{P}(\mathrm{C})\), and extract some consequences for local diagrams. A lot of the heavy lifting of our results in this section hinges on the following lemma, which allows a translation of Construction 4.2 from labelled intervals to posetal diagrams. **Lemma 4.9**.: _The subcategory inclusion \(\mathrm{FDLat}\to\mathrm{FPos}\) preserves finite limits._ Proof.: Let \((L,\eta)\) be a limit cone for a finite diagram \(F:\mathrm{J}\to\mathrm{FDLat}\). By the Birkhoff Representation Theorem [16, page 262], the functors \(\mathrm{FPos}(-,2):\mathrm{FPos}^{\mathrm{op}}\to\mathrm{FDLat}\) and \(\mathrm{FDLat}(-,2):\mathrm{FDLat}^{\mathrm{op}}\to\mathrm{FPos}\) form an adjoint equivalence, where \(2\) denotes the two-element distributive lattice \(\{\bot\to\top\}\) and the hom-sets are equipped with their respective pointwise orders. Since \(\mathrm{FDLat}(-,2)\) is a right adjoint, it preserves the limit \((L,\eta)\), sending it to the colimit of \(\operatorname{FDLat}(F-,2):\operatorname{J^{op}}\to\operatorname{FPos}\). It suffices now to show that the composite of the dualising functor \(\operatorname{FPos}(-,2)\) and the subcategory inclusion \(\operatorname{FDLat}\to\operatorname{FPos}\) preserves finite colimits. But this is just given by the internal contravariant hom \(\operatorname{FPos}(-,2):\operatorname{FPos}^{op}\to\operatorname{FPos}\) for the Cartesian closed category \(\operatorname{FPos}\) [1, 27.3.1]. By the enriched variant of the familiar continuity result [11, 3.29], this sends the colimit \((\operatorname{FDLat}(L,2),\operatorname{FDLat}(\eta,2))\) to the limit \((\operatorname{FPos}(\operatorname{FDLat}(L,2),2),\operatorname{FPos}(\operatorname{FDLat}(\eta,2),2))\) of the composite diagram \(\operatorname{J}\to\operatorname{FPos}\), which, by the duality, is isomorphic to the image of the original limit cone under the inclusion \(\operatorname{FDLat}\to\operatorname{FPos}\).
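The duality invoked in this proof can itself be made concrete. The following sketch is ours (all names hypothetical): it computes the downset lattice of a small poset, which realises \(\mathrm{FPos}(P,2)\), and recovers the poset from the join-irreducible elements of that lattice:

```python
# Birkhoff duality, executably: poset -> lattice of downsets -> poset again.

from itertools import combinations

def downsets(elems, leq):
    """All downward-closed subsets of the finite poset (elems, leq)."""
    out = []
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            s = frozenset(sub)
            if all(y in s for x in s for y in elems if leq(y, x)):
                out.append(s)
    return out

def join_irreducibles(dsets):
    """Nonempty downsets that are not the union of strictly smaller ones;
    ordered by inclusion, they reproduce the original poset."""
    return [d for d in dsets
            if d and frozenset().union(*[e for e in dsets if e < d]) != d]

# The poset with a, b both below a top element t:
elems = ["a", "b", "t"]
leq = lambda x, y: x == y or y == "t"
L = downsets(elems, leq)                 # 5 downsets: the dual lattice
assert len(L) == 5 and len(join_irreducibles(L)) == 3   # 3 principal downsets
```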
**Proposition 4.10**.: _If \(\operatorname{C}\) has all equalisers, then \(\operatorname{\mathsf{P}(C)}\) is closed under equalisers taken in \(\operatorname{\mathsf{L}(C)}\)._ Proof.: Let us identify \(\operatorname{\mathsf{P}(C)}\) with its image in \(\operatorname{\mathsf{L}(C)}\), and consider the parallel pair \((f,\alpha),(g,\beta):(P,X)\to(Q,Y)\) of posetal maps. Let \((E,e:E\subseteq P)\) be the equaliser of \(f\) and \(g\) in \(\operatorname{FPos}\), where \(E\) is identified with its image in \(P\) under \(e\), and \((W,\gamma)\) be the equaliser of \(\alpha_{[e]}\) and \(\beta_{[e]}\) in \(\operatorname{Func}([E],\operatorname{C})\). We wish to show \(W\) is local, so let \([a,a^{\prime}]\supseteq[b,b^{\prime}]\subseteq[c,c^{\prime}]\) be an atomic cospan of intervals in \(E\) with pullback \([d,d^{\prime}]\). By Lemma 4.9, \(e\) is a lattice homomorphism, so by Corollary 3.9, the image of the cospan under \(e\) is an atomic cospan in \(P\) with pullback \([d,d^{\prime}]\), and similarly its image under \(f\circ e=g\circ e\) is an atomic cospan in \(Q\) with pullback \([f\circ e][d,d^{\prime}]\). Since \(X\) and \(Y\) are local, we are left to prove that the square \(W([d,d^{\prime}]\supseteq[a,a^{\prime}],[c,c^{\prime}]\supseteq[b,b^{\prime}])\) of pointwise equalisers of two pullback squares is again a pullback. Our result hence follows either from the general fact that limits commute with limits, or by a direct diagram chase. **Proposition 4.11**.: _Let \(\operatorname{C}\) be a category with finite products. The subcategory \(\operatorname{\mathsf{P}(C)}\) is closed under finite products taken in \(\operatorname{\mathsf{L}(C)}\)._ Proof.: Let \((P,X),(Q,Y)\) be posetal diagrams, and denote their product in \(\operatorname{\mathsf{L}(C)}\) by \((P\times Q,W)\). By Lemma 2.7, we have an isomorphism \([P\times Q]\cong[P]\times[Q]\), under which \(W\) acts by \(([a,a^{\prime}],[b,b^{\prime}])\mapsto X[a,a^{\prime}]\times Y[b,b^{\prime}]\). By Lemma 4.9, \(P\times Q\) is a lattice, and thus by Corollary 3.9 the projections \([P\times Q]\to[P],[Q]\) preserve atomic cospans and their pullbacks. Since \(X,Y\) are local, so is \(W\). Furthermore, the terminal object \((1,\Delta_{1})\) is always local, so the result holds. **Corollary 4.12**.: _If \(\operatorname{C}\) has all finite limits, then \(\operatorname{\mathsf{P}(C)}\) has all finite limits._ Proof.: By Proposition 4.10 and Proposition 4.11, \(\operatorname{\mathsf{P}(C)}\) is closed under arbitrary finite limits in \(\operatorname{\mathsf{L}(C)}\). But by Corollary 4.6, \(\operatorname{\mathsf{L}(C)}\) is finitely complete, hence so is \(\operatorname{\mathsf{P}(C)}\).
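As a concrete instance of Proposition 4.11, the following sketch (ours; names hypothetical) computes the product labelling for Set-labelled diagrams, using the identification \([P\times Q]\cong[P]\times[Q]\) of Lemma 2.7:

```python
# Product of Set-labelled posetal diagrams: the label on a product interval
# is the cartesian product of the labels on the two component intervals.

from itertools import product as cartesian

def product_labels(X, Y):
    """X, Y: dicts sending an interval (a, a1) to a finite set of labels."""
    return {((a, b), (a1, b1)): set(cartesian(X[(a, a1)], Y[(b, b1)]))
            for (a, a1) in X for (b, b1) in Y}

X = {(0, 0): {"x"}, (0, 1): {"x", "y"}, (1, 1): {"y"}}
W = product_labels(X, X)
# The product interval [(0, 1), (0, 1)] pairs [0, 0] with [1, 1]:
assert W[((0, 1), (0, 1))] == {("x", "y")}
```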
2301.12811
SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer
Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution. We derive metrizable conditions, sufficient conditions for the discriminator to serve as the distance between the distributions by connecting the GAN formulation with the concept of sliced optimal transport. Furthermore, by leveraging these theoretical results, we propose a novel GAN training scheme, called slicing adversarial network (SAN). With only simple modifications, a broad class of existing GANs can be converted to SANs. Experiments on synthetic and image datasets support our theoretical results and the SAN's effectiveness as compared to usual GANs. Furthermore, we also apply SAN to StyleGAN-XL, which leads to state-of-the-art FID score amongst GANs for class conditional generation on ImageNet 256$\times$256. Our implementation is available on https://ytakida.github.io/san.
Yuhta Takida, Masaaki Imaizumi, Takashi Shibuya, Chieh-Hsin Lai, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji
2023-01-30T12:03:44Z
http://arxiv.org/abs/2301.12811v4
# Adversarially Slicing Generative Networks: ###### Abstract Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution. We derive sufficient conditions for the discriminator to serve as the distance between the distributions by connecting the GAN formulation with the concept of sliced optimal transport. Furthermore, by leveraging these theoretical results, we propose a novel GAN training scheme, called adversarially slicing generative network (ASGN). With only simple modifications, the ASGN is applicable to a broad class of existing GANs. Experiments on synthetic and image datasets support our theoretical results and the ASGN's effectiveness as compared to usual GANs. ## 1 Introduction A generative adversarial network (GAN) (Goodfellow et al., 2014) is a popular approach for generative modeling. GANs have achieved remarkable performance in various domains such as image (Brock et al., 2019; Karras et al., 2019, 2021), audio (Kumar et al., 2019; Donahue et al., 2019; Kong et al., 2020), and video (Tulyakov et al., 2018; Hao et al., 2021). The aim of GAN is to learn a target probability measure via a neural network, called a generator. To achieve this, a discriminator is introduced, and the generator and discriminator are optimized in a minimax way. Here, we pose the question of whether GAN optimization actually makes the generator distribution close to the target distribution. For example, likelihood-based generative models such as variational autoencoders (Kingma and Welling, 2014; Higgins et al., 2017; Zhao et al., 2019; Takida et al., 2022), normalizing flows (Tabak and Vanden-Eijnden, 2010; Tabak and Turner, 2013; Rezende and Mohamed, 2015), and denoising diffusion probabilistic models (Ho et al., 2020; Song et al., 2020; Nichol and Dhariwal, 2021) are optimized via the principle of minimizing the exact Kullback-Leibler divergence or its upper bound (Jordan et al., 1999). For GANs, in contrast, it is known that solving the minimization problem with the optimal discriminator is equivalent to minimizing a specific dissimilarity (Goodfellow et al., 2014; Nowozin et al., 2016; Lim and Ye, 2017; Miyato et al., 2018). Furthermore, GANs have been analyzed on the basis of the optimal discriminator assumption (Chu et al., 2020). However, real-world GAN optimization rarely attains exact maximization (Fiez et al., 2022), and analysis of GAN optimization without that assumption is still challenging. To analyze and investigate GAN optimization without the optimality assumption, there are approaches from the perspectives of training convergence (Mescheder et al., 2017; Nagarajan and Kolter, 2017; Mescheder et al., 2018; Sanjabi et al., 2018; Xu et al., 2020), the loss landscape around the saddle points of minimax problems (Farnia and Ozdaglar, 2020; Berard et al., 2020), and the smoothness of optimization (Fiez et al., 2022). Although these studies have been insightful for GAN stabilization, there has been little discussion on whether trained discriminators indeed provide generator optimization with gradients that reduce dissimilarities.
In this paper, we provide a novel perspective on GAN optimization, which helps us to consider whether a discriminator is _metrizable_. **Definition 1.1** (Metrizable discriminator).: Let \(\mu_{\theta}\) and \(\mu_{0}\) be measures. Given an objective function for \(\theta\), \(\mathcal{J}(\theta;\cdot)\), a discriminator \(f\) is \((\mathcal{J},\mathcal{D})\)- or \(\mathcal{J}\)-_metrizable_ for \(\mu_{\theta}\) and \(\mu_{0}\), if \(\mathcal{J}(\theta;f)\) is minimized only with \(\theta\in\arg\min_{\theta}\mathcal{D}(\mu_{0},\mu_{\theta})\) for a certain distance on measures, \(\mathcal{D}(\cdot,\cdot)\). To evaluate the dissimilarity with a given GAN minimization problem \(\mathcal{J}\), we are interested in other conditions besides the discriminator's optimality. Hence, we propose _metrizable conditions_, namely, _direction optimality_, _separability_, and _injectivity_, that induce a \(\mathcal{J}\)-_metrizable_ discriminator. To achieve this, we first introduce a divergence, called functional mean divergence (FM\({}^{*}\)), in Sec. 3. We connect the FM\({}^{*}\) with the minimization objective function of Wasserstein GAN. Then, we obtain the _metrizable conditions_ for Wasserstein GAN by investigating the following question. **Question 1.2**.: Under what conditions is FM\({}^{*}\) a distance? We provide an answer to this question in Sec. 4 by relating the FM\({}^{*}\) to the concept of sliced optimal transport (Bonneel et al., 2015; Kolouri et al., 2019). Then, in Sec. 5, we formalize the proposed conditions for Wasserstein GAN and further extend the result to generic GANs. By applying the derived _metrizable conditions_, we propose the adversarially slicing generative network (ASGN) in Sec. 6. As seen in Table 1, we find that optimal discriminators for most existing GANs (except for Wasserstein GAN) do not satisfy _direction optimality_. Hence, we develop a modification scheme for GAN maximization problems to enforce _direction optimality_ on our discriminator. Owing to the scheme's simplicity, GANs can easily be converted to ASGNs. As our theory relies on the concept of sliced optimal transport, the discriminator in an ASGN is interpreted as (1) extracting high-dimensional nonlinear features from data samples and (2) projecting them on a one-dimensional space by slicing them in the most informative direction. We conduct experiments to verify our perspective and demonstrate that ASGNs are superior to GANs for certain generation tasks on synthetic and image datasets. We defer the proofs for all the theorems to the appendix. ## 2 Preliminaries ### 2.1 Notations We consider a sample space \(X\subseteq\mathbb{R}^{D_{x}}\) and a latent space \(Z\subseteq\mathbb{R}^{D_{z}}\). Let \(\mathcal{P}(X)\) be the set of all probability measures on \(X\), and let \(L^{\infty}(X,\mathbb{R}^{D})\) denote the \(L^{\infty}\) space for functions \(X\to\mathbb{R}^{D}\). Let \(\mu\) (or \(\nu\)) represent a probability measure with probability density function \(I_{\mu}\) (or \(I_{\nu}\)). We use the notation of the pushforward operator \(\sharp\), which is defined as \(g_{\sharp}\sigma(B):=\sigma(g^{-1}(B))\) for measurable \(B\subseteq X\), with a function \(g:Z\to X\) and a probability measure \(\sigma\in\mathcal{P}(Z)\). We denote the Euclidean inner product by \(\langle\cdot,\cdot\rangle\). Lastly, \(\hat{(\cdot)}\) denotes a normalized vector. ### 2.2 Problem Formulation in GANs Assume that we have data obtained by discrete sampling from a target probability distribution \(\mu_{0}\in\mathcal{P}(X)\).
Then, we introduce a trainable generator function \(g_{\theta}:Z\to X\) with parameter \(\theta\in\mathbb{R}^{d_{\theta}}\) to model a trainable probability measure as \(\mu_{\theta}=g_{\theta\sharp}\sigma\) with \(\sigma\in\mathcal{P}(Z)\). The aim of generative modeling here is to learn \(g_{\theta}\) so that it approximates the target measure as \(\mu_{\theta}\approx\mu_{0}\). For generative modeling in GAN, we introduce the notion of a discriminator \(f:X\rightarrow\mathbb{R}\). We formulate the GAN's optimization problem as a two-player game between the generator and discriminator with \(\mathcal{V}:\mathcal{F}(X)\times\mathcal{P}(X)\times\mathcal{P}(X)\rightarrow\mathbb{R}\) and \(\mathcal{J}:\mathbb{R}^{d_{\theta}}\times\mathcal{F}(X)\rightarrow\mathbb{R}\), as follows: \[\max_{f\in\mathcal{F}}\mathcal{V}(f;\mu_{0},\mu_{\theta})\quad\text{and}\quad\min_{\theta\in\mathbb{R}^{d_{\theta}}}\mathcal{J}(\theta;f). \tag{1}\] Regarding the choices of \(\mathcal{V}\) and \(\mathcal{J}\), there are GAN variants (Goodfellow et al., 2014; Nowozin et al., 2016; Lim and Ye, 2017; Arjovsky et al., 2017) that lead to different dissimilarities between \(\mu_{0}\) and \(\mu_{\theta}\) with the maximizer \(f\). In this paper, we use a representation of the discriminator in an inner-product form, which is naturally represented by a neural network model1: Footnote 1: In practice, the discriminator \(f\) is implemented as in (2), e.g., \(f_{\phi}(x)=w_{\phi_{L}}^{\top}(l_{\phi_{L-1}}\circ l_{\phi_{L-2}}\circ\cdots\circ l_{\phi_{1}})(x)\) with nonlinear layers \(\{l_{\phi_{\ell}}\}_{\ell=1}^{L-1}\), \(w_{\phi_{L}}\in\mathbb{R}^{D}\), and their weights \(\phi:=\{\phi_{\ell}\}_{\ell=1}^{L}\in\mathbb{R}^{d_{\phi}}\) \[f(x)=\langle\omega,h(x)\rangle, \tag{2}\] where \(\omega\in\mathbb{S}^{D-1}\) and \(h\in L^{\infty}(X,\mathbb{R}^{D})\). ### 2.3 Wasserstein Distance and Its Use for GANs We consider the Wasserstein-\(p\) distance (\(p\in[1,\infty)\)) (Villani, 2009) between probability measures \(\mu\) and \(\nu\) such that \[W_{p}(\mu,\nu):=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{X\times X}\|x-x^{\prime}\|_{p}^{p}d\pi(x,x^{\prime})\right)^{\frac{1}{p}}, \tag{3}\] \begin{table} \begin{tabular}{c c c c} \hline \hline & Direction optimality & Separability & Injectivity \\ \hline Wasserstein GAN & ✓ & weak & * \\ GAN (Hinge, Saturating, Non-saturating) & ✗ & ✓ & * \\ ASGN (Hinge, Saturating, Non-saturating) & ✓ & ✓ & * \\ \hline \hline \end{tabular} \end{table} Table 1: Common GAN losses do not simultaneously satisfy all the sufficient conditions given in Theorem 5.3. Thus, we propose the ASGN to address one of the conditions, _direction optimality_. Even if a direction \(\omega\) is the maximizer of the inner problem \(\mathcal{V}\), it does not satisfy _direction optimality_ except in Wasserstein GAN (see Sec. 6). The results in Sec. 7.1 empirically demonstrate that a discriminator trained on Wasserstein GAN tends not to satisfy the second condition, _separability_. The last condition of _injectivity_ depends not directly on the loss functions, \(\mathcal{V}\) and \(\mathcal{J}\), but on the discriminator implementation (see Sec. 7.1 for empirical verification). where \(\Pi(\mu,\nu)\) is the set of all coupling measures whose marginal distributions are \(\mu\) and \(\nu\). The idea of Wasserstein GAN is to learn a generator by minimizing the Wasserstein-1 distance between \(\mu_{0}\) and \(\mu_{\theta}\). For this goal, one can adopt the Kantorovich--Rubinstein (KR) duality representation to rewrite Eq. (3) and obtain the following optimization problem:
\[\max_{f\in\mathcal{F}_{\text{Lip}}}\mathcal{V}_{\text{Wass}}(f;\mu_{0},\mu_{\theta}):=d_{f}(\mu_{0},\mu_{\theta}) \tag{4}\] where \(d_{f}(\mu,\nu):=\mathbb{E}_{x\sim\mu}[f(x)]-\mathbb{E}_{x\sim\nu}[f(x)]\), and \(\mathcal{F}_{\text{Lip}}\) denotes the class of 1-Lipschitz functions. On the other hand, we formulate an optimization problem for the generator as minimization of the right side of Eq. (4) w.r.t. the generator parameter: \[\min_{\theta}\mathcal{J}_{\text{Wass}}(\theta;f):=-\mathbb{E}_{x\sim\mu_{\theta}}[f(x)]. \tag{5}\] Here, it can be seen that the two-player game of the Wasserstein GAN is formulated as a zero-sum minimax problem. ### 2.4 Sliced Optimal Transport The Wasserstein distance is highly intractable when the dimension \(D_{x}\) is large (Arjovsky et al., 2017). However, it is well known that _sliced optimal transport_ can be applied to break this intractability, by projecting the data on a one-dimensional space. That is, for the \(D_{x}=1\) case, the Wasserstein distance has a closed-form solution: \[W_{p}(\mu,\nu)=\left(\int_{0}^{1}|F_{\mu}^{-1}(\rho)-F_{\nu}^{-1}(\rho)|^{p}d\rho\right)^{\frac{1}{p}}, \tag{6}\] where \(F_{\mu}^{-1}(\cdot)\) denotes the quantile function for \(I_{\mu}\), which is the inverse of the cumulative distribution function. The closed-form solution for a one-dimensional space prompted the emergence of the concept of sliced optimal transport. In the original sliced Wasserstein (SW) distance (Bonneel et al., 2015), a probability density function \(I\) on the data space \(X\) is mapped to a probability density function of \(\xi\in\mathbb{R}\) by the standard Radon transform (Natterer, 2001; Helgason, 2011) as \(\mathcal{R}I(\xi,\omega):=\int_{X}I(x)\delta(\xi-\langle x,\omega\rangle)dx\), where \(\delta(\cdot)\) is the Dirac delta function and \(\omega\in\mathbb{S}^{D_{x}-1}\) is a direction. The sliced Wasserstein distance between \(\mu\) and \(\nu\) is defined as \(\textit{SW}_{p}(\mu,\nu):=(\int_{\omega\in\mathbb{S}^{D_{x}-1}}\textit{W}_{p}^{p}(\mathcal{R}I_{\mu}(\cdot,\omega),\mathcal{R}I_{\nu}(\cdot,\omega))d\omega)^{1/p}\). Intuitively, the idea behind this distance is to decompose high-dimensional distributions into an infinite number of pairs of tractable distributions by linear projections. Various extensions of the sliced Wasserstein distance have been proposed (Kolouri et al., 2019; Deshpande et al., 2019; Nguyen et al., 2021). Here, we review an extension called the augmented sliced Wasserstein (ASW) distance (Chen et al., 2022). Given a measurable injective function \(h:X\rightarrow\mathbb{R}^{D}\), the distance is obtained via the spatial Radon transform (SRT), which is defined for any \(\xi\in\mathbb{R}\) and \(\omega\in\mathbb{S}^{D-1}\), as follows: \[\mathcal{S}^{h}I(\xi,\omega):=\int_{X}I(x)\delta(\xi-\langle\omega,h(x)\rangle)dx. \tag{7}\] The ASW-\(p\) distance is then obtained via the SRT in the same fashion as the standard sliced Wasserstein distance: \[\textit{ASW}_{p}^{h}(\mu,\nu):=\left(\int_{\omega\in\mathbb{S}^{D-1}}\textit{W}_{p}^{p}(\mathcal{S}^{h}I_{\mu}(\cdot,\omega),\mathcal{S}^{h}I_{\nu}(\cdot,\omega))d\omega\right)^{\frac{1}{p}}. \tag{8}\] The closed-form representation in Eq. (6) can be used to evaluate the integrand in Eq. (8), which is usually evaluated via approximated quantile functions with sorted finite samples from \(\mathcal{S}^{h}I_{\mu}(\cdot,\omega)\) and \(\mathcal{S}^{h}I_{\nu}(\cdot,\omega)\).
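As a concrete illustration (ours, not taken from a reference implementation; the feature map and sample sizes below are placeholder choices), the one-slice integrand of Eq. (8) can be estimated from equal-size batches by sorting the projections \(\langle\omega,h(x)\rangle\), which aligns the empirical quantiles of the two projected distributions:

```python
# Empirical W_1 between S^h I_mu(., omega) and S^h I_nu(., omega) via Eq. (6).

import numpy as np

def sliced_w1(x_mu, x_nu, h, omega):
    s_mu = np.sort(h(x_mu) @ omega)   # sorted projections <omega, h(x)>, x ~ mu
    s_nu = np.sort(h(x_nu) @ omega)   # equal sample sizes => quantiles align
    return np.mean(np.abs(s_mu - s_nu))

rng = np.random.default_rng(0)
h = lambda x: np.tanh(x)              # a toy injective feature map
omega = np.array([1.0, 0.0])          # a unit direction in S^1
x_mu = rng.normal(0.0, 1.0, size=(1024, 2))
x_nu = rng.normal(0.5, 1.0, size=(1024, 2))
print(sliced_w1(x_mu, x_nu, h, omega))
```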
## 3 Formulation of Question 1.2 Next, we introduce a divergence, the functional mean divergence, FM or FM\({}^{*}\), which is defined for a given functional space or function. Minimization of the FM\({}^{*}\) can be formulated as an optimization problem involving \(\mathcal{J}_{\text{Wass}}\), and we cast Question 1.2 in this context. In Sec. 4, we provide an answer to that question, which in turn provides the _metrizable conditions_ with \(\mathcal{J}_{\text{Wass}}\) in Sec. 5. ### 3.1 Proposed Framework: Functional Mean Divergence We start by defining the FM with a given functional space. **Definition 3.1** (Functional Mean Divergence (FM)).: We define a family of functional mean divergences as \[\mathscr{D}_{D}^{\text{FM}}:=\left\{(\mu,\nu)\mapsto\max_{h\in\mathcal{F}}\left\|d_{h}(\mu,\nu)\right\|_{2}\,\Big{|}\,\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R}^{D})\right\}, \tag{9}\] where \(d_{h}(\mu,\nu):=\mathbb{E}_{x\sim\mu}[h(x)]-\mathbb{E}_{x\sim\nu}[h(x)]\). Further, we denote an instance in the family as \(\textit{FM}_{\mathcal{F}}(\mu,\nu)\in\mathscr{D}_{D}^{\text{FM}}\), where \(\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R}^{D})\). By definition, the FM family includes the integral probability metric (IPM) (Muller, 1997), which includes the Wasserstein distance in KR form as a special case. **Proposition 3.2**.: _For \(\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R})\), \(\textit{IPM}_{\mathcal{F}}(\cdot,\cdot):=\max_{f\in\mathcal{F}}d_{f}(\cdot,\cdot)\in\mathscr{D}_{1}^{\text{FM}}\)._ The FM is an extension of the IPM to deal with vector-valued functional spaces. Although the FM with a properly selected functional space yields a distance between target distributions, the maximization in Eq. (9) is generally hard to achieve. Instead, we use the following metric, which is defined for a given function. **Definition 3.3** (Functional Mean Divergence\({}^{*}\) (FM\({}^{*}\))).: Given a functional space \(\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R}^{D})\), we define a family of functional mean divergences\({}^{*}\) as \[\mathscr{D}_{\mathcal{F}}^{\text{FM*}}:=\left\{(\mu,\nu)\mapsto\|d_{h}(\mu,\nu)\|_{2}\,\big{|}\,h\in\mathcal{F}(X)\right\}. \tag{10}\] Further, we denote an instance in the family as \(\textit{FM}^{*}_{h}(\mu,\nu)\in\mathscr{D}_{\mathcal{F}}^{\text{FM*}}\), where \(h\in\mathcal{F}(X)\). Here, we are interested in the following problem, which is a mathematical formulation of Question 1.2. **Question 3.4**.: Under what conditions for \(\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R}^{D})\) is every \(\textit{FM}^{*}_{h}(\cdot,\cdot)\in\mathscr{D}_{\mathcal{F}}^{\text{FM*}}\) a distance? We give an answer to this question in Sec. 4. Because optimization of the FM\({}^{*}\) is related to \(\mathcal{J}_{\text{Wass}}\) in Sec. 3.2, the conditions for \(\mathcal{F}(X)\) in Question 3.4 enable us to derive \((\mathcal{J}_{\text{Wass}},\textit{FM}^{*}_{h})\)-_metrizable conditions_ in Sec. 5. ### 3.2 Direction Optimality to Connect FM* and \(\mathcal{J}_{\text{Wass}}\) Optimization of the FM\({}^{*}\) with a given \(h\in L^{\infty}(X,\mathbb{R}^{D})\) returns us to an optimization problem involving \(\mathcal{J}_{\text{Wass}}\). **Proposition 3.5** (_Direction optimality_ connects FM\({}^{*}\) and \(\mathcal{J}_{\text{Wass}}\)).: _Let \(\omega\) be on \(\mathbb{S}^{D-1}\)._
_For any \(h\in L^{\infty}(X,\mathbb{R}^{D})\), minimization of \(\textit{FM}^{*}_{h}(\mu_{\theta},\mu_{0})\) is equivalent to optimization of \(\min_{\theta}\max_{\omega\in\mathbb{S}^{D-1}}\mathcal{J}_{\text{Wass}}(\theta;\langle\omega,h\rangle)\). Thus,_ \[\nabla_{\theta}\textit{FM}^{*}_{h}(\mu_{\theta},\mu_{0})=\nabla_{\theta}\mathcal{J}_{\text{Wass}}(\theta;\langle\omega^{*},h\rangle), \tag{11}\] _where \(\omega^{*}\) is the optimal solution (direction) given as follows:_ \[\omega^{*}=\operatorname*{arg\,max}_{\omega\in\mathbb{S}^{D-1}}\,d_{\langle\omega,h\rangle}(\mu_{0},\mu_{\theta}). \tag{12}\] Recall that we formulated the discriminator in the inner-product form (2), which is aligned with this proposition. We refer to the condition for the direction in Eq. (12) as _direction optimality_. It is obvious here that, given the function \(h\), the maximizer \(\omega^{*}\) becomes \(\hat{d}_{h}(\mu_{0},\mu_{\theta})\). From the discussion in this section, Proposition 3.5 supports the notion that investigation of Question 3.4 will reveal the \((\mathcal{J}_{\text{Wass}},\textit{FM}^{*}_{h})\)-_metrizable conditions_. ## 4 Conditions for Metrizability: Analysis by the Max-ASW Framework In this section, we provide an answer to Question 3.4. ### 4.1 Strategy for Answering Question 3.4 We consider the conditions of \(\mathcal{F}(X)\) for Question 3.4 in the context of sliced optimal transport. To this end, we define a variant of sliced optimal transport, called the maximum augmented sliced Wasserstein divergence (max-ASW), in Definition 4.1. In Sec. 4.2, we first introduce a condition, called the _separable_ condition, under which divergences included in the FM\({}^{*}\) family are also included in the max-ASW family. In Sec. 4.3, we further introduce a condition, called the _injective_ condition, under which the max-ASW is a distance. Finally, imposing these conditions on \(\mathcal{F}(X)\) brings us the desired conditions (see Fig. 1 for the discussion flow). Figure 1: Outline of Sec. 4. Proposition 4.6 is a major step toward our main theorem. **Definition 4.1** (Maximum Augmented Sliced Wasserstein Divergence (max-ASW)).: Given a functional space \(\mathcal{F}(X)\subseteq L^{\infty}(X,\mathbb{R}^{D})\), we define a family of maximum augmented sliced Wasserstein divergences as \[\mathscr{D}_{\mathcal{F}}^{\text{max-ASW}}:=\left\{(\mu,\nu)\mapsto\max_{\omega\in\mathbb{S}^{D-1}}\!\!W_{1}\left(\mathcal{S}^{h}I_{\mu}(\cdot,\omega),\mathcal{S}^{h}I_{\nu}(\cdot,\omega)\right)\,\big{|}\,h\in\mathcal{F}\right\}. \tag{13}\] Further, we denote an instance of the max-ASW family as \(\textit{max-ASW}_{h}(\mu,\nu)\in\mathscr{D}_{\mathcal{F}}^{\text{max-ASW}}\), where \(h\in\mathcal{F}(X)\). Note that although the formulation in the definition includes the SRT, similarly to the ASW in Eq. (8), they differ in terms of the method of direction sampling (\(\omega\)). ### 4.2 _Separability_ for Equivalence of FM* and Max-ASW We introduce a property for the function \(h\), called _separability_ (refer to Remark 4.4 for an intuitive explanation). **Definition 4.2** (Separable).: Given \(\mu,\nu\in\mathcal{P}(X)\), let \(\omega\) be on \(\mathbb{S}^{D-1}\), and let \(F_{\mu}^{h,\omega}(\cdot)\) be the cumulative distribution function of \(\mathcal{S}^{h}I_{\mu}(\cdot,\omega)\). If \(\omega^{*}=\hat{d}_{h}(\mu,\nu)\) satisfies \(F_{\mu}^{h,\omega^{*}}(\xi)\leq F_{\nu}^{h,\omega^{*}}(\xi)\) for any \(\xi\in\mathbb{R}\), \(h\in L^{\infty}(X,\mathbb{R}^{D})\) is separable for those probability measures.
We denote the class of all these separable functions for them as \(\mathcal{F}_{\text{Sep}(\mu,\nu)}(X)\). Under this definition, _separable_ functions connect the FM\({}^{*}\) and the max-ASW as follows. **Lemma 4.3**.: _Given \(\mu,\nu\in\mathcal{P}(X)\), every \(h\in\mathcal{F}_{\text{Sep}(\mu,\nu)}(X)\) satisfies \(\text{FM}_{h}^{*}(\mu,\nu)\in\mathscr{D}_{\mathcal{F}_{\text{Sep}(\mu,\nu)}}^{\text{max-ASW}}\)._ **Remark 4.4**.: For a general function \(h\in L^{\infty}(X,\mathbb{R}^{D})\), \(\text{FM}_{h}^{*}(\cdot,\cdot)\) is not necessarily included in the max-ASW family, i.e., \(\mathcal{D}(\cdot,\cdot)\in\mathscr{D}_{\mathcal{F}}^{\text{FM}^{*}}\not\Rightarrow\mathcal{D}(\cdot,\cdot)\in\mathscr{D}_{\mathcal{F}}^{\text{max-ASW}}\) for general \(\mathcal{F}\subseteq L^{\infty}(X,\mathbb{R}^{D})\). Fig. 2 intuitively illustrates why _separability_ is crucial. Given \(h\), calculation of the max-ASW via the closed-form representation in Eq. (6) generally involves evaluation of the sign of the difference between the quantile functions. Intuitively, the equivalence between the FM\({}^{*}\) and max-ASW distances holds if the sign is always positive regardless of \(\rho\in[0,1]\); otherwise, the sign's dependence on \(\rho\) breaks the equivalence. ### 4.3 Injectivity for the Max-ASW to Be a Distance Imposing _injectivity_ on \(h\) guarantees that the induced max-ASW is indeed a distance. **Lemma 4.5**.: _Every max-ASW\({}_{h}(\cdot,\cdot)\in\mathscr{D}_{\mathcal{F}_{\text{Inj}}}^{\text{max-ASW}}\) is a distance, where \(\mathcal{F}_{\text{Inj}}\) indicates the class of all the injective functions in \(L^{\infty}(X,\mathbb{R}^{D})\)._ According to Lemmas 4.3 and 4.5, \(\text{FM}^{*}_{h}(\mu,\nu)\) with a _separable_ and _injective_ \(h\) is indeed a distance since it is included in the family of max-ASW _distances_. **Proposition 4.6** (\(\text{FM}^{*}\) distance).: _Given \(\mu,\nu\in\mathcal{P}(X)\), every \(\text{FM}_{h}^{*}(\mu,\nu)\in\mathscr{D}_{\mathcal{F}_{\text{Inj}}\cap\mathcal{F}_{\text{Sep}(\mu,\nu)}}^{\text{FM}^{*}}\) is indeed a distance._ With Proposition 4.6, we now have an answer to Question 3.4, which points toward our main theorem. ## 5 Does GAN Training Indeed Minimize a Distance? In this section, we present Theorem 5.3, which is our main theoretical result and gives sufficient conditions for the discriminator to be \(\mathcal{J}\)-metrizable. Then, we explain certain implications of the theorem. ### 5.1 Metrizable Discriminator in GAN We directly apply the discussion in the previous section to \(\mathcal{J}_{\text{Wass}}\), and by extending the result for Wasserstein GAN to a general GAN, we derive the main result. First, a simple combination of Propositions 3.5 and 4.6 yields the following lemma. **Lemma 5.1** (\(\mathcal{J}_{\text{Wass}}\)-metrizable).: _Given \(h\in\mathcal{F}_{\text{Inj}}\cap\mathcal{F}_{\text{Sep}(\mu_{0},\mu_{\theta})}\), let \(\omega^{*}\in\mathbb{S}^{D-1}\) be \(\hat{d}_{h}(\mu_{0},\mu_{\theta})\). Then \(f(x)=\langle\omega^{*},h(x)\rangle\) is \((\mathcal{J}_{\text{Wass}},\text{FM}_{h}^{*})\)-metrizable._ Lemma 5.1 provides the conditions for the discriminator to be \(\mathcal{J}_{\text{Wass}}\)-_metrizable_. Next, we generalize this result to more generic minimization problems. The scope is minimization problems of general GANs that are formalized in the form \(\mathcal{J}(\theta;f)=\mathbb{E}_{x\sim\mu_{\theta}}[R_{\mathcal{J}}\circ f(x)]\) with \(R_{\mathcal{J}}:\mathbb{R}\rightarrow\mathbb{R}\).
We use the gradient of such minimization problems w.r.t. \(\theta\): \[\nabla_{\theta}\mathcal{J}(\theta;f)=-\mathbb{E}_{z\sim\sigma}\left[r_{\mathcal{J}}\circ f(g_{\theta}(z))\nabla_{\theta}f(g_{\theta}(z))\right], \tag{14}\] where \(r_{\mathcal{J}}=-R_{\mathcal{J}}^{\prime}\), as seen in Table 2. By ignoring a scaling factor, Eq. (14) can be regarded as a gradient of \(d_{f}(\tilde{\mu}_{0}^{r_{\mathcal{J}}},\tilde{\mu}_{\theta}^{r_{\mathcal{J}}})\), where \(\tilde{\mu}^{r}\) is defined via \(I_{\tilde{\mu}^{r}}(x)\propto r(x)I_{\mu}(x)\). To examine whether updating the generator with the gradient in Eq. (14) can minimize a certain distance between \(\mu_{0}\) and \(\mu_{\theta}\), we introduce the following lemma. **Lemma 5.2**.: _For any \(r:X\rightarrow\mathbb{R}_{+}\) and a distance for probability measures \(\mathcal{D}(\cdot,\cdot)\), \(\mathcal{D}(\tilde{\mu}^{r},\tilde{\nu}^{r})\) indicates a distance between \(\mu\) and \(\nu\)._ By leveraging Lemma 5.2 and applying Propositions 3.5 and 4.6 to \(\tilde{\mu}_{0}^{r_{\mathcal{J}}}\) and \(\tilde{\mu}_{\theta}^{r_{\mathcal{J}}}\), we finally derive the \(\mathcal{J}\)-_metrizable conditions_ for \(\mu_{0}\) and \(\mu_{\theta}\), which is our main result. **Theorem 5.3** (\(\mathcal{J}\)-Metrizability).: _Given a functional \(\mathcal{J}(\theta;f):=\mathbb{E}_{x\sim\mu_{\theta}}[R(f(x))]\) with \(-R^{\prime}(\cdot):\mathbb{R}\rightarrow\mathbb{R}_{+}\), let \(h\in L^{\infty}(X,\mathbb{R}^{D})\) and \(\omega\in\mathbb{S}^{D-1}\) satisfy the following conditions:_ * _(Direction optimality)_ \(\omega\) _maximizes_ \(d_{\langle\omega,h\rangle}(\tilde{\mu}_{0}^{r},\tilde{\mu}_{\theta}^{r})\)_, * _(Separability)_ \(h\) _is separable for_ \(\tilde{\mu}_{0}^{r}\) _and_ \(\tilde{\mu}_{\theta}^{r}\)_, * _(Injectivity)_ \(h\) _is an injective function._ _Then \(f(x)=\langle\omega,h(x)\rangle\) is \(\mathcal{J}\)-metrizable for \(\mu_{\theta}\) and \(\mu_{0}\)._ We refer to the conditions in Theorem 5.3 as _metrizable conditions_. According to the theorem, the discriminator \(f=\langle\omega,h\rangle\) can serve as a distance between the generator and target distributions even if it is not the optimal solution to the original maximization problem \(\mathcal{V}\). ### 5.2 Implications of Theorem 5.3 We are interested in the question of whether discriminators in existing GANs can satisfy the _metrizable conditions_. We summarize our observations in Table 1. First, as explained in the next section, most existing GANs besides Wasserstein GAN do not satisfy _direction optimality_ with the maximizer \(\omega\) of \(\mathcal{V}\). This fact inspires us to develop novel maximization objectives (see Sec. 6). Second, it is generally hard to make the function \(h\) rigorously satisfy _separability_, or to verify that it is really satisfied. In Sec. 7.1, a simple experiment demonstrates that Wasserstein GAN tends to fail to learn separable functions. Third, the property of _injectivity_ largely depends on the discriminator design. There are various ways to impose _injectivity_ on the discriminator. One way is to implement the discriminator with an invertible neural network. Although this topic has been actively studied (Behrmann et al., 2019; Karami et al., 2019; Song et al., 2019), such networks have higher computational costs for training (Chen et al., 2022). Another way is to add regularization terms to the maximization problem. For example, a gradient penalty (GP) (Gulrajani et al., 2017) can promote injectivity by explicitly regularizing the discriminator's gradient. In contrast, simple removal of operators that can destroy _injectivity_, such as ReLU activation, can implicitly overcome this issue. We empirically verify this discussion in Sec. 7.1.
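For concreteness, a discriminator of the inner-product form (2) can be sketched as follows. This is our own minimal PyTorch sketch, not the authors' reference code; the layer sizes and the LReLU slope are placeholder choices:

```python
# f(x) = <omega, h(x)> with omega constrained to the unit sphere S^{D-1}.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InnerProductDiscriminator(nn.Module):
    def __init__(self, d_in=2, d_feat=128):
        super().__init__()
        self.h = nn.Sequential(                  # nonlinear feature map h
            nn.Linear(d_in, d_feat), nn.LeakyReLU(0.1),
            nn.Linear(d_feat, d_feat), nn.LeakyReLU(0.1),
        )
        self.omega = nn.Parameter(torch.randn(d_feat))

    def forward(self, x):
        w = F.normalize(self.omega, dim=0)       # project omega onto S^{D-1}
        return self.h(x) @ w                     # scalar output f(x)
```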
## 6 Adversarially Slicing Generative Network This section describes our proposed model, the Adversarially Slicing Generative Network (ASGN), which is designed to achieve _direction optimality_ in Theorem 5.3. We develop the ASGN by modifying the maximization problem on \(\mathcal{V}\) to guarantee that the optimal solution \(\omega\) achieves _direction optimality_. The proposed modification scheme is applicable to most existing GAN objectives and does not necessitate additional hyperparameters or computational overhead. Furthermore, an ASGN is applicable even to learning (class) conditional target distributions. **ASGN.** As mentioned in Sec. 5.1, given a function \(h\), maximization problems in most GANs (besides Wasserstein GAN) cannot achieve _direction optimality_ with the maximum solution of \(\mathcal{V}\), as reported in Table 1. We use hinge GAN as an example to illustrate this claim. The objective function to be maximized in hinge GAN is formulated as \[\mathcal{V}_{\text{Hinge}}(\langle\omega,h\rangle;\mu_{0},\mu_{\theta}):=\mathbb{E}_{x\sim\mu_{0}}[\min(0,-1+\langle\omega,h(x)\rangle)]\\ +\mathbb{E}_{x\sim\mu_{\theta}}[\min(0,-1-\langle\omega,h(x)\rangle)]. \tag{15}\] Given \(h\), the maximizer \(\omega\) becomes \(\hat{d}_{h}(\mu_{0}^{\text{tr}},\mu_{\theta}^{\text{tr}})\), where \(\mu_{0}^{\text{tr}}\) and \(\mu_{\theta}^{\text{tr}}\) denote truncated distributions whose supports are restricted by conditioning \(x\) on \(\langle\omega,h(x)\rangle<1\) and \(\langle\omega,h(x)\rangle>-1\), respectively. Because \((\mu_{0}^{\text{tr}},\mu_{\theta}^{\text{tr}})\) is generally different from \((\tilde{\mu}_{0}^{r_{\mathcal{J}}},\tilde{\mu}_{\theta}^{r_{\mathcal{J}}})\), the maximizer of \(\mathcal{V}_{\text{Hinge}}\) does not in general satisfy _direction optimality_ for the weighted measures appearing in the generator's objective. The ASGN therefore keeps the original objective for updating the feature map \(h\), but trains the direction \(\omega\) so that it maximizes \(d_{\langle\omega,h\rangle}(\tilde{\mu}_{0}^{r_{\mathcal{J}}},\tilde{\mu}_{\theta}^{r_{\mathcal{J}}})\), with the weighting functions \(r_{\mathcal{J}}\) summarized in Table 2; the optimal direction then becomes \(\hat{d}_{h}(\tilde{\mu}_{0}^{r_{\mathcal{J}}},\tilde{\mu}_{\theta}^{r_{\mathcal{J}}})\), as required by Theorem 5.3. Similarly to some variants of the maximum sliced Wasserstein distance (Deshpande et al., 2019; Kolouri et al., 2019), the ASGN's discriminator is interpreted as extracting nonlinear features \(h(x)\) in a high-dimensional space and slicing them in the most distinctive direction \(\omega\). Besides the fact that no sorting is required (see Remark 4.4), there are significant differences in terms of the training schemes. First, in conventional methods, the _optimal direction_ is estimated per batch, which is optimal not for the target distributions but for the sets of finite batch samples. On the other hand, in ASGN, the direction \(\omega\) is trained to satisfy _direction optimality_ for the distributions during training. Furthermore, \(h\) is trained together with the learned direction, which may yield a more _separable_ function \(h\).
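To make the above scheme concrete, the stationary point that the modified maximization problem targets can be written down directly from Proposition 3.5. The sketch below is ours (names hypothetical); in training, \(\omega\) is a parameter updated by stochastic gradients rather than set in closed form:

```python
# omega* = normalized difference of (self-normalized) weighted feature means,
# with r_J from Table 2 (r_J = 1 recovers the Wasserstein/hinge case).

import numpy as np

def optimal_direction(h_real, h_fake, r_real=None, r_fake=None):
    """Estimate omega* = d_h(mu_0~, mu_theta~), normalized, from batches."""
    w0 = np.ones(len(h_real)) if r_real is None else r_real
    w1 = np.ones(len(h_fake)) if r_fake is None else r_fake
    d = (np.average(h_real, axis=0, weights=w0)
         - np.average(h_fake, axis=0, weights=w1))
    return d / np.linalg.norm(d)
```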
## 7 Experiments We perform experiments with synthetic and image datasets (1) to verify our perspective on GANs as presented in Sec. 5 in terms of _direction optimality_, _separability_, and _injectivity_, and (2) to show the effectiveness of ASGN against GAN. For fair comparisons, we essentially use the same architectures in ASGN and GAN. However, we modify the last linear layer of ASGN's discriminators (see Sec. 6). ### 7.1 Mixture of Gaussians To verify the implications of Theorem 5.3 in Sec. 5, we conduct experiments on a mixture of Gaussians (MoG). We use a two-dimensional sample space \(X=\mathbb{R}^{2}\). The target MoG on \(X\) comprises eight isotropic Gaussians with variance \(0.05^{2}\) and means distributed evenly on a circle of radius \(1.0\). We use a 10-dimensional latent space \(Z\) to model a generator measure. For both the generator and the discriminator, we adopt simple architectures comprising fully connected layers by following previous works (Mescheder et al., 2017; Nagarajan and Kolter, 2017; Sinha et al., 2020). We basically use leaky ReLU (LReLU) with a negative slope of \(0.1\) for the discriminator. We visualize the generated samples mainly to confirm that the generator measures cover all the modes of the eight Gaussians. In addition, we plot the cumulative distribution functions of \(\mathcal{S}^{h}I_{\mu_{0}}(\cdot,\omega)\) and \(\mathcal{S}^{h}I_{\mu_{\theta}}(\cdot,\omega)\) to verify _separability_. \begin{table} \begin{tabular}{c c c} \hline \hline & Minimization problem \(\mathcal{J}\) & Weighting \(r_{\mathcal{J}}\circ f(x)\) \\ \hline Wasserstein GAN / Hinge GAN & \(-\mathbb{E}_{x\sim\mu_{\theta}}\left[f(x)\right]\) & 1 \\ Saturating GAN & \(-\mathbb{E}_{x\sim\mu_{\theta}}\left[\log\varsigma(f(x))\right]\) & \(1-\varsigma(f(x))\) \\ Non-saturating GAN & \(\mathbb{E}_{x\sim\mu_{\theta}}\left[\log(1-\varsigma(f(x)))\right]\) & \(\varsigma(f(x))\) \\ \hline \hline \end{tabular} \end{table} Table 2: Minimization problem and weighting function for direction optimization. Figure 4: Inner product of the trained \(\omega\) and the numerically estimated _optimal direction_ for \(\tilde{\mu}_{0}^{r_{\mathcal{J}}}\) and \(\tilde{\mu}_{\theta}^{r_{\mathcal{J}}}\) during training. The trained \(\omega\) were closer to the _optimal direction_ with ASGN than with GAN. Figure 5: (a)-(g) Estimated cumulative distribution functions at 5,000 iterations. _Separability_ is satisfied in the other GANs and in ASGNs. In contrast, the discriminator does not satisfy _separability_ in Wasserstein GAN. (h) Data samples and generated samples during training with Wasserstein GAN. Different colors represent different iterations (2,000, 4,000,..., 10,000). Rotational behavior is observed. Figure 3: Comparison of the learned distributions (at 10,000 iterations) between GAN and ASGN with various objectives. Orange and blue dots are samples from the trained generator and the ground truth, respectively. In all cases, ASGNs cover all modes whereas mode collapse occurs in some GAN cases. #### 7.1.1 Direction Optimality We compare ASGN and GAN with various objectives. As shown in Fig. 3, the generator measures trained with ASGN cover all modes whereas mode collapse (Srivastava et al., 2017) occurs with hinge GAN and non-saturating GAN.
In addition, Fig. 4 shows a plot of the inner product of the learned direction \(\omega\) (or the normalized weight in GAN's last linear layer) and the estimated _optimal direction_ for \(\tilde{\mu}_{0}^{r_{\mathcal{J}}}\) and \(\tilde{\mu}_{\theta}^{r_{\mathcal{J}}}\). Recall that there is no guarantee that a non-optimal direction \(\omega\) induces a distance. #### 7.1.2 Separability As shown in Fig. 5, the cumulative distribution function for the generator measures trained with Wasserstein GAN does not satisfy _separability_, whereas the generator measures trained with other GAN losses satisfy this property. This may cause Wasserstein GAN's rotational behavior, as seen in Fig. 5-(h). #### 7.1.3 Injectivity To investigate the effect of _injectivity_ on the training results, we train hinge GAN using a discriminator with ReLU as a baseline. We apply two techniques to induce _injectivity_: (1) the use of LReLU and (2) the addition of a GP to the discriminator's maximization problem. As shown in Fig. 6, with either of these techniques, the training is improved and mode collapse does not occur. ### 7.2 Image Generation We further evaluate our method by comparing it with GAN on image generation tasks. For all of these experiments, to evaluate the performance of the generative models, we calculate the Fréchet Inception distance (FID) (Heusel et al., 2017) on sets of training and generated samples, each consisting of 50,000 images. We first train ASGNs and GANs with various objective functions on CIFAR10 (Krizhevsky et al., 2009) and CelebA (128\(\times\)128) (Liu et al., 2015). We adopt the DCGAN architectures (Radford et al., 2016), and for the discriminator, we apply spectral normalization (Miyato et al., 2018). As reported in Table 3, ASGNs outperform GANs in terms of the FID score in all cases. Next, we apply ASGN to BigGAN (Brock et al., 2019)2 on both unconditional and conditional generation tasks. In this experiment, we calculate the Inception Score (IS) (Salimans et al., 2016), as well as the FID, to evaluate the sample quality by following the experiment in the original paper. As shown in Table 4, the adoption of ASGN consistently improves the generation performance in terms of both metrics. Footnote 2: We use the authors' PyTorch implementation [https://github.com/ajbrock/BigGAN-PyTorch](https://github.com/ajbrock/BigGAN-PyTorch). ## 8 Conclusion We first proposed a novel perspective on GANs to derive sufficient conditions for the discriminator to serve as a distance between the data and generator probability measures. To this end, we introduced the FM\({}^{*}\) and max-ASW families. By using a class of metrics that are included in both families, we derived the _metrizable conditions_ for Wasserstein GAN. We then extended the result to a general GAN. The derived conditions consist of _direction optimality_, _separability_, and _injectivity_. By leveraging the theoretical results, we proposed the ASGN, in which a generator and discriminator are trained adversarially but with a modified GAN training scheme. This model can impose _direction optimality_ on the discriminator. ASGNs experimentally outperformed GANs on synthetic and image datasets in terms of sample quality and mode coverage.
\begin{table} \begin{tabular}{l c c} \hline Method & IS (\(\uparrow\)) & FID (\(\downarrow\)) \\ \hline Unconditional & & \\ Hinge GAN\({}^{*}\) & 8.42 \(\pm\) 0.11 & 17.16 \(\pm\) 1.34 \\ Hinge ASGN\({}^{*}\) & **8.81**\(\pm\) 0.04 & **14.45**\(\pm\) 0.58 \\ \hline Conditional & & \\ Hinge GAN\({}^{\dagger}\) & 9.22 & 14.73 \\ Hinge GAN\({}^{*}\) & 9.05 \(\pm\) 0.05 & 8.25 \(\pm\) 0.82 \\ Hinge ASGN\({}^{*}\) & **9.16**\(\pm\) 0.08 & **6.20**\(\pm\) 0.27 \\ \hline \end{tabular} \end{table} Table 4: FID and IS results on CIFAR10 with the experimental setup of BigGAN (Brock et al., 2019). Scores marked with \(*\) are results from our implementation, which is based on the BigGAN authors' PyTorch implementation. For reference, scores reported in their paper are marked with \(\dagger\). \begin{table} \begin{tabular}{l c c} \hline Method & CIFAR10 & CelebA \\ \hline Hinge GAN & 24.07\(\pm\)0.56 & 32.51\(\pm\)2.53 \\ Hinge ASGN & **20.23**\(\pm\)0.86 & **27.79**\(\pm\)1.60 \\ Saturating GAN & 25.63\(\pm\)0.98 & 37.33\(\pm\)1.02 \\ Saturating ASGN & **20.62**\(\pm\)0.94 & **28.16**\(\pm\)1.60 \\ Non-saturating GAN & 24.90\(\pm\)0.21 & 28.22\(\pm\)2.16 \\ Non-saturating ASGN & **20.51**\(\pm\)0.36 & **27.78**\(\pm\)4.59 \\ \hline \end{tabular} \end{table} Table 3: FID scores (\(\downarrow\)) on DCGAN. Figure 6: Effects of techniques to induce injectivity. The use of LReLU instead of ReLU and the addition of a GP to the maximization problem both improve the performance.
2307.10350
Improving Multimodal Datasets with Image Captioning
Massive web datasets play a key role in the success of large vision-language models like CLIP and Flamingo. However, the raw web data is noisy, and existing filtering methods to reduce noise often come at the expense of data diversity. Our work focuses on caption quality as one major source of noise, and studies how generated captions can increase the utility of web-scraped datapoints with nondescript text. Through exploring different mixing strategies for raw and generated captions, we outperform the best filtering method proposed by the DataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a candidate pool of 128M image-text pairs. Our best approach is also 2x better at Flickr and MS-COCO retrieval. We then analyze what makes synthetic captions an effective source of text supervision. In experimenting with different image captioning models, we also demonstrate that the performance of a model on standard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable indicator of the utility of the captions it generates for multimodal training. Finally, our experiments with using generated captions at DataComp's large scale (1.28B image-text pairs) offer insights into the limitations of synthetic text, as well as the importance of image curation with increasing training data quantity. The synthetic captions used in our experiments are now available on HuggingFace.
Thao Nguyen, Samir Yitzhak Gadre, Gabriel Ilharco, Sewoong Oh, Ludwig Schmidt
2023-07-19T17:47:12Z
http://arxiv.org/abs/2307.10350v2
# Improving Multimodal Datasets with Image Captioning ###### Abstract Massive web datasets play a key role in the success of large vision-language models like CLIP and Flamingo. However, the raw web data is noisy, and existing filtering methods to reduce noise often come at the expense of data diversity. Our work focuses on caption quality as one major source of noise, and studies how generated captions can increase the utility of web-scraped datapoints with nondescript text. Through exploring different mixing strategies for raw and generated captions, we outperform the best filtering method proposed by the DataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a candidate pool of 128M image-text pairs. Our best approach is also \(2\times\) better at Flickr and MS-COCO retrieval. We then analyze what makes synthetic captions an effective source of text supervision. In experimenting with different image captioning models, we also demonstrate that the performance of a model on standard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable indicator of the utility of the captions it generates for multimodal training. Finally, our experiments with using generated captions at DataComp's large scale (1.28B image-text pairs) offer insights into the limitations of synthetic text, as well as the importance of image curation with increasing training data quantity. ## 1 Introduction Pre-training large multimodal models on image-text pairs sourced from the web has become a standard approach to obtaining high performance on vision tasks [3, 24, 36, 39]. However, raw web data can be noisy or uninformative (Figure 1). Many existing data preprocessing efforts revolve around human-defined heuristics based on image and text content separately--e.g., caption length, presence of nouns, sentence complexity, image aspect ratio, minimum image size [8, 10, 45, 46]--or the reliability of the data source [14]. More complex filtering approaches target poorly aligned image-text pairs, by using trained CLIP models [39] to rank the cosine similarity score between image and text embeddings [45], or ensuring mentions of image objects in the captions [46]. These approaches discard between 60% to 90% of the initial data collected, regardless of whether the images themselves are suitable for training. In this work, we seek to restore the utility of such discarded examples with the help of synthetic captions. To do so, we leverage the DataComp benchmark [18], where initial data processing is kept to a minimum, i.e. only filtering out NSFW examples and train-test overlap. This allows us to perform controlled experiments on the raw Common Crawl data and bypass subjective human-design choices that may be employed in the creation of other datasets (e.g., LAION-5B [45]). We study several image captioning models and find that recent releases (e.g., BLIP2 [29] and OpenCLIP-CoCa [37]) can generate captions that improve CLIP training and lead to a significant boost in zero-shot performance over existing data curation methods. In particular, at the medium scale (128M samples seen), training on the _entire candidate pool_ with synthetic captions is sufficient to outperform common filtering baselines that are applied to raw data (e.g., selecting top 30% examples with highest image-text cosine similarity based on OpenAI's CLIP-ViT/L14). Section 5 describes our experiments with a variety of mixing strategies to combine signals from both raw and synthetic text. 
To explain the performance benefits of synthetic captions, we measure caption noise and diversity in various training sets, and demonstrate the importance of both factors in achieving good performance. While existing data filtering methods are effective at reducing noise, they often hurt the diversity of the original training data in the process. Synthetic captions can help address this limitation. In Section 6, we analyze various properties of caption data, as well as specific advantages of training with synthetic captions (e.g., improved retrieval capabilities). Remarkably, our empirical investigation in Section 4 shows that choosing a captioning model to yield competitive downstream performance is non-trivial, as better performance on image captioning benchmarks does not necessarily mean better generated captions for CLIP training. We also note that while this work focuses on the quality of captions used in multimodal training, image quality is another equally important topic of study. As the size of the data pool we experiment with grows, we start to observe changes in the relative importance of text quality versus image quality in building a good pre-training dataset. We comment on this in Section 7. To summarize, our findings serve as a first step towards improving the quality of _web-scale_ datasets via the use of synthetic captions. In the process, we offer insights on several research directions: \(\bullet\)_What are the considerations for choosing a captioning model?_ We find that specializing a pre-trained network towards image captioning via fine-tuning, and optimizing for high CIDEr score on Figure 1: **Raw captions crawled from the web contain significant noise; cosine similarity filtering helps reduce noise but discards many images that are useful for training. Here we show some images that would be filtered out if only the top 30% examples from the candidate pool with highest image-text cosine similarities are used for training. In these pairs, captions generated by BLIP2 tend to be more faithful to the respective images compared to raw captions obtained from the Internet. In Appendix A, we show 20 other samples drawn completely at random from the discarded pool.** standard benchmarks in general, end up producing captions that are less effective for multimodal training. Reference-free captioning metrics (e.g., CLIP-S [21]) more reliably reflect the training quality of the generated captions. * _How to combine signals from multiple sources of captions?_ We investigate different strategies for filtering and mixing raw and synthetic captions. This leads to performance gains on DataComp benchmark at small (12.8M pool size) and medium (128M pool size) scales, compared to existing approaches that utilize only raw data. At large scale (1.28B pool size), mixing in generated captions does not improve ImageNet accuracy, but average performance across 38 tasks increases by 2%. Across _all_ data scales, retrieval performance benefits significantly. * _What makes synthetic captions effective?_ Our analysis of text properties shows that on an individual level, synthetic captions are less noisy and contain more visual information. However, at the population level, synthetic captions are less diverse than raw captions. Consequently, using _both_ sources of captions helps improve the overall caption quality, measured in terms of text diversity as well as image-text alignment. 
* _How do benefits of synthetic captions scale?_ Unlike what was found in the original DataComp experiments, given access to generated captions, the best filtering approach differs across scales. Experimenting with data quantities ranging from 12.8M to 1.28B also allows us to observe some limitations of synthetic captions. We posit that image quality control, as well as the diversity gap between model-generated and web-scraped captions, play an increasingly important role in large data regimes. More broadly, our results have important implications for future work as additional progress (captured by the right metric) in image captioning can further enhance the quality of text used for vision-language pre-training. Moreover, the effectiveness of synthetic captions unlocks another massive source of training data: uncaptioned web images from Common Crawl. This can ultimately drive more large-scale multimodal training by improving the availability of properly aligned and sufficiently diverse image-text data. ## 2 Related work Synthetic data.Previous work has explored using synthetic data to create new datasets or augment existing ones [12, 15, 19, 25, 35, 40, 55, _inter alia_]. Closer to our work, Azizi et al. [5], Bansal and Grover [6], He et al. [20] use image generation models to create synthetic images for classification tasks. In the context of CLIP, Santurkar et al. [43] show that a model trained on synthetic captions can outperform a model trained on human-provided captions. The captions were generated procedurally for the 120K images in the MS-COCO training set [11] using multi-object image labels verified by Mechanical Turk workers, which would be difficult to obtain for web-scale datasets like LAION-5B [45] or CommonPool [18] that are about four orders of magnitude larger. Most similar to our work is the LAION-COCO dataset [44], containing 600M image-text pairs from LAION-5B [45] with synthetic captions generated using BLIP [28] and ranked using CLIP models [23, 39]. While [44] heavily filters the raw data pool before generating captions, we work with uncurated web datasets. In addition, the generated captions provided by LAION-COCO still significantly lag behind the corresponding web-crawled captions when it comes to yielding good CLIP performance--we provide empirical evidence and address this gap in Appendix G. Image captioning.Building models able to generate captions from images has been a long-standing subject of research [13, 26, 27, 30, 51, 52, _inter alia_]. More recently, models like BLIP2 [28, 29], Flamingo [3], and CoCa [37, 54] have made significant progress on this task. It is worth noting that the training data for BLIP [28] and BLIP2 [29] contains synthetic captions, as the authors find that this helps boost the captioning ability of the resulting model compared to training on just noisy web data. Zhu et al. [56] couple large language models with image captioning models to generate more enriched image descriptions. We expect that as these image captioning systems become more capable, the impact of using synthetic data will bring larger improvements over existing noisy image-text datasets. Improving image-text datasets.Given the importance of the pre-training data for multimodal networks [17, 18, 32], several authors have proposed techniques for improving the quality of image-text datasets. Radenovic et al. [38] propose a filtering technique called Complexity, Action, and Text-spotting (CAT), designed to select only informative image-text pairs. Cao et al. 
[9] filter out samples that contain text regions in the image and advocate for the benefits of increasing the number of samples given a fixed compute budget. Instead of discarding all text-spotting examples, Maini et al. [31] proposes masking out the text part in the image and only removing image-text pairs in which the masked image contains no useful visual features. Abbas et al. [1] identify and remove samples that are semantically similar to each other. Many image-text datasets also have their own preprocessing techniques, often not fully disclosed [10, 14, 24, 36, 39, 45]. All of these filtering approaches are complementary to the use of synthetic captions proposed by this work. Concurrent to our work, Fan et al. [16] present a form of data augmentation for training CLIP models where the captions are rewritten by a large language model. However, the rewriting process assumes access to some raw text and is not conditioned on the images, which may limit its effectiveness when the original captions are not descriptive (e.g., see Figure 1). In contrast, our work uses image captioning models, which are able to generate relevant captions for images regardless of the original text associated with them. We also work with raw Common Crawl data instead of preprocessed datasets to study the trade-offs between raw and generated captions in a systematic manner. Finally, Gadre et al. [18] introduces DataComp, a benchmark for designing better pre-training datasets for CLIP, which we use in experiments throughout the paper. ## 3 Experiment setup Data.Most of our experiments involve the CommonPool provided by the DataComp benchmark [18]. CommonPool contains image-text pairs sourced from Common Crawl dumps between 2014 and 2022, deduplicated and randomly shuffled. The small, medium, and large scales of the benchmark contain 12.8M, 128M and 1.28B candidate pairs respectively. Data preprocessing is kept to a minimum, involving only NSFW filtering, evaluation set deduplication, and face blurring, to allow maximum flexibility for dataset design. We also experiment with LAION-COCO [44] and discuss in Appendix G why it is not ideal for studying how to improve the quality of raw training data. Captioning models.We experiment with BLIP [28] and BLIP2 [29] using HuggingFace's Transformers framework. Both models were pre-trained on 129M image-text pairs from the web including MS-COCO [11] and LAION-400M [45], in addition to the bootstrapped version of the web data with synthetic captions generated by BLIP's captioner. We also look at OpenCLIP-CoCa [23, 37], which was trained on LAION-2B [45]. For each architecture, we experiment with both the pre-trained model and the one that has been fine-tuned on MS-COCO. Caption generation uses top-K sampling with K = 50, minimum caption length 5, and maximum caption length 40. Training.Given CommonPool data of a particular scale, we generate synthetic captions for the images in the pool using the captioning models described above. Then we train a CLIP model on the resulting image-text datasets, using ViT-B/32 as the image encoder for the small and medium scales, and ViT-B/16 for the large scale. Following DataComp's setup [18], the compute budget, architecture and hyperparameters for each scale are fixed in order to isolate data quality as the main factor influencing performance. Given a candidate pool of \(N\) image-text pairs, the CLIP model is then trained with \(N\) samples seen in total. Refer to Appendix B for more details. 
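As a concrete illustration of the setup above, the snippet below sketches BLIP2 caption generation with the stated decoding parameters (top-K sampling with K = 50, minimum caption length 5, maximum caption length 40; the softmax temperature of 0.75 is the value adopted in the following sections). The specific checkpoint name and HuggingFace calls are assumptions for illustration; the paper only specifies the architectures and the Transformers framework.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Checkpoint name is an assumption; the paper specifies "BLIP2, ViT-g".
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")

def caption_image(path):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    # Decoding parameters as described above: top-K sampling with K=50,
    # temperature 0.75, caption length between 5 and 40 tokens.
    out = model.generate(**inputs, do_sample=True, top_k=50, temperature=0.75,
                         min_length=5, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True).strip()
```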
Evaluation.We adopt DataComp's zero-shot evaluation suite and report both ImageNet accuracy and the average accuracy over 38 classification and retrieval tasks proposed by the benchmark [18]. We also pay particular attention to retrieval performance on Flickr30K [53] and MS-COCO [11]. The retrieval score reported is the average of text-to-image Recall@1 and image-to-text Recall@1. Unless specified otherwise, in the subsequent sections, "CLIP score filtering" or "top x%" refers to selecting top x% examples from the initial training set, based on the cosine similarity between image and text embeddings output by OpenAI's CLIP ViT-L/14 model [39], and "BLIP2" refers to captions generated by BLIP2, using top-K sampling with softmax temperature 0.75, which we have found to yield the best downstream performance compared to other sampling temperatures (see Appendix C). ## 4 Impact of model specialization on captions generated for multimodal training Given the abundance of image captioning models to choose from, a natural question to ask is: does performance on standard image captioning benchmarks correlate with how useful the generated captions are as text supervision for CLIP training? In particular, CIDEr [50], together with other reference-based metrics like SPICE [4] and BLEU-4 [34], has been widely adopted as a yardstick for determining state-of-the-art on image captioning benchmarks [3, 22, 28, 29, 54]. Consequently, previous work [28, 29, 54] also experiments with fine-tuning captioning models on MS-COCO and obtains competitive CIDEr scores on popular evaluation sets like NoCaps [2]. We compare the utility of synthetic captions produced by BLIP2 and OpenCLIP-CoCa with and without fine-tuning on MS-COCO, by training CLIP on the generated captions and evaluating the trained model on ImageNet classification and Flickr retrieval (Table 1). Fine-tuned captioning models produce captions that boost the retrieval capabilities of CLIP. However, on ImageNet, we find that fine-tuning captioning models hurts the quality of text supervision produced for CLIP training. We hypothesize that fine-tuning on MS-COCO reduces the diversity of the generated text, as evidenced by the lower number of unique trigrams across 1M caption samples (Table 1). Notably, captioning models that are only pre-trained have very poor CIDEr scores; going with this metric would have suggested that these models are not suitable for caption generation at all. While many image captioning metrics like CIDEr, SPICE and BLEU-4 emphasize similarity between generated captions and reference captions provided by humans, prior work has also proposed reference-free metrics--for example, CLIP-S [21], which uses a trained CLIP model to assess the compatibility between an image and the generated caption. We compute CLIP-S for the medium candidate pool with different synthetic captions and find that this metric is more reflective of the ImageNet performance trend. Fine-tuned captioning models produce captions that have lower CLIP-S and image-text cosine similarity in general. Since BLIP2 (no fine-tuning) produces sufficiently good text supervision for CLIP to do well on both ImageNet classification and Flickr retrieval, we use it as the captioning model of choice in subsequent experiments that look at how to combine raw and synthetic captions. 
\begin{table} \begin{tabular}{|l|l l l l|l l|} \hline Captioning model & NoCaps & CLIP-S & Cosine & No. of unique & ImageNet & Flickr \\ & CIDEr [50] & [21] & similarity & trigrams & accuracy & retrieval \\ \hline BLIP, ViT-L/16 (finetuned) & 113.2* & 0.698 & 0.231 & \(2.82\times 10^{6}\) & 0.207 & 0.498 \\ BLIP2, ViT-g & 80.6 & 0.729 & 0.251 & \(2.72\times 10^{6}\) & 0.281 & 0.507 \\ BLIP2, ViT-g (finetuned) & 119.7* & 0.709 & 0.235 & \(1.97\times 10^{6}\) & 0.227 & **0.549** \\ OpenCLIP-CoCa, ViT-L/14 & 0.354* & 0.752 & 0.260 & \(4.45\times 10^{6}\) & **0.321** & 0.395 \\ OpenCLIP-CoCa, ViT-L/14 (finetuned) & 106.5* & 0.702 & 0.232 & \(1.81\times 10^{6}\) & 0.252 & 0.542 \\ \hline \end{tabular} \end{table}

Table 1: **CIDEr score does not reliably predict how effective a captioning model is at generating synthetic captions for multimodal pre-training; fine-tuning image captioning models leads to lower ImageNet accuracy when training CLIP on the generated captions.** * indicates numbers obtained from previous work and from contacting the authors. We fix the architecture and compare captions generated from captioning models with and without fine-tuning on MS-COCO [11] as sources of text supervision for CLIP. Remarkably, fine-tuning pre-trained networks on the task of image captioning ends up producing synthetic captions that are worse for training CLIP (see ImageNet accuracy), possibly due to reduced text diversity. On the contrary, retrieval performance is higher when using captions generated by fine-tuned models.

## 5 Filtering raw and synthetic captions

Here we explore in more detail different ways of filtering and combining raw and generated captions at the medium scale of DataComp [18]:

* _No filtering:_ we train on the entire, unmodified pool (i.e., 128M samples).
* _CLIP score filtering:_ we select the top x% of examples with highest image-text cosine similarity.
* _CLIP score intersect with ImageNet1k clustering:_ Gadre et al. [18] propose clustering image embeddings and only selecting images whose cluster center is a nearest neighbor to an image from ImageNet1k. The authors then find the intersection between this set of examples and those that are in the top x% based on CLIP score. This is the best baseline using raw captions on DataComp.
* _Combining raw and synthetic captions:_ we use raw captions for the top x% of examples based on CLIP score. For the remaining images (that would otherwise be filtered out), we generate corresponding BLIP2 captions and add them back to the training pool. We also experiment with filtering these additional image-text pairs with the same cosine similarity threshold set in the first step (i.e., BLIP2 (x%, filtered) in Figure 2).

In Appendix D, we investigate other baselines and report how well each approach does with varying cosine similarity thresholds.
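To make these baselines concrete, here is a minimal sketch of CLIP score computation (cosine similarity under OpenAI's CLIP ViT-L/14, as defined in Section 3) and of the "Raw (top x%) + BLIP2 (x%, filtered)" mixing rule, with the top-x% cutoff expressed as a score threshold. The checkpoint and helper names are illustrative assumptions, not DataComp's actual tooling.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_scores(images, texts):
    """Cosine similarity between image and text embeddings, one score per pair."""
    inputs = processor(text=texts, images=images, return_tensors="pt",
                       padding=True, truncation=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1)

def mix_captions(raw_scores, blip2_scores, threshold):
    """Raw (top x%) + BLIP2 (filtered): keep the raw caption when it clears the
    threshold; otherwise fall back to the BLIP2 caption if *it* clears it too."""
    choices = []
    for i, (r, s) in enumerate(zip(raw_scores, blip2_scores)):
        if r >= threshold:
            choices.append((i, "raw"))
        elif s >= threshold:
            choices.append((i, "blip2"))
    return choices  # pairs where neither caption qualifies are dropped entirely
```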
Figure 2 (left) shows the relative performance of select baselines (the degree of CLIP score filtering has been tuned and only the best accuracy is plotted). We find that the best performance at medium scale, measured by either ImageNet or average accuracy, is achieved by mixing raw and synthetic captions, subject to a cosine similarity threshold. We also remark that including BLIP2 captions in the training pool improves retrieval performance by more than \(2\times\), see Table 2. In the right plot of Figure 2, we compare ImageNet performance at various filtering thresholds for methods that involve only one source of captions and those that involve both. We observe that given image/raw-text pairs filtered with a certain cosine similarity threshold (blue line), adding BLIP2 captions for some (red line) or all of the remaining images (green line) always helps. It is worth noting that as we lower the threshold and include more raw captions in the training mix, the performance starts to fall below that of using just synthetic captions (orange line).

\begin{table} \begin{tabular}{l|c} \hline Method & Retrieval \\ \hline Raw & 13.2 \\ \hline Raw (top 30\% intersect IN1k) & 18.2 \\ \hline Raw (top 30\%) & 19.7 \\ \hline \hline Raw (top 30\%) + BLIP2 (70\%, filtered) & 38.0 \\ \hline BLIP2 (top 75\% intersect IN1k) & 38.9 \\ \hline BLIP2 (top 50\%) & 40.1 \\ \hline Raw (top 30\%) + BLIP2 (70\%) & 40.5 \\ \hline BLIP2 & 41.7 \\ \end{tabular} \end{table}

Table 2: **Training on generated captions substantially boosts retrieval capabilities of the resulting CLIP models.** Here we report the average text-to-image and image-to-text retrieval performance across both MS-COCO and Flickr for different data filtering baselines. More specific breakdown can be found in Appendix Figure 9. Overall, we observe a \(2\times\) improvement at the medium scale of DataComp when synthetic captions are included in the training set.

Figure 2: **At the 128M scale of DataComp, we obtain improvement on ImageNet and average accuracies compared to the best filtering method on raw data, by using a mixture of raw and synthetic captions, selecting only image-text pairs with cosine similarity above a certain threshold.** (Left) We visualize how various data filtering strategies perform at medium scale, on ImageNet and across 38 tasks. Including BLIP2 captions in the training data significantly outperforms competitive baselines from DataComp trained on only raw text [18]. (Right) As we vary the percentage of top examples chosen from the pool (based on CLIP score), we see consistent benefits from \((i)\) using BLIP2 captions for samples that would be discarded otherwise, \((ii)\) applying the same filtering threshold to new image-text pairs containing BLIP2 captions to keep noise level low. The exact accuracy numbers can be found in Appendix D.

## 6 What makes synthetic captions effective?

### Defining caption quality

As seen from sample images in Figure 1, web-scraped text may not contain specific visual information (e.g., "Italien - Ligurien") or may not reflect the content of the image (e.g., "Image Not Found"). We seek to understand how generated captions can help overcome these issues. To approximate the richness of information conveyed in the text data, we take a 1M random subset from each training set and measure the number of words, as well as the grounding ratio [48] (i.e., the fraction of tokens that describe visual concepts, with the vocabulary defined by MS-COCO), in the corresponding captions. In Figure 3, we observe that synthetic captions and raw captions follow different distributions, with the former generally containing more words (left pane) and more visual tokens (right pane) per sample. Performing CLIP score filtering on raw captions leads to improvements on both of these properties; so does mixing raw and synthetic captions. Regarding the issue of poor image-text alignment, we measure image-text cosine similarity and find that web-crawled captions indeed have lower similarities overall compared to model-generated ones (Figure 4).
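The per-caption statistics used in this subsection (word count and grounding ratio), together with the population-level trigram diversity measure used in the analysis below, are simple to compute; a sketch follows. The whitespace tokenization and the `visual_vocab` set (built from MS-COCO concept words, per [48]) are simplifying assumptions.

```python
def word_count(caption):
    return len(caption.split())

def grounding_ratio(caption, visual_vocab):
    """Fraction of tokens naming visual concepts (vocabulary from MS-COCO)."""
    toks = caption.lower().split()
    return sum(t in visual_vocab for t in toks) / max(len(toks), 1)

def unique_trigrams(captions):
    """Population-level diversity: distinct word trigrams across all captions."""
    trigrams = set()
    for c in captions:
        toks = c.lower().split()
        trigrams.update(zip(toks, toks[1:], toks[2:]))
    return len(trigrams)
```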
Figure 4: **Generated captions overall exhibit higher image-text alignment than raw captions; this indicates that the former is less noisy as a training source.** We randomly sample 1% of the 128M candidate pool and given the same set of images, compare the cosine similarity distribution between raw caption data and BLIP2 caption data. We find that overall BLIP2 captions have much higher image-text cosine similarity (mean similarity 0.251 vs 0.208). Figure 3: **Individual synthetic captions can contain more information (especially visual one) than raw captions.** We calculate the number of words and the fraction of those being visual tokens in each caption for different training sets. Individual BLIP2 captions tend to yield higher numbers on these two metrics compared to individual web-crawled captions, suggesting that on a caption-per-caption basis, synthetic data may contain richer information. The analyses above measure properties of individual captions. We next aim to capture a single diversity metric over _all_ text in the training set. We again select a random subset, the size of which scales with the training set size, and calculate the number of unique trigrams across all captions in the subset. With this diversity metric, we find that BLIP2 captions actually lag behind raw captions (Figure 5). Using only the top 30% raw captions (based on CLIP score) is even more detrimental. We summarize these different aspects of caption quality in a noise versus diversity framework (Figure 5), which also offers some intuition for our best baseline uncovered in Section 5. CLIP score filtering that has been commonly adopted in prior work [18, 45] is effective at improving performance on raw data by removing examples with noisy captions. However, this procedure also lowers diversity (note: Figure 5 only provides a measure of text diversity, but image diversity is affected as well). By generating synthetic captions for the images that would be discarded otherwise, and subsequently only using pairs where the image-text similarities still meet the threshold, we manage to keep the overall noise level similarly low, while adding more diversity to the training pool. Progress along both axes enables further performance improvement compared to just filtering raw data. ### Performance analysis After diving deeper into properties of synthetic captions, we next analyze the training implications of these captions in more detail. We examine two models, one trained using only raw captions and the other using only BLIP2 captions, with both training sets having been filtered with CLIP score for top 30% pairs, and achieving similar performance on ImageNet (27.3% vs 27.5%). Averaged across 38 evaluation tasks, training on generated captions offers a 2.8% improvement. We break down performance difference between the two models on individual tasks (Figure 6), and observe that BLIP2 captions also perform better on ImageNet-derived distribution shifts and text recognition (e.g., MNIST, SVHN). Notably, among the tasks with the biggest performance gains are Flickr and MS-COCO retrieval. We provide a similar analysis in Appendix Figure 10, where expanding a filtered raw dataset with additional images and their BLIP2 captions improves CLIP performance on 30 out of 38 tasks. The two models compared above share similar ImageNet accuracy but may not be trained on the same images. 
Figure 5: **Combining raw and synthetic captions subject to a cosine similarity threshold helps reduce noise level while boosting data diversity, both of which are essential for achieving good performance.** In this plot, circle size denotes the relative size of the resulting training set. While removing noisy image-text pairs, CLIP score filtering also lowers the diversity of the caption set substantially, as measured by the number of unique trigrams in the pool. Adding more useful training data by using BLIP2 captions for filtered out images, while respecting the existing CLIP score threshold, helps overcome this limitation and improves the training data quality along both axes.

In Figure 7, we fix the set of training samples to be the top 30% with highest cosine similarity between image and _raw_ text. Replacing the raw captions with BLIP2 captions increases retrieval performance on Flickr and MS-COCO by more than \(1.5\times\) (first two columns of each task). We include retrieval performance of training on the entire pool with BLIP2 captions (generated using either the pre-trained or the fine-tuned captioning model), as well as that of training on a mixture of raw and BLIP2 captions, to demonstrate the consistent gains that synthetic captions offer.

Figure 6: **Given similar ImageNet accuracy, training with generated captions improves performance on 23 out of 38 tasks compared to training with raw captions, especially on ImageNet distribution shifts, text recognition and retrieval tasks.** We compare performance on each task of the DataComp benchmark between training with only BLIP2 captions and training with only raw captions; both datasets have been filtered with CLIP score to select the top 30% examples. Even though the two training sets yield similar ImageNet accuracy (\(\sim\)27%), using generated captions leads to 2.8% improvement on average accuracy, including minor gains on ImageNet distribution shifts and significant gains on MNIST, SVHN, Flickr and MS-COCO retrieval.

Figure 7: **Synthetic captions display a clear advantage over raw captions on retrieval tasks.** We highlight the superior performance on Flickr and MS-COCO retrieval obtained from training CLIP on captions generated by BLIP2 (pre-trained model or model that has been fine-tuned on MS-COCO), compared to training on raw captions. In particular, the first two columns of each task represent two models trained on the same set of images (i.e., those whose cosine similarity between image and _raw_ text embeddings are in the top 30%), just with different captions. This suggests that substantial gains on retrieval tasks can be obtained solely by using better aligned captions.

## 7 Performance at scale

We next apply select baselines described in Section 5 to a wider range of candidate pool sizes, ranging from 12.8M to 1.28B samples. In particular, we examine training on the entire pool with only raw captions or only BLIP2 captions, CLIP score filtering, using the intersection of top CLIP score examples and examples that lie in clusters close to the ImageNet train set, as well as mixing raw and synthetic captions--our best baseline from the medium scale. The filtering percentage for each method is tuned on the medium scale candidate pool and then applied to experiments at other scales. Given a starting pool of \(N\) samples, we limit the training budget to \(N\) steps. The 400M and 1.28B scales use the large training settings from DataComp (see [18]).
We focus on ImageNet classification and Flickr retrieval performance (note: the MS-COCO training set was included in BLIP2's pre-training data, so we have excluded MS-COCO retrieval from this comparison). At larger data quantity regimes, using synthetic captions continues to substantially outperform existing raw-text filtering baselines at retrieval (Figure 8). However, on ImageNet, adding BLIP2 captions to the training mix does not perform better than the existing state-of-the-art baseline trained on raw data, Raw (top 30% intersect IN1k), at the 400M and 1.28B scales. To give some intuition for this result, we offer two candidate hypotheses:

* As noted in Section 6, both caption noise and diversity are important considerations for performance. Noise level, measured by average image-text cosine similarity, stays about the same across all scales for each training distribution. In contrast, the diversity gap between model-generated text and web-scraped text may become more significant with increasing data quantities. We repeat the caption quality analyses from Section 6 with varying random subset size, and find that when using the number of unique nouns and unique trigrams as proxies for text diversity, generated captions exhibit a worse scaling trend than raw captions (Appendix Figure 12).
* Image quality becomes increasingly important at larger scales: (i) from the 12.8M to the 128M scale, training on the _entire candidate pool_ with BLIP2 captions outperforms competitive filtering baselines done on raw data (e.g., Raw (top 30%)); this is not the case for larger scales. (ii) starting from the 128M scale, baselines that also curate image content (i.e., intersection of top CLIP score examples and those that lie in clusters close to the ImageNet1k train set) consistently outperform baselines that involve only CLIP score filtering, using either raw or BLIP2 captions. Exact performance numbers can be found in Appendix D, Table 4.

Figure 8: **With access to generated captions, we find that the best data filtering method for ImageNet classification varies with the scale of the candidate pool; however, when it comes to retrieval, training on synthetic captions is beneficial across all scales.** We apply select baselines from Section 5 to a range of candidate pool sizes, and find that the best method on Flickr retrieval always involves synthetic captions (right plot). On ImageNet (left plot), selecting meaningful images (e.g., those that lie close to the ImageNet train set in the embedding space) becomes increasingly important at larger scales (see dotted versus striked columns). At the largest data regime, mixing raw captions with synthetic captions is no longer the best performing approach, possibly due to the saturation of text diversity obtained from image captioning models.

Overall, we find that given a fixed training budget, making more datapoints useful by carefully replacing noisy raw captions with synthetic captions--i.e., Raw (top 30%) + BLIP2 (70%, filtered) versus Raw (top 30%)--still offers classification and retrieval gains across _all_ scales. However, for synthetic captions to perform competitively on ImageNet at larger data regimes, we need to start paying attention to image quality, as well as enhancing the diversity of the generated text.

## 8 Conclusion

In this work, we demonstrate the effectiveness of synthetic captions in improving caption quality for multimodal training, as well as enhancing certain capabilities of the resulting model (e.g., retrieval).
Notably, we find that fine-tuning general-purpose models towards the task of image captioning actually makes them less effective at producing good captions for CLIP training. Our experiments with various data pool sizes, ranging from 12.8M to 1.28B image-text pairs, show that including generated captions in the training data can be highly effective at small and medium scales. However, with larger data quantities, the diversity gap between model-generated and web-scraped text begins to hinder performance gains, and we can no longer expect to obtain state-of-the-art ImageNet accuracy by improving text supervision alone.

Limitations.Our experiments do not involve an exhaustive list of image captioning systems currently available. Given a captioning model of sufficient capability--i.e., it can generate captions for training CLIP to reach a good performance--a major theme of our work is understanding how to combine signals from both raw and synthetic captions, as well as the differences between these two sources of text. We note that even with improved caption quality, multimodal web datasets may still contain harmful stereotypes, some of which have been extensively discussed in prior work [7]. Besides, generated captions also inherit biases from the captioning models, and using these captions to train the next generation of models can amplify the biases. The risks from using model outputs to replace human annotations have been studied in simplified settings in [47, 49].

Future work.Our findings motivate a number of interesting future directions. One concrete question is improving the diversity of generated captions at large scale, such as by varying the softmax temperature (we only experiment with \(T=0.75\) at this scale, chosen based on our ablation study at the medium scale), or by combining synthetic caption data from multiple image captioning systems. Another direction is proposing new algorithms to combine information from raw and generated captions, beyond what we already investigated in Section 5 and Appendix D. Future work could also explore using text-to-image generation [33, 41, 42] to create synthetic training images for concepts that are underrepresented in existing captions, in order to boost data diversity and close knowledge gaps in the resulting model.

## Acknowledgements

We thank Stability AI for the generous assistance with compute resources. We are grateful to Josh Gardner and Simon Kornblith for providing feedback on the manuscript. We also thank Maciej Kilian, Anas Awadalla, Alex Fang, and Jonathan Hayase for helpful discussions while working on this paper. SYG is supported by a NSF Graduate Research Fellowship. This work is supported in part by Open Philanthropy, the Allen Institute for AI, and NSF grants DMS-2134012 and CCF-2019844 as a part of NSF Institute for Foundations of Machine Learning (IFML).
2304.09351
Machine Vision System for Early-stage Apple Flowers and Flower Clusters Detection for Precision Thinning and Pollination
Early-stage identification of fruit flowers, in both opened and unopened condition, in an orchard environment provides essential information for crop load management operations such as flower thinning and pollination using automated and robotic platforms. These operations are important in tree-fruit agriculture to enhance fruit quality, manage crop load, and increase overall profit. Recent developments in agricultural automation suggest that this can be done with robotics, including machine vision technology. In this article, we propose a vision system that detects early-stage flowers in an unstructured orchard environment using the YOLOv5 object detection algorithm. For the robotic implementation, the position of each flower-blossom cluster is needed to navigate the robot and its end effector. The centroids of individual flowers (both open and unopened) were identified and associated with flower clusters via k-means clustering. Detection of opened and unopened flowers reaches a mAP of 81.9% on commercial orchard images.
Salik Ram Khanal, Ranjan Sapkota, Dawood Ahmed, Uddhav Bhattarai, Manoj Karkee
2023-04-19T00:16:42Z
http://arxiv.org/abs/2304.09351v1
# Machine Vision System for Early-stage Apple Flowers and Flower Clusters Detection for Precision Thinning and Pollination

###### Abstract

Early-stage identification of fruit flowers, in both opened and unopened condition, in an orchard environment provides essential information for crop load management operations such as flower thinning and pollination using automated and robotic platforms. These operations are important in tree-fruit agriculture to enhance fruit quality, manage crop load, and increase overall profit. Recent developments in agricultural automation suggest that this can be done with robotics, including machine vision technology. In this article, we propose a vision system that detects early-stage flowers in an unstructured orchard environment using the YOLOv5 object detection algorithm. For the robotic implementation, the position of each flower-blossom cluster is needed to navigate the robot and its end effector. The centroids of individual flowers (both open and unopened) were identified and associated with flower clusters via k-means clustering. Detection of opened and unopened flowers reaches a mAP of 81.9% on commercial orchard images.

Agriculture automation, Precision Thinning, Flower Detection, Flower Clustering

## 1 Introduction

The United States (U.S.) specialty crop production contributes 30% to 40% of the total U.S. crop value (USDA, 2019). However, it faces challenges such as labor shortages for manual flower thinning and bee shortages for pollination, posing a serious threat to global food security. Since agricultural robots have the potential to replace or reduce human labor, there is a dire need for the development of automated and robotic platforms for crop load management in agriculture (Bochtis et al., 2020). Crop load management in tree fruit production is a balancing act of reducing crop load in the current season for desired fruit size and quality while achieving adequate return bloom for the coming season (Terence Robinson and Hoying, 2016). A machine-vision-based robotic crop load management system has the potential to perform automated flower thinning, pollination, and green fruit thinning in real time in a natural environment. Among these, pollination and flower removal can be performed at the same time. In addition, the same information opens the door for the development of a robotic flower removal system (Ren and Yang, 2016; Iwanami et al., 2018). Planning crop load based on early-stage flowers could be one of the most effective ways to manage crop load in modern orchards. Therefore, a robust system for identifying early-stage flowers could be a significant pathway to automated crop load management. Most recent articles propose algorithms that apply only to flowers in full bloom, and few concrete studies have been reported on identifying flower blossoms during the early growing season. Since precise crop load management starts early in the season, it is crucial to have information on clusters of opened and unopened flowers. Additionally, identifying the king flower in a cluster during the early season could provide key information for precise, automated pollination.
Many researchers have reported that the You Only Look Once (YOLO) object detection algorithm achieves better accuracy and faster speed in custom object detection than Region-Based Convolutional Neural Network (RCNN) based object detection. Therefore, in this paper, the YOLO object detection technique (YOLOv5) is used to detect both open and unopened flowers. However, to our knowledge, no robotic vision system has been proposed that can perform automated flower thinning and precise pollination during the early growing season of apples. To overcome these limitations, the main objectives of this study are:

* to detect unopened and open flower clusters during the early flowering season using deep learning techniques;
* to associate individual flower detections with flower clusters using the k-means algorithm.

## 2 Related Works

Machine vision technologies based on various object detection algorithms have been applied in agricultural automation for more than a decade. Nilsback and Zisserman (2006) proposed a visual vocabulary that can support object classification for flowers with significant visual similarity using traditional image processing methods. Traditional image processing techniques consist of operations such as image enhancement, image restoration, and image analysis that manipulate images using digital computers. Another early study, on feature extraction of the lesquerella flower, was presented by Thorp and Dierig (2011), where the authors developed an image processing algorithm to detect flowers using image segmentation after transforming the image to the hue, saturation, and intensity (HSI) color space. Hočevar et al. (2014) estimated the number of flower clusters on individual trees in a high-density apple orchard by implementing apple flower detection based on thresholding and morphological image processing of hue, saturation, luminance (HSL) color space images. Likewise, several studies performed flower detection using traditional image processing methods such as morphological image processing and segmentation techniques including Otsu's method, and classified flower color groups with contour shapes using k-means clustering (Biradar and Shrikhande, 2015; Hong and Choi, 2012; Tiay et al., 2014). However, the accuracy of these traditional methods is relatively low compared to deep learning algorithms, and the approaches are limited to specific scenarios, such as requiring enough daylight or an artificial background (e.g., a black cloth screen behind the trees) to control illuminance, making the systems applicable only in controlled environments. Additionally, these methods could not take morphological features into account, which required adjusting thresholding parameters whenever illumination, flowering density, or camera position changed. Their applicability is impeded especially by variable lighting conditions and occlusion by leaves, flowers, or stems (Gongal et al., 2015). Moreover, most of these traditional techniques could not handle situations such as distinguishing buds from flowers or resolving overlapping flowers, which are needed for accurate production estimation. To overcome these challenges, Machine Learning (ML) has in the last few years become an effective way to process large amounts of data in agriculture. Tran et al. (2018) developed a flower and visitor (bees, insects, etc.)
detection system in unconstrained conditions, where the authors made use of Convolutional Neural Networks (CNN) for both flower-based image segmentation and visitor detection. However, the method had a high flower misdetection rate (8.12%), which in turn resulted in a higher visitor misdetection rate. Safar and Safar (2019) proposed another intelligent flower detection system based on the ML model "ResNet", using model enhancements such as fine-tuning, dropout ratio, and class weights to modify and improve the accuracy of ResNet for flower detection. Likewise, Islam et al. (2020) proposed a CNN-based flower detection system for eight varieties of flowers using the activation functions "ReLU" and "softmax" and the "Adam" optimizer. Further studies (de Luna et al., 2020; Yahata et al., 2017; Zawbaa et al., 2014) presented flower detection on tomato, soybean, and an eight-flower dataset, respectively, using ML classification techniques such as Simple Linear Iterative Clustering (SLIC), Support Vector Machine (SVM), and Random Forest (RF). However, the accuracy reported in these studies is relatively low. Most research studies in agricultural automation are specific to a plant or product, and few state-of-the-art studies focus on apples. Dias et al. (2018) recently developed a DL-based approach for apple flower detection using a CNN pre-trained for saliency detection; the existing network was fine-tuned by combining the CNN with an SVM to become flower sensitive. Cheng and Zhang (2020) proposed a flower detection system for smart gardens based on the deep learning model You Only Look Once (YOLOv4), where the authors applied the CSPDarknet53 network as a backbone to reduce network computation and increase the speed of flower detection. Patel (2020) proposed a flower classification approach using an optimized deep CNN by integrating a Neural Architecture Search-Feature Pyramid Network (NAS-FPN) with Faster Region-based Convolutional Neural Network (Faster R-CNN), using transfer learning based on the COCO dataset (Li et al., 2022). Various flower detection models have also been proposed using the YOLOv4 object detection model and fine-tuning (Zhou et al., 2022; Matsui et al., 2009). Bhattarai and Karkee (2022) recently developed a regression-based neural network, a weakly supervised approach called CountNet, that detects and counts apple flowers to estimate bloom density, crop load, and yield. However, all these recent DL studies for flower detection are validated only on flowers that exhibit full bloom conditions.

## 3 Materials and Methods

### Data Acquisition and Data Preparation

The 2D RGB images for this study were collected in a commercial apple orchard in Prosser, Washington, USA using an Intel RealSense 435i camera (Santa Clara, California, USA). The images were collected during the early bloom season, from April 9 to April 20, 2022. The unopened and king bloom conditions of the flower clusters were visually assessed over six different sessions, and images were collected during each session. Objects were classified into two classes: opened and unopened flowers. The image dataset contains 529 images with more than 5000 labels (around 90% unopened and 10% opened). The images were labeled with the LabelImg annotation application.

### Proposed Approach

Object detection is one of the key outcomes of deep learning algorithms.
Many object detection algorithms have been proposed and come into use in the last decade, and each algorithm has specific features. The YOLO (You Only Look Once) family is among the most popular object detection algorithms because of its processing speed. In this study, the YOLOv5 object detection algorithm was used to detect early-stage apple flowers, with unopened and opened flowers as the two categories, and the results are post-processed to find the flower clusters. Based on the centroids of the flower clusters, the robotic system can be navigated to perform the desired field operations. The proposed system consists of two steps: early-stage flower detection and clustering. In machine learning, model parameters are optimized during training and validation. In the YOLOv5 algorithm (Jocher et al., 2022), the hyperparameters can be optimized using hyperparameter optimization (Hutter et al., 2019). The 25 hyperparameters were optimized in ten iterations during the training phase, and the optimized values were used for further experiments. After detecting the unopened and open flowers, the centroid of each detected bounding box was calculated, followed by the k-means algorithm without a predefined value of \(k\), since the number of clusters in each image frame can vary. The \(k\) value varies from 1 to \(n\), where \(n\) is the maximum number of clusters in an image frame. The best value of \(k\) was determined using silhouette analysis (Rousseeuw, 1987). The silhouette coefficient _S(i)_ is calculated as

\[S(i)=\frac{b(i)-a(i)}{\max(a(i),b(i))} \tag{1}\]

where _S(i)_ is the silhouette coefficient of data point \(i\), \(a(i)\) is the average distance between \(i\) and all the other data points in the cluster to which \(i\) belongs, and \(b(i)\) is the smallest average distance from \(i\) to the points of any cluster to which \(i\) does not belong. After detecting each cluster and its centroid, each cluster is assigned a cluster identity, so that the decision-making system can decide on the next cluster after completing the thinning or pollination in the current one. The overall proposed methodology is illustrated in Figure 1.

### Data Preparation and Model Training

Based on the number of convolutional layers in the architecture, YOLOv5 has five different models, represented by N, S, M, L, and X, which stand for Nano, Small, Medium, Large, and Extra-large, respectively. The performance of the models is compared on our dataset. The important parameters for evaluating the models are execution speed, accuracy, mAP, etc. The best model was chosen based on both accuracy and execution speed for further data analysis. All the images are separated into training, validation, and testing sets in the ratio 80:10:10. Using the training dataset, all the experiments were carried out for 300 epochs with an image size of 640x640 and a batch size of 16. The algorithm is evaluated using recall, precision, mAP, etc.

Figure 1: Block diagram of the proposed algorithm

\[recall=\frac{True\ Positive}{True\ Positive+False\ Negative} \tag{2}\]

\[precision=\frac{True\ Positive}{True\ Positive+False\ Positive} \tag{3}\]

\[mAP=\frac{1}{N}\sum_{i=1}^{N}AP_{i} \tag{4}\]

### Clustering Technique and Implementation Plan

After detecting the early-stage flowers, the next step is to find the flower clusters and the centroid of each cluster, which is used to position the end-effector of the thinning or pollination robot.
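A minimal sketch of this centroid-and-silhouette step is given below, assuming a scikit-learn implementation (the paper does not name its tooling). Note that the silhouette coefficient is defined only for \(k\geq 2\), so the single-cluster case is handled separately here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def box_centroids(xyxy_boxes):
    """Centroids (cx, cy) of YOLO-style (x1, y1, x2, y2) bounding boxes."""
    b = np.asarray(xyxy_boxes, dtype=float)
    return np.stack([(b[:, 0] + b[:, 2]) / 2.0, (b[:, 1] + b[:, 3]) / 2.0], axis=1)

def cluster_flowers(points, max_k=10):
    """Choose k by silhouette analysis, then return labels and cluster centers."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    if n < 3:  # silhouette_score requires 2 <= k <= n - 1
        return np.zeros(n, dtype=int), points.mean(axis=0, keepdims=True)
    best_k, best_s = 2, -1.0
    for k in range(2, min(max_k, n - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        s = silhouette_score(points, labels)
        if s > best_s:
            best_k, best_s = k, s
    km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(points)
    return km.labels_, km.cluster_centers_
```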
The overall algorithm for the clustering is given below.

```
Require: flower image frames
for each frame do
    Detect flowers and compute their centroids
    Initialize: max items per cluster = m, maximum number of clusters = n
    for k = 1 to n do
        Apply k-means with k clusters and compute the silhouette coefficient s[k]
    end for
    k* = arg max_k s[k]
    Apply k-means clustering with k* clusters
    Capture the next frame
end for
```
**Algorithm 1** Proposed Algorithm

## 4 Experiments and Results

The first step was to find the best model among the five YOLOv5 models. The selection was based on both accuracy and execution speed. In our experiments, the heaviest model (YOLOv5X) was too large to fit in our GPU, so it was excluded; the remaining four models were trained. The results of the experiments with the four YOLOv5 models are given in Table 1. Based on [email protected], the best result is obtained with the YOLOv5s model at a reasonable execution speed, so YOLOv5s is used for further experiments. Figure 2 shows the result of object detection. Early-stage (unopened) flowers are represented by orange bounding boxes and open flowers by dark red bounding boxes. The model is also able to detect overlapping flowers. From the results, the detection accuracy is promising for robotic operations such as flower thinning and pollination. The final and most important step of the experiments is to find the clusters of flowers and the centroid of each cluster. Figure 3 shows the association of individual flower detections with unique flower clusters and the estimated flower cluster centroids. The centroid of each cluster is an important reference point for positioning the end-effector of the pollination or thinning robot.

## 5 Discussion

One of the initial steps in the apple orchard is flower thinning for quality fruit production through proper crop load management. In recent years, much research has proposed flower-thinning robots, and the most important part of this type of robot is machine vision. In this article, early-stage flowers, most of which were not yet open, were detected to enable efficient flower thinning and pollination.

Figure 3: Detection and clustering of the unopened flowers. Each (+) represents the centroid of a cluster.

Figure 2: Illustration of open and unopened flower detection using the YOLOv5 object detection model.

Overall, detection of early-stage (unopened) flowers is more accurate than detection of open flowers. Using the machine vision system and flower cluster geometry, the king flower can also be detected, which might be the best reference for selecting the position and orientation of the end effector. Unopened flowers are well separated from each other and easy to detect. In agricultural robotics, motion and movement are controlled by the output of the vision system. Most research articles on flower thinning and pollination consider only open flower detection, which is insufficient, as not all flowers open at the same time. This article covers detection of both open and unopened flowers. Here, the centroid of each flower cluster is calculated so that the robot's end-effector can be positioned to conduct thinning and pollination. In particular, the end-effector design is simpler for unopened flowers, making it easier to remove unwanted flower buds. The detection evaluation metrics indicate that the results are sufficient for field implementation.
The mAP of the bud class is 0.819. For clustering, the accuracy is sufficient, and the value of \(k\) in k-means does not need to be defined in advance. An important contribution is the proposed implementation algorithm, which is not limited to flower thinning: similar protocols can be applied to pollination, green apple thinning, and harvesting. According to [20], the control of the robot's pose and position is based on the detected object, with repositioning for the next operation.

This study has a few limitations, which also point to future work. 1) All the experiments were carried out using RGB images, so a real implementation needs to consider 3D information; the next proposed step is to work on pose estimation. 2) All the experiments used flower images collected in a single orchard; results might differ in other orchards or with different image collection timing. To design a universal model, it is necessary to collect diverse types of images.

## 6 Conclusions

In this article, both open and unopened flowers are detected using the YOLOv5 algorithm, and k-means clustering is applied to find the centroid of each cluster. The centroid location of each cluster is important for positioning the end-effector of the robot. The efficacy of the vision system for thinning and pollination robots is sufficient for implementation in the field environment. In agricultural robotics, the motion and movement of the robot are controlled according to the 3D positions of the object and the robot; the future direction of this study is therefore 3D pose estimation of the flowers or blossoms.

## Acknowledgements

This work was funded by Agricultural AI for Transforming Workforce and Decision Support (AgAID).
2305.10529
The descriptive complexity of the set of Poisson generic numbers
Let $b\ge 2$ be an integer. We show that the set of real numbers that are Poisson generic in base $b$ is $\boldsymbol{\Pi}^0_3$-complete in the Borel hierarchy of subsets of the real line. Furthermore, the set of real numbers that are Borel normal in base $b$ and not Poisson generic in base $b$ is complete for the class given by the differences between $\boldsymbol{\Pi}^0_3$ sets. We also show that the effective versions of these results hold in the effective Borel hierarchy.
Verónica Becher, Stephen Jackson, Dominik Kwietniak, Bill Mance
2023-05-17T19:25:49Z
http://arxiv.org/abs/2305.10529v1
# The descriptive complexity of the set of Poisson generic numbers

###### Abstract.

Let \(b\geq 2\) be an integer. We show that the set of real numbers that are Poisson generic in base \(b\) is \(\mathbf{\Pi}^{0}_{3}\)-complete in the Borel hierarchy of subsets of the real line. Furthermore, the set of real numbers that are Borel normal in base \(b\) and not Poisson generic in base \(b\) is complete for the class given by the differences between \(\mathbf{\Pi}^{0}_{3}\) sets. We also show that the effective versions of these results hold in the effective Borel hierarchy.

**Keywords**: Poisson generic numbers; normal numbers; descriptive set theory; **MSC Classification**: 03E15; 11U99; 11K16.

## 1. Introduction and statement of results

Years ago Zeev Rudnick introduced Poisson generic real numbers: a real number \(x\) is Poisson generic in an integer base \(b\geq 2\) if, for every \(\lambda>0\), the statistics of the number of occurrences of words of length \(k\) over the alphabet \(\{0,1,\ldots,b-1\}\) in the initial segments of the base \(b\) expansion of \(x\) tend to the Poisson distribution with parameter \(\lambda\) as \(k\to\infty\). That is, for each \(\lambda>0\), the fraction of words of length \(k\) appearing a given number of times among the first \(\lfloor\lambda b^{k}\rfloor+k\) digits tends, as \(k\to\infty\), to the corresponding probability under the Poisson distribution with parameter \(\lambda\). Peres and Weiss [12] proved that Lebesgue almost all real numbers are Poisson generic. Their proof is presented in [3, Theorem 1]. Poisson genericity implies (Borel) normality.

For the rest of the paper, given an integer \(b\geq 2\), we identify real numbers in the unit interval \([0,1)\) with their base \(b\) expansions, that is, we identify each \(x\in[0,1)\) with a sequence \(x_{1}x_{2}x_{3}\ldots\) with values in \(\{0,1,\ldots,b-1\}\) such that \[x=\sum_{j=1}^{\infty}\frac{x_{j}}{b^{j}}\] and \(x_{j}\neq 0\) for infinitely many \(j\geq 1\). All real numbers in \([0,1)\) have at least one such expansion, and for all but countably many real numbers the base \(b\) expansion is unique. In the sequel we consider an integer \(b\geq 2\) that we take as the given base. For a real number \(x\) and an interval \(A=[q,r]\) of real numbers (respectively \(A=[q,r)\)), where \(1\leq q<r\), we write \(x\upharpoonright A\) to denote the segment of the base-\(b\) expansion of \(x\) corresponding to the positive integers in the interval \(A\). Often, instead of writing \(x\upharpoonright[1,r]\) for some \(r>1\), we write \(x\upharpoonright r\) to denote the initial segment of the base \(b\) expansion of \(x\) up to position \(\lfloor r\rfloor\). Since Poisson genericity in base \(b\) is a property that depends only on the tail of the base \(b\) representation of a real number, the integer part of the number is irrelevant. Thus, we present our results just for the real numbers in the unit interval, but they also hold when the unit interval is replaced by the real line.

**Definition** (Poisson generic number).: Let \(\lambda\) be a positive real number. A real number \(x\in[0,1)\) is \(\lambda\)-Poisson generic in base \(b\) if for every non-negative integer \(j\) we have \[\lim_{k\to\infty}Z_{j,k}^{\lambda}(x)=e^{-\lambda}\frac{\lambda^{j}}{j!},\] where \[Z_{j,k}^{\lambda}(x)=\frac{1}{b^{k}}|\{w\in\{0,\ldots,(b-1)\}^{k}\colon w\text{ occurs }j\text{ times in }x\upharpoonright\lambda b^{k}+k\}|.\] A real number \(x\) is Poisson generic in base \(b\) if it is \(\lambda\)-Poisson generic in base \(b\) for every positive real \(\lambda\).
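To make the definition concrete, \(Z^{\lambda}_{j,k}\) is easy to evaluate on any finite digit string. The snippet below is only an illustration and is not part of the paper: for base \(b=2\) it counts the fraction of length-\(k\) words occurring exactly \(j\) times in the first \(\lfloor\lambda b^{k}\rfloor+k\) digits of a pseudo-random binary string; for such a string the values should be close to \(e^{-\lambda}\lambda^{j}/j!\).

```
import math
import random
from collections import Counter

def Z(digits, b, lam, j, k):
    """Fraction of the b**k words of length k over {0,...,b-1} that occur
    exactly j times in the first floor(lam * b**k) + k digits."""
    n = int(lam * b**k) + k
    prefix = tuple(digits[:n])
    counts = Counter(prefix[i:i + k] for i in range(n - k + 1))
    if j == 0:
        hits = b**k - len(counts)  # words that never occur in the prefix
    else:
        hits = sum(1 for c in counts.values() if c == j)
    return hits / b**k

random.seed(0)
b, k, lam = 2, 14, 1.0
digits = [random.randrange(b) for _ in range(int(lam * b**k) + k)]
for j in range(4):
    print(j, round(Z(digits, b, lam, j, k), 4),
          round(math.exp(-lam) * lam**j / math.factorial(j), 4))
```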
Let \(\mathcal{P}_{b}\) be the set of real numbers that are Poisson generic in base \(b\). It is easy to see that \(\mathcal{P}_{b}\) is a Borel set. Our goal is to give the descriptive complexity of \(\mathcal{P}_{b}\). In other words, we would like to locate the exact position of \(\mathcal{P}_{b}\) in the Borel hierarchy (both lightface and boldface). Recall that the Borel hierarchy for subsets of the real numbers is the stratification of the \(\sigma\)-algebra generated by the open sets with the usual topology. For references see Kechris's textbook [10]. A set \(A\) is \(\mathbf{\Sigma}_{1}^{0}\) if and only if \(A\) is open, and \(A\) is \(\mathbf{\Pi}_{1}^{0}\) if and only if \(A\) is closed. \(A\) is \(\mathbf{\Sigma}_{n+1}^{0}\) if and only if it is a countable union of \(\mathbf{\Pi}_{n}^{0}\) sets, and \(A\) is \(\mathbf{\Pi}_{n+1}^{0}\) if and only if it is a countable intersection of \(\mathbf{\Sigma}_{n}^{0}\) sets. A set \(A\) is hard for a Borel class if and only if every set in the class is reducible to \(A\) by a continuous map. A set \(A\) is complete in a class if it is hard for this class and belongs to the class. By Wadge's celebrated theorem, in spaces like the real numbers with the usual interval topology, a \(\mathbf{\Sigma}_{n}^{0}\) set is \(\mathbf{\Sigma}_{n}^{0}\)-complete if and only if it is not \(\mathbf{\Pi}_{n}^{0}\).

When we restrict to intervals with rational endpoints and computable countable unions and intersections, we obtain the effective or lightface Borel hierarchy. One way to present the finite levels of the effective Borel hierarchy is by means of the arithmetical hierarchy of formulas in the language of second-order arithmetic. Atomic formulas in this language assert algebraic identities between integers or membership of real numbers in intervals with rational endpoints. A formula in the arithmetic hierarchy involves only quantification over integers. A formula is \(\Pi_{0}^{0}\) and \(\Sigma_{0}^{0}\) if all its quantifiers are bounded. It is \(\Sigma_{n+1}^{0}\) if it has the form \(\exists x\,\theta\) where \(\theta\) is \(\Pi_{n}^{0}\), and it is \(\Pi_{n+1}^{0}\) if it has the form \(\forall x\,\theta\) where \(\theta\) is \(\Sigma_{n}^{0}\). A set \(A\) of real numbers is \(\Sigma_{n}^{0}\) (respectively \(\Pi_{n}^{0}\)) in the effective Borel hierarchy if and only if membership in that set is definable by a formula which is \(\Sigma_{n}^{0}\) (respectively \(\Pi_{n}^{0}\)). Notice that every \(\Sigma_{n}^{0}\) set is \(\mathbf{\Sigma}_{n}^{0}\) and every \(\Pi_{n}^{0}\) set is \(\mathbf{\Pi}_{n}^{0}\). In fact, for every set \(A\) in \(\mathbf{\Sigma}_{n}^{0}\) there is a \(\Sigma_{n}^{0}\) formula and a real parameter such that membership in \(A\) is defined by that \(\Sigma_{n}^{0}\) formula relative to that real parameter. A set \(A\) is hard for an effective Borel class if and only if every set in the class is reducible to \(A\) by a computable map. As before, \(A\) is complete in an effective class if it is hard for this class and belongs to the class. Since computable maps are continuous, proofs of hardness in the effective hierarchy often yield proofs of hardness in general by relativization.

The difference hierarchy over a pointclass is generated by taking differences of sets. In the sequel we are just interested in the class \(D_{2}\)-\(\mathbf{\Pi}_{3}^{0}\), which consists of all the sets that are a difference of two sets in \(\mathbf{\Pi}_{3}^{0}\). The class \(D_{2}\)-\(\Pi_{3}^{0}\) is the effective counterpart.
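In terms of the arithmetical hierarchy just described, membership in \(\mathcal{P}_{b}\) has the following quantifier shape; this display is an added illustration, and it uses the reduction to rational \(\lambda\) discussed next:

\[x\in\mathcal{P}_{b}\iff\forall\lambda\in\mathbb{Q}^{+}\ \forall j\ \forall m\ \exists K\ \forall k\geq K\quad\Big{|}Z^{\lambda}_{j,k}(x)-e^{-\lambda}\frac{\lambda^{j}}{j!}\Big{|}<\frac{1}{m}.\]

The inner inequality is decided by finitely many digits of \(x\), the three leading universal quantifiers can be merged into a single integer quantifier, and the resulting \(\forall\exists\forall\) prefix is exactly the \(\Pi^{0}_{3}\) form described above.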
Although the definition of Poisson genericity in a given base \(b\) asks for \(\lambda\)-Poisson genericity in base \(b\) for every positive real \(\lambda\), it suffices to consider \(\lambda\)-Poisson genericity in base \(b\) for every positive rational \(\lambda\). This is proved in Lemma 3. Then, by the form of its definition, the set \(\mathcal{P}_{b}\) is a \(\Pi_{3}^{0}\) property, hence \(\mathcal{P}_{b}\) is a Borel set appearing as a \(\mathbf{\Pi}_{3}^{0}\) set in the Borel hierarchy. We shall prove completeness. We first prove the boldface case, and then we add the needed subtleties to prove the lightface case. We start with the following result.

**Theorem 1**.: \(\mathcal{P}_{b}\) _is \(\mathbf{\Pi}^{0}_{3}\)-complete._

**Definition** (Borel normal number).: Let \(b\geq 2\) be an integer. A real number \(x\) is Borel normal in base \(b\) if for every block \(w\) of digits in \(\{0,\ldots,(b-1)\}\), \[\lim_{n\to\infty}\frac{\text{the number of occurrences of $w$ in $x\upharpoonright n$}}{n}=b^{-|w|}.\]

Let \(\mathcal{N}_{b}\) be the set of real numbers that are Borel normal in base \(b\). The set \(\mathcal{N}_{b}\) is \(\mathbf{\Pi}^{0}_{3}\)-complete [6, 11]. Every real number Poisson generic in base \(b\) is Borel normal in base \(b\), see [12] or [5, Theorem 2]. We study the descriptive complexity of the difference set.

**Theorem 2**.: \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\) _is \(D_{2}\)-\(\mathbf{\Pi}^{0}_{3}\)-complete._

The next two results are the lightface improvements of Theorems 1 and 2.

**Theorem 3**.: \(\mathcal{P}_{b}\) _is \(\Pi^{0}_{3}\)-complete._

**Theorem 4**.: \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\) _is \(D_{2}\)-\(\Pi^{0}_{3}\)-complete._

Similarly to earlier results showing that difference sets of normal numbers for Cantor series expansions are \(D_{2}\)-\(\mathbf{\Pi}^{0}_{3}\)-complete [2], Theorem 2 imposes limitations on the relationship between \(\mathcal{N}_{b}\) and \(\mathcal{P}_{b}\). An immediate consequence of Theorem 2 is that the set \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\) is uncountable. Also, since \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\) is \(D_{2}\)-\(\mathbf{\Pi}^{0}_{3}\)-complete, there cannot be a \(\mathbf{\Sigma}^{0}_{3}\) set \(A\) such that \(A\cap\mathcal{N}_{b}=\mathcal{P}_{b}\) (as otherwise we would have \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}=\mathcal{N}_{b}\setminus A\in\mathbf{\Pi}^{0}_{3}\), a contradiction). Thus, no \(\mathbf{\Sigma}^{0}_{3}\) condition can be added to normality to give Poisson genericity. Equivalently, any time a \(\mathbf{\Sigma}^{0}_{3}\) set contains \(\mathcal{P}_{b}\), it must contain elements of \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\).

As an application, consider the following definition of weakly Poisson generic:

**Definition** (Weakly Poisson generic number).: Say \(x\in[0,1)\) with base \(b\) expansion \((x_{j})\) is weakly Poisson generic in base \(b\) if for every \(\epsilon>0\), every positive rational \(\lambda\), and every non-negative integer \(j\), we have for infinitely many \(k\) that \(|Z^{\lambda}_{j,k}(x)-e^{-\lambda}\frac{\lambda^{j}}{j!}|<\epsilon\).

Note that being Poisson generic in base \(b\) implies being weakly Poisson generic. However, being weakly Poisson generic is a \(\mathbf{\Pi}^{0}_{2}\) condition. So, from Theorem 2 we get the following:

**Corollary 1**.: For every base \(b\) there is a base-\(b\) normal number which is weakly Poisson generic but not Poisson generic.
As another application, consider the following version of discrepancy. Suppose \(f\) is a function assigning to each word \(w\in b^{<\omega}\) and each positive integer \(n\) a positive real number \(f(w,n)\). Given \(x\in[0,1)\) with base \(b\) expansion \((x_{j})\), say the \((w,n)\)-discrepancy is \(D(x,w,n)=|\frac{n}{b^{|w|}}-W(x\upharpoonright n,w)|\), where \(W(u,w)\) is the number of occurrences of \(w\) in \(u\). We say a real number \(x\) has base \(b\) \(f\)-large discrepancy if for all \(w\) and all \(n\) we have that \(D(x,w,n)>f(w,n)\). The set of \(x\) with \(f\)-large discrepancy, for any fixed \(f\), is easily a \(\mathbf{\Pi}^{0}_{1}\) set. The set of numbers that are Borel normal to base \(b\) is exactly the set of those for which the normalized discrepancy of the initial segments of their expansion in base \(b\) goes to zero. We conjecture that the Poisson generic numbers in base \(b\) cannot have very low discrepancy of their initial segments (for instance, infinite de Bruijn sequences exist in bases \(b\geq 3\); they satisfy \(Z^{1}_{1,k}=1\) for every \(k\), hence they do not correspond to Poisson generic numbers, and they have low discrepancy). However, we have the following, which states that the Poisson generic reals cannot be characterized as the set of normal numbers satisfying a large discrepancy condition.

**Corollary 2**.: For every function \(f\), the set of base-\(b\) Poisson generic reals is not equal to the set of normal numbers with \(f\)-large discrepancy.

There are also many other naturally occurring sets of real numbers defined by conditions which make them \(\mathbf{\Sigma}_{3}^{0}\). Examples include countable sets, co-countable sets, the class BA of _badly approximable_ numbers (which is a \(\mathbf{\Sigma}_{2}^{0}\) set), the Liouville numbers (which form a \(\mathbf{\Pi}_{2}^{0}\) set), and the set of \(x\in[0,1]\) where a particular continuous function \(f\colon[0,1]\to\mathbb{R}\) is not differentiable. In all these cases, the theorem implies that either the set omits some Poisson generic number, or else contains a number which is normal but not Poisson generic. Of course, many of these statements are easy to see directly, but the point is that they all follow immediately from the general complexity result, Theorem 2.

The set of real numbers whose expansion in _every_ integer base is Poisson generic is of course \(\Pi_{3}^{0}\), but we do not know yet how to prove that this set is \(\Pi_{3}^{0}\)-complete. The set of real numbers whose expansion in one base is \(\lambda\)-Poisson generic but not \(\lambda^{\prime}\)-Poisson generic, for different positive real numbers \(\lambda\) and \(\lambda^{\prime}\), is \(D_{2}\)-\(\Pi_{3}^{0}\), but we do not know if it is complete.

The results in the present note contribute to the corpus of work on the descriptive complexity of properties of real numbers that started with the questions of Kechris on the descriptive complexity of the set of Borel normal numbers. He conjectured that the set of absolutely normal numbers (normal to all integer bases) is \(\mathbf{\Pi}_{3}^{0}(\mathbb{R})\)-complete. Ki and Linton [11] gave the first result towards solving the conjecture by showing that the set of numbers that are normal to base \(2\) is \(\mathbf{\Pi}_{3}^{0}\)-complete. Then V. Becher, P. A. Heiber, and T. A. Slaman [4] settled Kechris' conjecture. Furthermore, V. Becher and T. A. Slaman [8] proved that the set of numbers normal in at least one base is \(\mathbf{\Sigma}_{4}^{0}(\mathbb{R})\)-complete.
In another direction, D. Airey, S. Jackson, D. Kwietniak, and B. Mance [1] and, more generally, K. Deka, S. Jackson, D. Kwietniak, and B. Mance in [9] showed that for any dynamical system with a weak form of the specification property, the set of generic points for the system is \(\mathbf{\Pi}_{3}^{0}\)-complete. This result generalizes the Ki-Linton result to many numeration systems other than the standard base \(b\) one. In general, the Cantor series expansions are not covered in this generality, so D. Airey, S. Jackson, and B. Mance [2] determined the descriptive complexity of various sets of normal numbers in these numeration systems.

## 2. Boldface

We write \(\mu\) for the Lebesgue measure on the real numbers. From the metric theorem of Peres and Weiss [12, 3], asserting that \(\mu\)-almost all real numbers in the unit interval are Poisson generic in each integer base \(b\), we have the following. For \(\mu\)-almost all real numbers \(x\) in the unit interval the following holds. Fix an integer base \(b\geq 2\) and any \(\alpha\in(0,1)\). Then for any non-negative integer \(j\) and any \(\epsilon>0\), for all large enough \(k\) we have that \[\Big{|}Z_{j,k}^{(1-\alpha)}(x)-e^{-(1-\alpha)}\frac{(1-\alpha)^{j}}{j!}\Big{|}<\epsilon.\]

Proof of Theorem 1.: Let \(\mathcal{C}=\{z\in(\omega\setminus\{0,1\})^{\omega}\colon\lim_{i}z(i)=\infty\}\). So, \(\mathcal{C}\) is \(\mathbf{\Pi}_{3}^{0}\)-complete. We define a continuous map \(f\colon\omega^{\omega}\to(0,1)\) which reduces \(\mathcal{C}\) to \(\mathcal{P}_{b}\), that is, \(f(z)\in\mathcal{P}_{b}\) if and only if \(z\in\mathcal{C}\). Fix \(z\in(\omega\setminus\{0,1\})^{\omega}\), and fix a real number \(x\) which is Poisson generic in base \(b\), as provided by the measure one set above. At step \(i\) we define \(f(z)\upharpoonright[b^{k_{i-1}},b^{k_{i}})\), where \((k_{i})\) is a sufficiently fast-growing sequence of positive integers. Let \[B_{i}:=[b^{k_{i-1}},b^{k_{i}}),\] \[B^{\prime}_{i}:=\Big{[}b^{k_{i-1}},\big{(}1-\frac{1}{z(i)}\big{)}b^{k_{i}}\Big{)}.\] The set \(B^{\prime}_{i}\) is non-empty as we may assume \(k_{i}>2k_{i-1}\). We set \[f(z)\upharpoonright B^{\prime}_{i}=x\upharpoonright B^{\prime}_{i}\text{ and }f(z)\upharpoonright B_{i}\setminus B^{\prime}_{i}=0.\]

First suppose \(z\notin\mathcal{C}\), and fix \(p\in\omega\) such that for infinitely many \(i\) we have \(z(i)=p\). Consider step \(i\) in the construction of \(f(z)\) for such an \(i\). For any \(\epsilon>0\), if \(i\) is large enough then the number of words \(w\) of length \(k_{i}\) which occur in \(x\upharpoonright[1,(1-\frac{1}{z(i)})b^{k_{i}}]\) is at most \[b^{k_{i}}(1-e^{-(1-\frac{1}{z(i)})}+\epsilon).\] So, the number \(Z_{i}\) of words \(w\) of length \(k_{i}\) which occur in \(f(z)\upharpoonright b^{k_{i}}\) is at most \[b^{k_{i}}\big{(}1-e^{-(1-\frac{1}{z(i)})}+\epsilon\big{)}+b^{k_{i-1}}.\] So, \[\frac{1}{b^{k_{i}}}Z_{i}\leq\big{(}1-e^{-(1-\frac{1}{p})}+2\epsilon\big{)}\] if \(i\) is large enough, using the fact that the \(k_{i}\) grow sufficiently fast. On the other hand, the Poisson estimate for the proportion of words of length \(k_{i}\) occurring in a Poisson generic sequence of length \(b^{k_{i}}\) is \(1-1/e\). Since \(p\) is fixed, as \(i\) gets large we have a contradiction. So, \(f(z)\) is not \(1\)-Poisson generic.

Next suppose that \(z\in\mathcal{C}\). We show that \(f(z)\) is Poisson generic in base \(b\). Fix \(\lambda>0\) and \(\ell\in\omega\). Fix also \(\epsilon>0\). Consider \(k\in\omega\), and let \(i\) be such that \(k_{i-1}\leq k<k_{i}\).
We show that for \(k\) (and hence \(i\)) sufficiently large, \(|Z^{\lambda}_{\ell,k}(f(z))-e^{-\lambda}\frac{\lambda^{\ell}}{\ell!}|<\epsilon\). Assume \(i\) is large enough so that \(\frac{1}{z(j)}<\epsilon\) for all \(j\geq i-1\). First consider the case \(\lambda\leq 1\). Note that, as \(z(i)\geq 2\), \[b^{k}\leq\frac{1}{b}b^{k_{i}}\leq b^{k_{i}}\big{(}1-\frac{1}{z(i)}\big{)}.\] We have that \[\begin{split}\Big{|}\frac{1}{b^{k}}Z^{\lambda}_{\ell,k}(f(z))-\frac{1}{b^{k}}Z^{\lambda}_{\ell,k}(x)\Big{|}&\leq\frac{1}{b^{k}}\left(b^{k_{i-1}}\frac{1}{z(i-1)}+6k+b^{k_{i-2}}\right)\\ &\leq\frac{1}{z(i-1)}+\epsilon\\ &\leq 2\epsilon\end{split} \tag{1}\] for \(i\) large enough. We have used here the fact that \[|Z^{\lambda}_{\ell,k}(f(z))-Z^{\lambda}_{\ell,k}(x)|\] is at most the number of words of length \(k\) which appear in one of \(f(z)\upharpoonright b^{k}\), \(x\upharpoonright b^{k}\) at a position which overlaps the block \([b^{k_{i-1}}(1-\frac{1}{z(i-1)}),b^{k_{i-1}})\), or else overlaps the block \([1,b^{k_{i-2}}]\), which gives the above estimate.

Consider now the case \(\lambda>1\). If \(\lambda b^{k}<b^{k_{i}}(1-\frac{1}{z(i)})\), then the same estimate above works. So, suppose \(b^{k}\geq\frac{1}{\lambda}b^{k_{i}}(1-\frac{1}{z(i)})\). We may assume that \[\lambda b^{k}<\frac{1}{2}b^{k_{i+1}}\leq(1-\frac{1}{z(i+1)})b^{k_{i+1}}\] since \(\lambda\) is fixed and the \(k_{i}\) grow sufficiently fast (in particular \(\frac{b^{k_{i+1}}}{b^{k_{i}}}\to\infty\)). In this case we also count the number of words \(w\) of length \(k\) which might overlap the block of \(0\)s in \(f(z)\upharpoonright[b^{k_{i}}(1-\frac{1}{z(i)}),b^{k_{i}}]\). We then get \[\Big{|}\frac{1}{b^{k}}Z_{\ell,k}^{\lambda}(f(z))-\frac{1}{b^{k}}Z_{\ell,k}^{\lambda}(x)\Big{|}\leq\frac{1}{b^{k}}\Big{(}b^{k_{i-1}}\frac{1}{z(i-1)}+b^{k_{i}}\frac{1}{z(i)}+10k+b^{k_{i-2}}\Big{)}\leq\frac{1}{z(i-1)}+\frac{b^{k_{i}}}{b^{k}}\frac{1}{z(i)}+\epsilon\leq\frac{1}{z(i-1)}+\lambda\frac{1}{1-\frac{1}{z(i)}}\frac{1}{z(i)}+\epsilon\leq 2\epsilon\] if \(i\) is sufficiently large, since \(\lambda\) is fixed and \(z(i)\to\infty\).

For the proof of Theorem 2 we require the following two lemmas.

**Lemma 1**.: Fix an integer \(b\geq 2\). Almost all real numbers in \((0,1)\) have the property that for any \(\alpha\) of the form \(\alpha=\frac{1}{2^{k}}\) we have \[\lim_{i\to\infty}\frac{1}{b^{k_{i}}}|H_{i}|=(1-e^{-\alpha})(e^{-(1-\alpha)}),\] where \(H_{i}\) is the set of words of length \(k_{i}\) which occur in the base-\(b\) expansion of \(x\) with a starting position in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\), but do not occur with a starting position in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}}]\). In fact, this claim holds for any \(x\) which is Poisson generic in base \(b\).

Proof.: Let \(x\in(0,1)\) be Poisson generic in base \(b\) and fix \(\alpha\) a negative power of \(2\). Let

* \(A_{i}\) be the set of words of length \(k_{i}\) occurring in \([b^{k_{i-1}},b^{k_{i}})\),
* \(C_{i}\) be the set of words of length \(k_{i}\) occurring in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\).

Clearly \(C_{i}\subseteq A_{i}\). The words which occur in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\) but not in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\) are exactly the words which occur in \(A_{i}\) but not \(C_{i}\). Let

* \(A_{i}^{\prime}\) be the set of words that occur in \([1,b^{k_{i}})\),
* \(C_{i}^{\prime}\) be the set of words that occur in \([1,(1-\alpha)b^{k_{i}})\).
Then \[\big{|}|A_{i}\setminus C_{i}|-|A_{i}^{\prime}\setminus C_{i}^{\prime}|\big{|}\leq b^{k_{i-1}}.\] Since \(x\) is Poisson generic in base \(b\), for any \(\epsilon>0\) we have, for all large enough \(i\), that \[\Big{|}\frac{1}{b^{k_{i}}}|A_{i}^{\prime}|-(1-\frac{1}{e})\Big{|}<\epsilon.\] Similarly, as \(x\) is Poisson generic in base \(b\), and using \(\lambda=1-\alpha\), we have that \[\Big{|}\frac{1}{b^{k_{i}}}|C_{i}^{\prime}|-(1-e^{-(1-\alpha)})\Big{|}<\epsilon.\] So, \[\frac{1}{b^{k_{i}}}|A_{i}\setminus C_{i}|\leq\frac{1}{b^{k_{i}}}|A^{\prime}_{i}\setminus C^{\prime}_{i}|+\frac{b^{k_{i-1}}}{b^{k_{i}}}\leq(1-\frac{1}{e})-(1-e^{-(1-\alpha)})+\frac{b^{k_{i-1}}}{b^{k_{i}}}+2\epsilon\leq e^{-(1-\alpha)}(1-e^{-\alpha})+3\epsilon.\qed\]

If \(x\) lies in the measure one set of Lemma 1 and the \(k_{i}\) grow fast enough, then \[\Big{|}\frac{1}{b^{k_{i}}}|H_{i}|-(1-e^{-\alpha})(e^{-(1-\alpha)})\Big{|}<\frac{1}{2^{i}}.\] A standard probability computation shows the following.

**Lemma 2**.: There is a function \(g\colon\omega\to\omega\) such that the following holds. Suppose \(k_{0}<k_{1}<\cdots\) are such that \(b^{k_{i}}-b^{k_{i-1}}>g(i-1)\) for all \(i\). Then \(\mu\)-almost all \(x\in(0,1)\) satisfy the following: for any \(j\in\omega\), any \(w\in b^{j}\) and any \(\epsilon>0\), for all large enough \(i\) and any \(n>g(i-1)\), \[\Big{|}\frac{1}{n}W(x\upharpoonright[b^{k_{i-1}},b^{k_{i-1}}+n),w)-\frac{1}{b^{j}}\Big{|}<\epsilon,\] where \(W(s,w)\) is the number of occurrences of the word \(w\) in \(s\).

Proof.: We can take \(g(n)=n\). Fix \(j\) and \(w\in b^{j}\), and fix \(\epsilon>0\). It suffices to show that for almost all \(x\), for all large enough \(i\) and any \(n>g(i-1)=i-1\), \[\left|\frac{1}{n}W(x\upharpoonright[b^{k_{i-1}},b^{k_{i-1}}+n),w)-\frac{1}{b^{j}}\right|<\epsilon.\] There are constants \(\alpha,\beta>0\) such that for all \(n\), the probability that a string \(s\in b^{n}\) violates the inequality \(|\frac{1}{n}W(s,w)-\frac{1}{b^{j}}|<\epsilon\) is less than \(\alpha e^{-\beta n}\). So, the probability that an \(x\in(0,1)\) violates \(|\frac{1}{n}W(x\upharpoonright[b^{k_{i-1}},b^{k_{i-1}}+n),w)-\frac{1}{b^{j}}|<\epsilon\) for some \(i\geq i_{0}\) and \(n\geq i\) is at most \[\sum_{i\geq i_{0}}\sum_{n\geq i}\alpha e^{-\beta n}\leq\sum_{i\geq i_{0}}\alpha\frac{e^{-\beta i}}{1-e^{-\beta}}=\frac{\alpha e^{-\beta i_{0}}}{(1-e^{-\beta})^{2}}.\] Since this tends to \(0\) with \(i_{0}\), the result follows.

We can now give the proof of the \(D_{2}\)-\(\mathbf{\Pi}_{3}^{0}\) completeness of the difference set \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\).

Proof of Theorem 2.: We fix a sufficiently fast growing sequence \(k_{0}<k_{1}<\cdots\) as in Lemma 2, and then fix \(x\in(0,1)\) to be Poisson generic in base \(b\) (so that Lemma 1 holds) and also to be in the measure one set where Lemma 2 holds for this sequence \((k_{i})_{i\geq 0}\). We let \(C=\{z\in\omega^{\omega}\colon z(2n)\to\infty\}\) and \(D=\{z\in\omega^{\omega}\colon z(2n+1)\to\infty\}\). We define a continuous map \(f\colon\omega^{\omega}\to(0,1)\) which reduces \(C\setminus D\) to \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). The idea is to define \(f\) so that, for \(z\in\omega^{\omega}\), the even digits \(z(2i)\) control whether \(f(z)\in\mathcal{N}_{b}\) and the odd digits \(z(2i+1)\) control whether \(f(z)\in\mathcal{P}_{b}\). When we wish to violate Poisson genericity, we will do so for \(\lambda=1\) and \(j=0\).
We may assume without loss of generality that all \(z(i)\) and all \(k_{i}\) are positive powers of \(2\). As before, at step \(i\) we define \(f(z)\upharpoonright B_{i}\), where \(B_{i}=[b^{k_{i-1}},b^{k_{i}})\). Let \[B_{i}^{1}:=\Big{[}b^{k_{i-1}},\Big{(}1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}}\Big{)}\] \[B_{i}^{2}:=\Big{[}\Big{(}1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}},\Big{(}1-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}}\Big{)}\] \[B_{i}^{3}:=\Big{[}\Big{(}1-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}},b^{k_{i}}\Big{)}\] So, \[|B_{i}^{2}|=\frac{1}{z(2i)}b^{k_{i}},\qquad|B_{i}^{3}|=\frac{1}{z(2i+1)}b^{k_{i}}.\] We let \[f(z)\upharpoonright B_{i}^{1}:=x\upharpoonright B_{i}^{1},\] \[f(z)\upharpoonright B_{i}^{2}:=0,\] \[f(z)\upharpoonright B_{i}^{3}:=x\upharpoonright[b^{k_{i-1}},b^{k_{i-1}}+|B_{i}^{3}|)=x\upharpoonright\Big{[}b^{k_{i-1}},b^{k_{i-1}}+\frac{1}{z(2i+1)}b^{k_{i}}\Big{)}.\]

We show that \(f\) is a reduction from \(C\setminus D\) to \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). First assume \(z\notin C\), that is, \(z(2i)\) does not tend to \(\infty\). Fix \(\ell\) such that \(z(2i)=\ell\) for infinitely many \(i\). We easily have that \(f(z)\notin\mathcal{N}_{b}\). For example, if the digit \(0\) occurs with approximately the right frequency \(\frac{1}{b}\) in \[f(z)\upharpoonright[1,b^{k_{i-1}}+|B_{i}^{1}|)=f(z)\upharpoonright\Big{[}1,\Big{(}1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}}\Big{)},\] then \(0\) will occur with too large a frequency in \[f(z)\upharpoonright[1,b^{k_{i-1}}+|B_{i}^{1}|+|B_{i}^{2}|)=f(z)\upharpoonright\Big{[}1,b^{k_{i-1}}+|B_{i}^{1}|+\frac{1}{\ell}b^{k_{i}}\Big{)}.\] This is because \(f(z)\upharpoonright B_{i}^{2}=0\) and \(|B_{i}^{2}|=\frac{1}{\ell}b^{k_{i}}\) for such \(i\). So we may henceforth assume that \(z\in C\), so \(\frac{1}{b^{k_{i}}}|B_{i}^{2}|=\frac{1}{z(2i)}\to 0\). We observe that this implies that \(f(z)\in\mathcal{N}_{b}\). This follows from Lemma 2 and the fact that we may assume \(\lim_{i}\frac{g(i-1)}{b^{k_{i-1}}}=0\).

Now assume that \(z\in D\), so \(z\notin C\setminus D\). We show \(f(z)\in\mathcal{P}_{b}\), and so \(f(z)\notin\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). Since we are assuming \(z\in C\) also, we have \(\lim_{i\to\infty}z(i)=\infty\). So, \(\lim_{i\to\infty}\frac{1}{b^{k_{i}}}(|B_{i}^{2}|+|B_{i}^{3}|)=0\). It then follows exactly as in Equation 1 in the proof of Theorem 1 that \(f(z)\in\mathcal{P}_{b}\).

Assume next that \(z\notin D\) (but \(z\in C\) still). We show that \(f(z)\notin\mathcal{P}_{b}\), which shows \(f(z)\in\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). Fix \(m\) so that for infinitely many \(i\) we have \(z(2i+1)=m\), and we may assume \(m\) is of the form \(m=2^{\ell}\). Recall \(\frac{1}{b^{k_{i}}}|B_{i}^{3}|=\frac{1}{z(2i+1)}=\frac{1}{2^{\ell}}\) for such \(i\). We restrict our attention to this set of \(i\) in the following argument. If \(f(z)\) were Poisson generic, then from Lemma 1, applied with \(\alpha=\frac{1}{2^{\ell}}\), we would have for large enough \(i\) in our set that \[\frac{1}{b^{k_{i}}}|H_{i}|\approx(1-e^{-\alpha})(e^{-(1-\alpha)}),\] where \(H_{i}\) is the set of words of length \(k_{i}\) which occur in \(f(z)\) with a starting position in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\), but do not occur in \(f(z)\) with a starting position in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\). However, by the construction of \(f(z)\) we have that every word which occurs in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\) also occurs in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\), and so \(|H_{i}|=0\).

## 3. Lightface refinements

The existence of a computable Poisson generic real number was proved in [3, Theorem 2].
We start by showing how to compute an instance of a Poisson generic real number in base \(b\).

**Definition** (Values \(N_{n}\) and sets \(E_{n}\)).: For each \(n\geq 1\) define \[N_{n}:=b^{2n}\] \[E_{n}:=(0,1)\setminus\bigcup_{N_{n}\leq k<N_{n+1}}Bad_{k}\] where \[Bad_{k}:=\bigcup_{j\in J_{k}}\bigcup_{\lambda\in L_{k}}Bad(\lambda,k,j,1/k)\] \[J_{k}:=\{0,\ldots,b^{k}-1\}\] \[L_{k}:=\{p/q:q\in\{1,\ldots,k\},\ p/q<k\}\] \[Bad(\lambda,k,j,\varepsilon):=\left\{x\in(0,1):\Big{|}Z^{\lambda}_{j,k}(x)-\frac{e^{-\lambda}\lambda^{j}}{j!}\Big{|}>\varepsilon\right\}\]

Observe that each set \(Bad_{k}\) is a finite union of intervals with rational endpoints. Also each set \(E_{n}\) is a finite union of intervals with rational endpoints.

**Fact 1**.: There is \(n_{0}\) such that for every \(n\geq n_{0}\), \(\mu(E_{n})>1-\frac{1}{N_{n}^{2}}\).

Proof.: By [3, Proof of Theorem 2] there is \(k_{0}\) such that for every \(k\geq k_{0}\) and every \(j\geq 0\), \[\mu(Bad(\lambda,k,j,1/k))<2e^{-\frac{b^{k}}{2\lambda k^{4}}}\] and \[\mu(Bad_{k})=\mu\Big{(}\bigcup_{j\in J_{k}}\bigcup_{\lambda\in L_{k}}Bad(\lambda,k,j,1/k)\Big{)}<2b^{k}k^{3}e^{-b^{k}/(2k^{5})}.\] Recall \(N_{n}=b^{2n}\). Let \(n_{0}\) be the least integer greater than or equal to \(k_{0}\) such that for every \(n\geq n_{0}\), \[\mu(Bad_{N_{n}})<\frac{1}{2N_{n}^{2}}\] and \[\mu\Big{(}\bigcup_{N_{n}\leq k<N_{n+1}}Bad_{k}\Big{)}<2\mu(Bad_{N_{n}}).\] Since \(E_{n}=(0,1)\setminus\bigcup_{N_{n}\leq k<N_{n+1}}Bad_{k}\), we have \[\mu(E_{n})\geq 1-2\mu(Bad_{N_{n}}).\] Hence we obtain the wanted inequality, \[\mu(E_{n})>1-\frac{1}{N_{n}^{2}}.\]

Fact 1 ensures that the set \(\bigcap_{n\geq n_{0}}E_{n}\) has positive measure. Let us see that \(\bigcap_{n\geq n_{0}}E_{n}\) consists entirely of real numbers that are Poisson generic in base \(b\). Suppose that \(x\) is not Poisson generic in base \(b\). By Lemma 3, \(x\) is not \(\lambda\)-Poisson generic in base \(b\) for some positive rational \(\lambda\). Then there are a positive \(\varepsilon\) and a non-negative integer \(j\) such that for infinitely many \(n\), \[\Big{|}Z_{j,n}^{\lambda}(x)-\frac{e^{-\lambda}\lambda^{j}}{j!}\Big{|}>\varepsilon.\] Let \(n_{1}=n_{1}(\lambda,\varepsilon,j)\) be the smallest integer such that \(\lambda\in L_{n_{1}}\), \(j\in J_{n_{1}}\) and \(\varepsilon\geq 1/n_{1}\). Since the sets \(J_{n}\) and \(L_{n}\) are increasing in \(n\), for every \(n\geq n_{1}\) we have \(\lambda\in L_{n}\) and \(j\in J_{n}\). And since \(\varepsilon\geq 1/n_{1}\) we have \(\varepsilon>1/n\) for every \(n>n_{1}\). Then, for infinitely many values of \(n\) greater than or equal to \(n_{1}\), \(x\in Bad_{n}\). Hence, for infinitely many values of \(n\), \(x\not\in E_{n}\), and thus \(x\not\in\bigcap_{n\geq n_{0}}E_{n}\).

The following algorithm is an adaptation of Turing's algorithm for computing an absolutely normal number (see [7]). We modified it to obtain a real number that is Poisson generic in base \(b\).

**Algorithm**.: _Let \(n_{0}\) be determined by Fact 1. Let \(I_{n_{0}}:=(0,1)\). At each step \(n>n_{0}\), divide \(I_{n-1}\) into \(b\) equal parts \(I_{n-1}^{0},I_{n-1}^{1},\ldots,I_{n-1}^{b-1}\). Let \(v\) be the smallest element of \(\{0,\ldots,(b-1)\}\) such that \(\mu(I_{n-1}^{v}\cap E_{n})>\frac{1}{N_{n}}\), and set \(I_{n}:=I_{n-1}^{v}\). The \(n\)-th digit in the base-\(b\) expansion of \(x\) is the digit \(v\)._

**Remark**.: Observe that the number \(x\) computed by the algorithm satisfies, for each \(n\geq n_{0}\), \(x\in I_{n}\cap E_{n}\).
Since the intervals \(I_{n}\) are nested, we have \[x\in I_{n}\cap\Big{(}\bigcap_{n_{0}\leq m\leq n}E_{m}\Big{)},\] where \(E_{n}=(0,1)\setminus\bigcup_{N_{n}\leq k<N_{n+1}}Bad_{k}\) with \(N_{n}=b^{2n}\). Thus, to define \(x\upharpoonright n\) the algorithm looks at all the possible continuations up to \(x\upharpoonright b^{N_{n+1}}\).

We prove that the number \(x\) produced by the algorithm is indeed Poisson generic in base \(b\). The algorithm defines a sequence of intervals \((I_{n})_{n\geq n_{0}}\) such that \(I_{n}=\big{(}\frac{a}{b^{n}},\frac{a+1}{b^{n}}\big{)}\) for some \(a\in\{0,\ldots,b^{n}-1\}\), \(I_{n+1}\subseteq I_{n}\) and \(\mu(I_{n})=b^{-n}\). The number \(x\) defined is the unique element in \(\bigcap_{n\geq n_{0}}I_{n}\). We first prove that for every \(n\geq n_{0}\), \[\mu\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n}E_{i}\Big{)}>0.\] To show this we prove by induction on \(n\) that \[\mu\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n}E_{i}\Big{)}>\frac{1}{N_{n}}.\]

_Base case._ For \(n_{0}\) it is immediate because \(I_{n_{0}}=(0,1)\), so \(\mu(I_{n_{0}})=1\) and \[\mu(E_{n_{0}})>1-\frac{1}{N_{n_{0}}^{2}}>\frac{1}{N_{n_{0}}}.\]

_Inductive case_. Assume the inductive hypothesis \[\mu\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n}E_{i}\Big{)}>\frac{1}{N_{n}}.\] Let us see that it holds for \(n+1\). Using the inductive hypothesis and Fact 1, we have \[\mu\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n+1}E_{i}\Big{)}=\mu\Big{(}\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n}E_{i}\Big{)}\cap E_{n+1}\Big{)}>\mu\Big{(}I_{n}\cap\bigcap_{i=n_{0}}^{n}E_{i}\Big{)}-\mu((0,1)\setminus E_{n+1})>\frac{1}{N_{n}}-\frac{1}{N_{n}^{2}}>\frac{b}{N_{n+1}}.\] Then, it is impossible that for each of the \(b\) possible values \(v=0,1,\ldots,(b-1)\), \[\mu\Big{(}I_{n}^{v}\cap\bigcap_{i=n_{0}}^{n+1}E_{i}\Big{)}\leq\frac{1}{N_{n+1}}.\] So, there is at least one \(v\in\{0,\ldots,(b-1)\}\) such that \[\mu\Big{(}I_{n}^{v}\cap\bigcap_{i=n_{0}}^{n+1}E_{i}\Big{)}>\frac{1}{N_{n+1}}.\] Since the algorithm sets \(I_{n+1}\) to be the leftmost \(I_{n}^{v}\) with this property, we have \[\mu\Big{(}I_{n+1}\cap\bigcap_{i=n_{0}}^{n+1}E_{i}\Big{)}>\frac{1}{N_{n+1}}.\]

We conclude that \(x\in\bigcap_{n\geq n_{0}}E_{n}\). So \(x\) is \(\lambda\)-Poisson generic in base \(b\) for all positive rational \(\lambda\). Finally, to conclude that \(x\) is \(\lambda\)-Poisson generic in base \(b\) for every positive real \(\lambda\), hence Poisson generic in base \(b\), we need the following lemma.

**Lemma 3** (adapted from [3]).: Let \(b\geq 2\) be an integer. If \(x\in(0,1)\) is \(\lambda\)-Poisson generic in base \(b\) for all positive rational \(\lambda\) then \(x\) is Poisson generic in base \(b\).

Proof.: For each \(x\in(0,1)\) and each \(k\in\mathbb{N}\), on the space of words of length \(k\) with the uniform measure define the integer-valued random measure \(M_{k}^{x}=M_{k}^{x}(v)\) on the real half-line \(\mathbb{R}^{+}=[0,+\infty)\) by setting, for all Borel sets \(S\subseteq\mathbb{R}^{+}\), \[M_{k}^{x}(S)(v):=\sum_{p\in\mathbb{N}\cap b^{k}S}I_{p}(x,v),\] where \(I_{p}\) is the indicator function that \(v\) occurs in \(x\) at position \(p\) and \(\mathbb{N}\cap b^{k}S\) denotes the set of integer values in \(\{b^{k}s:s\in S\}\). Then, \(M_{k}^{x}(\cdot)\) is a point process on \(\mathbb{R}^{+}\).
The function \(Z_{j,k}^{\lambda}(x)\) can be formulated in terms of \(M_{k}^{x}(S)\) for the sets \(S=(0,\lambda]\), as follows: \[Z_{j,k}^{\lambda}(x)=\frac{1}{b^{k}}\#\{v\in\{0,\ldots,(b-1)\}^{k}:M_{k}^{x}((0,\lambda])(v)=j\}.\] Observe that for every pair of positive reals \(\lambda,\lambda^{\prime}\) with \(\lambda<\lambda^{\prime}\), \[M_{k}^{x}((0,\lambda^{\prime}])(v)-M_{k}^{x}((0,\lambda])(v)=\sum_{p\in\mathbb{N}\cap b^{k}[\lambda,\lambda^{\prime})}I_{p}(x,v).\] The classical total variation distance \(d_{TV}\) between two probability measures \(P\) and \(Q\) on a \(\sigma\)-algebra \(\mathcal{F}\) is defined via \[d_{TV}(P,Q):=\sup_{A\in\mathcal{F}}\left|P(A)-Q(A)\right|.\] For a random variable \(X\) taking values in \(\mathbb{R}\), the distribution of \(X\) is the probability measure \(\mu_{X}\) on \(\mathbb{R}\) defined as the push-forward of the probability measure on the sample space of \(X\). The total variation distance between two random variables \(X\) and \(Y\) is simply \(d_{TV}(X,Y)=d_{TV}(\mu_{X},\mu_{Y})\). Hence, the total variation distance satisfies \[d_{TV}(M_{k}^{x}((0,\lambda^{\prime}]),M_{k}^{x}((0,\lambda]))\leq\frac{1}{b^{k}}\#(\mathbb{N}\cap b^{k}[\lambda,\lambda^{\prime}))=\lambda^{\prime}-\lambda+\operatorname{O}(b^{-k}).\] Also observe that \(d_{TV}(Po(\lambda^{\prime}),Po(\lambda))\to 0\) as \(\lambda\to\lambda^{\prime}\). From these two observations and the fact that the rational numbers are a dense subset of the real numbers we conclude that being \(\lambda\)-Poisson generic for every positive rational \(\lambda\) implies being Poisson generic.

The proofs of Theorems 3 and 4 are very similar to those of Theorems 1 and 2; however, we now include what is needed to prove the lightface results. We start with a computable Poisson generic number in base \(b\), which we obtain with the Algorithm above, and we computably determine the sequence of values \((k_{i})_{i\geq 1}\) using the input sequence \(z\in\omega^{\omega}\).

Proof of Theorem 3.: Let \(\mathcal{C}=\{z\in\omega^{\omega}\colon\lim_{i\to\infty}z(i)=\infty\}\). So, \(\mathcal{C}\) is \(\Pi^{0}_{3}\)-complete. We define a computable map \(f\colon\omega^{\omega}\to(0,1)\) which reduces \(\mathcal{C}\) to \(\mathcal{P}_{b}\). Fix \(z\in\omega^{\omega}\) and set \(k_{0}=0\). At step \(i\), let \(k_{i}\) be the least integer such that \(k_{i}>k_{i-1}\) and \(k_{i}>z(i)\). For \(i>0\), we define \(f(z)\upharpoonright[b^{k_{i-1}},b^{k_{i}})\) as follows. Let \[B_{i}:=[b^{k_{i-1}},b^{k_{i}})\] \[B_{i}^{\prime}:=\Big{[}b^{k_{i-1}},\Big{(}1-\frac{1}{z(i)}\Big{)}b^{k_{i}}\Big{)}.\] We set \[f(z)\upharpoonright B_{i}^{\prime}:=x\upharpoonright B_{i}^{\prime},\text{ and }f(z)\upharpoonright B_{i}\setminus B_{i}^{\prime}:=0.\]

First suppose \(z\notin\mathcal{C}\), and fix \(\ell\in\omega\) such that for infinitely many \(i\) we have \(z(i)=\ell\). Consider step \(i\) in the construction of \(f(z)\) for such an \(i\).
For any \(\epsilon>0\), if \(i\) is large enough then the number of words \(w\) of length \(k_{i}\) which occur in \(x\upharpoonright[1,(1-\frac{1}{z(i)})b^{k_{i}}]\) is at most \[b^{k_{i}}(1-e^{-(1-\frac{1}{z(i)})}+\epsilon).\] Then, the number \(Z_{i}\) of words \(w\) of length \(k_{i}\) which occur in \(f(z)\upharpoonright b^{k_{i}}\) is at most \[b^{k_{i}}(1-e^{-(1-\frac{1}{z(i)})}+\epsilon).\] So, \[\frac{1}{b^{k_{i}}}Z_{i}\leq(1-e^{-(1-\frac{1}{\ell})}+2\epsilon).\] On the other hand, the Poisson estimate for the proportion of words of length \(k_{i}\) occurring in an initial segment of length \(b^{k_{i}}\) is \(1-1/e\). Since \(\ell\) is fixed, as \(i\) gets large we have a contradiction. So, \(f(z)\) is not \(1\)-Poisson generic in base \(b\).

Next suppose that \(z\in\mathcal{C}\). We show that \(f(z)\) is Poisson generic in base \(b\). Fix a positive rational \(\lambda\) and \(\epsilon>0\). Consider any \(k\in\omega\) large enough so that the following holds, where \(i\) is such that \(k_{i-1}\leq k<k_{i}\):

* if \(\lambda=\frac{p}{q}\) then \(k_{i-1}\geq q\),
* \(k_{i}>\frac{1}{\epsilon}\),
* for all \(s\geq i-1\) we have \(\frac{1}{z(s)}<\epsilon\).

We show that for any such \(k\) and every non-negative \(j\) less than \(b^{k}\), \(|Z^{\lambda}_{j,k}(f(z))-e^{-\lambda}\frac{\lambda^{j}}{j!}|<\epsilon\). First consider the case \(\lambda\leq 1\). Fix \(j\). We have that \[\Big{|}\frac{1}{b^{k}}Z^{\lambda}_{j,k}(f(z))-\frac{1}{b^{k}}Z^{\lambda}_{j,k}(x)\Big{|}\leq\frac{1}{b^{k}}\Big{(}b^{k_{i-1}}\frac{1}{z(i-1)}+2k+\sum_{m<i-1}b^{k_{m}}\Big{)}\leq\frac{1}{z(i-1)}+\epsilon\leq 2\epsilon\] for \(i\) large enough. We have used here the fact that \(|Z^{\lambda}_{j,k}(f(z))-Z^{\lambda}_{j,k}(x)|\) is at most the number of words of length \(k\) which appear in one of \(f(z)\upharpoonright b^{k}\), \(x\upharpoonright b^{k}\) but not the other. Such a word must overlap the block of \(0\)s in \(f(z)\upharpoonright[b^{k_{i-1}}(1-\frac{1}{z(i-1)}),b^{k_{i-1}})\), or else overlap \([1,b^{k_{i-2}}]\), which gives the above estimate.

Consider now the case \(\lambda>1\). If \(\lambda b^{k}<b^{k_{i}}(1-\frac{1}{z(i)})\), then the same estimate above works. So, suppose \(b^{k}\geq\frac{1}{\lambda}b^{k_{i}}(1-\frac{1}{z(i)})\). In this case we also count the number of words \(w\) of length \(k\) which might overlap the block of \(0\)s in \(f(z)\upharpoonright[b^{k_{i}}(1-\frac{1}{z(i)}),b^{k_{i}}]\). We then get \[\Big{|}\frac{1}{b^{k}}Z^{\lambda}_{j,k}(f(z))-\frac{1}{b^{k}}Z^{\lambda}_{j,k}(x)\Big{|}\leq\frac{1}{b^{k}}\Big{(}b^{k_{i-1}}\frac{1}{z(i-1)}+b^{k_{i}}\frac{1}{z(i)}+3k+\sum_{s<i-1}b^{k_{s}}\Big{)}\leq\frac{1}{z(i-1)}+\frac{b^{k_{i}}}{b^{k}}\frac{1}{z(i)}+\epsilon\leq\frac{1}{z(i-1)}+\lambda\frac{1}{1-\frac{1}{z(i)}}\frac{1}{z(i)}+\epsilon\leq 2\epsilon\] if \(i\) is sufficiently large, since \(\lambda\) is fixed and \(z(i)\to\infty\).

We can now prove the \(D_{2}\)-\(\Pi^{0}_{3}\)-completeness of the difference set \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\).

Proof of Theorem 4.: The proof is exactly as that of Theorem 2, except that we start with a computable real \(x\) and we determine the sequence \((k_{i})_{i\geq 1}\) using the input sequence \(z\in\omega^{\omega}\). Let \(x\) be the number obtained by the Algorithm. Let \(C:=\{z\in\omega^{\omega}\colon z(2n)\to\infty\}\) and \(D:=\{z\in\omega^{\omega}\colon z(2n+1)\to\infty\}\). We assume without loss of generality that all \(z(i)\) are powers of \(2\).
We define a computable map \(f\colon\omega^{\omega}\to(0,1)\) which reduces \(C\setminus D\) to \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). Fix \(z\in\omega^{\omega}\). At step \(i\), let \(k_{i}\) be the least power of \(2\) such that \(k_{i}>k_{i-1}\) and \(k_{i}>z(i)\). We define \(f\) so that, for \(z\in\omega^{\omega}\), the even digits \(z(2i)\) will control whether \(f(z)\in\mathcal{N}_{b}\) and the odd digits \(z(2i+1)\) control whether \(f(z)\in\mathcal{P}_{b}\). When we wish to violate Poisson genericity, we do so for \(\lambda=1\) and \(j=0\). As in the proof of Theorem 2, at step \(i\) we define \(f(z)\upharpoonright B_{i}\), where \(B_{i}=[b^{k_{i-1}},b^{k_{i}})\). Let \[f(z)\upharpoonright B_{i}^{1}:=x\upharpoonright B_{i}^{1}\] \[f(z)\upharpoonright B_{i}^{2}:=0\] \[f(z)\upharpoonright B_{i}^{3}:=x\upharpoonright\left[b^{k_{i-1}},b^{k_{i-1}}+\frac{1}{z(2i+1)}b^{k_{i}}\right)\] where \[B_{i}^{1}:=\Big{[}b^{k_{i-1}},\Big{(}1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}}\Big{)}\] \[B_{i}^{2}:=\Big{[}\Big{(}1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}},\Big{(}1-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}}\Big{)}\] \[B_{i}^{3}:=\Big{[}\Big{(}1-\frac{1}{z(2i+1)}\Big{)}b^{k_{i}},b^{k_{i}}\Big{)}.\] Notice that \(|B_{i}^{2}|=\frac{1}{z(2i)}b^{k_{i}}\) and \(|B_{i}^{3}|=\frac{1}{z(2i+1)}b^{k_{i}}\).

We show that \(f\) is a reduction from \(C\setminus D\) to \(\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). First assume \(z\notin C\), that is, \(z(2i)\) does not tend to infinity as \(i\) goes to infinity. Fix \(\ell\) such that \(z(2i)=\ell\) for infinitely many \(i\). We easily have that \(f(z)\notin\mathcal{N}_{b}\). For example, if the digit \(0\) occurs with approximately the right frequency \(\frac{1}{b}\) in \(f(z)\upharpoonright[1,b^{k_{i-1}}+|B_{i}^{1}|)=f(z)\upharpoonright[1,b^{k_{i}}(1-\frac{1}{z(2i)}-\frac{1}{z(2i+1)}))\), then \(0\) will occur with too large a frequency in \[f(z)\upharpoonright[1,b^{k_{i-1}}+|B_{i}^{1}|+|B_{i}^{2}|)=f(z)\upharpoonright\Big{[}1,b^{k_{i-1}}+|B_{i}^{1}|+\frac{1}{\ell}b^{k_{i}}\Big{)}.\] We use here that \(\frac{1}{b^{k_{i}}}\sum_{s<i}b^{k_{s}}\to 0\). This is because \(f(z)\upharpoonright B_{i}^{2}=0\) and \(|B_{i}^{2}|=\frac{1}{\ell}b^{k_{i}}\) for such \(i\). Now assume that \(z\in C\), so \(\frac{1}{b^{k_{i}}}|B_{i}^{2}|=\frac{1}{z(2i)}\to 0\). Then, we have \(f(z)\in\mathcal{N}_{b}\). This follows from the definition of \(f\) and the fact that \(x\), being Poisson generic, is Borel normal in base \(b\), see [5, Theorem 2].

Now assume that \(z\in D\), so \(z\notin C\setminus D\). We show \(f(z)\in\mathcal{P}_{b}\), and so \(f(z)\notin\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). Since we are assuming \(z\in C\) also, we have \(\lim_{i\to\infty}z(i)=\infty\). So, \(\lim_{i\to\infty}\frac{1}{b^{k_{i}}}(|B_{i}^{2}|+|B_{i}^{3}|)=0\). It then follows exactly as in the proof of Theorem 3 that \(f(z)\in\mathcal{P}_{b}\).

Assume next that \(z\notin D\) (but \(z\in C\) still). We show that \(f(z)\notin\mathcal{P}_{b}\), which shows \(f(z)\in\mathcal{N}_{b}\setminus\mathcal{P}_{b}\). Fix \(m\) so that for infinitely many \(i\) we have \(z(2i+1)=m\), and \(m\) is of the form \(m=2^{\ell}\). Recall \(\frac{1}{b^{k_{i}}}|B_{i}^{3}|=\frac{1}{z(2i+1)}=\frac{1}{2^{\ell}}\) for such \(i\).
We restrict our attention to this set of \(i\) in the following argument. If \(f(z)\) were Poisson generic, then from Lemma 1, applied with \(\alpha=\frac{1}{2^{\ell}}\), we would have for large enough \(i\) in our set that \[\frac{1}{b^{k_{i}}}|H_{i}|\approx(1-e^{-\alpha})(e^{-(1-\alpha)}),\] where \(H_{i}\) is the set of words of length \(k_{i}\) which occur in \(f(z)\) with a starting position in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\), but do not occur in \(f(z)\) with a starting position in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\). However, by the construction of \(f(z)\) we have that every word which occurs in \([(1-\alpha)b^{k_{i}},b^{k_{i}})\) also occurs in \([b^{k_{i-1}},(1-\alpha)b^{k_{i}})\), and so \(|H_{i}|=0\). This completes the proof of Theorem 4.

**Acknowledgements.** V. Becher is supported by grant PICT 2018-2315 of Agencia Nacional de Promocion Cientifica y Tecnologica de Argentina. S. Jackson is supported by NSF grant DMS-1800323. W. Mance is supported by grant 2019/34/E/ST1/00082 for the project "Set theoretic methods in dynamics and number theory," NCN (The National Science Centre of Poland). D. Kwietniak is supported by NCN (the National Science Centre, Poland) Preludium Bis project no. 2019/35/O/ST1/02266.
2306.09528
Density distributions of tune shifts from space charge or beam-beam interactions in Gaussian bunches
The amplitude dependent tune shifts from either space charge or beam-beam interactions are calculated analytically with the inclusion of synchrotron oscillations and multiple interactions around the ring. Simpler formulae are derived under limits of bunches longer than the transverse sizes, equal and unequal transverse sizes etc. This is used to derive semi-analytical forms for the density distribution of the tune shifts. The tune spread and the density distribution are needed to understand beam decoherence or Landau damping with either interaction. The tune footprints due to space charge in IOTA are simulated using pyorbit and found to be in good agreement with the theoretical predictions.
Tanaji Sen
2023-06-15T22:15:32Z
http://arxiv.org/abs/2306.09528v3
# Density distributions of tune shifts from space charge or beam-beam interactions in Gaussian bunches

###### Abstract

The amplitude dependent tune shifts from either space charge or beam-beam interactions are calculated analytically with the inclusion of synchrotron oscillations and multiple interactions around the ring. Simpler formulae are derived under limits of bunches longer than the transverse sizes, equal and unequal transverse sizes etc. This is used to derive semi-analytical forms for the density distribution of the tune shifts. The tune spread and the density distribution are needed to understand beam decoherence or Landau damping with either interaction. The tune footprints due to space charge in IOTA are simulated using pyorbit and found to be in good agreement with the theoretical predictions.

FERMILAB-PUB-23-279-AD

## 1 Introduction

The space charge interaction in a low energy synchrotron and the beam-beam interaction in a collider are the dominant contributors to the incoherent tune spread in these machines. In this report we first calculate the incoherent amplitude dependent transverse tune shifts in Gaussian beams due to either interaction. We generalize these tune shifts to include the effects of synchrotron oscillations and, especially in the case of space charge, we also include the contributions of the interactions from multiple locations around the ring. Next we calculate the beam density distributions as a function of these tune shifts. This density distribution is needed to determine beam stability in different conditions. In terms of scaled tune shifts (defined in Section 3), the density distribution has exactly the same form for both space charge and beam-beam interactions. However, the role of the tune spread and the density in determining beam stability is very different in the two interactions. The beam-beam interactions act as an external source of tune spread and can consequently be used to provide Landau damping [1], while with space charge an external driving source such as octupoles [2] or perhaps electron lenses [3] is required for Landau damping. Nevertheless, even with space charge, the contributions from the internal tune spread and the density distribution have to be included in determining beam stability. Another use of the incoherent tune spread with either interaction is finding the beam decoherence time when the centroid is offset from the center, e.g. due to a dipole kick.

The calculation of tune shifts due to head-on beam-beam interactions in one dimension was reported in [4], while the fully 2D calculation of the tune spreads, resonance driving terms etc. was done e.g. in [5]. These were then generalized to long-range interactions [6], which were of greater interest in the Tevatron. The expressions for head-on interactions are easily found by taking the limit of zero separation. We note that the space charge tune shifts with amplitude for round beams without synchrotron oscillations were calculated in [7], which used some of the methods in [5]. The density distribution was extracted from numerical simulations, but insufficient sampling of the beam core led to an incorrect form for the density, especially close to the core where the space charge tune shift is largest. Our method is semi-analytical, in that numerical inversion of analytical functions followed by interpolation to obtain smooth functions is required. We also check that the zeroth to second moments of the distribution are preserved.
## 2 Incoherent tune shifts with synchrotron oscillations

Here we consider the tune shifts with amplitude due to a Hamiltonian with linear transverse motion, longitudinal motion in an rf bucket, and either a space charge interaction or a beam-beam interaction. In this section, we consider first the Hamiltonian with a space charge interaction experienced by a Gaussian distribution in three space dimensions. At the end of this section, we consider the beam-beam interaction and show that the tune shift with amplitude, scaled by the zero amplitude tune shift, has the same form as with space charge. The Hamiltonian in the lab frame can be written in dimensionless form as \[H=\frac{1}{2}((x^{\prime})^{2}+(y^{\prime})^{2}+K_{x}x^{2}+K_{y}y^{2})+\frac{e}{\beta^{2}\gamma m_{0}c^{2}}(V_{rf}+V_{sc}) \tag{2.1}\] where \(V_{rf}\) is the rf cavity voltage wave form and \(V_{sc}\) is the electric potential due to the space charge measured in the lab frame. Transforming to action-angle variables \((J_{x},\phi_{x},J_{y},\phi_{y})\) in the transverse planes, as e.g. \[x=\sqrt{2\beta_{x}J_{x}}\cos\phi_{x},\ \ \ x^{\prime}=-\sqrt{\frac{2J_{x}}{\beta_{x}}}\,\big{[}\sin\phi_{x}+\alpha_{x}\cos\phi_{x}\big{]} \tag{2.2}\] This reduces the linear part of the transverse Hamiltonian to \[H_{\perp,0}=\frac{1}{R}(\nu_{x,0}J_{x}+\nu_{y,0}J_{y}) \tag{2.3}\] where \((\nu_{x,0},\nu_{y,0})\) are the tunes of the linear lattice and \(R\) is the machine radius.

Consider a Gaussian distribution in 3 space dimensions for a bunch (the same bunch experiencing its own space charge field, or the opposing bunch in the case of beam-beam interactions) \[\psi(x,y,z)=\frac{Ne}{(2\pi)^{3/2}\sigma_{x}\sigma_{y}\sigma_{z}}\exp[-\frac{x^{2}}{2\sigma_{x}^{2}}-\frac{y^{2}}{2\sigma_{y}^{2}}-\frac{z^{2}}{2\sigma_{z}^{2}}] \tag{2.4}\] where \(\sigma_{x},\sigma_{y},\sigma_{z}\) are the rms bunch dimensions. The solution of Poisson's equation \(\nabla^{2}V=-\psi/\epsilon_{0}\) leads to the following expression for the electric scalar potential [8] \[V(x,y,z)=\frac{1}{4\pi\epsilon_{0}}\frac{Ne}{\pi^{1/2}\gamma}\int_{0}^{\infty}dq\ \frac{1}{\sqrt{(2\sigma_{x}^{2}+q)(2\sigma_{y}^{2}+q)(2\gamma^{2}\sigma_{z}^{2}+q)}}\left[1-\exp\left(-\frac{x^{2}}{2\sigma_{x}^{2}+q}-\frac{y^{2}}{2\sigma_{y}^{2}+q}-\frac{\gamma^{2}z^{2}}{2\gamma^{2}\sigma_{z}^{2}+q}\right)\right] \tag{2.5}\] where the coordinates \((x,y,z)\) and the rms sizes \((\sigma_{x},\sigma_{y},\sigma_{z})\) are measured in the rest frame. The complete Hamiltonian in three degrees of freedom (3D) after scaling by \(R\) is \[H=\nu_{x,0}J_{x}+\nu_{y,0}J_{y}+\frac{eR}{\beta^{2}\gamma m_{0}c^{2}}V_{rf}(\delta p/p,z)+C_{SC}\bar{V}(x,y,z) \tag{2.6}\] \[C_{SC}=\frac{N_{p}r_{p}}{\pi^{1/2}\beta^{2}\gamma^{2}} \tag{2.7}\] \[\bar{V}(x,y,z)=\int_{0}^{\infty}dq\ \frac{1}{\sqrt{(2\sigma_{x}^{2}+q)(2\sigma_{y}^{2}+q)(2\gamma^{2}\sigma_{z}^{2}+q)}}\left[1-\exp\left(-\frac{x^{2}}{2\sigma_{x}^{2}+q}-\frac{y^{2}}{2\sigma_{y}^{2}+q}-\frac{\gamma^{2}z^{2}}{2\gamma^{2}\sigma_{z}^{2}+q}\right)\right] \tag{2.8}\] where \(r_{p}=e^{2}/(4\pi\epsilon_{0}m_{0}c^{2})\) is the classical particle radius.

The tunes follow from the derivatives of the angle-averaged Hamiltonian. We will write only the expressions in \(x\); the one for \(y\) can be found by the replacement \(x\leftrightarrow y\). Our focus is on the transverse tune shifts with amplitude; thus we ignore dominantly longitudinal effects such as longitudinal space charge effects.
We also do not consider the momentum dependence of the transverse tunes or the modulation of the revolution period by the synchrotron oscillations; these do not have a noticeable effect on the dynamics in IOTA because of the small synchrotron tune. We do include the impact of synchrotron oscillations on the transverse dynamics via the nonlinear interaction potential. With these assumptions, the transverse tune shifts are given by \[\Delta\nu_{x}=RC_{SC}\frac{\partial}{\partial J_{x}}\langle\bar{V}\rangle_{\phi_{x},\phi_{y},\phi_{z},s} \tag{2.9}\] where the averaging is done over all three angles and over \(s\), the length along the ring. Hence, after using action-angle variables, \[\Delta\nu_{x}=RC_{SC}\int_{0}^{\infty}dq\ \frac{1}{\sqrt{(2\sigma_{x}^{2}+q)^{3}(2\sigma_{y}^{2}+q)(2\gamma^{2}\sigma_{z}^{2}+q)}}2\beta_{x}\cos^{2}\phi_{x}\exp\left[-\frac{2\beta_{x}J_{x}\cos^{2}\phi_{x}}{2\sigma_{x}^{2}+q}-\frac{y^{2}}{2\sigma_{y}^{2}+q}-\frac{\gamma^{2}z^{2}}{2\gamma^{2}\sigma_{z}^{2}+q}\right]\] Change the integration variable from \(q\) to a dimensionless variable \(u\) as \[u=\frac{2\sigma_{x}^{2}}{(2\sigma_{x}^{2}+q)},\ \ \Rightarrow q=\frac{2\sigma_{x}^{2}}{u}-2\sigma_{x}^{2}\] This converts the infinite range of integration over \(q\) to a finite range of integration over \(u\). Hence \[\Delta\nu_{x}=\frac{RC_{SC}\beta_{x}}{2^{1/2}\sigma_{x}^{2}\gamma\sigma_{z}}\int_{0}^{1}du\,\Big{\langle}\cos^{2}\phi_{x}\left[\frac{1}{[(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1]}\frac{u}{[(1-\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2})u+\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2}]}\right]^{1/2}\exp\left[-\frac{2\beta_{x}J_{x}\cos^{2}\phi_{x}}{2\sigma_{x}^{2}}u-\frac{2\beta_{y}J_{y}\cos^{2}\phi_{y}}{2\sigma_{y}^{2}}\frac{u}{[(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}]}-\frac{\gamma^{2}z^{2}u}{2\gamma^{2}\sigma_{z}^{2}[(1-\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2})u+\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2}]}\right]\Big{\rangle}_{\phi_{x},\phi_{y},s} \tag{2.10}\] In order to make progress, we need to assume that the longitudinal motion is simple harmonic. This implies that the rf cavity force is linear or, equivalently, that we approximate the cosine term in \(V_{rf}\) by the first two terms in its Taylor expansion.
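For reference, the approximation just invoked can be written out explicitly; the following display is an added illustration, with \(k_{rf}\) denoting the rf wavenumber (a symbol not used elsewhere in this paper): \[\cos(k_{rf}z)\approx 1-\frac{1}{2}(k_{rf}z)^{2},\] so the longitudinal potential becomes quadratic in \(z\), the synchrotron motion is simple harmonic, and it can be parameterized as \(z=a_{z}\sigma_{z}\cos\phi_{z}\) with a single synchrotron tune, as used in the next step.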
Writing the coordinates \((x,y,z)\) in terms of dimensionless amplitudes \((a_{x},a_{y},a_{z})\) and the corresponding rms sizes \((\sigma_{x},\sigma_{y},\sigma_{z})\),
\[x=a_{x}\sigma_{x}\cos\phi_{x},\ \ \ y=a_{y}\sigma_{y}\cos\phi_{y},\ \ \ z=a_{z}\sigma_{z}\cos\phi_{z} \tag{2.11}\]
We use the integral representation of \(I_{0}\),
\[I_{0}(w)=\frac{1}{\pi}\int_{0}^{\pi}d\theta\exp[\pm w\cos\theta]=\frac{1}{2\pi}\int_{0}^{2\pi}d\theta\exp[\pm w\cos 2\theta] \tag{2.12}\]
The integrals in the phase averages over \(\phi_{x},\phi_{y},\phi_{z}\) are of the form
\[\begin{split}\frac{1}{2\pi}\int_{0}^{2\pi}d\phi\exp[-w\cos^{2}\phi]&=\exp[-\tfrac{1}{2}w]\,I_{0}(\tfrac{1}{2}w)\\ \frac{1}{2\pi}\int_{0}^{2\pi}d\phi\cos^{2}\phi\exp[-w\cos^{2}\phi]&=\tfrac{1}{2}\exp[-\tfrac{1}{2}w]\left[I_{0}(\tfrac{1}{2}w)-I_{1}(\tfrac{1}{2}w)\right]\end{split}\]
Gathering all terms together, the final expression is
\[\begin{split}\Delta\nu_{x}(a_{x},a_{y},a_{z})&=C_{SC}\frac{R}{2\sqrt{2}\gamma\sigma_{z}\epsilon_{x}}\int_{0}^{1}du\,\exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\exp[-\frac{a_{y}^{2}u}{4}]\exp[-\frac{a_{z}^{2}u}{4}]\\ &\quad\times\bigg\langle\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\,\frac{u}{(1-\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2})u+\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2}}\right]^{1/2}\\ &\quad\times I_{0}\left(\frac{a_{y}^{2}}{4}\frac{u}{(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}}\right)I_{0}\left(\frac{a_{z}^{2}u}{4[(1-\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2})u+\sigma_{x}^{2}/\gamma^{2}\sigma_{z}^{2}]}\right)\bigg\rangle_{s}\end{split} \tag{2.13}\]
From this general expression, we can obtain the tune shifts for special cases.

### 2.1 Bunch length longer than the transverse sizes

Using the general expression Eq.(2.13) in the limit where the transverse sizes are both negligibly small compared to the bunch length, i.e. \((\sigma_{x},\sigma_{y})\ll\sigma_{z}\), we find
\[\begin{split}\Delta\nu_{x}(a_{x},a_{y},a_{z})&=C_{SC}\frac{R}{2\sqrt{2}\gamma\sigma_{z}\epsilon_{x}}\Big\langle\int_{0}^{1}du\,\exp[-\frac{a_{z}^{2}u}{4}]I_{0}\left(\frac{a_{z}^{2}u}{4}\right)\exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\\ &\quad\times\exp[-\frac{a_{y}^{2}u}{4}]\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\right]^{1/2}I_{0}\left(\frac{a_{y}^{2}}{4}\frac{u}{(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}}\right)\Big\rangle_{s}\end{split} \tag{2.14}\]
Eq.(2.14) and a similar one for \(\Delta\nu_{y}\) (with \(x\leftrightarrow y\)) are of the form which is generally applicable for bunches in hadron synchrotrons, even the low energy machines where \(\gamma\simeq 1\). The tune shift at the origin is
\[\begin{split}\Delta\nu_{x,SC}(0,0,0)&=\frac{RC_{SC}}{2\sqrt{2}\gamma\epsilon_{x}\sigma_{z}}\int_{0}^{1}du\ \Big\langle\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\right]^{1/2}\Big\rangle_{s}\\ &=\frac{RC_{SC}}{\sqrt{2}\gamma\epsilon_{x}\sigma_{z}}\Big\langle\frac{1}{1+\sigma_{y}/\sigma_{x}}\Big\rangle_{s}\end{split} \tag{2.15}\]
Substituting the expression for \(C_{SC}\) and assuming round beams at all locations, we obtain
\[\Delta\nu_{x,SC}=\frac{N_{p}r_{p}}{\beta^{2}\gamma^{2}}\frac{R}{2\sqrt{2\pi}\gamma\sigma_{z}\epsilon_{x}}=\frac{r_{p}}{\beta\gamma^{2}\epsilon_{x,N}}\lambda_{G}R,\quad\lambda_{G}=\frac{N_{p}}{2\sqrt{2\pi}\sigma_{z}} \tag{2.16}\]
where we used \(\epsilon_{x,N}=\beta\gamma\epsilon_{x}\) and \(\lambda_{G}\) is the longitudinal density. These are the standard expressions for the space charge tune shift parameters for Gaussian bunches.
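The two phase-average identities above, as well as the closed form of the \(u\)-integral used in Eq. (2.15), are easy to verify numerically. A minimal Python check (illustrative only; the test values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e  # i0e(w) = exp(-w) I_0(w), i1e(w) = exp(-w) I_1(w)

w = 1.7  # arbitrary test value

# <exp(-w cos^2 phi)> = exp(-w/2) I_0(w/2)
lhs1, _ = quad(lambda p: np.exp(-w*np.cos(p)**2) / (2*np.pi), 0.0, 2*np.pi)
print(lhs1, i0e(w/2))

# <cos^2 phi exp(-w cos^2 phi)> = (1/2) exp(-w/2) [I_0(w/2) - I_1(w/2)]
lhs2, _ = quad(lambda p: np.cos(p)**2*np.exp(-w*np.cos(p)**2) / (2*np.pi), 0.0, 2*np.pi)
print(lhs2, 0.5*(i0e(w/2) - i1e(w/2)))

# u-integral in Eq. (2.15): int_0^1 du [(r^2-1)u + 1]^(-1/2) = 2/(1+r), r = sigma_y/sigma_x
r = 2.5
lhs3, _ = quad(lambda u: ((r**2 - 1)*u + 1.0)**-0.5, 0.0, 1.0)
print(lhs3, 2.0/(1.0 + r))
```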
With a coasting beam, the longitudinal density is \(\lambda=N_{p}/(2\pi R)\). The zero amplitude tune shifts for non-round beams can be written as
\[\Delta\nu_{x}(0,0,0)=2\Big{\langle}\frac{1}{\sigma_{y}/\sigma_{x}+1}\Big{\rangle}_{s}\Delta\nu_{x,SC},\quad\Delta\nu_{y}(0,0,0)=2\Big{\langle}\frac{1}{\sigma_{x}/\sigma_{y}+1}\Big{\rangle}_{s}\Delta\nu_{y,SC} \tag{2.17}\]
These equations include the variation of beam sizes around the ring. At zero transverse amplitude, in this limit of both long and round bunches,
\[\Delta\nu_{x}(0,0,a_{z})|_{Long,round}=\Delta\nu_{x,SC}\int_{0}^{1}du\,\exp[-\frac{a_{z}^{2}u}{4}]I_{0}\left(\frac{a_{z}^{2}u}{4}\right)\equiv f_{z}\,\Delta\nu_{x,SC} \tag{2.18}\]
\[f_{z}=\exp[-\frac{a_{z}^{2}}{4}]\left(I_{0}(\frac{a_{z}^{2}}{4})+I_{1}(\frac{a_{z}^{2}}{4})\right) \tag{2.19}\]
where \(f_{z}\) is the correction factor which describes the impact of synchrotron oscillations. The factor \(f_{z}\), plotted in Fig. 1, shows that the small amplitude tune shift due to the space charge of a longitudinal slice decreases with distance \(z\), falling by half at \(a_{z}=3\), corresponding to the edge of the bucket. The caveat is that the longitudinal motion is not simple harmonic; nevertheless this curve should be accurate at smaller amplitudes. Particles with small transverse amplitudes can oscillate over different synchrotron amplitudes; we can average over the full range of these amplitudes. Assuming a Gaussian distribution in \(a_{z}\), we find
\[\langle f_{z}\rangle=\frac{\sqrt{2}(\pi^{2}-2\Gamma[3/4]^{4})}{\pi^{3/2}\Gamma[3/4]^{2}}\simeq 0.91 \tag{2.20}\]
We find that synchrotron oscillations reduce the small amplitude tune shift by about 10%. However, since synchrotron oscillations have a much longer time scale than betatron oscillations, this average value \(\langle f_{z}\rangle\) may not be a useful indicator of their impact. At zero synchrotron amplitude \(a_{z}=0\) with arbitrary transverse amplitudes,
\[\begin{split}\Delta\nu_{x}(a_{x},a_{y},0)|_{Long}&=\Delta\nu_{x,SC}\Big\langle\int_{0}^{1}du\ \exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\\ &\quad\times\exp[-\frac{a_{y}^{2}u}{4}]\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\right]^{1/2}I_{0}\left(\frac{a_{y}^{2}}{4}\frac{u}{(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}}\right)\Big\rangle_{s}\end{split} \tag{2.21}\]
We observe that when the beams are not round, the tune shift depends on the variation of the relative beam size \(\sigma_{y}/\sigma_{x}\) around the ring. The average over the ring can be calculated exactly with an integral evaluation at several points along the ring, or approximately by replacing the average of the function by the function of the averaged argument. The latter approach requires only a single integral and is computationally faster. The quality of this approximation will be evaluated in Section 4 for IOTA parameters.

Figure 1: Correction factor \(f_{z}\) for the reduction of the zero amplitude tune shift as a function of the longitudinal amplitude \(a_{z}\).
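The correction factor \(f_{z}\) is simple to check numerically: the \(u\)-integral in Eq. (2.18) reproduces the closed form of Eq. (2.19). A short Python sketch (illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e  # exponentially scaled Bessel functions

def fz_integral(az):
    """f_z from the u-integral in Eq. (2.18)."""
    c = az**2 / 4.0
    val, _ = quad(lambda u: i0e(c*u), 0.0, 1.0)  # i0e(cu) = exp(-cu) I_0(cu)
    return val

def fz_closed(az):
    """Closed form of Eq. (2.19): exp(-c)[I_0(c) + I_1(c)], c = a_z^2/4."""
    c = az**2 / 4.0
    return i0e(c) + i1e(c)

for az in [0.0, 1.0, 2.0, 3.0]:
    print(az, fz_integral(az), fz_closed(az))
# At a_z = 3 the factor drops to about 0.5, consistent with Fig. 1.
```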
With the further assumption of round bunches everywhere, the above simplifies to
\[\begin{split}\Delta\nu_{x}(a_{x},a_{y},a_{z})|_{Long,round}&=\Delta\nu_{x,SC}\int_{0}^{1}du\ \exp[-\frac{a_{z}^{2}u}{4}]I_{0}\left(\frac{a_{z}^{2}u}{4}\right)\\ &\quad\times\exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\exp[-\frac{a_{y}^{2}u}{4}]I_{0}\left(\frac{a_{y}^{2}}{4}u\right)\end{split} \tag{2.22}\]
In most machines the bunch is not round everywhere in the ring; nevertheless, Eq.(2.22) can be used as a first approximation for the tune shift with amplitude. In Section 4 we evaluate the tune shifts for IOTA parameters and consider the impact of synchrotron oscillations and the round beam approximation on the transverse tune shifts.

### 2.2 Beam-beam tune shifts

The beam-beam tune footprint can be found in a similar fashion as above. The major difference is that in a collider the particles all move at very relativistic speeds, so \(\beta\simeq 1,\gamma\gg 1\). Consequently there is a large increase in the transverse fields in going from the rest frame to the lab frame due to the Lorentz boost, while the longitudinal fields are unchanged:
\[\mathbf{E}_{\perp,lab}\simeq\gamma\mathbf{E}_{\perp,rest}=-\gamma\mathbf{\nabla}_{\perp}V_{BB},\ \ E_{z,lab}=E_{z,rest},\ \ \ B_{x,lab}=\frac{E_{y,lab}}{\beta},\ \ \ B_{y,lab}=-\frac{E_{x,lab}}{\beta} \tag{2.23}\]
Here \(V_{BB}\) is the scalar potential for the beam-beam interaction and is the same as \(V_{SC}\) except that the beam parameters are those of the opposing bunch. The net forces due to the electric and magnetic fields are in the same direction for beam-beam interactions and oppose each other for space charge. While the space charge forces act radially outward in all directions, the beam-beam forces are almost entirely in the transverse plane, emanating from a squashed, pancake-like disc traveling with the opposing beam. In most circumstances we can think of the opposing beam as being point-like along the direction of motion and the beam-beam potential as effectively two dimensional. The longitudinal density has a role to play in beam-beam interactions in effects such as phase averaging in long bunches [9, 10] or when hourglass effects [11] or crossing angles are introduced; see [12] for a recent calculation of the luminosity and beam-beam tune shifts with both these effects in a Higgs factory \(e^{+}-e^{-}\) collider. In most cases where the beam-beam interaction is 2D, the results of section 2.1 are applicable here because \(\gamma\gg 1\). Following the same procedure as in obtaining Eq. (2.17) leads to the beam-beam tune shift at the origin
\[\Delta\nu_{x,bb}=\frac{r_{p}N_{p}\beta_{x}^{*}}{2\pi\gamma}\frac{1}{\sigma_{x}^{*}(\sigma_{x}^{*}+\sigma_{y}^{*})},\ \ \Delta\nu_{y,bb}=\frac{r_{p}N_{p}\beta_{y}^{*}}{2\pi\gamma}\frac{1}{\sigma_{y}^{*}(\sigma_{x}^{*}+\sigma_{y}^{*})} \tag{2.24}\]
where \(\beta_{x}^{*},\beta_{y}^{*},\sigma_{x}^{*},\sigma_{y}^{*}\) are the values at the IP and we assume that the beam parameters are the same at all the IPs. The important point here is that both the space charge and the beam-beam potential have the same dependence on the transverse amplitudes; consequently, the two footprints are the same if we scale out the zero amplitude tune shifts.
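Since the scaled footprint is common to space charge and beam-beam interactions, Eq. (2.22) is a convenient form to tabulate. A Python sketch of the scaled tune shift \(\Delta\nu_{x}(a_{x},a_{y},a_{z})/\Delta\nu_{x,SC}\) for long, round bunches (illustrative code, not from the original analysis):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e

def xi_x(ax, ay, az=0.0):
    """Scaled tune shift of Eq. (2.22) for long, round bunches."""
    cx, cy, cz = ax**2/4.0, ay**2/4.0, az**2/4.0
    # each factor below is exp(-c u) I_n(c u), written via the scaled Bessels
    val, _ = quad(lambda u: i0e(cz*u)*(i0e(cx*u) - i1e(cx*u))*i0e(cy*u), 0.0, 1.0)
    return val

print(xi_x(0.0, 0.0))   # equals 1 at the origin by construction
# Footprint points: (nu_x0 - dnu_sc*xi_x(ax, ay), nu_y0 - dnu_sc*xi_x(ay, ax))
grid = [(ax, ay, xi_x(ax, ay), xi_x(ay, ax))
        for ax in np.linspace(0, 5, 6) for ay in np.linspace(0, 5, 6)]
print(grid[:3])
```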
Thus, the horizontal beam-beam tune shift at transverse amplitudes \((a_{x},a_{y})\) can be written down using Eq.(2.21):
\[\begin{split}\Delta\nu_{x,bb}(a_{x},a_{y})&=\Delta\nu_{x,bb}\int_{0}^{1}du\ \exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\exp[-\frac{a_{y}^{2}u}{4}]\\ &\quad\times\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\right]^{1/2}I_{0}\left(\frac{a_{y}^{2}}{4}\frac{u}{(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}}\right)\end{split} \tag{2.25}\]
A similar expression holds in the vertical plane.

## 3 Density distribution in tunes

We saw in the previous section that the beam and machine parameters describing the space charge and beam-beam tune shifts are all included in the zero amplitude tune shifts \(\Delta\nu_{x,sc},\Delta\nu_{y,sc}\). It is therefore useful to define the universal functions of the dimensionless amplitudes
\[\xi_{x}(a_{x},a_{y},a_{z})=\frac{\Delta\nu_{x}(a_{x},a_{y},a_{z})}{\Delta\nu_{x,sc}},\quad\xi_{y}(a_{x},a_{y},a_{z})=\frac{\Delta\nu_{y}(a_{x},a_{y},a_{z})}{\Delta\nu_{y,sc}} \tag{3.1}\]
The functions \(\xi_{x},\xi_{y}\) are universal in the sense that their behavior describes the amplitude dependence for any machine. In this section we derive the density distribution in tunes, assuming a Gaussian distribution in phase space.

### 3.1 Density distribution in 1D

We start with the tune density distribution in 1D for the sake of clarity. The density in action \(j_{x}\) is transformed to the dimensionless amplitude variable \(\alpha_{x}\) as
\[\rho(j_{x})=\frac{1}{\varepsilon_{x}}\exp[-\frac{j_{x}}{\varepsilon_{x}}]=\frac{1}{\varepsilon_{x}}\exp[-2\alpha_{x}],\ \ \ \alpha_{x}=\frac{a_{x}^{2}}{4}=\frac{j_{x}}{2\varepsilon_{x}} \tag{3.2}\]
\[\rho(\alpha_{x})=\rho(a_{x})\left[\frac{\partial\alpha_{x}}{\partial a_{x}}\right]^{-1}=a_{x}\exp[-\frac{1}{2}a_{x}^{2}]\,[a_{x}/2]^{-1}=2\exp[-2\alpha_{x}] \tag{3.3}\]
Now we need to transform from the amplitude to the scaled tune shift, which implies
\[\rho(\xi_{x})=\frac{\rho(\alpha_{x})}{d\xi_{x}/d\alpha_{x}}=2\exp[-2\alpha_{x}]\left[\int_{0}^{1}du\ u\,[H_{0}^{\prime}(\alpha_{x}u)-H_{1}^{\prime}(\alpha_{x}u)]\right]^{-1} \tag{3.4}\]
\[H_{n}(z)\equiv\exp[-z]I_{n}(z) \tag{3.5}\]
Hence, using the tune shift expression in 1D,
\[\xi_{x}(\alpha_{x})=\int_{0}^{1}du\;[H_{0}(\alpha_{x}u)-H_{1}(\alpha_{x}u)] \tag{3.6}\]
\[\rho(\xi_{x})=\rho(\alpha_{x})\left[\frac{\partial\xi_{x}}{\partial\alpha_{x}}\right]^{-1}=2\exp[-2\alpha_{x}]\left[\int_{0}^{1}du\;u\,[H_{0}^{\prime}(\alpha_{x}u)-H_{1}^{\prime}(\alpha_{x}u)]\right]^{-1} \tag{3.7}\]
\[\equiv 2\frac{\exp[-2\alpha_{x}]}{\mathrm{Jac}(\alpha_{x},\xi_{x})} \tag{3.8}\]
Here \(\mathrm{Jac}(\alpha_{x},\xi_{x})\) is the Jacobian of the transformation from \(\alpha_{x}\rightarrow\xi_{x}\). Eq.(3.6) defines \(\xi_{x}\) as a function of \(\alpha_{x}\). Inverting this relation (numerically) defines \(\alpha_{x}\) as a function of \(\xi_{x}\); we denote this function \(\alpha_{f}(\xi_{x})\). Inserting this function back into Eq.(3.8) yields the functional form
\[\rho(\xi_{x})=2\frac{\exp[-2\alpha_{f}(\xi_{x})]}{\mathrm{Jac}(\alpha_{f}(\xi_{x}))} \tag{3.9}\]
In 1D, the inverse function is straightforward to obtain. Fig.2 shows plots of the function \(\xi_{x}(\alpha_{x})\) and the inverse function \(\alpha_{f}(\xi_{x})\), which resembles a one-sided delta function. The \(p\)th moment of the tune shift, i.e. \(\langle\xi_{x}^{0}\rangle\) (the norm), \(\langle\xi_{x}^{1}\rangle,\langle\xi_{x}^{2}\rangle,\ldots\), should agree in any coordinate system used for the density distribution.
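The 1D construction above is easy to implement. The sketch below (illustrative Python; the grids and tolerances are arbitrary choices, not from the original paper) evaluates \(\xi_x(\alpha_x)\) from Eq. (3.6), obtains \(\rho(\xi_x)\) from Eq. (3.7) with a numerical Jacobian, and checks the mean and median quoted below in Eqs. (3.14) and (3.15). Note that, since \(\xi_x\) decreases monotonically with \(\alpha_x\) and the cumulative distribution of \(\alpha_x\) is \(1-e^{-2\alpha_x}\), the median amplitude is \(\alpha_m=\tfrac{1}{2}\ln 2\), so the median tune shift is simply \(\xi_x(\tfrac{1}{2}\ln 2)\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e, i1e

H0, H1 = i0e, i1e   # H_n(z) = exp(-z) I_n(z), Eq. (3.5)

def xi(alpha):
    """Eq. (3.6): xi_x(alpha_x) = int_0^1 du [H_0(alpha u) - H_1(alpha u)]."""
    val, _ = quad(lambda u: H0(alpha*u) - H1(alpha*u), 0.0, 1.0)
    return val

def rho_xi(alpha, h=1e-5):
    """Eq. (3.7): rho(xi) = 2 exp(-2 alpha) / |d xi/d alpha|, Jacobian by finite difference."""
    dxi = (xi(alpha + h) - xi(alpha - h)) / (2*h)
    return 2.0*np.exp(-2.0*alpha) / abs(dxi)

# Mean <xi_x> computed in amplitude space; closed form 4(arcsinh 1 - ln 2) = 0.752906
mean, _ = quad(lambda a: xi(a)*2.0*np.exp(-2.0*a), 0.0, 50.0)
print(mean, 4.0*(np.arcsinh(1.0) - np.log(2.0)))

# Median tune shift: xi at alpha_m = ln(2)/2, close to the value 0.783 quoted below
print(xi(0.5*np.log(2.0)))
```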
We can use this to test the accuracy of the distribution in \(\xi_{x}\). The two lowest moments can be calculated analytically using the known functional forms in terms of \(\alpha_{x}\):
\[\int_{0}^{1}\rho(\xi_{x})d\xi_{x}=\int_{0}^{\infty}\rho(\alpha_{x})d\alpha_{x}=2\int_{0}^{\infty}\exp[-2\alpha_{x}]d\alpha_{x}=1 \tag{3.10}\]
\[\begin{split}\langle\xi_{x}\rangle&=\int_{0}^{\infty}\xi(\alpha_{x})\rho(\alpha_{x})d\alpha_{x}=2\int_{0}^{\infty}\exp[-2\alpha_{x}]d\alpha_{x}\int_{0}^{1}du\;[H_{0}(\alpha_{x}u)-H_{1}(\alpha_{x}u)]\\ &=2\int_{0}^{\infty}d\alpha_{x}\;\int_{0}^{1}du\;\exp[-(2+u)\alpha_{x}][I_{0}(\alpha_{x}u)-I_{1}(\alpha_{x}u)]\end{split} \tag{3.11}\]
We use the integration result from the table of integrals in [13]
\[\int_{0}^{\infty}e^{-\alpha z}I_{p}(\beta z)dz=\frac{\beta^{p}}{\sqrt{\alpha^{2}-\beta^{2}}(\alpha+\sqrt{\alpha^{2}-\beta^{2}})^{p}} \tag{3.12}\]
Doing the integral over \(\alpha_{x}\) first, followed by the integration over \(u\), we find
\[\begin{split}\langle\xi_{x}\rangle&=2\int_{0}^{1}du\;\int_{0}^{\infty}d\alpha_{x}\;\exp[-(2+u)\alpha_{x}][I_{0}(\alpha_{x}u)-I_{1}(\alpha_{x}u)]\\ &=\int_{0}^{1}du\;\left[\frac{1}{\sqrt{1+u}}-\frac{1}{\sqrt{1+u}}\frac{u}{(1+\sqrt{1+u})^{2}}\right]\end{split} \tag{3.13}\]
\[=4(\mathrm{arcsinh}[1]-\log[2])=0.752906 \tag{3.14}\]
Numerically, the moments can also be calculated using either \(\rho(\alpha_{x})\) or \(\rho(\xi_{x})\). The results are shown in table 1, where numerical i uses the first method and numerical ii uses the second. The fact that the moments calculated with numerical ii agree exactly with the other methods gives confidence that the expression for the density distribution in tune shift \(\rho(\xi_{x})\) is correct. Finally, we calculate the median tune shift, which by definition is the value \(\xi_{m}\) at which the number of particles is the same both above and below, i.e.
\[\int_{0}^{\xi_{m}}\rho(\xi_{x})d\xi_{x}=\int_{\xi_{m}}^{1}\rho(\xi_{x})d\xi_{x}=0.5\ \ \rightarrow\ \xi_{m}=0.783 \tag{3.15}\]
The density distribution \(\rho(\xi_{x})\), along with the average and median tune shifts, is indicated in the bottom plot of Fig. 2. The 1D density vanishes in the range \(0\leq\xi_{x}\leq 0.2\) and reaches a maximum at \(\xi_{x}=1\), as expected, since the largest tune shift corresponds to small amplitudes where the density is maximum.

Figure 2: Top: Tune shift \(\xi\) as a function of the scaled amplitude \(a_{x}=2\sqrt{\alpha_{x}}\), where \(a_{x}\) is the transverse amplitude in units of the rms size. Middle: the inverse function \(\alpha_{x}(\xi)\). Bottom: the density distribution as a function of the tune shift.

### 3.2 Density distribution in 2D

The procedure is the same as in 1D; the details are slightly different because first a nonlinear equation in two variables has to be solved, followed by a 2D interpolation. Let the density in tune space be \(\rho(\xi_{x},\xi_{y})\).
By the conservation of particle number,
\[\rho(\alpha_{x},\alpha_{y})d\alpha_{x}d\alpha_{y}=\rho(\xi_{x},\xi_{y})d\xi_{x}d\xi_{y} \tag{3.16}\]
which implies
\[\rho(\xi_{x},\xi_{y})=\rho(\alpha_{x},\alpha_{y})/\mathrm{Jac}(\xi_{x},\xi_{y};\alpha_{x},\alpha_{y}) \tag{3.17}\]
where
\[\mathrm{Jac}(\xi_{x},\xi_{y};\alpha_{x},\alpha_{y})=\begin{vmatrix}\frac{\partial\xi_{x}}{\partial\alpha_{x}}&\frac{\partial\xi_{x}}{\partial\alpha_{y}}\\ \frac{\partial\xi_{y}}{\partial\alpha_{x}}&\frac{\partial\xi_{y}}{\partial\alpha_{y}}\end{vmatrix} \tag{3.18}\]
The density can be written as
\[\rho(\alpha_{x},\alpha_{y})=4\exp[-2\alpha_{x}-2\alpha_{y}] \tag{3.19}\]
In terms of these variables, the scaled tune shifts are
\[\xi_{x}(\alpha_{x},\alpha_{y})=\int_{0}^{1}du\;[H_{0}(\alpha_{x}u)-H_{1}(\alpha_{x}u)]H_{0}(\alpha_{y}u) \tag{3.20}\]
\[\xi_{y}(\alpha_{x},\alpha_{y})=\int_{0}^{1}du\;H_{0}(\alpha_{x}u)[H_{0}(\alpha_{y}u)-H_{1}(\alpha_{y}u)] \tag{3.21}\]
The derivatives are found, for example, as
\[\frac{\partial\xi_{x}}{\partial\alpha_{x}}=\int_{0}^{1}du\;u\,[H_{0}^{\prime}(\alpha_{x}u)-H_{1}^{\prime}(\alpha_{x}u)]\,H_{0}(\alpha_{y}u) \tag{3.22}\]
These can be used to write the density in tune space using Eqs. (3.17), (3.19), (3.18) in terms of \(\alpha_{x},\alpha_{y}\). Doing so yields
\[\rho(\xi_{x},\xi_{y})=4\frac{\exp[-2\alpha_{x}-2\alpha_{y}]}{\mathrm{Jac}(\xi_{x},\xi_{y};\alpha_{x},\alpha_{y})} \tag{3.23}\]

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
 & analytical & numerical i & numerical ii \\
\hline
\multicolumn{4}{|c|}{1D} \\
\hline
normalization & 1.0 & 1.0 & 1.0 \\
\(\langle\xi_{x}\rangle\) & 0.752906 & 0.752906 & 0.752906 \\
\(\langle\xi_{x}^{2}\rangle\) & n/a & 0.597766 & 0.597766 \\
\(\xi_{x,rms}\) & n/a & 0.175782 & 0.175782 \\
\hline
\multicolumn{4}{|c|}{2D} \\
\hline
normalization & 1.0 & 1.0 & 0.981 \\
\(\langle\xi_{x}\rangle\) & 0.633389 & 0.633389 & 0.629 \\
\(\langle\xi_{x}^{2}\rangle\) & n/a & 0.4293 & 0.4596 \\
\(\xi_{x,rms}\) & n/a & 0.1678 & 0.253 \\
\hline
\end{tabular}
\end{table}
Table 1: Moments of the density distribution calculated in different ways in 1D and 2D. The two numerical ways, labeled as i and ii, use the density in amplitude space \(\rho(\alpha_{x},\alpha_{y})\) and in tune space \(\rho(\xi_{x},\xi_{y})\) respectively. In all cases numerical i is the more accurate.

The RHS of this equation, however, is a function of the amplitudes \((\alpha_{x},\alpha_{y})\), while what we want is a function of the scaled tune shifts \((\xi_{x},\xi_{y})\). That requires an inversion of Eqs. (3.20) and (3.21). It is done in two steps: (1) solving these nonlinear equations to find \(\alpha_{x},\alpha_{y}\) as functions of \(\xi_{x},\xi_{y}\), and (2) interpolating the results to obtain smooth functions of \(\xi_{x},\xi_{y}\); a sketch of this inversion is given below. Fig. 3 shows the complete density \(\rho(\xi_{x},\xi_{y})\) as a function of \(\xi_{x},\xi_{y}\), viewed from two different angles, as well as the projected density \(\rho_{x}(\xi_{x})\) on the \(\xi_{x}\) axis, which is obtained by integrating over the \(\xi_{y}\) axis, i.e.
\[\rho_{x}(\xi_{x})=\int_{0}^{1}d\xi_{y}\;\rho(\xi_{x},\xi_{y}) \tag{3.24}\]
The density is exactly symmetric about the \(\xi_{x}=\xi_{y}\) axis, as it must be for round beams. The second plot shows that the density is zero at the origin, and we observe that along either the \(\xi_{x}\) axis or the \(\xi_{y}\) axis the density is similar to the 1D density profile seen in Fig. 2.
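As an illustration of the two-step inversion just described, the following Python sketch (not from the original paper; the root-finder, initial guess and step sizes are arbitrary choices) solves Eqs. (3.20)-(3.21) for \((\alpha_x,\alpha_y)\) at a given \((\xi_x,\xi_y)\) and evaluates \(\rho(\xi_x,\xi_y)\) from Eq. (3.23) with a numerically estimated Jacobian:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import root
from scipy.special import i0e, i1e

H0, H1 = i0e, i1e   # H_n(z) = exp(-z) I_n(z)

def xi_pair(ax, ay):
    """Eqs. (3.20)-(3.21)."""
    fx, _ = quad(lambda u: (H0(ax*u) - H1(ax*u)) * H0(ay*u), 0.0, 1.0)
    fy, _ = quad(lambda u: H0(ax*u) * (H0(ay*u) - H1(ay*u)), 0.0, 1.0)
    return np.array([fx, fy])

def invert(xix, xiy, guess=(0.5, 0.5)):
    """Step (1): alpha_x, alpha_y as functions of xi_x, xi_y."""
    sol = root(lambda a: xi_pair(*a) - np.array([xix, xiy]), guess)
    return sol.x

def rho_tune(xix, xiy, h=1e-5):
    """Eq. (3.23): rho = 4 exp(-2ax - 2ay)/|Jac|, Jacobian by finite differences."""
    ax, ay = invert(xix, xiy)
    J = np.zeros((2, 2))
    J[:, 0] = (xi_pair(ax + h, ay) - xi_pair(ax - h, ay)) / (2*h)
    J[:, 1] = (xi_pair(ax, ay + h) - xi_pair(ax, ay - h)) / (2*h)
    return 4.0*np.exp(-2.0*ax - 2.0*ay) / abs(np.linalg.det(J))

print(rho_tune(0.7, 0.7))  # density at one point; step (2) would interpolate a grid
```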
The projected density does not vanish at \(\xi_{x}=0\), and it has a maximum around \(\xi_{x}=0.9\) rather than at \(\xi_{x}=1\). A little thought shows that this is true of any density that has a non-vanishing dependence on both \(\alpha_{x},\alpha_{y}\). The moments of the distribution are found as before:
\[\begin{split}\langle\xi_{x}\rangle&=\int_{0}^{\infty}d\alpha_{x}\int_{0}^{\infty}d\alpha_{y}\;\xi_{x}(\alpha_{x},\alpha_{y})\rho(\alpha_{x},\alpha_{y})\\ &=4\int_{0}^{1}du\;\int_{0}^{\infty}d\alpha_{x}\int_{0}^{\infty}d\alpha_{y}\;[H_{0}(\alpha_{x}u)-H_{1}(\alpha_{x}u)]H_{0}(\alpha_{y}u)\exp[-2\alpha_{x}-2\alpha_{y}]\end{split} \tag{3.25}\]
For the integration over \(\alpha_{y}\), we use the same integral result in Eq.(3.12) to obtain
\[\int_{0}^{\infty}H_{0}(\alpha_{y}u)\exp[-2\alpha_{y}]\,d\alpha_{y}=\frac{1}{2\sqrt{1+u}}\]
Combining this integration and doing the final integral over \(u\), we find
\[\langle\xi_{x}\rangle=4\ln[\frac{2\sqrt{2}}{\sqrt{2}+1}]=0.633389 \tag{3.26}\]
The moments, calculated numerically using both methods used for the 1D distribution, are shown in table 1. The normalization is not quite unity with the second method. We attribute this \(\sim 2\%\) error to numerical issues in the inversion and interpolation required to find the functions \(\alpha_{x}(\xi_{x},\xi_{y}),\alpha_{y}(\xi_{x},\xi_{y})\). This can be used to find corrected moments by dividing the raw moments by the normalization. With this correction, the error in the first moment is \(\sim 0.7\%\) while the error in the second moment is \(7\%\). We briefly consider the application of these results to beam stability with octupoles and space charge; a more detailed study will be reported elsewhere. Typically the stability is considered using the dispersion relation with a 2D betatron spread from both octupoles and space charge. This relation was used to derive stability curves [14] with a parabolic transverse density, which results in a space charge tune spread linear in the actions that matches the octupolar tune spread dependence. It may be possible to use the exact space charge tune spread \(\Delta\nu_{x,sc}(a_{x},a_{y})=\Delta\nu_{x,sc}\xi_{x}(a_{x},a_{y})\) to obtain stability curves for Gaussian bunches. The extension to 3D stability is in principle straightforward, using the 3D space charge tune shifts \(\xi_{x,y}(a_{x},a_{y},a_{z})\) found in section 2. A more direct use of the density curves derived in this section would be to check the results derived from PIC simulations against the exact results obtained here. We also note that the rms tune spread calculated in table 1 may be directly related to the decoherence time following a kick; this time in 2D is longer than the typical \(1/\Delta\nu_{sc}\) time scale associated with 1D decoherence.

Figure 3: Top: density \(\rho(\xi_{x},\xi_{y})\) vs the scaled tune shifts \(\xi_{x},\xi_{y}\). Middle: the density viewed from a different angle, showing e.g. the density along the \(\xi_{x}\) and along the \(\xi_{y}\) axes. Bottom: the projected density \(\rho_{x}\) along the \(\xi_{x}\) axis. It is non-zero at \(\xi_{x}=0\) and it has a maximum at a value less than \(\xi_{x}=1\).

## 4 Application to IOTA

IOTA is an accelerator that was designed to test the concept of nonlinear integrable lattices [15]. The R & D program with electrons and protons was discussed in [16]. The ring has been operated with electrons since commissioning began and several notable results have been achieved, including the demonstration of optical stochastic cooling [17].
Proton operation is scheduled to begin in 2024, when the concept of achieving high space charge tune shifts with a nonlinear integrable lattice will be tested. In this section, we will evaluate the space charge footprints theoretically and compare with particle tracking. This is done in an otherwise completely linear lattice; we note that emittance growth and beam loss were studied in a partially integrable lattice with octupoles in [18]. Table 2 shows the relevant parameters of the IOTA proton ring.

\begin{table}
\begin{tabular}{|c|c|}
\hline
\multicolumn{2}{|c|}{IOTA proton parameters} \\
\hline
Circumference & 39.97 [m] \\
Kinetic energy & 2.5 [MeV] \\
Maximum bunch intensity / current & 9\(\times\)10\({}^{10}\) / 8 [mA] \\
Transverse normalized rms emittance & (0.3, 0.3) [mm-mrad] \\
Betatron tunes & (5.3, 5.3) \\
Natural chromaticities & (-8.2, -8.1) \\
Average transverse beam sizes (rms) & (2.22, 2.22) [mm] \\
Kinematic \(\gamma\) / transition \(\gamma_{t}\) & 1.003 / 3.75 \\
Rf voltage & 400 [Volts] \\
Rf frequency / harmonic number & 2.2 [MHz] / 4 \\
Bucket length & \(\sim\) 10 [m] \\
Bucket half height in \(\delta p/p\) & 3.72 \(\times\)10\({}^{-3}\) \\
Rms bunch length & 1.7 [m] \\
Rms energy / momentum spread & 1.05\(\times\)10\({}^{-5}\) / 1.99 \(\times\)10\({}^{-3}\) \\
\hline
\end{tabular}
\end{table}
Table 2: Machine and beam parameters of the IOTA proton ring

First, we consider whether the beam is sufficiently round everywhere in the ring for the round beam expressions for the footprint to be applicable. Fig. 4 shows the ratio of the vertical to horizontal beam sizes along the ring. The ratio varies between 0.5 and 5.0 with a mean value of 1.2. The mean value may not be relevant here, as the fluctuations are fairly large. We therefore use the general expression for the tune shifts, but we also compare with the round beam forms as well as an approximation discussed in section 2. We discuss first the theoretical footprints under the different assumptions discussed in section 2. In the limit of long bunches, which is valid for IOTA, we can use Eq.(2.13). This involves calculating the tune shift at each longitudinal location and averaging over the locations, which requires doing an integration at each location. We can reverse the order and instead do the averaging first and a single integration afterwards:
\[\begin{split}\Delta\nu_{x}(a_{x},a_{y},a_{z})&=\Delta\nu_{x,sc}\int_{0}^{1}du\ \Big\langle\exp[-\frac{a_{z}^{2}u}{4}]I_{0}\left(\frac{a_{z}^{2}u}{4}\right)\exp[-\frac{a_{x}^{2}u}{4}]\left[I_{0}(\frac{a_{x}^{2}u}{4})-I_{1}(\frac{a_{x}^{2}u}{4})\right]\\ &\quad\times\exp[-\frac{a_{y}^{2}u}{4}]\left[\frac{1}{(\sigma_{y}^{2}/\sigma_{x}^{2}-1)u+1}\right]^{1/2}I_{0}\left(\frac{a_{y}^{2}}{4}\frac{u}{(1-\sigma_{x}^{2}/\sigma_{y}^{2})u+\sigma_{x}^{2}/\sigma_{y}^{2}}\right)\Big\rangle_{s}\end{split}\]
This reduces the time required for evaluation, and we found that the differences in numerical values are negligible, at least in the case of IOTA. The left plot in Fig. 5 shows a comparison of the footprints based on the general expression in Eq.(2.13) and that based on the round beam expression in Eq.(2.22). The differences are small; this is to be expected as IOTA has been designed to have axial symmetry almost everywhere in the ring in order to preserve integrability [15]. The right plot in this figure shows the footprints with synchrotron oscillations at two amplitudes \(a_{z}=1,2\).
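The quality of the averaged-argument approximation mentioned above can be illustrated with a toy model of the size-ratio variation around the ring. The Python sketch below is illustrative only; the assumed ratio profile is made up and is not the IOTA lattice. It compares the exact ring average of the zero-amplitude factor \(2\langle 1/(1+\sigma_y/\sigma_x)\rangle_s\) of Eq. (2.17) with the same factor evaluated at the averaged ratio:

```python
import numpy as np

# Toy profile of the beam-size ratio r(s) = sigma_y/sigma_x around the ring
s = np.linspace(0.0, 2*np.pi, 1000, endpoint=False)
r = 1.2 + 0.8*np.sin(3*s)           # fluctuates around a mean of 1.2 (hypothetical)

exact = np.mean(2.0/(1.0 + r))      # average of the function, as in Eq. (2.17)
approx = 2.0/(1.0 + np.mean(r))     # function of the averaged argument

print(exact, approx)  # the difference gauges the quality of the approximation
```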
As expected from Fig. 1, the total tune shift at \(a_{z}=1\) is about 90% and at \(a_{z}=2\) about 70% of the value at \(a_{z}=0\). The theory of the amplitude dependent tune shifts assumes that the particles stay at constant amplitudes while executing betatron oscillations. This is not always true, especially at high intensities. We examine this assumption by tracking emittance growth with pyorbit simulations [19]. Many details on the pyorbit simulations and their validation can be found in earlier reports [20, 18]. Detailed analysis had shown that there was good agreement between theory and simulations in all the tested regimes. In the simulations reported here, all machine nonlinearities were turned off; space charge was the only nonlinearity. The results reported in [20] showed that initial beam losses can be minimized by a process of slow initialization, in which the charge per macroparticle is increased to its full value over about 40 turns and the beam is injected into a lattice that is rms matched to equilibrium beam sizes. The PIC parameters that led to convergence were found to be: number of macroparticles = \(5\times 10^{5}\), grid size = 128 x 128 x 5, and number of space charge kicks per betatron wavelength = 63. These values were used in the simulations discussed below. It is not completely straightforward to compare space charge simulations with theory, especially at high intensities. The simplest approach is to compare the space charge tune shifts. The complications arise from two effects. The first is that the theory assumes that the tune shift is calculated at constant emittances, which is not the case as the space charge increases. The second complication is related to PIC simulations: it has been observed that this method causes orbits to be chaotic at small amplitudes close to the origin [21]. We dealt with the first issue by using the average emittance over the time used for the tune shift calculation. For the second issue, we distribute particles over 100 different angles at the same small amplitude and average over the angles to reduce the fluctuations in the tune shift value. Another complication at high intensities is that, because the FFT aliases tunes to the range 0 - 0.5, it cannot determine if the tunes are below or above the half integer when the space charge tune shift exceeds 0.5. Here we determined the correct tune shifts by selecting the value that increased with intensity, without determining the integer parts. Previously we had determined the complete tune (integer and fractional parts) with the alternative method of counting the number of betatron oscillations over a thousand turns, and found that the simulated tunes calculated both ways agreed well with each other and with theory [20]. At a low intensity of \(10^{9}\) particles/bunch, there are only fluctuations due to numerical noise in the PIC simulations. These are observed to be around 0.4%, which is comparable to the expected level \(\sim 1/\sqrt{n}=0.14\%\) with \(n=5\times 10^{5}\). Fig. 6 shows the emittance change over a range of intensities; the last value \(9\times 10^{10}\) corresponds to the maximum design bunch intensity. Since there is very little observable emittance growth at the lowest intensities in this range, we expect the simulated tunes to be close to theoretical values. However, the tune shifts are too small to be accurately computable with an FFT over \(\sim\)1000 turns, especially to resolve the tune shifts of neighboring particles.
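A minimal version of the tune determination described above, with synthetic turn-by-turn data (illustrative Python; this is not the actual pyorbit workflow, and the tune value is hypothetical), shows both the FFT estimate and the aliasing of the fractional tune into [0, 0.5]:

```python
import numpy as np

nturns, nu_true = 1024, 0.78           # synthetic fractional tune (hypothetical)
phases = 2*np.pi*np.random.rand(100)   # 100 particles at one amplitude, random angles

tunes = []
for phi in phases:
    x = np.cos(2*np.pi*nu_true*np.arange(nturns) + phi)   # turn-by-turn position
    spec = np.abs(np.fft.rfft(x))
    tunes.append(np.argmax(spec[1:]) + 1)                 # skip the DC bin
nu_fft = np.mean(tunes)/nturns

# A true tune of 0.78 is aliased to 1 - 0.78 = 0.22: the FFT alone cannot tell
# the two apart; the text resolves this by tracking the intensity dependence.
print(nu_fft, 1.0 - nu_fft)
```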
At an intermediate intensity of \(10^{10}\), there is a larger emittance growth of \(\sim 10\%\), while at \(9\times 10^{10}\) the emittance grows by nearly a factor of ten without slow initialization. The tune footprints are calculated using \(5\times 10^{5}\) macroparticles and 5000 test particles distributed transversely from 0 - 5\(\sigma\) and with zero synchrotron amplitude. Fig. 7 shows the footprints at \(10^{10}\) and \(9\times 10^{10}\) intensities obtained with pyorbit simulations and compared with the theoretical values using Eq.(2.21). In both cases, we have scaled the numerical footprints by the maximum tune shift, so the analytical and numerical footprints can be easily compared. At the lower intensity, the two footprints agree reasonably well, although the simulated footprint is wider at amplitudes from 1 to 5\(\sigma\). At the intensity of \(9\times 10^{10}\), the simulated footprint is even wider and the agreement is not as close, which is to be expected. The single most important reason for the increasing discrepancy is that the theory assumes that all particles move on invariant actions, which is not true at high intensities. Nevertheless, the theoretical footprint can be useful both as a benchmark tool and also to quickly determine the important resonances that can be crossed by the footprint at a chosen working point.

Figure 5: Left: comparison of round and non-round footprints without synchrotron oscillations. Right: footprints with synchrotron oscillation amplitudes of \(1\sigma_{s}\) (in red) and \(2\sigma_{s}\) (blue).

Figure 6: Relative emittance growth over 1000 turns as a function of the intensity for two conditions: without slow initialization and with a slow initialization of 100 turns.

Figure 7: Footprints from pyorbit tracking and theory with non-round beams. Left: intensity = \(10^{10}\). Right: intensity = \(9\times 10^{10}\).

## 5 Conclusions

We derived tune shifts with amplitude in terms of a universal dimensionless parameter under quite general conditions that are valid for space charge or beam-beam interactions. We included multiple interaction points and synchrotron oscillations. Our focus is mainly on space charge interactions, where the inclusion of multiple interactions as well as beams with arbitrary transverse aspect ratios is especially important. We then used the analytical tune shifts to derive semi-analytical expressions for the density distribution of tunes, assuming that the density is a Gaussian function of the phase-space coordinates. The tune distribution requires an inversion of the functional arguments followed by a numerical interpolation. We emphasize that the tune distribution thus obtained requires no numerical simulations. This is important because the density at the maximum tune shift requires very high sampling of this region, and quite often the simulations get the wrong shape of the density in this region. The density is expressed in terms of variables \((\xi_{x},\xi_{y})\), which are the tune shifts scaled by the maximum tune shifts. Therefore the density \(\rho(\xi_{x},\xi_{y})\) has the same form and shape for both space charge and beam-beam interactions. With the method presented here, we verified that the low order moments of the distribution are preserved in the transformation: exactly in 1D, and with a \(\sim 2\%\) error in 2D for the zeroth moment; see table 1 for the other moments. These errors could be further reduced by improving the numerical schemes for the function inversion and interpolation.
This calculation of the density distribution in tunes will enable a more accurate modeling of Landau damping with space charge and an external nonlinearity such as octupoles, as well as the damping with beam-beam interactions. We checked the tune spread calculations for the IOTA proton ring with simulations using the pyorbit code. At the highest intensities planned with bunched beams, there is substantial emittance blow up, and steps will need to be taken to mitigate emittance growth and beam loss. We used a numerical scheme of slow initialization to reduce the growth and prevent beam loss over the short time scale of the simulation. Using this scheme, we found generally good agreement between the footprints calculated by theory and simulation. The expressions for the theoretical footprints developed in this paper should therefore be useful for benchmarking other space charge simulation codes as well as for determining working points relatively free of low order space charge driven resonances.

**Acknowledgments**

I thank former undergraduate interns David Feigelson (U Chicago) and Runze Li (UW, Madison; now at Yale) and colleague Francois Ostiguy for their enthusiastic collaboration on a project to model space charge effects in IOTA. I am especially thankful to Runze for writing a library of pyorbit codes needed for modeling IOTA; these codes are available at his github site [22]. The work has been supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2305.07055
Chern-Simons-Trinion Theories: One-form Symmetries and Superconformal Indices
We study 3d theories containing $\mathcal{N}=3$ Chern-Simons vector multiplets coupled to the $\mathrm{SU}(N)^3$ flavour symmetry of 3d $T_N$ theories with Chern-Simons level $k_1$, $k_2$ and $k_3$. It was formerly pointed out that these theories flow to infrared SCFTs with enhanced $\mathcal{N}=4$ supersymmetry when $1/k_1+1/k_2+1/k_3=0$. We examine superconformal indices of these theories which reveal that supersymmetry of the infrared SCFTs may get enhanced to $4 \leq \mathcal{N} \leq 6$ if such a condition is satisfied. Moreover, even if the Chern-Simons levels do not obey the aforementioned condition, we find that there is still an infinite family of theories that flows to infrared SCFTs with $\mathcal{N}=4$ supersymmetry. The 't Hooft anomalies of the one-form symmetries of these theories are analysed. As a by-product, we observe that there is generally a decoupled topological sector in the infrared. When the infrared SCFTs have $\mathcal{N} \geq 4$ supersymmetry, we also study the Higgs and Coulomb branch limits of the indices which provide geometric information of the moduli space of the theories in question in terms of the Hilbert series.
Riccardo Comi, William Harding, Noppadol Mekareeya
2023-05-11T18:00:03Z
http://arxiv.org/abs/2305.07055v3
# Chern-Simons-Trinion Theories: One-form Symmetries and Superconformal Indices

###### Abstract

We study 3d theories containing \(\mathcal{N}=3\) Chern-Simons vector multiplets coupled to the \(\mathrm{SU}(N)^{3}\) flavour symmetry of 3d \(T_{N}\) theories with Chern-Simons level \(k_{1}\), \(k_{2}\) and \(k_{3}\). It was formerly pointed out that these theories flow to infrared SCFTs with enhanced \(\mathcal{N}=4\) supersymmetry when \(1/k_{1}+1/k_{2}+1/k_{3}=0\). We examine superconformal indices of these theories which reveal that supersymmetry of the infrared SCFTs may get enhanced to \(4\leq\mathcal{N}\leq 6\) if such a condition is satisfied. Moreover, even if the Chern-Simons levels do not obey the aforementioned condition, we find that there is still an infinite family of theories that flows to infrared SCFTs with \(\mathcal{N}=4\) supersymmetry. The 't Hooft anomalies of the one-form symmetries of these theories are analysed. As a by-product, we observe that there is generally a decoupled topological sector in the infrared. When the infrared SCFTs have \(\mathcal{N}\geq 4\) supersymmetry, we also study the Higgs and Coulomb branch limits of the indices which provide geometric information of the moduli space of the theories in question in terms of the Hilbert series.

###### Contents

* 1 Introduction
* 2 Theories with one \(T_{2}\) building block
* 2.1 One-form symmetries and their 't Hooft anomalies
* 2.2 Cases that satisfy the ATT condition
* 2.2.1 Special case of \((k_{1},k_{2},k_{3})=(-k,2k,2k)\)
* 2.2.2 General results for \((k_{1},k_{2},k_{3})=k(\mathfrak{pq},-\mathfrak{pr},-\mathfrak{qr})\) with \(\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\)
* 2.2.3 Higgs and Coulomb branches
* 2.3 Cases that do not satisfy the ATT condition
* 2.3.1 Special case of \((k_{1},k_{2},k_{3})=(k,1,1)\)
* 2.4 Gluing with \(T(\text{SU}(2))\) theories
* 2.4.1 't Hooft anomalies of the one-form symmetries
* 2.4.2 Summary of the results
* 3 Theories with two \(T_{2}\) building blocks
* 3.1 Cases that satisfy the ATT condition
* 3.1.1 Special case of \((k_{1},k_{2},k_{3})=(-k,2k,2k)\)
* 3.1.2 General results for \((k_{1},k_{2},k_{3})=k(\mathfrak{pq},-\mathfrak{pr},-\mathfrak{qr})\) with \(\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\)
* 3.1.3 Higgs and Coulomb branches
* 3.2 Cases that do not satisfy the ATT condition
* 3.3 Gluing with \(T(\text{SU}(2))\) theories
* 4 Theories with \(T_{3}\) building blocks
* 4.1 One \(T_{3}\) building block
* 4.2 Two \(T_{3}\) building blocks
* A Theories with four \(T_{2}\) building blocks
* B Mixed gauge/zero-form monopole operators

## 1 Introduction

Three-dimensional (3d) supersymmetric Chern-Simons (CS)-matter theories have rich infrared (IR) behaviours. For example, the \({\cal N}=3\) superconformal field theory (SCFT) can be obtained as the IR fixed point of the \({\cal N}=2\) theory deformed by a certain superpotential [1]. Moreover, if the gauge algebra and matter content are chosen appropriately in an \({\cal N}=3\) theory, supersymmetry may get further enhanced up to \({\cal N}=8\) [2; 3; 4; 5; 6; 7; 8; 9; 10]. In particular, the \({\rm U}(N)_{k}\times{\rm U}(N)_{-k}\) CS theory with two hypermultiplets in the bifundamental representation (in the 3d \({\cal N}=4\) language) describes the system of \(N\) M2-branes on \(\mathbb{C}^{4}/\mathbb{Z}_{k}\), where at large \(N\) it is dual to M-theory on \({\rm AdS}_{4}\times S^{7}/\mathbb{Z}_{k}\), and at large \(N\) with a fixed ratio \(N/k\) is dual to Type IIA string theory on \({\rm AdS}_{4}\times{\bf CP}^{3}\).
This CS-matter theory provides the first explicit realisation of the AdS/CFT correspondence in these dimensions [8]. In this article, we focus on 3d \({\cal N}=3\) vector multiplets coupled to a certain number of copies of a 3d SCFT, known as the 3d \(T_{N}\) theory, whose flavour symmetry is \({\rm SU}(N)^{3}\). The 3d \(T_{N}\) theory can be realised by compactifying the 4d \(T_{N}\) theory [11; 12] on a circle, or equivalently by compactifying \(N\) M5-branes on a circle times a sphere with three full punctures. An example of the theories of our interest consists of that constructed from a single copy of the \(T_{N}\) theory such that the \({\rm SU}(N)^{3}\) flavour symmetry is gauged with CS levels \(k_{1}\), \(k_{2}\) and \(k_{3}\). This can be realised by compactifying \(N\) M5-branes on a three-manifold which is a Seifert bundle over \(S^{2}\) with three singular fibres, with Seifert parameters \(1/k_{1}\), \(1/k_{2}\) and \(1/k_{3}\)[13; 14]. This example can be generalised further, for example, by involving many copies of the \(T_{N}\) theories where each \({\rm SU}(N)\) factor of the \({\rm SU}(N)^{3}\) flavour symmetry of each copy is commonly gauged with CS levels \(k_{1}\), \(k_{2}\) and \(k_{3}\). The corresponding three-manifolds are then known as graph manifolds [14]. We remark _en passant_ that, due to their richness in mathematical and physical properties, 3d theories arising from compactifying M5-branes on three-manifolds have received considerable attention over the recent years; see e.g. [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. For the theories of our interest with general \(N\), it was pointed out by the authors of [14] that whenever the CS levels satisfy the condition \(\sum_{i=1}^{3}1/k_{i}=0\), then the theory flows to an IR SCFT with enhanced \({\cal N}=4\) supersymmetry.1 For convenience, we shall refer to this condition as the Assel-Tachikawa-Tomasiello (ATT) condition. This statement was supported by a field theoretic argument. Geometrically, for a single copy of the \(T_{N}\) theory, supersymmetry enhancement is accounted for by the holonomy of the corresponding Seifert manifolds. However, when the theory contains more than one building blocks, supersymmetry enhancement is left unaccounted for, in general, by the holonomy of graph manifolds. One of the main objectives of this paper is to study supersymmetry enhancement of this family of theories, focusing on \(N=2\) and \(N=3\), using the superconformal index [26; 27; 28; 29; 30; 31; 32; 33].2 We find that, when the ATT condition is satisfied, supersymmetry of the IR SCFT generally gets enhanced to \(\mathcal{N}=4\), but there are also a large number of cases with \(\mathcal{N}=5\) and \(\mathcal{N}=6\) supersymmetry. Surprisingly we also find that, even if the ATT condition is not satisfied, there is still an infinite family of theories whose IR SCFTs have enhanced \(\mathcal{N}=4\) supersymmetry. In particular, for certain special values of CS levels such as \((k_{1},k_{2},k_{3})=(2,1,1)\), the IR SCFT turns out to be the rank-zero minimal \(\mathcal{N}=4\) SCFT, discussed in [23; 25]. Another main goal of this article is to study the one-form symmetries [37; 38] of these theories, as well as their (mixed) 't Hooft anomalies along the line of [13; 39], and the mixed 't Hooft anomalies between the one-form symmetries and zero-form symmetries using the method of [36] (see also [40]). As a result, we deduce that there is generally a decoupled topological sector in the IR. 
We also identify these topological quantum field theories (TQFTs) using 't Hooft anomalies of the one-form symmetries. Importantly, we point out that, even if a set of theories flow to the same IR SCFTs (which can be deduced from the fact that the indices are the same or the associated three-manifolds are diffeomorphic to each other), the decoupled topological sectors may be different. We also gauge the non-anomalous one-form symmetry and study the resulting theories. Finally, we study the Higgs and Coulomb limits of the superconformal indices [41] for theories whose IR SCFTs have \(\mathcal{N}\geq 4\) enhanced supersymmetry. These provide geometric information of the Higgs and Coulomb branches of the IR SCFTs in question in terms of the Hilbert series [42; 43].

Footnote 2: We adopt the same notation as [33; 34; 35; 36]. Each notation is defined in the main text.

The paper is organised as follows. In Sections 2 and 3, we study theories with one and two \(T_{2}\) building blocks, respectively. Their one-form symmetries and 't Hooft anomalies are investigated in Section 2.1. The theories whose CS levels satisfy the ATT condition are then studied in Sections 2.2 and 3.1. In this class of theories, the Higgs and Coulomb branch limits of the indices are studied in Sections 2.2.3 and 3.1.3. We then move on to explore theories whose CS levels do not satisfy the ATT condition in Sections 2.3 and 3.2. We also study theories coupled to one or many copies of the \(T(\text{SU}(2))\) SCFT in Sections 2.4 and 3.3. In Section 4, we discuss theories with \(T_{3}\) building blocks. We discuss their indices and 't Hooft anomalies of one-form symmetries. Due to the technicality of the computations, we focus on the indices of theories whose CS levels satisfy the ATT condition, from which we conclude that the IR SCFT has enhanced \(\mathcal{N}=4\) supersymmetry. In Appendix A, we study certain theories with four \(T_{2}\) building blocks as well as their indices. In Appendix B, the mixed gauge/zero-form monopole operators in theories with one and two \(T_{2}\) building blocks such that the ATT condition is satisfied are examined. The potential mixed anomaly between the \(\mathbb{Z}_{2}\) one-form symmetry and the zero-form flavour symmetry implied by the presence of such monopole operators is discussed.

## 2 Theories with one \(T_{2}\) building block

Let us consider the \(T_{2}\) theory whose three SU(2) flavour symmetries are gauged with 3d \(\mathcal{N}=3\) Chern-Simons couplings \((k_{1},k_{2},k_{3})\). We depict this theory diagrammatically by
\[[\text{quiver diagram: a central $T_{2}$ node with three external lines labelled $k_{1}$, $k_{2}$, $k_{3}$}] \tag{1}\]
where each finite line with label \(k_{i}\) denotes an SU(2)\({}_{k_{i}}\) gauge group. This theory was studied extensively in [14, Section 2.2.1], where it was pointed out that (1) can be realised by compactifying M5-branes on a three-manifold which is a Seifert bundle over \(S^{2}\) with three singular fibres, with Seifert parameters \(1/k_{1}\), \(1/k_{2}\) and \(1/k_{3}\). The effective superpotential after integrating out the adjoint scalars is
\[W\propto\sum_{i=1}^{3}\frac{1}{k_{i}}\operatorname{tr}(\mu_{i}^{2})\, \tag{2}\]
where \(\mu_{i}\) are the moment map operators for the SU(2)\({}_{i}\) symmetry of the \(T_{2}\) theory.
In terms of the chiral fields \(Q_{\alpha_{1}\alpha_{2}\alpha_{3}}\) of the \(T_{2}\) theory, \(\mu_{i}\) can be written as
\[\begin{split}(\mu_{1})_{\alpha_{1}\alpha_{1}^{\prime}}&=\epsilon^{\alpha_{2}\alpha_{2}^{\prime}}\epsilon^{\alpha_{3}\alpha_{3}^{\prime}}Q_{\alpha_{1}\alpha_{2}\alpha_{3}}Q_{\alpha_{1}^{\prime}\alpha_{2}^{\prime}\alpha_{3}^{\prime}}\,\\ (\mu_{2})_{\alpha_{2}\alpha_{2}^{\prime}}&=\epsilon^{\alpha_{1}\alpha_{1}^{\prime}}\epsilon^{\alpha_{3}\alpha_{3}^{\prime}}Q_{\alpha_{1}\alpha_{2}\alpha_{3}}Q_{\alpha_{1}^{\prime}\alpha_{2}^{\prime}\alpha_{3}^{\prime}}\,\\ (\mu_{3})_{\alpha_{3}\alpha_{3}^{\prime}}&=\epsilon^{\alpha_{1}\alpha_{1}^{\prime}}\epsilon^{\alpha_{2}\alpha_{2}^{\prime}}Q_{\alpha_{1}\alpha_{2}\alpha_{3}}Q_{\alpha_{1}^{\prime}\alpha_{2}^{\prime}\alpha_{3}^{\prime}}\,\end{split} \tag{3}\]
where the indices \(\alpha_{i},\alpha_{i}^{\prime}=1,2\) correspond to the SU(2)\({}_{i}\) gauge group (with \(i=1,2,3\)). From the above relations, it follows that
\[\operatorname{tr}(\mu_{1}^{2})=\operatorname{tr}(\mu_{2}^{2})=\operatorname{tr}(\mu_{3}^{2})\equiv\operatorname{tr}(\mu^{2}) \tag{4}\]
and so the effective superpotential can be rewritten as
\[W\propto\left(\frac{1}{k_{1}}+\frac{1}{k_{2}}+\frac{1}{k_{3}}\right)\operatorname{tr}(\mu^{2}). \tag{5}\]
If the following condition is satisfied
\[\frac{1}{k_{1}}+\frac{1}{k_{2}}+\frac{1}{k_{3}}=0\, \tag{6}\]
then supersymmetry gets enhanced from \(\mathcal{N}=3\) to \(\mathcal{N}=4\). This follows from the discussion in [5] and also from [9; 14]. Relation (6) is what we referred to as the ATT condition in the introduction. Subsequently we will show that this is a _sufficient_, but _not_ necessary, condition for supersymmetry enhancement. In particular, in Section 2.3, we will show that even if the ATT condition (6) is not satisfied, there are cases in which the IR SCFT has accidental \(\mathcal{N}=4\) supersymmetry. A main tool that we will use to analyse these theories is the superconformal index. It is explicitly given by
\[\begin{split}\mathcal{I}_{(1)}(a,n_{a};x)&=\left(\frac{1}{8}\prod_{i=1}^{3}\oint\frac{dz_{i}}{2\pi iz_{i}}\right)\sum_{(m_{1},m_{2},m_{3})\in\mathbb{Z}^{3}}\left(\prod_{i=1}^{3}z_{i}^{2k_{i}m_{i}}\mathcal{Z}_{\rm vec}^{\rm SU(2)}(z_{i};m_{i};x)\right)\times\\ &\qquad\prod_{s_{1},s_{2},s_{3}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{1}^{s_{1}}z_{2}^{s_{2}}z_{3}^{s_{3}}a;s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}+n_{a};x)\,\end{split} \tag{7}\]
where the \(\rm SU(2)\) vector multiplet contribution is
\[\mathcal{Z}_{\rm vec}^{\rm SU(2)}(z;n;x)=x^{-2|n|}\prod_{s=\pm 1}(1-(-1)^{2n}x^{2|n|}z^{2s})\, \tag{8}\]
and the contribution of the chiral multiplet of \(R\)-charge \(R\) is
\[\mathcal{Z}_{\chi}^{R}(z;m;x)=\left(x^{1-R}z^{-1}\right)^{|m|/2}\prod_{j=0}^{\infty}\frac{1-(-1)^{m}z^{-1}x^{|m|+2-R+2j}}{1-(-1)^{m}z\,x^{|m|-R+2j}}. \tag{9}\]
If the CS levels satisfy the ATT condition (6), it follows from (5) that the effective superpotential is zero, and the \(\rm U(1)_{a}\) flavour symmetry which assigns charge \(+1\) to all of the eight chiral multiplets of the \(T_{2}\) theory is a symmetry of the Lagrangian. We denote by \(a\) and \(n_{a}\) the fugacity and background magnetic flux for this flavour symmetry. Upon computing the series expansion of the index, we will set \(n_{a}=0\) and drop the \(n_{a}\) dependence from the index, i.e. we write the latter simply as \(\mathcal{I}_{(1)}(a;x)\).
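As a small consistency check of the ATT condition (6), the following Python sketch (illustrative only) verifies with exact rational arithmetic that the families used later in the paper, \((k_1,k_2,k_3)=(-k,2k,2k)\) and \(k(\mathfrak{p}\mathfrak{q},-\mathfrak{p}\mathfrak{r},-\mathfrak{q}\mathfrak{r})\) with \(\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\), satisfy \(\sum_{i}1/k_{i}=0\):

```python
from fractions import Fraction

def att(ks):
    """True iff the CS levels satisfy the ATT condition of Eq. (6)."""
    return sum(Fraction(1, k) for k in ks) == 0

# Family (-k, 2k, 2k)
assert all(att((-k, 2*k, 2*k)) for k in range(1, 20))

# Family k(pq, -pr, -qr) with r = p + q
for k in range(1, 5):
    for p in range(1, 6):
        for q in range(1, 6):
            r = p + q
            assert att((k*p*q, -k*p*r, -k*q*r))
print("both families satisfy the ATT condition")
```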
Note also that, if the CS levels do not satisfy the ATT condition (6), we should set \(a=1\) and \(n_{a}=0\) in (7), since the \(\rm U(1)_{a}\) flavour symmetry is no longer a symmetry of the effective superpotential (5).

### 2.1 One-form symmetries and their 't Hooft anomalies

We now discuss the one-form symmetries of theory (1) and their anomalies. Let us first consider the \(T_{2}\) theory, whose global form of the manifest flavour symmetry is (see [44, (4.2)])
\[\frac{\mathrm{SU}(2)_{1}\times\mathrm{SU}(2)_{2}\times\mathrm{SU}(2)_{3}}{(\mathbb{Z}_{2})_{13}\times(\mathbb{Z}_{2})_{23}}\, \tag{10}\]
where \((\mathbb{Z}_{2})_{ij}\) denotes the diagonal \(\mathbb{Z}_{2}\) subgroup of the centre of \(\mathrm{SU}(2)_{i}\) times the centre of \(\mathrm{SU}(2)_{j}\). In other words, among the three \(\mathbb{Z}_{2}\) factors that come from the centre of \(\prod_{i=1}^{3}\mathrm{SU}(2)_{i}\) in the numerator of (10), only two combinations present in the denominator act trivially on the four free hypermultiplets of the \(T_{2}\) theory. Gauging each of the \(\mathrm{SU}(2)_{i}\) symmetries therefore leads to the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) one-form symmetry. We will see that turning on CS levels \(k_{i}\) for each \(\mathrm{SU}(2)_{i}\) gauge group (with \(i=1,2,3\)) results in 't Hooft anomalies of a subgroup or the whole \(\mathbb{Z}_{2}^{2}\) one-form symmetry. The 't Hooft anomaly of the one-form symmetry in theory (1) with CS levels \((k_{1},k_{2},k_{3})\) is characterised by the 4d anomaly theory whose action is [45]
\[\frac{2\pi}{2}\int_{\mathcal{M}_{4}}\sum_{i=1}^{3}k_{i}\frac{\mathcal{P}(w_{i}^{(2)})}{2}\, \tag{11}\]
where each of \(w_{i}^{(2)}\in H^{2}(\mathcal{M}_{4},\mathbb{Z}_{2})\) is the two-form background field for each \(\mathbb{Z}_{2}\) one-form symmetry that arises from the centre of each \(\mathrm{SU}(2)_{i}\) gauge group, \(\mathcal{P}(w^{(2)})\) is the Pontryagin square operation and the integration is performed over a spin manifold \(\mathcal{M}_{4}\). Note that \(\int_{\mathcal{M}_{4}}\mathcal{P}(w_{i}^{(2)})\) is even on a spin manifold \(\mathcal{M}_{4}\). Since among the \(\mathbb{Z}_{2}^{3}\) centres of \(\mathrm{SU}(2)^{3}\) only \(\mathbb{Z}_{2}^{2}\) acts non-trivially on the trifundamental matter of the \(T_{2}\) theory, we have the condition
\[\sum_{i=1}^{3}w_{i}^{(2)}=0. \tag{12}\]
Using the identity
\[\int_{\mathcal{M}_{4}}\mathcal{P}(A+B)=\int_{\mathcal{M}_{4}}\mathcal{P}(A)+\int_{\mathcal{M}_{4}}\mathcal{P}(B)+2\int_{\mathcal{M}_{4}}A\cup B\, \tag{13}\]
and the fact that \(\int_{\mathcal{M}_{4}}\mathcal{P}(w_{i}^{(2)})\) is even on spin manifold \(\mathcal{M}_{4}\), we rewrite (11) as
\[S_{\mathrm{anom}}=\frac{2\pi}{2}\int_{\mathcal{M}_{4}}\left[(k_{1}+k_{3})\frac{\mathcal{P}(w_{1}^{(2)})}{2}+(k_{2}+k_{3})\frac{\mathcal{P}(w_{2}^{(2)})}{2}-k_{3}(w_{1}^{(2)}\cup w_{2}^{(2)})\right] \tag{14}\]
after dropping terms that are integer multiples of \(2\pi\). Using the notation of [39, (F.8)],
\[S_{\rm anom}=\frac{2\pi}{2}\left[\sum_{I=1}^{2}p_{II}\int_{\mathcal{M}_{4}}\frac{\mathcal{P}(w_{I}^{(2)})}{2}+p_{12}\int_{\mathcal{M}_{4}}w_{1}^{(2)}\cup w_{2}^{(2)}\right]\, \tag{15}\]
we see that the above anomalies can be summarised in the following symmetric matrix \(p\):
\[p=\begin{pmatrix}k_{1}+k_{3}&-k_{3}\\ -k_{3}&k_{2}+k_{3}\end{pmatrix}\mod 2. \tag{16}\]
For \(k_{3}=1\), this is in agreement with the anomaly matrix given by [13, (4.58)] with \(N=n=2\) and \(k_{1,2}\to k_{1,2}+1\). We can decompose the \(\mathbb{Z}_{2}^{2}\) one-form symmetry of (1) into two parts: a subgroup \(\Gamma_{A}\) of \(\mathbb{Z}_{2}^{2}\) that has an 't Hooft anomaly and the anomaly free part \(\mathbb{Z}_{2}^{2}/\Gamma_{A}\). The anomalous part \(\Gamma_{A}\) is given by \(\Gamma_{A}=\mathbb{Z}_{2}^{\rm rank(p)}\), where \(\rm rank(p)\) is the rank of the matrix \(p\). The _non-anomalous_ one-form symmetry is then \(\mathbb{Z}_{2}^{\rm dim(ker\,p)}\), where \(\ker p\) denotes the kernel (nullspace) of the matrix \(p\). According to [13, (4.44)], it was proposed that theory (1) flows to an IR theory that splits into two subsectors:

1. the minimal abelian TQFT [39]:
\[\mathcal{A}^{\Gamma_{A},p}=\begin{cases}\mathcal{A}^{2,1}\cong\mathrm{SU}(2)_{1}\cong\mathrm{U}(1)_{2}&\text{if $\mathrm{rank}(p)=1$}\\ \mathcal{A}^{\{2,2\},p}\text{ of [39, Appendix F]}&\text{if $\mathrm{rank}(p)=2$}\end{cases}\ ; \tag{17}\]
2. the subsector (1)\({}_{\rm AF}\), which can be an interacting SCFT or another TQFT, whose one-form symmetry is 't Hooft anomaly free.

The above statement can be summarised as
\[(1)=(1)_{\rm AF}\otimes\mathcal{A}^{\Gamma_{A},p}. \tag{18}\]
This relation can be inverted as in [39, (1.13)] and [13, (4.45)], namely
\[(1)_{\rm AF}=\frac{(1)\otimes\mathcal{A}^{\Gamma_{A},-p}}{\Gamma_{A}}. \tag{19}\]

### 2.2 Cases that satisfy the ATT condition

We consider theories with CS levels satisfying (6).

#### 2.2.1 Special case of \((k_{1},k_{2},k_{3})=(-k,2k,2k)\)

These CS levels satisfy the ATT condition. Theory (1) with these CS levels, namely
\[[\text{quiver diagram: theory (1) with CS levels }(-k,2k,2k)] \tag{2.20}\]
can also be regarded as the \(\mathrm{USp}(2)_{-k}\times\mathrm{Spin}(4)_{2k}\) gauge theory with a bifundamental half-hypermultiplet in the representation \([\mathbf{2};\mathbf{4}]\), whose quiver diagram is
\[[\text{quiver diagram: }\mathrm{USp}(2)_{-k}-\mathrm{Spin}(4)_{2k}\text{ with a bifundamental half-hypermultiplet}] \tag{2.21}\]
The equivalence of these two theories is due to the fact that \(\mathrm{Spin}(4)\cong\mathrm{SU}(2)\times\mathrm{SU}(2)\) and that the vector representation \([\mathbf{4}]\) of \(\mathrm{Spin}(4)\) is equivalent to the representation \([\mathbf{2};\mathbf{2}]\) of \(\mathrm{SU}(2)\times\mathrm{SU}(2)\). The index of this theory can be derived as in [33; 35].
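Before turning to the index, here is a small Python sketch (illustrative, not from the paper) of the anomaly analysis of Section 2.1: it builds the matrix \(p\) of (16) mod 2, computes its rank and kernel dimension over \(\mathbb{Z}_2\), and thus reads off \(\Gamma_A\) and the non-anomalous one-form symmetry. For \((k_1,k_2,k_3)=(-k,2k,2k)\) it reproduces the even/odd-\(k\) pattern summarised in (2.30) below:

```python
import numpy as np

def anomaly_data(k1, k2, k3):
    """p of (16) mod 2; Gamma_A = Z_2^rank(p); non-anomalous part = Z_2^(2-rank)."""
    p = np.array([[k1 + k3, -k3], [-k3, k2 + k3]]) % 2
    if not p.any():
        rank = 0                                    # p = 0: fully non-anomalous
    elif int(round(np.linalg.det(p))) % 2 == 1:
        rank = 2                                    # det p = 1 over Z_2
    else:
        rank = 1
    return p, rank, 2 - rank

for k in range(1, 6):
    p, rank, ker = anomaly_data(-k, 2*k, 2*k)
    print(f"k={k}: p={p.tolist()}, Gamma_A=Z2^{rank}, non-anomalous=Z2^{ker}")
# k even: rank 0 (Z2 x Z2 non-anomalous); k odd: rank 1, decoupled TQFT A^{2,1}
```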
We first compute the index for the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) gauge theory with the same matter content, namely
\[[\text{quiver diagram: }\mathrm{USp}(2)_{-k}-\mathrm{SO}(4)_{2k}\text{ with a bifundamental half-hypermultiplet}] \tag{2.22}\]
The index of this theory is
\[\begin{split}&\mathcal{I}_{(2.22)}(\zeta,a;x)\\ &=\frac{1}{8}\sum_{(\mathfrak{m}_{1},\mathfrak{m}_{2})\in\mathbb{Z}^{2}}\,\sum_{\mathfrak{n}\in\mathbb{Z}}\left(\prod_{j=1}^{2}\oint\frac{dv_{j}}{2\pi iv_{j}}\,v_{j}^{2k\mathfrak{m}_{j}}\right)\zeta^{\mathfrak{m}_{1}+\mathfrak{m}_{2}}\oint\frac{du}{2\pi iu}\,u^{-2k\mathfrak{n}}\times\\ &\mathcal{Z}_{\mathrm{vec}}^{\mathrm{SO}(4)}(v_{1},v_{2};\mathfrak{m}_{1},\mathfrak{m}_{2};x)\,\mathcal{Z}_{\mathrm{vec}}^{\mathrm{USp}(2)}(u;\mathfrak{n};x)\times\\ &\prod_{i=1}^{2}\prod_{s_{1},s_{2}=\pm 1}\mathcal{Z}_{\chi}^{1/2}\left(v_{i}^{s_{1}}u^{s_{2}}a;s_{1}\mathfrak{m}_{i}+s_{2}\mathfrak{n};x\right)\,\end{split} \tag{2.23}\]
where \(\zeta\) is the fugacity for the zero-form magnetic symmetry such that \(\zeta^{2}=1\), and the contribution of the \(\mathrm{SO}(4)\) vector multiplet is
\[\begin{split}\mathcal{Z}_{\mathrm{vec}}^{\mathrm{SO}(4)}(v_{1},v_{2};\mathfrak{m}_{1},\mathfrak{m}_{2};x)&=x^{-|\mathfrak{m}_{1}-\mathfrak{m}_{2}|-|\mathfrak{m}_{1}+\mathfrak{m}_{2}|}\times\\ &\prod_{s_{1},s_{2}=\pm 1}\left(1-(-1)^{s_{1}\mathfrak{m}_{1}+s_{2}\mathfrak{m}_{2}}x^{|s_{1}\mathfrak{m}_{1}+s_{2}\mathfrak{m}_{2}|}v_{1}^{s_{1}}v_{2}^{s_{2}}\right)\,.\end{split} \tag{2.24}\]
For simplicity, we have set the fugacity \(\chi\) for the charge conjugation symmetry to unity. Theory (2.21) can be obtained by gauging the magnetic symmetry; its index reads
\[{\cal I}_{(2.21)}(a;x)=\frac{1}{2}\left[{\cal I}_{(2.22)}(\zeta=1,a;x)+{\cal I}_{(2.22)}(\zeta=-1,a;x)\right]. \tag{2.25}\]
It can be checked that (7) and (2.25) yield the same result for \((k_{1},k_{2},k_{3})=(-k,2k,2k)\). In fact, the gauge fugacities and magnetic fluxes in (7) and (2.23) can be mapped to each other as follows:
\[\begin{array}{lll}z_{1}=u\,,&z_{2}^{2}=v_{1}v_{2}\,,&z_{3}^{2}=v_{1}v_{2}^{-1}\,,\\ m_{1}=\mathfrak{n}\,,&2m_{2}=\mathfrak{m}_{1}+\mathfrak{m}_{2}\,,&2m_{3}=\mathfrak{m}_{1}-\mathfrak{m}_{2}\,.\end{array} \tag{2.26}\]
The indices up to order \(x^{4}\) are as follows:

\begin{tabular}{|c|c|}
\hline
\(k\) & Index \\
\hline
1, 2 & diverges \\
3 & \(1+0x+(2a^{4}-1)x^{2}+(a^{6}-a^{2}+a^{-2})\,x^{3}+(3a^{8}-a^{4}-2)\,x^{4}+\ldots\) \\
4 & \(1+0x+(a^{4}-1)x^{2}+a^{-2}x^{3}+(2a^{8}-2)x^{4}+\ldots\) \\
\(\geq\,5\) & \(1+0x+(a^{4}-1)x^{2}+a^{-2}x^{3}+(a^{8}-2)x^{4}+\ldots\) \\
\hline
\end{tabular} (2.27)

Note that, for \(k\geq 5\), the indices for these cases differ from each other at a higher order than \(x^{4}\). For \(k=1,\,2\), the index diverges and the theories are _bad_ in the sense of Gaiotto and Witten [46]. Let us now analyse the superconformal multiplets and the enhancement of supersymmetry for the cases of \(k\geq 3\), using information from [47; 48; 49] (see also the argument in [34; 50]). The vanishing coefficient of \(x\) implies that there is no 3d \({\cal N}=3\) flavour current multiplet \(B_{1}[0]_{1}^{(2)}\). In general, the negative terms at order \(x^{2}\) receive contributions from the \({\cal N}=3\) flavour current multiplet \(B_{1}[0]_{1}^{(2)}\) and the \({\cal N}=3\) extra SUSY-current multiplet \(A_{2}[0]_{1}^{(0)}\).
Since the former is absent and the only negative term at order \(x^{2}\) is \(-1\), we conclude that there is precisely one \({\cal N}=3\) extra SUSY-current multiplet, which leads to the enhanced \({\cal N}=4\) supersymmetry in the IR. As can be read off from the index, the bare monopole operator with magnetic fluxes \((m_{1},m_{2},m_{3})\) has dimension \[\Delta(m_{1},m_{2},m_{3})=\frac{1}{4}\sum_{s_{1},s_{2},s_{3}=\pm 1}\left|\sum_{i=1}^{3}s_{i}m_{i}\right|-\sum_{i=1}^{3}2|m_{i}|\, \tag{2.28}\] its charge under the \(\mathrm{U}(1)_{a}\) flavour symmetry is \[a(m_{1},m_{2},m_{3})=-\frac{1}{2}\sum_{s_{1},s_{2},s_{3}=\pm 1}\left|\sum_{i=1}^{3}s_{i}m_{i}\right| \tag{2.29}\] and it carries charge \(2k_{i}m_{i}\) under the Cartan subalgebra of each \(\mathrm{SU}(2)_{i}\) gauge factor. Viewing the theory in question from the 3d \(\mathcal{N}=2\) perspective, the \(\mathrm{U}(1)_{a}\) flavour current multiplet also contributes \(-1\) at order \(x^{2}\) in the index. As a consequence, the aforementioned \(\mathcal{N}=3\) extra SUSY-current should be identified with that of the flavour symmetry. Furthermore, we see that there is no relevant operator, due to the vanishing coefficient of \(x\). For \(k\geq 4\), there is one marginal operator corresponding to \(\mathrm{tr}(\mu_{1}^{2})=\mathrm{tr}(\mu_{2}^{2})=\mathrm{tr}(\mu_{3}^{2})\), where \(\mu_{i}\) is the moment map of the \(\mathrm{SU}(2)_{i}\) flavour symmetry of the \(T_{2}\) theory. For \(k=3\), we have an extra marginal operator which, according to (7), receives contributions from the eight gauge magnetic fluxes \((m_{1},m_{2},m_{3})=(\pm 2,\pm 1,\pm 1)\), where \(\pm\) here denotes all 8 possible sign combinations. Since the bare (non-gauge-invariant) monopole operators with such fluxes contribute \(z_{1}^{\mp 12}z_{2}^{\pm 12}z_{3}^{\pm 12}\), \(x^{-4}\) and \(a^{-8}\) to the index, we interpret the aforementioned marginal operator as a dressed monopole operator, where such a bare monopole operator is dressed in a gauge-invariant way with a combination of 12 chiral fields of the \(T_{2}\) theory.

#### One-form symmetries and gauging thereof

We now examine the one-form symmetry of (1) with \((k_{1},k_{2},k_{3})=(-k,2k,2k)\). From the discussion in Section 2.1, we see that

\[\begin{array}{|c|c|c|c|c|}\hline\text{CS levels}&\text{Anomaly}&\text{Non-anomalous}&\text{Anomalous}&\text{TQFT with}\\ (-k,2k,2k)&\text{matrix }p&1\text{-form symmetry}&1\text{-form symmetry}&\text{anom. symmetry}\\ \hline k\text{ even}&\left(\begin{array}{c}0\ 0\\ 0\ 0\end{array}\right)&\mathbb{Z}_{2}^{2}&\mathbf{1}&-\\ \hline k\text{ odd}&\left(\begin{array}{c}1\ 1\\ 1\ 1\end{array}\right)&\mathbb{Z}_{2}&\mathbb{Z}_{2}&\mathcal{A}^{2,1}\cong\mathrm{SU}(2)_{1}\cong\mathrm{U}(1)_{2}\\ \hline\end{array} \tag{30}\]

For \(k\) even, theory (20) flows to an \(\mathcal{N}=4\) SCFT with a non-anomalous \(\mathbb{Z}_{2}^{2}\) one-form symmetry. However, for \(k\) odd, theory (20) flows to an \(\mathcal{N}=4\) SCFT with a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry and a decoupled topological sector \(\mathcal{A}^{2,1}\). We can understand the above statement from another point of view. A slight modification of [51, (3.27)] (see also [51, (3.18), (3.19)]) states that the \(\mathrm{USp}(2N)_{-k}\times\mathrm{SO}(2M)_{2k}\) gauge theory with bifundamental matter admits a quotient by a \(\mathbb{Z}_{2}\) symmetry, whose generator is a combination of the \(\mathbb{Z}_{2}\) centres of \(\mathrm{SO}(2M)\) and \(\mathrm{USp}(2N)\), if3 \[\frac{1}{2}k(M-N)\in\mathbb{Z}\. \tag{31}\] Footnote 3: More generally, for the \(\mathrm{USp}(2N)_{k_{2}}\times\mathrm{SO}(2M)_{k_{1}}\) gauge theory with bifundamental matter, this condition reads \(\frac{1}{4}k_{1}M+\frac{1}{2}k_{2}N\in\mathbb{Z}\).
In other words, the \(\mathbb{Z}_{2}\) one-form symmetry of the \(\mathrm{USp}(2N)_{-k}\times\mathrm{SO}(2M)_{2k}\) gauge theory is non-anomalous if (31) is satisfied. Applying this to (22), namely \(M=2\) and \(N=1\), we see that its non-anomalous one-form symmetry is \[\text{one-form symmetry of (\ref{2.22})}=\begin{cases}\mathbb{Z}_{2}&\qquad k\text{ even}\\ \text{trivial}&\qquad k\text{ odd}\end{cases}. \tag{32}\] Recall that theory (21), or equivalently theory (20), arises from gauging the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry of (22). Since gauging a discrete \(\mathbb{Z}_{2}\) zero-form symmetry in 3d leads to a dual \(\mathbb{Z}_{2}\) one-form symmetry, we conclude that, for \(k\) odd, theory (21) = (20) has a \(\mathbb{Z}_{2}\) one-form symmetry. For \(k\) even, the one-form symmetry of theory (21) = (20) can be either \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) or its extension \(\mathbb{Z}_{4}\). Note that the extension is formed if theory (22), with \(k\) even, has a mixed anomaly between the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry and the \(\mathbb{Z}_{2}\) one-form symmetry [52]. Subsequently, we will explicitly show that there is no extension of the symmetry to \(\mathbb{Z}_{4}\). For this purpose, let us gauge the _whole_ one-form symmetry of theory (21), i.e. we turn it into a dual zero-form symmetry. This is equivalent to gauging the \(\mathbb{Z}_{2}\) one-form symmetry in (22), namely considering the \([\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}]/\mathbb{Z}_{2}\) theory. The index of the latter can be computed as in (23), but with the inclusion of the summation over half-odd-integral fluxes; in particular, we modify the summation in (23) as follows: \[\sum_{(\mathfrak{m}_{1},\mathfrak{m}_{2})\in\mathbb{Z}^{2}}\;\sum_{\mathfrak{n}\in\mathbb{Z}}\;\;\;\longrightarrow\;\;\;\sum_{p^{\prime}=0}^{1}s^{p^{\prime}}\sum_{(\mathfrak{m}_{1},\mathfrak{m}_{2})\in\left(\mathbb{Z}+\frac{p^{\prime}}{2}\right)^{2}}\sum_{\mathfrak{n}\in\mathbb{Z}+\frac{p^{\prime}}{2}}\,, \tag{33}\] where \(s\) is a fugacity for the \(\mathbb{Z}_{2}\) zero-form symmetry arising from gauging the one-form symmetry, such that \(s^{2}=1\). For \(k\) odd, we see that the half-odd-integral fluxes (i.e. those corresponding to \(p^{\prime}=1\)) do not contribute to the index; in other words, the index of the \([\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}]/\mathbb{Z}_{2}\) theory is equal to that of the \(\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}\) theory. This is in agreement with the proposal that the non-anomalous one-form symmetry of (22) for \(k\) odd is trivial.4 Let us now focus on \(k\) even. We see that for \(p^{\prime}=0\) the contribution from \(\zeta^{\mathfrak{m}_{1}+\mathfrak{m}_{2}}\) is either \(1\) or \(\zeta\), whereas for \(p^{\prime}=1\) we have either \(s\) or \(s\zeta\). The elements of \(\{1,\zeta,s,s\zeta\}\) form the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) zero-form symmetry. Observe that the \(\mathbb{Z}_{2}\) zero-form symmetry associated with \(s\) and the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry associated with \(\zeta\) do not form an extension to \(\mathbb{Z}_{4}\), since there is no element of order \(4\). Footnote 4: One can also check, in the same way as in [36], that the integrand of the index contains odd powers of the \(\text{SU}(2)\) gauge fugacities for the magnetic fluxes in the \(p^{\prime}=1\) sector. We can also see this from the perspective of theory (20). In order to gauge the whole one-form symmetry, we can proceed in two steps.
First, gauge the diagonal \(\mathbb{Z}_{2}\) one-form symmetry, whose generator is a combination of the \(\mathbb{Z}_{2}\) centres of the two \(\text{SU}(2)_{2k}\) factors; in other words, we consider the quotient \(\text{SU}(2)_{-k}\times[\text{SU}(2)_{2k}\times\text{SU}(2)_{2k}]/\mathbb{Z}_{2}\). The latter is equivalent to (2.22), namely the \(\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}\) gauge theory, since \(\text{SO}(4)\cong[\text{SU}(2)\times\text{SU}(2)]/\mathbb{Z}_{2}\). We report the indices up to order \(x^{4}\) below. \begin{tabular}{|c|c|} \hline \(k\) & Index \\ \hline 1, 2 & diverges \\ \hline 3 & \(1+\zeta a^{2}x+[(2+\zeta)a^{4}-(1+\zeta)]\,x^{2}+[(1+2\zeta)a^{6}-(1+\zeta)a^{2}+a^{-2}]\,x^{3}+\) \\ & \([(3+2\zeta)a^{8}-(1+\zeta)a^{4}-2]\,x^{4}+\ldots\) \\ \hline 4 & \(1+0x+[(1+\zeta)a^{4}-1]\,x^{2}+[\zeta(a^{6}-a^{2})+a^{-2}]\,x^{3}+\) \\ & \([(2+\zeta)a^{8}-\zeta a^{4}-2]\,x^{4}+\ldots\) \\ \hline 5 & \(1+0x+(a^{4}-1)\,x^{2}+(\zeta a^{6}+a^{-2})\,x^{3}+[(1+\zeta)a^{8}-\zeta a^{4}-2]\,x^{4}+\ldots\) \\ \hline \(\geq\,6\) & \(1+0x+(a^{4}-1)x^{2}+a^{-2}x^{3}+[(1+\zeta)a^{8}-2]\,x^{4}+\ldots\) \\ \hline \end{tabular} (2.34) For \(k\geq 6\), the terms with fugacity \(\zeta\) appear at a higher order than \(x^{4}\). The second step is to gauge the remaining \(\mathbb{Z}_{2}\) one-form symmetry, i.e. we consider the further quotient \([\text{SU}(2)_{-k}\times[\text{SU}(2)_{2k}\times\text{SU}(2)_{2k}]/\mathbb{Z}_{2}]/\mathbb{Z}_{2}\). The index of the latter can be obtained from (2.7) by replacing the summation as \[\sum_{(m_{1},m_{2},m_{3})\in\mathbb{Z}^{3}}\quad\longrightarrow\quad\sum_{p^{\prime}=0}^{1}s^{p^{\prime}}\,\sum_{m_{1}\in\left(\mathbb{Z}+\frac{p^{\prime}}{2}\right)}\,\,\,\sum_{p=0}^{1}\zeta^{p}\,\sum_{(m_{2},m_{3})\in\left(\mathbb{Z}+\frac{p}{2}\right)\times\left(\mathbb{Z}+\frac{p}{2}+\frac{p^{\prime}}{2}\right)}\,, \tag{2.35}\] where \(\zeta\) is the fugacity of the \(\mathbb{Z}_{2}\) zero-form symmetry arising from the first step and \(s\) is that arising from the second step.5 We deliberately used the same notation as for the \([\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}]/\mathbb{Z}_{2}\) theory, since they can be identified with each other. As before, the elements of \(\{1,\zeta,s,s\zeta\}\) form the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) zero-form symmetry. Moreover, it is clear from (2.35) that the order of gauging the \(\mathbb{Z}_{2}\) one-form symmetry in each of the two steps is immaterial; this confirms that the one-form symmetry of (2.20) is indeed \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) for \(k\) even. Footnote 5: Observe that we have four mutually exclusive cases, namely: (1) \(p=0\) and \(p^{\prime}=0\): \(m_{1},m_{2},m_{3}\) are integral and we have \(\zeta^{0}s^{0}=1\); (2) \(p=1\) and \(p^{\prime}=0\): \(m_{1}\) is integral, \(m_{2},m_{3}\) are half-odd-integral and we have \(\zeta^{1}s^{0}=\zeta\); (3) \(p=0\) and \(p^{\prime}=1\): \(m_{1}\) is half-odd-integral, \(m_{2}\) is integral, \(m_{3}\) is half-odd-integral and we have \(\zeta^{0}s^{1}=s\); (4) \(p=1\) and \(p^{\prime}=1\): \(m_{1}\) is half-odd-integral, \(m_{2}\) is half-odd-integral, \(m_{3}\) is integral and we have \(\zeta s\).
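The bookkeeping of the four flux sectors in (2.35) is easy to mechanise. Below is a small sketch that, for each \((p,p^{\prime})\), lists the coset in which each flux lives and the fugacity weight attached to the sector; the helper `sector` is illustrative and mirrors Footnote 5.

```python
# A small sketch enumerating the four flux sectors in the double quotient (2.35):
# for each (p, p') in {0,1}^2 we list the coset of (m1, m2, m3) and the
# fugacity weight zeta^p * s^p' attached to that sector (cf. Footnote 5).

from itertools import product

def sector(p, pp):
    offsets = (pp % 2, p % 2, (p + pp) % 2)   # m_i lives in Z + offset_i/2
    weight = ("zeta" if p else "") + ("s" if pp else "")
    return offsets, weight or "1"

for p, pp in product((0, 1), repeat=2):
    offsets, weight = sector(p, pp)
    desc = ", ".join(f"m{i+1} in Z+{c}/2" for i, c in enumerate(offsets))
    print(f"(p, p') = ({p}, {pp}):  {desc}  -> weight {weight}")
# The four weights {1, zeta, s, zeta*s} realise the Z2 x Z2 zero-form symmetry;
# since no element has order 4, there is no extension to Z4.
```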
For reference, we report the index for the \([\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}]/\mathbb{Z}_{2}\) theory, or the \([{\rm SU}(2)_{-k}\times[{\rm SU}(2)_{2k}\times{\rm SU}(2)_{2k}]/\mathbb{Z}_{2}]/\mathbb{Z}_{2}\) theory, with \(k=4\), up to order \(x^{9}\) as follows: \[\begin{split} 1+&(-1+a^{4}+a^{4}\zeta)x^{2}+(a^{-2}-a^{2}\zeta+a^{6}\zeta)x^{3}+(-2+2a^{8}-a^{4}\zeta+a^{8}\zeta)x^{4}+\\ &(-a^{6}+a^{10}+a^{10}\zeta)x^{5}+(1+a^{-4}-a^{4}-a^{8}+2a^{12}-2a^{4}\zeta+2a^{12}\zeta)x^{6}+\\ &(-2a^{-2}-a^{6}+a^{14}+2a^{2}\zeta-2a^{6}\zeta-a^{10}\zeta+2a^{14}\zeta)x^{7}+\\ &(-a^{4}-2a^{8}+3a^{16}-s+a^{4}s-\zeta+a^{4}\zeta-a^{12}\zeta+2a^{16}\zeta-s\zeta+a^{4}s\zeta)x^{8}+\\ &(a^{-6}+4a^{2}+2a^{6}-a^{10}-a^{14}+2a^{18}+\\ &2a^{-2}s-2a^{2}s-5a^{6}\zeta-a^{10}\zeta+2a^{18}\zeta+2a^{-2}s\zeta-2a^{2}s\zeta)x^{9}+\ldots\.\end{split} \tag{36}\] On the other hand, for \(k\) odd, we find that the magnetic fluxes in the sector \(p^{\prime}=1\) contribute zero to the index, and so the fugacity \(s\) does not appear.

#### Higgs and Coulomb branches

Let us now explore the Higgs and Coulomb branches of the IR 3d \(\mathcal{N}=4\) SCFT associated with this class of theories. We can take the Higgs and Coulomb branch limit of the index in a similar way as in [41] to obtain the Higgs and Coulomb branch Hilbert series as follows. We define \[\begin{split}& h=xa^{2}\,\ c=xa^{-2}\,\\ &\text{or equivalently}\quad x=(hc)^{1/2}\,\ a=(h/c)^{1/4}\end{split} \tag{37}\] and substitute them in the index (7). In the Higgs branch limit we send \(c\to 0\) and keep \(h\) fixed, whereas in the Coulomb branch limit we send \(h\to 0\) and keep \(c\) fixed. Let us now apply this to (1) with CS levels \((-k,2k,2k)\). We obtain the Higgs and Coulomb branch limits of (7) to be \[\begin{split}&\text{Higgs limit }(1)_{(-k,2k,2k)}:\qquad\text{PE}\left[h^{2}+h^{2k-4}+h^{2k-3}-h^{4k-6}\right]\,\\ &\text{Coulomb limit }(1)_{(-k,2k,2k)}:\qquad 1\.\end{split} \tag{38}\] The former is indeed the Hilbert series of \(\mathbb{C}^{2}/\widehat{D}_{2k-2}\) [42, (5.31)]. These results indicate that the Higgs and Coulomb branches of the IR \(\mathcal{N}=4\) interacting SCFT associated with (1) with CS levels \((-k,2k,2k)\) are \(\mathbb{C}^{2}/\widehat{D}_{2k-2}\) and trivial, respectively. The generators of the Higgs branch, corresponding to \(h^{2}\), \(h^{2k-4}\) and \(h^{2k-3}\), are respectively \[w=\text{tr}(\mu^{2})\,\quad v=X_{(2,1,1)}Q^{4k}\,\quad u=X_{(2,1,1)}Q^{4k-4}\mu_{1}\mu_{2}\mu_{3}\, \tag{39}\] satisfying the relation \[u^{2}+v^{2}w=w^{2k-3}. \tag{40}\] Here \(X_{(2,1,1)}\) denotes the bare monopole operator of flux \((2,1,1)\); it has dimension \(-4\) and carries flavour charge \(-8\), as well as gauge charges \((-4k,4k,4k)\) under the Cartan subalgebras of each \(SU(2)_{i}\); see around (28). In the above, \(v=X_{(2,1,1)}Q^{4k}\) denotes the gauge-invariant dressed monopole operator, where the bare monopoles with fluxes \((\pm 2,\pm 1,\pm 1)\) are dressed with appropriate combinations of \(4k\) chiral multiplets \(Q\) of the \(T_{2}\) theory. Note that \(X_{(2,1,1)}\) contains \(4k\) gauge indices of each \(\mathrm{SU}(2)_{i}\), and these are contracted with the gauge indices in \(Q^{4k}\) to form \(v\). Similarly, for \(u\), the gauge indices of \(X_{(2,1,1)}\) are contracted with those of \(Q^{4k-4}\) as well as \(\mu_{1}\mu_{2}\mu_{3}\), and the remaining indices are contracted with epsilon tensors. Let us revisit the special case of \(k=2\), i.e. the CS levels \((-2,4,4)\).
Although the index diverges, the above computation shows that the Higgs branch is \(\mathbb{C}^{2}/\widehat{D}_{2}\). This is reminiscent of the Higgs branch of the 3d \(\mathcal{N}=4\) \(\mathrm{SU}(2)\) gauge theory with 2 hypermultiplets in the fundamental representation. The latter is the union of two isomorphic hyperKähler cones, each described by \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) [46, 53, 54, 55]. We believe that the Higgs branch of the case of \(k=2\) has the same structure. We can also study the Higgs and Coulomb branch limits of the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) gauge theory, or equivalently \(\mathrm{SU}(2)_{-k}\times[\mathrm{SU}(2)_{2k}\times\mathrm{SU}(2)_{2k}]/\mathbb{Z}_{2}\), which comes from gauging a non-anomalous \(\mathbb{Z}_{2}\) subgroup of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry of the aforementioned theory. We find that the Higgs and Coulomb branch limits are6 Footnote 6: Throughout the paper, we put superscript [1] whenever we would like to emphasise that the corresponding symmetry is one-form. \[\begin{split}\text{Higgs limit }(1)_{(-k,2k,2k)}/\mathbb{Z}_{2}^{[1]}\text{:}&\qquad\text{PE}\left[h^{2}+h^{k-2}+h^{k-1}-h^{2k-2}\right]\,\\ \text{Coulomb limit }(1)_{(-k,2k,2k)}/\mathbb{Z}_{2}^{[1]}\text{:}&\qquad 1\.\end{split} \tag{41}\] The former is indeed the Hilbert series of \(\mathbb{C}^{2}/\widehat{D}_{k}\) [42, (5.31)]. This means that the Higgs branch is \(\mathbb{C}^{2}/\widehat{D}_{k}\), and the Coulomb branch is trivial. The generators of the Higgs branch, corresponding to \(h^{2}\), \(h^{k-2}\) and \(h^{k-1}\), are respectively \[w=\text{tr}(\mu^{2})\,\quad v=X_{\left(1,\frac{1}{2},\frac{1}{2}\right)}Q^{2k}\,\quad u=X_{\left(1,\frac{1}{2},\frac{1}{2}\right)}Q^{2k-4}\mu_{1}\mu_{2}\mu_{3}\, \tag{42}\] satisfying the relation \[u^{2}+v^{2}w=w^{k-1}. \tag{43}\] Here \(X_{\left(1,\frac{1}{2},\frac{1}{2}\right)}\) denotes the bare monopole operator of flux \(\left(1,\frac{1}{2},\frac{1}{2}\right)\); it has dimension \(-2\) and carries flavour charge \(-4\), as well as gauge charges \((-2k,2k,2k)\) under the Cartan subalgebras of each \(SU(2)_{i}\). The notations and contractions of the gauge indices are as described above.

#### 2.2.2 General results for \((k_{1},k_{2},k_{3})=k(\mathfrak{p}\mathfrak{q},-\mathfrak{p}\mathfrak{r},-\mathfrak{q}\mathfrak{r})\) with \(\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\)

As pointed out in [14, Footnote 7], the ATT condition (2.6) admits a general solution of the form \[\begin{split}& k_{1}=\mathfrak{p}\mathfrak{q}k\,\quad k_{2}=-\mathfrak{p}\mathfrak{r}k\,\quad k_{3}=-\mathfrak{q}\mathfrak{r}k\,\\ &\text{with }\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\text{ and }\mathfrak{p},\,\mathfrak{q},\,\mathfrak{r},\,k\in\mathbb{Z}_{\neq 0}\.\end{split} \tag{2.44}\] For simplicity, we will consider the cases of \(\mathfrak{p}>0\), \(\mathfrak{q}>0\) and \(k>0\). The index (2.7) receives non-trivial contributions from gauge fluxes \((0,0,0)\) and \((\pm\mathfrak{r},\pm\mathfrak{q},\pm\mathfrak{p})n\) with \(n\in\mathbb{Z}_{\geq 1}\). The contribution from flux \((0,0,0)\), up to order \(x^{4}\), reads \[1+0x+(a^{4}-1)x^{2}+a^{-2}x^{3}+(a^{8}-2)x^{4}+\ldots. \tag{2.45}\] The contributions from the eight fluxes \((\pm\mathfrak{r},\pm\mathfrak{q},\pm\mathfrak{p})n\) correspond to the gauge-invariant dressed monopole operators. According to the discussion around (2.28), the bare monopole operators associated with these fluxes have dimension \(-2\mathfrak{r}n\).
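This dimension is straightforward to verify numerically from (2.28). The following is a quick sketch; the values of \(\mathfrak{p}\), \(\mathfrak{q}\) and \(n\) are illustrative choices of ours.

```python
# A quick numerical check, using (2.28), that the bare monopole of flux
# (r, q, p)n has dimension -2rn when r = p + q.

from itertools import product

def bare_dim(m1, m2, m3):
    # Delta = (1/4) sum_{s_i = +-1} |s1 m1 + s2 m2 + s3 m3| - 2(|m1|+|m2|+|m3|)
    tot = sum(abs(s1*m1 + s2*m2 + s3*m3)
              for s1, s2, s3 in product((1, -1), repeat=3))
    return tot / 4 - 2 * (abs(m1) + abs(m2) + abs(m3))

for p, q, n in [(1, 1, 1), (1, 2, 1), (2, 3, 2)]:
    r = p + q
    assert bare_dim(r*n, q*n, p*n) == -2*r*n
    print(f"p={p}, q={q}, n={n}: Delta = {bare_dim(r*n, q*n, p*n)} = -2*{r}*{n}")
```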
The charge under the flavour symmetry is \(-4\mathfrak{r}n\), and the charges under the Cartan subalgebra of each \(\mathrm{SU}(2)_{i}\) are \(2k\mathfrak{p}\mathfrak{q}\mathfrak{r}n(\pm 1,\mp 1,\mp 1)\). This bare monopole can be dressed with a combination of \(2k\mathfrak{p}\mathfrak{q}\mathfrak{r}n\) chiral fields of the \(T_{2}\) theory to form gauge-invariant quantities. As a consequence, such gauge-invariant dressed monopole operators have dimension \((-2+k\mathfrak{p}\mathfrak{q})\mathfrak{r}n\) and charge \((-4+2k\mathfrak{p}\mathfrak{q})\mathfrak{r}n\) under the flavour symmetry. Indeed, if \((-2+k\mathfrak{p}\mathfrak{q})\mathfrak{r}n\) is sufficiently large, the index (2.45) at sufficiently low order in \(x\) is not affected by these dressed monopole operators. In any case, using the same argument as above, we see from (2.45) that the SCFT in the IR has enhanced \(\mathcal{N}=4\) supersymmetry, regardless of the contribution of the dressed monopole operators. In the previous example of \((-k,2k,2k)\), corresponding to \(\mathfrak{p}=1\), \(\mathfrak{q}=1\) and \(\mathfrak{r}=2\), we see that the dimension of such gauge-invariant dressed monopole operators is \((2k-4)n\) and the charge under the flavour symmetry is \((4k-8)n\). Indeed, for \(k=3\), in addition to (2.45), we have the terms \(a^{4}x^{2}\) and \(a^{8}x^{4}\) coming from \(n=1\) and \(n=2\) (up to order \(x^{4}\)), as reported in (2.27). Similarly, for \(k=4\), we have an additional term \(a^{8}x^{4}\) coming from \(n=1\); see (2.27). For \(k=1\) and \(k=2\), the dimensions are negative (or zero), and this is why the indices diverge, as reported in (2.27).

#### One-form symmetries

Let us now discuss the one-form symmetries as well as the topological sector in the IR. From the discussion in Section 2.1, we see that if \(k\) is even, then all of the CS levels \((k_{1},k_{2},k_{3})\) are even, and it follows that the anomaly action (2.14) is an integral multiple of \(2\pi\), which means that the corresponding anomaly theory is trivial. We thus turn to the case in which \(k\) is odd, where the anomaly theory is the same as that for \(k=1\). We summarise the information in the table below.
\[\begin{array}{|c|c|c|c|c|}\hline\text{CS levels}&\text{Anomaly}&\text{Non-anomalous}&\text{Anomalous}&\text{TQFT with}\\ k(\mathfrak{p}\mathfrak{q},-\mathfrak{p}\mathfrak{r},-\mathfrak{q}\mathfrak{r})&\text{matrix }p&1\text{-form symmetry}&1\text{-form symmetry}&\text{anom. symmetry}\\ \hline k\text{ even}&\left(\begin{array}{c}0\ 0\\ 0\ 0\end{array}\right)&\mathbb{Z}_{2}^{2}&\mathbf{1}&-\\ \hline k\text{ odd}&\text{as for }k=1&&&\\ \hline\end{array} \tag{46}\]

### Cases that do not satisfy the ATT condition

When the ATT condition (6) is not satisfied, the first term of the superpotential does not vanish. The \(\mathrm{U}(1)_{a}\) flavour symmetry associated with the fugacity \(a\) is explicitly broken, and the marginal operator \(\text{tr}(\mu^{2})\) is set to zero in the chiral ring. We will consider three interesting families of theories arising from M5-branes compactified on the quotient of the three-sphere \(S^{3}/\Gamma\), where \(\Gamma\) is a finite subgroup of SU(2). The CS levels for each of these families are as follows [13, Section 5.4.3]:

* Lens space7 \(L(p,q)\) with8 \(\frac{p}{q}=(k_{1}+1)-\frac{1}{k_{2}+1}\) or \((k_{2}+1)-\frac{1}{k_{1}+1}\). Explicitly, we take \[p=|k_{1}k_{2}+k_{1}+k_{2}|\text{ and }q=\pm(k_{1}+1)\text{ or }\pm(k_{2}+1)\. \tag{50}\] This corresponds to the CS levels \((k_{1},k_{2},1)\). Footnote 7: Recall that the Lens space \(L(p,q)\) can be viewed as the quotient space \(S^{3}/\mathbb{Z}_{p}\) with the identification \((z_{1},z_{2})\sim(e^{2\pi i/p}z_{1},e^{2\pi iq/p}z_{2})\).
* \(S^{3}/D_{n}\): This corresponds to the CS levels \((-2,2,n-2)\).
* \(S^{3}/E_{m}\): This corresponds to the CS levels \((-2,3,m-3)\).

We summarise the results in each case below.

1. Let us consider the CS levels \((k_{1},k_{2},1)\). We have three cases as follows:
* If \(p=1\), i.e. the Lens space is diffeomorphic to the three-sphere, namely \(L(p=1,q)\cong S^{3}\), then the index vanishes and the IR theory is trivial.
* If \(p\neq 1\) and both choices of \(q\) in (50) satisfy either of the following conditions:9 \[\text{one of the }q\text{ is }\pm 1\ (\text{mod}\,p)\text{ and the other is divisible by }p,\text{ or both choices of }q\text{ are }\pm 1\ (\text{mod}\,p)\, \tag{51}\] then the index (7)\({}_{a=1}\) is equal to unity, and so theory (1) flows to a TQFT. Footnote 9: These conditions lead to the following identifications: \((z_{1},z_{2})\sim(e^{2\pi i/p}z_{1},e^{\pm 2\pi i/p}z_{2})\) or \((z_{1},z_{2})\sim(e^{2\pi i/p}z_{1},z_{2})\), with \(p\neq 1\).
* Otherwise, the index (7)\({}_{a=1}\) takes the form \[1+0x-x^{2}+2x^{3}+\ldots\, \tag{52}\] where \(0x\) indicates that there is no \(\mathcal{N}=3\) flavour current and \(-x^{2}\) indicates that there is one \(\mathcal{N}=3\) extra SUSY-current; therefore, each theory in this subclass flows to a **3d \(\mathcal{N}=4\) interacting SCFT**, where supersymmetry gets **enhanced** in the IR, with a decoupled TQFT.

Using the information in Section 2.1, we summarise the information about the anomalies and TQFTs below.

\[\begin{array}{|c|c|c|c|c|}\hline\text{CS levels}&\text{Anomaly}&\text{Non-anomalous}&\text{Anomalous}&\text{TQFT with}\\ (k_{1},k_{2},1)&\text{matrix }p&1\text{-form symmetry}&1\text{-form symmetry}&\text{anom. symmetry}\\ \hline\text{Both }k_{1}\text{ and }k_{2}\text{ are odd}&\left(\begin{array}{c}0\ 1\\ 1\ 0\end{array}\right)&\mathbf{1}&\mathbb{Z}_{2}^{2}&\mathcal{A}^{\{2,2\},p}\equiv(\mathcal{Z}_{2})_{0}\\ \hline k_{1}\text{ is even and }k_{2}\text{ is odd}&\left(\begin{array}{c}1\ 1\\ 1\ 0\end{array}\right)&\mathbf{1}&\mathbb{Z}_{2}^{2}&\mathcal{A}^{\{2,2\},p}\equiv(\mathcal{Z}_{2})_{2}\\ \hline\text{Both }k_{1}\text{ and }k_{2}\text{ are even}&\left(\begin{array}{c}1\ 1\\ 1\ 1\end{array}\right)&\mathbb{Z}_{2}&\mathbb{Z}_{2}&\mathcal{A}^{2,1}\cong\text{SU}(2)_{1}\cong\text{U}(1)_{2}\\ \hline\end{array} \tag{2.53}\]

Note that, in the case of \(k_{1}\) and \(k_{2}\) even, theory (2.1) flows to either
* an \(\mathcal{N}=4\) SCFT with a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry \(\otimes\) \(\mathcal{A}^{2,1}\), or
* a TQFT with a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry \(\otimes\) \(\mathcal{A}^{2,1}\),
depending on the value of \(p/q\) as discussed above.

2. Let us consider the CS levels \((-2,2,n-2)\) with \(n\geq 4\).10 The index (7)\({}_{a=1}\) is unity. This indicates that the theory flows to a TQFT in the IR. Footnote 10: The case of \(n=3\) was discussed in Case 1. \[\begin{array}{|c|c|c|c|c|}\hline\text{CS levels}&\text{Anomaly}&\text{Non-anomalous}&\text{Anomalous}&\text{TQFT with}\\ &\text{matrix }p&1\text{-form symmetry}&1\text{-form symmetry}&\text{anom. symmetry}\\ \hline n\text{ even}&\left(\begin{array}{c}0\ 0\\ 0\ 0\end{array}\right)&\mathbb{Z}_{2}^{2}&\mathbf{1}&-\\ \hline n\text{ odd}&\left(\begin{array}{c}1\ 1\\ 1\ 1\end{array}\right)&\mathbb{Z}_{2}&\mathbb{Z}_{2}&\mathcal{A}^{2,1}\cong\text{SU}(2)_{1}\cong\text{U}(1)_{2}\\ \hline\end{array} \tag{2.54}\] For \(n\) even, theory (2.1) flows to a TQFT that has a non-anomalous \(\mathbb{Z}_{2}^{2}\) one-form symmetry, whereas for \(n\) odd, it flows to a TQFT that has a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry \(\otimes\) \(\mathcal{A}^{2,1}\).

3. Let us consider the CS levels \((-2,3,m-3)\) with \(m=6,\,7,\,8\). The index (7)\({}_{a=1}\) is unity, and so theory (2.1) flows to a TQFT in the IR.
For \(m\) even, theory (1) flows to the TQFT \({\cal A}^{\{2,2\},p}\equiv({\cal Z}_{2})_{2}\), whereas for \(m\) odd, it flows to a TQFT that has a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry \(\otimes\) \({\cal A}^{2,1}\).

#### 2.3.1 Special case of \((k_{1},k_{2},k_{3})=(k,1,1)\)

For any integer \(k\), these CS levels do **not** satisfy the ATT condition.11 Let us report the indices, up to order \(x^{10}\), for the cases of \(k\geq 1\) below: Footnote 11: Other CS levels that lead to the same IR SCFTs, in accordance with Footnote 8, are, for example, \((k_{1},k_{2},k_{3})=(-k-1,1,1)\) and \((k-1,-3,1)\). We have checked that the indices of the corresponding theories are equal. However, if \(k\) is even, the topological sector for \((k,1,1)\) is \(({\cal Z}_{2})_{2}\), whereas that for \((-k-1,1,1)\) and \((k-1,-3,1)\) is \(({\cal Z}_{2})_{0}\). On the other hand, if \(k\) is odd, the situation is reversed: the topological sector for \((k,1,1)\) is \(({\cal Z}_{2})_{0}\) and that for \((-k-1,1,1)\) and \((k-1,-3,1)\) is \(({\cal Z}_{2})_{2}\). We expect that the different topological sectors (despite the diffeomorphism of the three-manifolds) arise from different choices of polarisation of the 6d \({\cal N}=(2,0)\) theory; see also [13]. \[\begin{array}{|c|c|}\hline k&\text{Index}\\ \hline 1&1\\ 2&1-x^{2}+2x^{3}-2x^{4}+2x^{5}-2x^{6}+2x^{7}-2x^{8}+2x^{10}+\ldots\\ 3&1-x^{2}+2x^{3}-2x^{4}+x^{5}+x^{8}-4x^{9}+7x^{10}+\ldots\\ \geq 4&1-x^{2}+2x^{3}-2x^{4}+x^{5}-2x^{9}+5x^{10}+\ldots\\ \hline\end{array} \tag{56}\] where for \(k\geq 4\) the indices differ from each other at a higher order than \(x^{10}\). For \(k=1\), the theory flows to the TQFT given by the first row of (53). For \(k\geq 2\), each of these theories flows to an _interacting SCFT with enhanced supersymmetry_, along with a decoupled TQFT, in the IR. The enhancement of supersymmetry from \({\cal N}=3\) to \({\cal N}=4\) can be deduced using the same argument as in the preceding subsection: due to the absence of an \({\cal N}=2\) preserving marginal operator, the term \(-x^{2}\) indicates that there is one \({\cal N}=3\) extra SUSY-current, which renders the supersymmetry enhanced. From (53), we see that the interacting SCFT does not have a non-anomalous one-form symmetry, and the decoupled TQFT has an anomalous \(\mathbb{Z}_{2}^{2}\) one-form symmetry whose anomaly is given by the first or second row of (53), depending on whether \(k\) is odd or even.

#### The case of \(k=2\)

The case of \(k=2\) is of particular importance: the SCFT in question turns out to be the (rank-zero) minimal 3d \({\cal N}=4\) SCFT discussed in [23; 25]. From (7), the index of this theory, up to order \(x^{12}\), is \[1-x^{2}+2x^{3}-2x^{4}+2x^{5}-2x^{6}+2x^{7}-2x^{8}+2x^{10}-2x^{11}+3x^{12}+\ldots. \tag{57}\] This turns out to be equal to the index of the (rank-zero) minimal 3d \({\cal N}=4\) SCFT, described by the 3d \({\cal N}=2\) \({\rm U}(1)_{-3/2}\) gauge theory with one chiral multiplet of charge 1 [23; 25], namely \[\begin{split}&\oint\frac{dz}{2\pi iz}\sum_{m\in\mathbb{Z}}w^{m}z^{-\frac{3}{2}m}\mathcal{Z}_{\chi}^{1/3}(z,m;x)\\ &=1-x^{2}+(w+w^{-1})x^{3}-2x^{4}+(w+w^{-1})x^{5}-2x^{6}+\\ &\qquad(w+w^{-1})x^{7}-2x^{8}+(w^{2}+w^{-2})x^{10}-(w+w^{-1})x^{11}+\\ &\qquad(w^{2}+1+w^{-2})x^{12}+\ldots\,\end{split} \tag{58}\] upon setting the fugacity \(w\) for the topological symmetry to 1. It was pointed out in [23] that, in the \({\rm U}(1)_{-3/2}\) CS theory, supersymmetry gets enhanced from \({\cal N}=2\) to \({\cal N}=4\) in the IR.
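The agreement of (57) and (58) at \(w=1\) can be checked mechanically from the series data displayed above; the following is a minimal sympy sketch using only those quoted coefficients.

```python
# A consistency check, assuming only the series data displayed above: setting the
# topological fugacity w -> 1 in the expansion (58) reproduces the index (57).

import sympy as sp

x, w = sp.symbols('x w')
idx_57 = (1 - x**2 + 2*x**3 - 2*x**4 + 2*x**5 - 2*x**6 + 2*x**7 - 2*x**8
          + 2*x**10 - 2*x**11 + 3*x**12)
idx_58 = (1 - x**2 + (w + 1/w)*x**3 - 2*x**4 + (w + 1/w)*x**5 - 2*x**6
          + (w + 1/w)*x**7 - 2*x**8 + (w**2 + 1/w**2)*x**10
          - (w + 1/w)*x**11 + (w**2 + 1 + 1/w**2)*x**12)

assert sp.simplify(idx_58.subs(w, 1) - idx_57) == 0
print("the two indices agree up to order x^12 at w = 1")
```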
Viewing these as \({\cal N}=2\) indices, the absence of a term at order \(x\) and of a positive term at order \(x^{2}\) implies that there is no \({\cal N}=2\) preserving relevant or marginal deformation. As pointed out in [48; 49] (see also [47]), at order \(x^{3}\) there are two \({\cal N}=2\) extra SUSY-current multiplets \(A_{1}\overline{A}_{1}[1]^{(0)}_{3/2}\) that render the supersymmetry enhancement. Here we find a _new description_, in terms of the \({\cal N}=3\) gauge theory (1) with the CS levels \((2,1,1)\), of the minimal 3d \({\cal N}=4\) SCFT, along with the decoupled TQFT described in the second row of (53). The \({\cal N}=3\) extra SUSY-current of this theory should actually be identified with the current of the topological symmetry that is manifest in the 3d \({\cal N}=2\) \({\rm U}(1)_{-3/2}\) CS theory. In fact, there is another theory that has a similar behaviour: the 3d \({\cal N}=2\) \({\rm U}(1)_{0}\) gauge theory with one chiral multiplet of charge 2. This was, in fact, mentioned in [23, (36)]. The three-sphere partition function of this theory is \(Z=\int_{-\infty}^{\infty}{\rm d}s\,\Gamma_{h}(ir+2s)\),12 where \(r\) is the R-charge of the chiral multiplet. It turns out that the value of \(|Z|\) is independent of \(r\) such that \(0\leq r<1\), and we find that the free energy is \(F\equiv-\log|Z|=-\log\sqrt{\frac{5-\sqrt{5}}{10}}+\log\sqrt{2}\). According to [25, (3.6)], the free energy of the minimal 3d \({\cal N}=4\) SCFT is \(-\log\sqrt{\frac{5-\sqrt{5}}{10}}\). The other contribution comes from the TQFT \({\cal A}^{2,1}\cong{\rm U}(1)_{2}\), whose free energy is given by \(-\log\left|\int_{-\infty}^{\infty}{\rm d}s\,e^{2\pi is^{2}}\right|=\log\sqrt{2}\). Hence, we conclude that the 3d \({\cal N}=2\) \({\rm U}(1)_{0}\) gauge theory with one chiral multiplet of charge 2 flows to the minimal 3d \({\cal N}=4\) SCFT along with the \({\cal A}^{2,1}\) TQFT. Indeed, the index of the \({\rm U}(1)_{0}\) gauge theory is given by the integral \(\oint\frac{dz}{2\pi iz}\sum_{m\in\mathbb{Z}}w^{m}\mathcal{Z}_{\chi}^{r}(z^{2},2m;x)\), and the result turns out to be equal to (58). Footnote 12: Here we use the same convention as [32, Section 5.1] and turn off the FI and mass parameters.

#### Description in terms of the \(\mathrm{Spin}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory

Theory (1) with the CS levels \((k,1,1)\) can also be described by the 3d \(\mathcal{N}=3\) \(\mathrm{Spin}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory with a half-hypermultiplet in the representation \([\mathbf{4};\mathbf{2}]\). In fact, it can be checked that the index of this theory is equal to that of the \(\mathrm{SO}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory with the same matter content. This is because, in the latter, the bare monopole operators cannot be dressed by the half-hypermultiplet to form a gauge-invariant operator. Since the fugacity \(\zeta\) of the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry does not appear in the index of the \(\mathrm{SO}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory, it acts trivially on the local operators. This indeed signals that the corresponding dual one-form symmetry in the \(\mathrm{Spin}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory acts on the line operators of the decoupled topological sector, in accordance with the above statement that the interacting SCFT does not have a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry. Note that the topological sector is invisible to the index computation (7).
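Returning to the free-energy bookkeeping above, the TQFT contribution is just a Fresnel integral and is easy to evaluate numerically. The sketch below is illustrative; the rescaling used in the comment is our own bookkeeping.

```python
# A numerical sketch of the TQFT contribution quoted above: |Z| of the U(1)_2
# Gaussian (Fresnel) integral gives F_TQFT = log sqrt(2), which adds to the
# minimal-SCFT value -log sqrt((5 - sqrt(5))/10).

import numpy as np
from scipy.special import fresnel

# After rescaling s -> u/2, int_{-inf}^{inf} ds e^{2 pi i s^2} = C(inf) + i S(inf),
# where S and C are the standard Fresnel integrals (both tend to 1/2).
S, C = fresnel(1e6)
Z_tqft = complex(C, S)                       # ~ (1 + i)/2
F_tqft = -np.log(abs(Z_tqft))                # ~ log sqrt(2) ~ 0.34657
F_min = -np.log(np.sqrt((5 - np.sqrt(5)) / 10))
print(F_tqft, np.log(np.sqrt(2)))            # agreement to ~1e-7
print("total F =", F_min + F_tqft)
```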
The \(\mathrm{SO}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory may have a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry, depending on \(k\). As before, the condition for the existence of a \(\mathbb{Z}_{2}\) one-form symmetry of this theory can be determined by a simple generalisation of [51, (3.27)] (see Footnote 3 with \(k_{1}=1\), \(M=2\), \(k_{2}=k\) and \(N=1\)): \[\frac{1}{4}\times 1\times 2+\frac{1}{2}k=\frac{1}{2}(k+1)\in\mathbb{Z}\qquad\Leftrightarrow\qquad k\text{ is odd }. \tag{59}\] We interpret this result as follows. Although theory (1) with the CS levels \((k,1,1)\), or equivalently the \(\mathrm{Spin}(4)_{1}\times\mathrm{USp}(2)_{k}\) gauge theory, has an anomalous \(\mathbb{Z}_{2}^{2}\) one-form symmetry, its \(\mathbb{Z}_{2}\) diagonal subgroup is non-anomalous for \(k\) odd. This can be seen directly from the action (14) of the anomaly theory: \(S_{\mathrm{anom}}=-\frac{2\pi}{2}\int_{\mathcal{M}_{4}}(w_{1}^{(2)}\cup w_{2}^{(2)})\), where the first two terms of (14) can be dropped. Upon taking \(w_{1}^{(2)}=w_{2}^{(2)}\equiv B^{(2)}\), we have \(S_{\mathrm{anom}}=-2\pi\int_{\mathcal{M}_{4}}\mathcal{P}(B^{(2)})/2\). Since \(\int_{\mathcal{M}_{4}}\mathcal{P}(B^{(2)})\) is even on a spin manifold \(\mathcal{M}_{4}\), this action is an integer multiple of \(2\pi\); this indicates that the \(\mathbb{Z}_{2}\) diagonal subgroup of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry is non-anomalous for \(k\) odd. On the other hand, for \(k\) even, we also have a non-trivial contribution from the first term of (14), namely \(\pi(k-1)\int_{\mathcal{M}_{4}}\mathcal{P}(w_{1}^{(2)})/2\); this renders the anomaly of the \(\mathbb{Z}_{2}\) diagonal subgroup non-trivial. For \(k\) odd, the above interpretation can be supported by an explicit realisation of the topological sector \((\mathcal{Z}_{2})_{0}\); see the first row of (53). As discussed around [39, (7)], the \(\mathbb{Z}_{2}^{2}\) one-form symmetry of the \((\mathcal{Z}_{2})_{0}\) TQFT is generated by the basic electric and magnetic lines \(V_{E}\), \(V_{M}\) of integer spins, where each of such lines generates a \(\mathbb{Z}_{2}\) non-anomalous one-form symmetry labelled by \(p=0\). Due to a non-trivial mutual braiding phase \(e^{-i\pi}\) of \(V_{E}\) and \(V_{M}\), we can always find a line \(b\) that generates a \(\mathbb{Z}_{2}\) subgroup of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry with anomaly characterised by \(p\) (with \(p=0\), \(1\) mod \(2\)), namely \(b=V_{E}^{p/2}V_{M}\). We see that the \(\mathbb{Z}_{2}\) diagonal subgroup, generated by the line \(V_{E}V_{M}\), of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry is indeed anomaly free.

### Gluing with \(T(\mathrm{SU}(2))\) theories

Let us now include the \(T(\mathrm{SU}(2))\) SCFT [46] in the discussion. This theory can be realised as the IR SCFT of the 3d \(\mathcal{N}=4\) U(1) gauge theory with two hypermultiplets of charge 1, and it has an \(\mathrm{SU}(2)_{H}\times\mathrm{SU}(2)_{C}\) flavour symmetry with the mixed anomaly given by the following anomaly theory (see [57; 58; 21] and also [59]) \[\pi\int_{\mathcal{M}_{4}}w_{2}^{H}\cup w_{2}^{C}\, \tag{60}\] where \(w_{2}^{H}\) and \(w_{2}^{C}\) are, respectively, the second Stiefel-Whitney classes associated with the \(\mathrm{SO}(3)_{H}\) and \(\mathrm{SO}(3)_{C}\) bundles that obstruct the lift to the \(\mathrm{SU}(2)_{H}\) and \(\mathrm{SU}(2)_{C}\) bundles.
As discussed in [14], an interesting generalisation of (1) is to gauge the diagonal subgroup of the \(\mathrm{SU}(2)_{i}\) symmetries (with \(i=1,2,3\)) of the \(T_{2}\) theory and the \(\mathrm{SU}(2)_{C}\) global symmetry of the \(i\)-th copy of the \(T(\mathrm{SU}(2))\) theory with CS level \(k_{i}^{(1)}\), and then gauge the \(\mathrm{SU}(2)_{H}\) global symmetry of the \(i\)-th copy of the \(T(\mathrm{SU}(2))\) theory with CS level \(k_{i}^{(2)}\). The resulting theory can be represented as (61), where \(S\) stands for the \(T(\mathrm{SU}(2))\) theory. More generally, we could consider the longer 'tail' shown in (62). For simplicity, we focus on the configuration (61). As pointed out in [14, Section 3.3], this model can be realised by compactifying M5-branes on a three-manifold given by a Seifert bundle over \(S^{2}\) with three singular fibres, with Seifert parameters \(q_{1}/p_{1}\), \(q_{2}/p_{2}\) and \(q_{3}/p_{3}\), where \[\frac{p_{i}}{q_{i}}=k_{i}^{(1)}-\frac{1}{k_{i}^{(2)}}\,\qquad i=1,2,3. \tag{63}\] The effective superpotential after integrating out the adjoint scalars is \[W=\frac{1}{2}\left(\frac{q_{1}}{p_{1}}+\frac{q_{2}}{p_{2}}+\frac{q_{3}}{p_{3}}\right)\text{tr}(\mu^{2})+\sum_{i=1}^{3}\frac{q_{i}}{p_{i}}\,\text{tr}(\mu_{i}\mu_{i,C})\, \tag{64}\] where \(\mu_{i,C}\) denotes the moment map of the SU(2)\({}_{C}\) global symmetry of the \(i\)-th copy of the \(T(\text{SU}(2))\) theory, and \(\mu_{i}\) is the moment map of the SU(2)\({}_{i}\) global symmetry of the \(T_{2}\) theory satisfying (4). The ATT condition (6) is then generalised to [14] \[\frac{q_{1}}{p_{1}}+\frac{q_{2}}{p_{2}}+\frac{q_{3}}{p_{3}}=0. \tag{65}\] Similarly to (1), we will again show that this is a _sufficient_ condition for supersymmetry enhancement in the IR. However, even if (65) is not satisfied, there are cases in which the IR SCFT has accidental \(\mathcal{N}=4\) supersymmetry. The index of the \(T(\text{SU}(2))\) SCFT can be written as \[\begin{split}&\mathcal{I}_{T(\text{SU}(2))}(w,n|f,m|a,n_{a};x)\\ &=\sum_{l\in\mathbb{Z}}(w^{2})^{l}\oint\frac{dz}{2\pi iz}z^{n}\prod_{s=\pm 1}\mathcal{Z}_{\chi}^{1/2}((zf)^{s}a;s(l+m)+n_{a};x)\times\\ &\qquad\qquad\mathcal{Z}_{\chi}^{1/2}((z^{-1}f)^{s}a;s(-l+m)+n_{a};x)\,\end{split} \tag{66}\] where \((w,n)\) are the (fugacity, background magnetic flux) for the topological symmetry, \((f,m)\) are those for the flavour symmetry, and \((a,n_{a})\) are those for the axial symmetry. Here we normalise the power of the fugacity \(w\) in such a way that the elementary monopole operators \(V_{\pm}\) carry the fugacity \(a^{-2}w^{\pm 2}\). In this way, the Coulomb branch moment maps correspond to the term \(a^{-2}\chi_{[2]}^{\mathfrak{su}(2)_{C}}(w)x\) in the index, and the Higgs branch moment maps correspond to the term \(a^{2}\chi_{[2]}^{\mathfrak{su}(2)_{H}}(f)x\).
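The map (63) from tail CS levels to Seifert parameters, and the generalised ATT condition (65), are simple exact-arithmetic computations. Below is a small utility sketch; the helper names are ours, and the example data is taken from the tables further down.

```python
# A small utility, following (63) and (65): compute the Seifert parameters
# p_i/q_i from the tail CS levels and test the generalised ATT condition.

from fractions import Fraction

def seifert_ratio(k1, k2):
    """p/q = k1 - 1/k2 for one T(SU(2)) tail with CS levels (k1, k2)."""
    return Fraction(k1) - Fraction(1, k2)

def att_satisfied(tails):
    """tails: list of (k1, k2) per leg; ATT requires sum of q_i/p_i to vanish."""
    return sum(1 / seifert_ratio(k1, k2) for k1, k2 in tails) == 0

# First row of the tables below: (-2, 1), (7, 1), (7, 1) gives p/q = -3, 6, 6,
# and -1/3 + 1/6 + 1/6 = 0, so the ATT condition holds.
tails = [(-2, 1), (7, 1), (7, 1)]
print([seifert_ratio(*t) for t in tails], att_satisfied(tails))  # True
```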
The index for theory (61) is therefore given by \[\begin{split}&\mathcal{I}_{(61)}(a,n_{a};x)\\ &=\left(\frac{1}{8}\prod_{i=1}^{3}\oint\frac{dz_{i}}{2\pi iz_{i}}\right)\sum_{(m_{1},\cdots,m_{3})\in\mathbb{Z}^{3}}\left(\frac{1}{8}\prod_{i=1}^{3}\oint\frac{df_{i}}{2\pi if_{i}}\right)\sum_{(\widehat{m}_{1},\cdots,\widehat{m}_{3})\in\mathbb{Z}^{3}}\times\\ &\quad\left(\prod_{i=1}^{3}z_{i}^{2k_{i}^{(1)}m_{i}}\mathcal{Z}_{\text{vec}}^{\text{SU}(2)}(z_{i};m_{i};x)\right)\left(\prod_{i=1}^{3}f_{i}^{2k_{i}^{(2)}\widehat{m}_{i}}\mathcal{Z}_{\text{vec}}^{\text{SU}(2)}(f_{i};\widehat{m}_{i};x)\right)\times\\ &\quad\prod_{s_{1},s_{2},s_{3}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{1}^{s_{1}}z_{2}^{s_{2}}z_{3}^{s_{3}}a;s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}+n_{a};x)\times\\ &\quad\prod_{i=1}^{3}\mathcal{I}_{T(\text{SU}(2))}(z_{i},m_{i}|f_{i},\widehat{m}_{i}|a,n_{a};x)\.\end{split} \tag{67}\] When the ATT condition (2.65) is satisfied, the first term in (2.64) vanishes and the U(1)\({}_{a}\) symmetry associated with the fugacity \(a\) assigned as above is a symmetry of the theory, since \(\mu_{i}\) carries charge \(+2\) and \(\mu_{i,C}\) carries charge \(-2\). However, if (2.65) is not satisfied, we set \(a=1\) and \(n_{a}=0\) in the above expression of the index.

#### 2.4.1 't Hooft anomalies of the one-form symmetries

Gauging the SU(2)\({}_{H}\) and SU(2)\({}_{C}\) global symmetries of \(T\)(SU(2)) respectively with CS levels \(k_{H}\) and \(k_{C}\) leads to the \(\mathbb{Z}_{2,H}\times\mathbb{Z}_{2,C}\) one-form symmetry arising from the centres of \(SU(2)_{H}\) and \(SU(2)_{C}\). The 't Hooft anomaly of such a one-form symmetry is characterised by the following anomaly theory (see [25, (3.62)]) \[\pi\int_{\mathcal{M}_{4}}\left[k_{H}\frac{\mathcal{P}(w_{H}^{(2)})}{2}+k_{C}\frac{\mathcal{P}(w_{C}^{(2)})}{2}+w_{H}^{(2)}\cup w_{C}^{(2)}\right]\, \tag{68}\] where \(w_{H/C}^{(2)}\) are the two-form background fields for the \(\mathbb{Z}_{2,H/C}\) one-form symmetries. The first two terms arise as in (2.11) and the last term comes from (2.60). Upon gauging with the \(T_{2}\) theory as in (2.61), we have six SU(2) gauge groups but a \(\mathbb{Z}_{2}^{5}\) one-form symmetry, due to the screening effect of the matter of the \(T_{2}\) theory. The 't Hooft anomalies of the latter are given by \[\pi\int_{\mathcal{M}_{4}}\left[\sum_{r=1}^{2}\sum_{i=1}^{3}k_{i}^{(r)}\frac{\mathcal{P}(B_{i}^{(r)})}{2}+\sum_{i=1}^{3}B_{i}^{(1)}\cup B_{i}^{(2)}\right]\quad\text{with}\ \sum_{i=1}^{3}B_{i}^{(1)}=0\, \tag{69}\] where \(B_{i}^{(r)}\) is the two-form background field associated with the SU(2) gauge group with CS level \(k_{i}^{(r)}\), with \(i=1,2,3\) and \(r=1,2\).13 The last constraint comes from (2.12). Footnote 13: We denote the two-form background fields for the one-form symmetries differently from the other sections in order to avoid cluttered notation.

#### 2.4.2 Summary of the results

We observe the following result: for given ratios \(p_{i}/q_{i}\) (with \(i=1,2,3\)), the index (2.67) of theory (2.61) is independent of the specific values of \(k_{i}^{(r)}\) in (2.63). (70)

As an immediate consequence, the following statement holds: if \(p_{i}/q_{i}\in\mathbb{Z}\) for all \(i=1,2,3\), the index (2.67) of theory (2.61) is equal to the index (2.7) of theory (2.1) with \(k_{i}=p_{i}/q_{i}\). (71)
We remark that these statements are true independently of whether the ATT condition (65) is satisfied. This means that the aforementioned theories flow to the same interacting SCFT in the IR. This observation may not be a surprise from the geometrical perspective, since both theories are associated with Seifert manifolds that are diffeomorphic to each other. Note that the decoupled topological sectors in the IR may be different, since the anomalous one-form symmetries determined by (14) and (69) may be different. We have actually seen this phenomenon in Footnote 11. For simplicity, we examine the indices of the following theory, given by (61) with a \(T(\mathrm{SU}(2))\) tail attached only to the first leg, while the second and third legs carry plain CS levels \(k_{2}\) and \(k_{3}\): \[\text{(quiver diagram)} \tag{72}\] with various CS levels as follows. \[\begin{array}{|c|c|c|c|c|c|c|}\hline k_{1}^{(1)}&k_{1}^{(2)}&\frac{p_{1}}{q_{1}}&k_{2}&k_{3}&\text{ATT (65)}&\text{Index}\\ \hline-4&-1&-3&6&6&\checkmark&\\ \hline\multicolumn{7}{|c|}{\vdots}\\ \hline\end{array} \tag{73}\]

When the ATT condition is not satisfied, there are cases in which the IR SCFTs have **enhanced \({\cal N}=4\) supersymmetry**; these are explicitly shown in the fifth and sixth rows of Table (73). This can be deduced by the same reasoning as above. Note that the marginal operator discussed below (74) is set to zero in the chiral ring by an \(F\)-term equation (_cf._ Section 2.3), and so we do not have a positive term at order \(x^{2}\). However, in the final row of Table (73), we find no indication of supersymmetry enhancement from the index, assuming that there is no additional marginal operator. If the index is unity, then the IR theory is a TQFT. Next, let us report some indices for theory (61) with various CS levels such that the ATT condition (65) is satisfied: \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\hline k_{1}^{(1)}&k_{1}^{(2)}&\frac{p_{1}}{q_{1}}&k_{2}^{(1)}&k_{2}^{(2)}&\frac{p_{2}}{q_{2}}&k_{3}^{(1)}&k_{3}^{(2)}&\frac{p_{3}}{q_{3}}&\text{Index}\\ \hline-2&1&-3&7&1&6&7&1&6&(2.27)\text{ with }k=3\\ \hline-3&1&-4&9&1&8&9&1&8&(2.27)\text{ with }k=4\\ \hline-4&1&-5&11&1&10&11&1&10&(2.27)\text{ with }k=5\\ \hline\end{array} \tag{75}\] These results support the statement (71). In all of these cases, the IR interacting SCFT has enhanced \({\cal N}=4\) supersymmetry. Finally, we observe that if a pair of Seifert fibres with parameters \(q_{i}/p_{i}\) and \(\widetilde{q}_{i}/\widetilde{p}_{i}\) are isomorphic by an orientation-preserving diffeomorphism, namely [56, Prop. 2.1]

1. after possibly permuting indices, \(q_{i}/p_{i}=\widetilde{q}_{i}/\widetilde{p}_{i}\pmod{1}\) for each \(i\), and
2. \(\sum_{i}q_{i}/p_{i}=\sum_{i}\widetilde{q}_{i}/\widetilde{p}_{i}\),

then the indices of the theories associated with these Seifert fibres are equal. In other words, the corresponding IR SCFTs are the same.

## 3 Theories with two \(T_{2}\) building blocks

Let us now couple two copies of the \(T_{2}\) theory together by gauging a diagonal subgroup of the two \(\text{SU}(2)_{i}\) flavour symmetries (with \(i=1,2,3\)), belonging to different copies of the \(T_{2}\) theories, with CS levels \(k_{i}\). We denote this diagrammatically as \[T_{2} \tag{76}\] We denote by \(\mu_{i}^{(I)}\), with \(i=1,2,3\) and \(I=1,2\), the moment maps of the \(\text{SU}(2)_{i}\) flavour symmetry of the \(I\)-th \(T_{2}\) theory. Their explicit expression for each \(I\) is given by (3). We also have the analogue of (4), namely \[\operatorname{tr}(\mu_{1}^{(I)\;2})=\operatorname{tr}(\mu_{2}^{(I)\;2})=\operatorname{tr}(\mu_{3}^{(I)\;2})\equiv\operatorname{tr}(\mu^{(I)\;2}) \tag{3.2}\] for each \(I=1,2\).
The effective superpotential after integrating out the adjoint scalar fields is (see [14, (2.16)]) \[W=\frac{1}{2}\left(\frac{1}{k_{1}}+\frac{1}{k_{2}}+\frac{1}{k_{3}}\right)\left[\operatorname{tr}\left(\mu^{(1)\;2}\right)+\operatorname{tr}\left(\mu^{(2)\;2}\right)\right]+\sum_{i=1}^{3}\frac{1}{k_{i}}\operatorname{tr}\left(\mu_{i}^{(1)}\mu_{i}^{(2)}\right). \tag{3.3}\] When the ATT condition (2.6) is satisfied, the first term vanishes and there is a flavour symmetry that assigns charge \(+1\) to every chiral field of the first \(T_{2}\) theory and charge \(-1\) to every chiral field of the second \(T_{2}\) theory. As a consequence, \(\mu_{i}^{(1)}\) and \(\mu_{i}^{(2)}\) carry charges \(+2\) and \(-2\), respectively. We denote by \(a\) the fugacity associated with this flavour symmetry. We will shortly see that, if the CS levels obey the ATT condition, the flavour symmetry algebra is actually \(\mathfrak{su}(2)_{a}\). On the other hand, if the ATT condition (2.6) is not satisfied, this flavour symmetry is explicitly broken by the first term of the superpotential. The index for theory (3.1) is given by \[\begin{split}\mathcal{I}_{(3.1)}(a,n_{a};x)&=\left(\frac{1}{8}\prod_{i=1}^{3}\oint\frac{dz_{i}}{2\pi iz_{i}}\right)\sum_{(m_{1},m_{2},m_{3})\in\mathbb{Z}^{3}}\left(\prod_{i=1}^{3}z_{i}^{2k_{i}m_{i}}\mathcal{Z}_{\text{vec}}^{\text{SU}(2)}(z_{i};m_{i};x)\right)\times\\ &\qquad\prod_{s_{1},s_{2},s_{3}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{1}^{s_{1}}z_{2}^{s_{2}}z_{3}^{s_{3}}a;s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}+n_{a};x)\times\\ &\qquad\prod_{s_{1}^{\prime},s_{2}^{\prime},s_{3}^{\prime}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{1}^{s_{1}^{\prime}}z_{2}^{s_{2}^{\prime}}z_{3}^{s_{3}^{\prime}}a^{-1};s_{1}^{\prime}m_{1}+s_{2}^{\prime}m_{2}+s_{3}^{\prime}m_{3}-n_{a};x)\.\end{split} \tag{3.4}\] As before, we will set \(n_{a}=0\) and drop \(n_{a}\) from the argument of \(\mathcal{I}_{(3.1)}(a,n_{a};x)\) when we study the series expansion of the index. For the cases that do not satisfy the ATT condition, \(a\) should be set to \(1\) and \(n_{a}\) should be set to zero. Without the CS levels, theory (3.1) can be viewed as the 3d reduction of the 4d \(\mathcal{N}=2\) \(A_{1}\) class \(\mathcal{S}\) theory associated with a Riemann surface of genus 2 with no puncture. The one-form symmetry of the latter is \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\); see [60, (3.12)]. With the CS levels turned on, their 't Hooft anomalies are characterised by (2.15) and (2.16).

### Cases that satisfy the ATT condition

We consider theories (3.1) with CS levels satisfying (2.6).

#### 3.1.1 Special case of \((k_{1},k_{2},k_{3})=(-k,2k,2k)\)

This theory is equivalent to the \(\mathrm{USp}(2)_{-k}\times\mathrm{Spin}(4)_{2k}\) gauge theory with two copies of bifundamental half-hypermultiplets in the representation \([\mathbf{2};\mathbf{4}]\), whose quiver diagram is given in (3.5). It is indeed closely related to the ABJ theory [10], described by the \(\mathrm{USp}(2)_{-k}\times\mathrm{O}(4)_{2k}\) gauge theory with the same matter content. We can start from the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) variant of the theory: gauging the \(\mathbb{Z}_{2}\) zero-form charge conjugation symmetry associated with the \(\mathrm{SO}(4)\) gauge group leads to the original ABJ theory, whereas gauging the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry leads to theory (3.5). This type of argument was used to study variants of the ABJM [8] and ABJ theories in [35; 36; 51; 61].
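At the level of the index, gauging a \(\mathbb{Z}_{2}\) zero-form symmetry amounts to averaging over its fugacity, as in (2.25). The following sympy sketch checks this projection against the \(k=2\) rows of the tables (3.8) and (3.9) reported below, keeping the SU(2) characters as formal symbols; the variable names are illustrative.

```python
# A sketch of the Z2 projection used below: averaging the USp(2)_{-2} x SO(4)_4
# index over zeta = +-1 reproduces the k = 2 row of (3.9). The characters
# chi2, chi4, chi6, chi8 stand for chi^{su(2)}_{[2n]}(a).

import sympy as sp

x, z = sp.symbols('x zeta')
chi2, chi4, chi6, chi8 = sp.symbols('chi2 chi4 chi6 chi8')

idx_38_k2 = (1 + x + ((1 + z)*chi4 + 2 - chi2)*x**2
             + (z*chi6 + 2 - (2 + z)*chi2)*x**3
             + ((2 + z)*chi8 + 2 - ((1 + z)*chi6 + z*chi4 + z*chi2))*x**4)

projected = sp.expand((idx_38_k2.subs(z, 1) + idx_38_k2.subs(z, -1)) / 2)
idx_39_k2 = (1 + x + (chi4 + 2 - chi2)*x**2 + 2*(1 - chi2)*x**3
             + (2*chi8 + 2 - chi6)*x**4)
assert sp.simplify(projected - idx_39_k2) == 0
print("Z2 projection matches the k = 2 row of (3.9) up to order x^4")
```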
The index of the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) variant is \[\begin{split}&\mathcal{I}_{\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}}(\zeta,a;x)\\ &=\frac{1}{8}\sum_{(\mathfrak{m}_{1},\mathfrak{m}_{2})\in\mathbb{Z}^{2}}\,\sum_{\mathfrak{n}\in\mathbb{Z}}\left(\prod_{j=1}^{2}\oint\frac{dv_{j}}{2\pi iv_{j}}\,v_{j}^{2k\mathfrak{m}_{j}}\right)\zeta^{\mathfrak{m}_{1}+\mathfrak{m}_{2}}\oint\frac{du}{2\pi iu}\,u^{-2k\mathfrak{n}}\times\\ &\qquad\mathcal{Z}_{\mathrm{vec}}^{\mathrm{SO}(4)}(v_{1},v_{2};\mathfrak{m}_{1},\mathfrak{m}_{2};x)\mathcal{Z}_{\mathrm{vec}}^{\mathrm{USp}(2)}(u;\mathfrak{n};x)\times\\ &\qquad\prod_{i=1}^{2}\,\prod_{s_{1},s_{2}=\pm 1}\mathcal{Z}_{\chi}^{1/2}\left(v_{i}^{s_{1}}u^{s_{2}}a;s_{1}\mathfrak{m}_{i}+s_{2}\mathfrak{n};x\right)\mathcal{Z}_{\chi}^{1/2}\left(v_{i}^{s_{1}}u^{s_{2}}a^{-1};s_{1}\mathfrak{m}_{i}+s_{2}\mathfrak{n};x\right)\,\end{split} \tag{3.6}\] where \(\zeta\) (with \(\zeta^{2}=1\)) denotes the fugacity for the \(\mathbb{Z}_{2}\) zero-form magnetic symmetry and we have set the fugacity \(\chi\) for the charge conjugation symmetry to \(1\). The index of (3.5) can be obtained by gauging the magnetic symmetry as follows: \[\mathcal{I}_{(3.5)}(a;x)=\frac{1}{2}\left[\mathcal{I}_{\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}}(\zeta=1,a;x)+\mathcal{I}_{\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}}(\zeta=-1,a;x)\right]. \tag{3.7}\] We first report the indices of the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) variant, which also arises when a non-anomalous \(\mathbb{Z}_{2}\) subgroup of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry of (3.1) is gauged. We tabulate the results, up to order \(x^{4}\), below. \[\begin{array}{|c|c|}\hline\text{CS levels}&\text{Index }\mathcal{I}_{\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}}(\zeta,a;x)\\ (-k,2k,2k)&\\ \hline k=1&1+\left[1+\zeta\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x+\left[\left(2+\zeta\right)\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{2}+\\ &\left\{\left(1+2\zeta\right)\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)-\left[\zeta\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+(3+2\zeta)\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)+\zeta\right]\right\}x^{3}+\\ &\left\{\left(3+2\zeta\right)\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+3(1+\zeta)\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)+1+2\zeta-\right.\\ &\left.\left[(2+\zeta)\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)+(1+\zeta)\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)\right]\right\}x^{4}+\ldots\\ \hline k=2&1+x+\left[(1+\zeta)\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{2}+\\ &\left[\zeta\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)+2-(2+\zeta)\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{3}+\\ &\left\{\left(2+\zeta\right)\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+2-\left[(1+\zeta)\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)+\zeta\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+\zeta\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]\right\}x^{4}+\ldots\\ \hline k=3&1+x+\left[\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{2}+\left[\zeta\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)+2-2\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{3}+\\ &\left\{(1+\zeta)\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+2-\left[\chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)+\zeta\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)\right]\right\}x^{4}+\ldots\\ \hline k=4&1+x+\left[\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{2}+2\left[1-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{3}+\\
&\left\{\left(1+\zeta\right)\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+2- \chi_{[6]}^{\mathfrak{su}(2)}\left(a\right)\right\}x^{4}+\ldots\\ \hline\end{array} \tag{3.8}\] For \(k\geq 5\), the terms with fugacity \(\zeta\) appear at a higher order than \(x^{4}\). The index for the case of \(k=1\) was studied in [35, (3.80)], where it was pointed out that the \(\text{USp}(2)_{-1}\times\text{SO}(4)_{2}\) ABJ theory is dual to another variant of the ABJ theory, namely the \([\text{U}(3)_{4}\times\text{U}(1)_{-4}]/\mathbb{Z}_{2}\) gauge theory with two bifundamental hypermultiplets, whose IR SCFT has **enhanced \(\mathcal{N}=6\) supersymmetry**. The indices for the cases of \(k\geq 2\) indicate that supersymmetry gets **enhanced to \(\mathcal{N}=5\)**. This can be seen as follows. All of these indices have the coefficient of \(x\) equal to \(1\), indicating that there is one \(\mathcal{N}=3\) flavour current; but since there is the term \(-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)=-(a^{2}+1+a^{-2})\) at order \(x^{2}\), the term \(-1\) corresponds to the \(\mathcal{N}=3\) flavour current and the terms \(-a^{2}\) and \(-a^{-2}\) correspond to the \(\mathcal{N}=3\) extra SUSY-currents, yielding supersymmetry enhancement from \(\mathcal{N}=3\) to \(\mathcal{N}=5\). Now we report the index of theory (3.1) with CS levels \((-k,2k,2k)\), or equivalently (3.5), given by \(\mathcal{I}_{(3.5)}(a;x)\), up to order \(x^{4}\) below. \[\begin{array}{|c|c|}\hline\text{CS levels}&\text{Index }\mathcal{I}_{(3.5)}(a;x)\\ (-k,2k,2k)&\\ \hline k=1&1+x+\left\{2\left[\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+1 \right]-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right\}x^{2}+\left[\chi_{ [6]}^{\mathfrak{su}(2)}\left(a\right)-3\chi_{[2]}^{\mathfrak{su}(2)}\left(a \right)\right]x^{3}+\\ &\left\{3\left[\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+\chi_{[2]}^{ \mathfrak{su}(2)}\left(a\right)\right]+1-\left[2\chi_{[6]}^{\mathfrak{su}(2)} \left(a\right)+\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)\right]\right\}x^{4}+ \ldots\\ \hline k=2&1+x+\left[\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^ {\mathfrak{su}(2)}\left(a\right)\right]x^{2}+2\left[1-\chi_{[2]}^{\mathfrak{ su}(2)}\left(a\right)\right]x^{3}+\\ &\left\{2\left[\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+1\right]-\chi_{[6]}^ {\mathfrak{su}(2)}\left(a\right)\right\}x^{4}+\ldots\\ \hline k\geq 3&1+x+\left[\chi_{[4]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[2]}^ {\mathfrak{su}(2)}\left(a\right)\right]x^{2}+\\ &2\left[1-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\right]x^{3}+\left[\chi_{[8 ]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[6]}^{\mathfrak{su}(2)}\left(a \right)\right]x^{4}+\ldots\\ \hline\end{array} \tag{3.9}\] For \(k\geq 3\) the indices differ from each other at a higher order than \(x^{4}\). By the same reasoning as before, the indices indicate that the IR SCFT has enhanced \({\cal N}=5\) supersymmetry. The 't Hooft anomalies of the one-form symmetry of these theories are as presented in (2.30). For \(k\) odd, theory (3.1) with CS levels \((-k,2k,2k)\), or equivalently (3.5), flows to an interacting \({\cal N}=5\) SCFT with a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry, along with the TQFT \({\cal A}^{2,1}\).14 For \(k\) even, the theory flows to an interacting \({\cal N}=5\) SCFT that has a \(\mathbb{Z}_{2}^{2}\) one-form symmetry and there is no decoupled topological sector.
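The character bookkeeping used to read off these statements is easy to reproduce with a few lines of computer algebra. The sketch below (assuming the standard convention in which \(\chi^{\mathfrak{su}(2)}_{[n]}\) has \(n+1\) terms, consistent with \(\chi^{\mathfrak{su}(2)}_{[2]}(a)=a^{2}+1+a^{-2}\) quoted above) makes the splitting of \(-\chi^{\mathfrak{su}(2)}_{[2]}(a)x^{2}\) into one flavour current and two extra SUSY-currents explicit:

```python
# Minimal sketch of su(2) characters, assuming the convention
# chi_{[n]}(a) = a^n + a^(n-2) + ... + a^(-n), which reproduces the
# chi_{[2]}(a) = a^2 + 1 + a^(-2) quoted in the text.
import sympy as sp

a = sp.symbols('a')

def chi_su2(n, a):
    """Character of the su(2) irrep with Dynkin label [n] (dimension n+1)."""
    return sum(a**(n - 2*j) for j in range(n + 1))

assert sp.simplify(chi_su2(2, a) - (a**2 + 1 + a**-2)) == 0
# At order x^2 the index contains -chi_{[2]}(a): the term -1 is the N=3
# flavour current and the terms -a^(+2), -a^(-2) are the extra SUSY-currents
# signalling the enhancement N=3 -> N=5.
print(chi_su2(2, a), chi_su2(4, a))
```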
Footnote 14: For \(k\) odd, we can infer from (2.31) that the \(\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}\) theory does not admit a \(\mathbb{Z}_{2}\) quotient. This means that, for \(k\) odd, the non-anomalous one-form symmetry of the \(\text{USp}(2)_{-k}\times\text{Spin}(4)_{2k}\) theory is \(\mathbb{Z}_{2}\), which we can gauge in order to obtain the \(\text{USp}(2)_{-k}\times\text{SO}(4)_{2k}\) theory, and there is no further \(\mathbb{Z}_{2}\) one-form symmetry that we can gauge in the latter.

The above indices receive the contributions of the gauge invariant dressed monopole operators, whose properties are similar to those discussed around (2.28). Explicitly, the bare monopole operator with magnetic fluxes \((m_{1},m_{2},m_{3})\) has dimension \[\Delta(m_{1},m_{2},m_{3})=\frac{1}{2}\sum_{s_{1},s_{2},s_{3}=\pm 1} \left|\sum_{i=1}^{3}s_{i}m_{i}\right|-\sum_{i=1}^{3}2|m_{i}|\, \tag{3.10}\] is neutral under the flavour symmetry, and carries charge \(2k_{i}m_{i}\) under the Cartan subalgebra of each \(\text{SU}(2)_{i}\) gauge factor. We can compute the Coulomb branch and Higgs branch limits of the index as in (2.37). We find that they are equal, as expected for SCFTs with \({\cal N}\geq 5\) supersymmetry. In particular, we have \[\text{Higgs limit of }(3.4)_{(-k,2k,2k)}=\text{Coulomb limit of }(3.4)_{(-k,2k,2k)} \tag{3.11}\] \[=\text{PE}\left[t^{2}+t^{2k}+t^{2k+1}-t^{4k+2}\right]\,\quad\text{with $t=h$ or $c$}\.\] This is the Hilbert series of \(\mathbb{C}^{2}/\widehat{D}_{2k+2}\), indicating that the Higgs and Coulomb branches are isomorphic to this singularity. The generators of the Higgs branch are \[w=\text{tr}(\mu^{(1)\,2})\,\quad v=X_{(2,1,1)}\left(Q^{(1)}\right)^{4k}\,\quad u=X_{(2,1,1)}\left(Q^{(1)}\right)^{4k-4}\mu_{1}^{(1)}\mu_{2}^{(1)}\mu_{ 3}^{(1)}\, \tag{3.12}\] satisfying the relation \[u^{2}+v^{2}w=w^{2k+1}. \tag{3.13}\] The generators of the Coulomb branch can be obtained simply by replacing the superscript (1) by (2). Note that the generators \(u\) and \(v\) are the bare monopole operator \(X_{(2,1,1)}\), whose dimension is zero, dressed by appropriate chiral fields from each copy of \(T_{2}\) such that the combinations become gauge invariant. Indeed, from (3.9), we see that the dressed monopoles that are related to \(v\) contribute the term \(\chi_{[4k]}^{\mathfrak{su}(2)}(a)x^{2k}\) to the index. We can also examine the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\) version of the ABJ theory, or equivalently \(\mathrm{SU}(2)_{-k}\times[\mathrm{SU}(2)_{2k}\times\mathrm{SU}(2)_{2k}]/\mathbb{ Z}_{2}\), which comes from gauging a non-anomalous \(\mathbb{Z}_{2}\) subgroup of the \(\mathbb{Z}_{2}^{2}\) one-form symmetry of the aforementioned theory. In this case, we have \[\begin{split}&\text{Higgs limit of }(3.4)_{(-k,2k,2k)}/\mathbb{Z}_{2}^{[1]}= \text{Coulomb limit of }(3.4)_{(-k,2k,2k)}/\mathbb{Z}_{2}^{[1]}\\ &=\text{PE}\left[t^{2}+t^{k}+t^{k+1}-t^{2k+2}\right]\,\quad \text{with $t=h$ or $c$ }.\end{split} \tag{3.14}\] This is the Hilbert series of \(\mathbb{C}^{2}/\widehat{D}_{k+2}\). The generators of the Higgs branch are \[w=\text{tr}(\mu^{(1)\,2})\,\quad v=X_{\left(1,\frac{1}{2},\frac{1}{2} \right)}\left(Q^{(1)}\right)^{2k}\,\quad u=X_{\left(1,\frac{1}{2},\frac{1}{2}\right)} \left(Q^{(1)}\right)^{2k-4}\mu_{1}^{(1)}\mu_{2}^{(1)}\mu_{3}^{(1)}\, \tag{3.15}\] satisfying the relation \[u^{2}+v^{2}w=w^{k+1}. \tag{3.16}\] Again, the Coulomb branch generators can be obtained by replacing the superscript (1) by (2).
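Since the plethystic exponential of a finite signed sum of monomials \(t^{d}\) is just the corresponding product of \((1-t^{d})^{\mp 1}\) factors, the limits (3.11) can be expanded explicitly. A minimal sympy sketch (the degree assignments for \(u,v,w\) are read off from the PE exponents):

```python
# Sketch: expand the Hilbert series (3.11) of C^2 / \widehat{D}_{2k+2},
# using PE[t^2 + t^(2k) + t^(2k+1) - t^(4k+2)]
#        = (1 - t^(4k+2)) / ((1 - t^2)(1 - t^(2k))(1 - t^(2k+1))).
import sympy as sp

t = sp.symbols('t')

def hilbert_series(k):
    return (1 - t**(4*k + 2)) / ((1 - t**2) * (1 - t**(2*k)) * (1 - t**(2*k + 1)))

# Generators w, v, u at degrees 2, 2k, 2k+1; the relation u^2 + v^2 w = w^(2k+1)
# removes one state at degree 4k+2, matching the -t^(4k+2) in the PE.
print(sp.series(hilbert_series(2), t, 0, 11))
# -> 1 + t**2 + 2*t**4 + t**5 + 2*t**6 + t**7 + 3*t**8 + 2*t**9 + 3*t**10 + O(t**11)
```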
From (3.8), we see that the dressed monopoles that are related to \(v\) contribute the term \(\zeta\chi_{[2k]}^{\mathfrak{su}(2)}(a)x^{k}\) to the index. This explains why, when \(k\geq 5\), the contribution from the dressed monopole operators appears at a higher order than \(x^{4}\). In particular, for \(k=1\), these operators are related to \(\mathcal{N}=3\) flavour currents that are necessary for the enhanced \(\mathcal{N}=6\) supersymmetry in the IR.

#### 3.1.2 General results for \((k_{1},k_{2},k_{3})=k(\mathfrak{pq},-\mathfrak{pr},-\mathfrak{qr})\) with \(\mathfrak{r}=\mathfrak{p}+\mathfrak{q}\)

We consider the CS levels (2.44) for the theories formed by gauging two copies of the \(T_{2}\) theory. The information about the one-form symmetries and their 't Hooft anomalies is as tabulated in (2.46). Let us now consider the index (3.4). The contribution from flux \((m_{1},m_{2},m_{3})=(0,0,0)\), up to order \(x^{4}\), reads \[\begin{split}& 1+x+\Big{[}\chi_{[4]}^{\mathfrak{su}(2)}\left(a \right)+2-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\Big{]}x^{2}+\\ & 2\Big{[}1-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)\Big{]}x^{3 }+\Big{[}\chi_{[8]}^{\mathfrak{su}(2)}\left(a\right)+2-\chi_{[6]}^{\mathfrak{ su}(2)}\left(a\right)\Big{]}x^{4}+\ldots\.\end{split} \tag{3.17}\] The term at order \(x\) is the contribution from \[\epsilon^{\alpha_{1}\alpha_{1}^{\prime}}\epsilon^{\alpha_{2}\alpha_{2}^{ \prime}}\epsilon^{\alpha_{3}\alpha_{3}^{\prime}}Q^{(1)}_{\alpha_{1}\alpha_{2} \alpha_{3}}Q^{(2)}_{\alpha_{1}^{\prime}\alpha_{2}^{\prime}\alpha_{3}^{\prime}}. \tag{3.18}\] This is the moment map operator associated with the \(\mathrm{U}(1)\) \(\mathcal{N}=3\) flavour symmetry current. The marginal operators contributing the positive terms, namely \(\chi^{\mathfrak{su}(2)}_{[4]}\left(a\right)+2\), at order \(x^{2}\) are \[Q^{((I_{1})}Q^{(I_{2})}Q^{(I_{3})}Q^{(I_{4}))_{S}}\,\quad\mathrm{tr}(\mu_{1}^{(1 )}\mu_{1}^{(2)})\,\quad\mathrm{tr}(\mu_{2}^{(1)}\mu_{2}^{(2)})\, \tag{3.19}\] where \(\mathrm{tr}(\mu_{3}^{(1)}\mu_{3}^{(2)})\) can be written as a linear combination of the latter two due to the \(F\)-terms. Also, in the first quantities, the contractions of gauge indices, which we have suppressed, are done in such a way that \(I_{1},\ldots,I_{4}\) are completely symmetric; the latter is denoted by \(()_{S}\). Note also that the first quantities contain \(\mathrm{tr}(\mu^{(1)\,2})\) and \(\mathrm{tr}(\mu^{(2)\,2})\). There is also a contribution from eight gauge fluxes \((m_{1},m_{2},m_{3})=(\pm\mathfrak{r},\pm\mathfrak{q},\pm\mathfrak{p})\) corresponding to the gauge-invariant dressed monopole operators. According to the discussion around (3.10), the bare monopoles associated with these fluxes have dimension \(0\), are neutral under the flavour symmetry, and the charges under the Cartan subalgebra of each \(\mathrm{SU}(2)_{i}\) are \(2k\mathfrak{pqr}(\pm 1,\mp 1,\mp 1)\). They have to be dressed with \(2k\mathfrak{pqr}\) chiral fields from each copy of \(T_{2}\) in order to form gauge invariant quantities. The gauge invariant dressed monopole operators therefore contribute to the index as \(\chi^{\mathfrak{su}(2)}_{[2k\mathfrak{pqr}]}(a)x^{k\mathfrak{pqr}}\). If \(k\mathfrak{pqr}\) is sufficiently large, the index at sufficiently low order does not get affected by these operators. In any case, using the same argument as above, we see from (3.17) that the SCFT in the IR has enhanced \(\mathcal{N}=5\) supersymmetry, regardless of the contribution of the dressed monopole operators.
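The dimension-zero claim can be verified directly from (3.10). A small numerical sketch (checking a few illustrative values of \(\mathfrak{p},\mathfrak{q}\)):

```python
# Sketch of the bare-monopole dimension formula (3.10); we check that the
# fluxes (m1, m2, m3) = (r, q, p) with r = p + q have dimension zero.
from itertools import product

def dim_monopole(m1, m2, m3):
    contrib = sum(abs(s1*m1 + s2*m2 + s3*m3)
                  for s1, s2, s3 in product((1, -1), repeat=3)) / 2
    return contrib - 2 * (abs(m1) + abs(m2) + abs(m3))

for p, q in [(1, 1), (1, 2), (2, 3), (3, 5)]:
    r = p + q
    assert dim_monopole(r, q, p) == 0
print("fluxes (r, q, p) with r = p + q are dimension-zero")
```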
#### 3.1.3 Higgs and Coulomb branches

For the theories discussed in Section 3.1.2, the Higgs and Coulomb branch limits of the index are both equal to the Hilbert series of \(\mathbb{C}^{2}/\widehat{D}_{K+2}\), namely \[\mathrm{PE}\left[t^{2}+t^{K}+t^{K+1}-t^{2K+2}\right]\,\quad K=\mathfrak{ pq}k\,\ t=h\ \mathrm{or}\ c. \tag{3.20}\] The generators of the Higgs or Coulomb branch are \[w=\mathrm{tr}(\mu^{(I)\,2})\,\quad v=X_{(\mathfrak{r},\mathfrak{q}, \mathfrak{p})}(Q^{(I)})^{2\mathfrak{pqr}k}\,\quad u=X_{(\mathfrak{r},\mathfrak{q}, \mathfrak{p})}(Q^{(I)})^{2\mathfrak{pqr}k-4}\mu_{1}^{(I)}\mu_{2}^{(I)}\mu_{3} ^{(I)}\, \tag{3.21}\] with \(I=1\) or \(2\). They satisfy the relation \[u^{2}+v^{2}w=w^{K+1}. \tag{3.22}\]

### Cases that do not satisfy the ATT condition

From the effective superpotential (3.3), we see that the first term can be viewed as an \(\mathcal{N}=3\) preserving exactly marginal deformation of the \(\mathcal{N}=5\) theory whose superpotential contains only the second term. The index of the latter, excluding the contribution from the dressed monopoles, is given by (3.17), which indeed indicates \({\cal N}=5\) supersymmetry. The aforementioned exactly marginal deformation explicitly breaks the \(\mathfrak{su}(2)_{a}\) flavour symmetry, corresponding to the term \(-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)x^{2}\) in the index, to its Cartan subalgebra. The latter can be seen from the term \(+x\) of the index, which indicates that the \({\cal N}=3\) flavour symmetry is \(\mathrm{U}(1)\).15 From the index (3.17) with \(a=1\), we have Footnote 15: To see the action of this symmetry, we view the moment maps \(\mu^{(I)}\) as \(2\times 2\) symmetric matrices \(\mu^{(I)}=\begin{pmatrix}a^{(I)}&b^{(I)}\\ b^{(I)}&c^{(I)}\end{pmatrix}\). Under this U(1) symmetry, the elements \(a^{(I)}\) and \(c^{(I)}\) carry charges \(+1\) and \(-1\) respectively, whereas \(b^{(I)}\) carries charge \(0\). Indeed, the terms \(\text{tr}(\mu^{(I)\,2})=2a^{(I)}c^{(I)}-2b^{(I)\,2}\) and \(\text{tr}(\mu^{(1)}\mu^{(2)})=a^{(2)}c^{(1)}+a^{(1)}c^{(2)}-2b^{(1)}b^{(2)}\) in the superpotential are neutral under this flavour symmetry, as they should be. \[1+x+(5-1)x^{2}-4x^{3}+4x^{4}+\ldots\, \tag{3.23}\] where we see that the 7 marginal operators listed in (3.19) get reduced to 5 due to the two \(F\)-term relations coming from the non-vanishing first term in the superpotential (3.3). Indeed this is due to the terms \(a^{2}\) and \(a^{-2}\) in the term \(-\chi_{[2]}^{\mathfrak{su}(2)}\left(a\right)x^{2}\) being set to 1. In general, we expect that the cases that do not satisfy the ATT condition have \({\cal N}=3\) supersymmetry. Although we have not taken into account the contributions of the dressed monopole operators, we do not expect the latter to enhance supersymmetry. Nevertheless, the dressed monopole operators can enhance the _flavour symmetry_. An example is the case of CS levels \((k_{1},k_{2},k_{3})=(-1,1,1)\), whose index is \[1+3x+(9-3)x^{2}-7x^{3}+16x^{4}+\ldots. \tag{3.24}\] The contributions to \(3x\) come from (3.18), along with the dressed monopole operators \(X_{(1,1,0)}Q^{(1)}Q^{(2)}\) and \(X_{(1,0,1)}Q^{(1)}Q^{(2)}\). We propose that these form a triplet of the _enhanced_ \(\mathrm{SO}(3)\) _flavour symmetry_; this triplet is indeed the moment map of this symmetry. For convenience, let us denote by \(b\) the fugacity of this SO(3) flavour symmetry, so that the term \(3x\) can be rewritten as \(\chi_{[2]}^{\mathfrak{su}(2)}(b)x\).
Note that the squares of such dressed monopole operators contribute to the index at order \(x^{2}\); therefore, the term \((9-3)x^{2}\) can be rewritten as \(\left[\chi_{[4]}^{\mathfrak{su}(2)}(b)+4-\chi_{[2]}^{\mathfrak{su}(2)}(b) \right]x^{2}\). The term \(b^{0}=1\) in \(\chi_{[4]}^{\mathfrak{su}(2)}(b)\), together with \(+4\), accounts for the 5 marginal operators, as mentioned above; the remaining positive terms are marginal operators that are dressed monopole operators. We do not see any contribution of the \({\cal N}=3\) extra SUSY-current, and so we conclude that the theory has \({\cal N}=3\) supersymmetry.

### Gluing with \(T(\mathrm{SU}(2))\) theories

Let us consider gauging with the \(T(\mathrm{SU}(2))\) theories in a similar fashion to the theories discussed in Section 2.4. In particular, we focus on the following class of theories: (3.25) where \(S\) stands for the \(T(\mathrm{SU}(2))\) theory. The \(\mathrm{SU}(2)_{i}\) global symmetry of the left \(T_{2}\) theory is diagonally gauged with the \(\mathrm{SU}(2)_{C}\) symmetry of the \(i\)-th copy of the \(T(\mathrm{SU}(2))\) theory with CS level \(k_{i}^{(1)}\) (with \(i=1,2,3\)), and the \(\mathrm{SU}(2)_{i}\) global symmetry of the right \(T_{2}\) theory is diagonally gauged with the \(\mathrm{SU}(2)_{H}\) symmetry of the \(i\)-th copy of the \(T(\mathrm{SU}(2))\) theory with CS level \(k_{i}^{(2)}\). The analogue of the ATT condition (2.65) is the following [14]: \[\frac{q_{1}}{p_{1}}+\frac{q_{2}}{p_{2}}+\frac{q_{3}}{p_{3}}=0\,\qquad \frac{q_{1}^{\prime}}{p_{1}}+\frac{q_{2}^{\prime}}{p_{2}}+\frac{q_{3}^{\prime }}{p_{3}}=0\, \tag{3.26}\] where \[\frac{p_{i}}{q_{i}}=k_{i}^{(1)}-\frac{1}{k_{i}^{(2)}}\,\qquad \frac{p_{i}}{q_{i}^{\prime}}=k_{i}^{(2)}-\frac{1}{k_{i}^{(1)}}. \tag{3.27}\] For convenience, we will also refer to (3.26) as the ATT conditions. The index of these theories is given by the following expression: \[\begin{split}&\mathcal{I}_{(3.25)}(a,n_{a};x)\\ &=\left(\frac{1}{8}\prod_{i=1}^{3}\oint\frac{dz_{i}}{2\pi iz_{i}} \right)\sum_{(m_{1},\cdots,m_{3})\in\mathbb{Z}^{3}}\left(\frac{1}{8}\prod_{i=1 }^{3}\oint\frac{df_{i}}{2\pi if_{i}}\right)\sum_{(\widehat{m}_{1},\cdots, \widehat{m}_{3})\in\mathbb{Z}^{3}}\times\\ &\quad\left(\prod_{i=1}^{3}z_{i}^{2k_{i}^{(1)}m_{i}}\mathcal{Z}_ {\mathrm{vec}}^{\mathrm{SU}(2)}(z_{i};m_{i};x)\right)\left(\prod_{i=1}^{3}f_{ i}^{2k_{i}^{(2)}\widehat{m}_{i}}\mathcal{Z}_{\mathrm{vec}}^{\mathrm{SU}(2)}(f_{i}; \widehat{m}_{i};x)\right)\times\\ &\quad\prod_{s_{1},s_{2},s_{3}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{ 1}^{s_{1}}z_{2}^{s_{2}}z_{3}^{s_{3}}a;s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}+n_{a};x )\times\\ &\quad\prod_{s_{1},s_{2},s_{3}=\pm 1}\mathcal{Z}_{\chi}^{1/2}(z_{ 1}^{s_{1}}z_{2}^{s_{2}}z_{3}^{s_{3}}a^{-1};s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}-n _{a};x)\times\\ &\quad\prod_{i=1}^{3}\mathcal{I}_{T(\mathrm{SU}(2))}(z_{i},m_{i} |f_{i},\widehat{m}_{i}|a,n_{a};x)\.\end{split} \tag{3.28}\] As before, when both conditions in (3.26) are satisfied, there is a flavour symmetry that assigns charge \(+1\) to the chiral fields of the first copy of the \(T_{2}\) theory, charge \(-2\) to the Coulomb branch moment map of \(T(\text{SU}(2))\), \(+2\) to the Higgs branch moment map of \(T(\text{SU}(2))\), and charge \(-1\) to the chiral fields of the second copy of the \(T_{2}\) theory. If one of the conditions in (3.26) is not satisfied, this symmetry is explicitly broken and we should set \(a=1\) and \(n_{a}=0\) in the above expression.
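The conditions (3.26)-(3.27) are straightforward to test mechanically for candidate CS levels. A minimal sketch (the sample levels in the last line are purely illustrative and are not taken from the text):

```python
# Sketch: test the gluing ATT conditions (3.26) with exact rational arithmetic.
from fractions import Fraction

def att_conditions(k1, k2):
    """k1, k2: the three CS levels k_i^(1) and k_i^(2), as in (3.27)."""
    sum_q_over_p = sum(1 / (Fraction(a) - Fraction(1, b)) for a, b in zip(k1, k2))
    sum_qp_over_p = sum(1 / (Fraction(b) - Fraction(1, a)) for a, b in zip(k1, k2))
    return sum_q_over_p == 0 and sum_qp_over_p == 0

# Hypothetical sample levels; this choice does not satisfy (3.26).
print(att_conditions([1, 2, 3], [3, 2, 1]))   # -> False
```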
### 't Hooft anomalies of the one-form symmetries

Similarly to (2.69), here we have six SU(2) gauge groups but a \(\mathbb{Z}_{2}^{4}\) one-form symmetry due to the screening effect of the matter of two copies of the \(T_{2}\) theory. The 't Hooft anomalies are characterised by \[\begin{split}\pi\int_{\mathcal{M}_{4}}&\left[\sum_ {r=1}^{2}\sum_{i=1}^{3}k_{i}^{(r)}\frac{\mathcal{P}(B_{i}^{(r)})}{2}+\sum_{i=1 }^{3}B_{i}^{(1)}\cup B_{i}^{(2)}\right]\\ &\text{with }\sum_{i=1}^{3}B_{i}^{(1)}=\sum_{i=1}^{3}B_{i}^{(2)}=0 \,\end{split} \tag{3.29}\] where \(B_{i}^{(r)}\) is the two-form background field associated with the SU(2) gauge group with CS level \(k_{i}^{(r)}\), with \(i=1,2,3\) and \(r=1,2\). The last constraint comes from (2.12).

### Summary of the results

For simplicity, we focus on the following theories (3.30) with various CS levels. We find the following results:

* When both conditions in (3.26) are satisfied, the IR SCFT always has enhanced \(\mathcal{N}=4\) supersymmetry.
* When either of the conditions (3.26) is not satisfied, we generally do not find an indication of supersymmetry enhancement from the index.

We report the indices as follows. (3.31) Let us first consider the cases in which both conditions in (3.26) are satisfied. The contribution of the gauge fluxes that are all zero is \[1+0x+(a^{4}+a^{-4}+2-1)x^{2}-(2a^{2}+2a^{-2})x^{3}+(a^{8}+a^{-8})x^{4}+\ldots. \tag{3.32}\] Comparing with the first two rows in the above table, we see that the contributions of the higher monopole fluxes generally appear at higher orders of \(x\). Since the coefficient of \(x\) vanishes, there is no \(\mathcal{N}=3\) flavour current. The current associated with the \(\mathrm{U}(1)_{a}\) flavour symmetry, contributing the term \(-x^{2}\), acts as the \(\mathcal{N}=3\) extra-SUSY current. The latter implies that the IR SCFTs have **enhanced \(\mathcal{N}=4\) supersymmetry**. Due to the vanishing coefficient of \(x\), the index does not satisfy the sufficient conditions to have \(\mathcal{N}\geq 5\) supersymmetry [49]. The marginal operators contributing the terms \(a^{4}+a^{-4}+2\) at order \(x^{2}\) are \[\begin{split} a^{\pm 4}:&\epsilon^{a_{1}b_{1}} \epsilon^{c_{1}d_{1}}\epsilon^{a_{2}c_{2}}\epsilon^{b_{2}d_{2}}\epsilon^{a_{3 }c_{3}}\epsilon^{b_{3}d_{3}}Q^{(I)}_{a_{1}a_{2}a_{3}}Q^{(I)}_{b_{1}b_{2}b_{3}} Q^{(I)}_{c_{1}c_{2}c_{3}}Q^{(I)}_{d_{1}d_{2}d_{3}}\,\quad I=1,2\,\\ 1:&\epsilon^{a_{1}b_{1}}\epsilon^{\widehat{c}_{1} \widehat{d}_{1}}\epsilon^{a_{2}c_{2}}\epsilon^{b_{2}d_{2}} \epsilon^{a_{3}c_{3}}\epsilon^{b_{3}d_{3}}Q^{(1)}_{a_{1}a_{2}a_{3}}Q^{(1)}_{b_ {1}b_{2}b_{3}}Q^{(2)}_{\widehat{c}_{1}c_{2}c_{3}}Q^{(2)}_{\widehat{d}_{1}d_{2} d_{3}}\,\\ 1:&\epsilon^{a_{1}b_{1}}\epsilon^{\widehat{c}_{1} \widehat{d}_{1}}\epsilon^{a_{2}c_{2}}\epsilon^{b_{2}d_{2}} \epsilon^{a_{3}d_{3}}\epsilon^{b_{3}c_{3}}Q^{(1)}_{a_{1}a_{2}a_{3}}Q^{(1)}_{b_ {1}b_{2}b_{3}}Q^{(2)}_{\widehat{c}_{1}c_{2}c_{3}}Q^{(2)}_{\widehat{d}_{1}d_{2} d_{3}}\.\end{split} \tag{3.33}\] The other gauge invariant combinations with \(R\)-charge 2 are related to these combinations by the identities of the epsilon tensors or the \(F\)-term conditions. However, in the cases in which one or both of the conditions (3.26) is not satisfied, the \(\mathrm{U}(1)_{a}\) flavour symmetry is explicitly broken, and so the marginal operators listed in (3.33) may be related to each other by the \(F\)-terms (_cf_. Sections 2.3 and 3.2). In these cases, we do not see clear evidence of supersymmetry enhancement from the index.
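Before moving on, here is a small bookkeeping cross-check of the one-form symmetry counting in (3.29): the six centre \(\mathbb{Z}_{2}\) factors, subject to the two screening constraints, indeed leave a \(\mathbb{Z}_{2}^{4}\):

```python
# Count Z_2-valued background assignments (B_1^(1),...,B_3^(2)) obeying the
# two constraints sum_i B_i^(1) = sum_i B_i^(2) = 0 (mod 2) from (3.29).
from itertools import product

allowed = [B for B in product((0, 1), repeat=6)
           if sum(B[:3]) % 2 == 0 and sum(B[3:]) % 2 == 0]
assert len(allowed) == 2**4   # Z_2^4, as stated in the text
```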
## 4 Theories with \(T_{3}\) building blocks

We now consider theories whose building blocks are the 3d \(T_{3}\) theory. Let us start by summarising the important information of the \(T(\mathrm{SU}(3))\) and \(T_{3}\) theories. The \(T(\mathrm{SU}(3))\) theory has an \(\mathrm{SU}(3)_{H}\times\mathrm{SU}(3)_{C}\) global symmetry with a mixed anomaly characterised by \[\frac{2\pi}{3}\int_{\mathcal{M}_{4}}w_{2}^{H}\cup w_{2}^{C}\, \tag{4.1}\] where \(w_{2}^{H/C}\) is the second Stiefel-Whitney class which measures the obstruction to lifting the \((\mathrm{SU}(3)/\mathbb{Z}_{3})_{H/C}\) bundle to the \(\mathrm{SU}(3)_{H/C}\) bundle. The 3d \(\mathcal{N}=4\) \(T_{3}\) theory can then be constructed by gauging the diagonal \(\mathrm{SU}(3)/\mathbb{Z}_{3}\) subgroup of the \(\mathrm{SU}(3)_{H}^{3}\) symmetry coming from three copies of the \(T(\mathrm{SU}(3))\) theory. Note that the \(\mathrm{SU}(3)_{C}^{3}\) manifest flavour symmetry of the \(T_{3}\) theory gets enhanced to \(E_{6}\) in the IR. The moment map in the adjoint representation of \(E_{6}\) can be decomposed into fields in representations of the \(\mathrm{SU}(3)^{3}\) maximal subgroup as follows: \[\begin{array}{ccccccccccc}\mathbf{78}&\rightarrow&[\mathbf{8};\mathbf{1}; \mathbf{1}]&\oplus&[\mathbf{1};\mathbf{8};\mathbf{1}]&\oplus&[\mathbf{1}; \mathbf{1};\mathbf{8}]&\oplus&[\mathbf{3};\mathbf{3};\mathbf{3}]&\oplus&[ \overline{\mathbf{3}};\overline{\mathbf{3}};\overline{\mathbf{3}}]\\ &&X_{j_{1}}^{i_{1}}&&Y_{j_{2}}^{i_{2}}&&Z_{j_{3}}^{i_{3}}&&\mathcal{Q}^{i_{1} i_{2}i_{3}}&&\widetilde{\mathcal{Q}}_{i_{1}i_{2}i_{3}}\end{array}. \tag{4.2}\] They satisfy the following relations (see [62, Section 2.2] and [63, Section 5.3]): \[\begin{array}{l}\mathrm{tr}_{1}(X^{2})=\mathrm{tr}_{2}(Y^{2})=\mathrm{tr}_{ 3}(Z^{2})\equiv\mathbb{M}_{2}\,\\ \mathrm{tr}_{1}(X^{3})=\mathrm{tr}_{2}(Y^{3})=\mathrm{tr}_{3}(Z^{3})\equiv \mathbb{M}_{3}\,\\ X_{j_{1}}^{i_{1}}\mathcal{Q}^{j_{1}i_{2}i_{3}}=Y_{j_{2}}^{i_{2}}\mathcal{Q}^{ i_{1}j_{2}i_{3}}=Z_{j_{3}}^{i_{3}}\mathcal{Q}^{i_{1}i_{2}j_{3}}\,\\ X_{i_{1}}^{j_{1}}\widetilde{\mathcal{Q}}_{j_{1}i_{2}i_{3}}=Y_{i_{2}}^{j_{2}} \widetilde{\mathcal{Q}}_{i_{1}j_{2}i_{3}}=Z_{i_{3}}^{j_{3}}\widetilde{\mathcal{ Q}}_{i_{1}i_{2}j_{3}}\,\\ \mathcal{Q}^{i_{1}i_{2}i_{3}}\widetilde{\mathcal{Q}}_{j_{1}j_{2}i_{3}}=\sum_{l =0}^{3}v_{l}\sum_{m=0}^{2-l}(X^{2-l-m})_{j_{1}}^{i_{1}}(Y^{m})_{j_{2}}^{i_{2}} \,\ \ \ v_{0}=1\,\ v_{1}=0\,\ (X^{0})_{j}^{i}=(Y^{0})_{j}^{i}=\delta_{j}^{i}\,\\ \frac{1}{2}\mathcal{Q}^{i_{1}i_{2}i_{3}}\mathcal{Q}^{j_{1}j_{2}j_{3}}\epsilon_{ i_{2}j_{2}k_{2}}\epsilon_{i_{3}j_{3}k_{3}}=\widetilde{\mathcal{Q}}_{k_{1}k_{2}k_{3}} \delta_{p_{1}}^{i_{1}}X_{q_{1}}^{j_{1}}\epsilon^{p_{1}q_{1}k_{1}}\,\\ \frac{1}{2}\widetilde{\mathcal{Q}}_{i_{1}i_{2}i_{3}}\widetilde{\mathcal{Q}}_{ j_{1}j_{2}j_{3}}\epsilon^{i_{2}j_{2}k_{2}}\epsilon^{i_{3}j_{3}k_{3}}=\mathcal{Q}^{k_{1} k_{2}k_{3}}\delta_{i_{1}}^{p_{1}}X_{j_{1}}^{q_{1}}\epsilon_{p_{1}q_{1}k_{1}}\,\end{array} \tag{4.3}\] where \(\mathrm{tr}_{i}\) denotes the trace over the fundamental representation of the \(\mathrm{SU}(3)_{i}\) symmetry (with \(i=1,2,3\)) of the \(T_{3}\) theory.

### One \(T_{3}\) building block

The theory of our interest is obtained by gauging each \(\mathrm{SU}(3)_{C}\) factor of the \(\mathrm{SU}(3)_{C}^{3}\) symmetry with CS levels \(k_{1}\), \(k_{2}\) and \(k_{3}\).
Similarly to (2.1), we denote this theory by \[\text{[quiver diagram: the $T_{3}$ theory with its three $\mathrm{SU}(3)$ flavour factors gauged at CS levels $k_{1},k_{2},k_{3}$]} \tag{4.4}\]

### 't Hooft anomalies of the one-form symmetry

Since the faithful manifest flavour symmetry of the \(T_{3}\) theory is \(\mathrm{SU}(3)^{3}/(\mathbb{Z}_{3}\times\mathbb{Z}_{3})\) [44, (4.40)], it follows that theory (4.4) has a \(\mathbb{Z}_{3}^{2}\) one-form symmetry. The 't Hooft anomaly of the one-form symmetry in theory (4.4) with CS levels \((k_{1},k_{2},k_{3})\) is characterised by the 4d anomaly theory whose action is [45] \[\frac{2\pi}{3}\int_{\mathcal{M}_{4}}\sum_{i=1}^{3}k_{i}\frac{\mathcal{P}(w_{i} ^{(2)})}{2}\,\quad\text{with}\quad\sum_{i=1}^{3}w_{i}^{(2)}=0\, \tag{4.5}\] where \(w_{i}^{(2)}\) is the two-form background field for the \(\mathbb{Z}_{3}\) one-form symmetry arising from the \(\mathrm{SU}(3)_{i}\) gauge group of (4.4).

### Superconformal indices

The index of the \(T(\mathrm{SU}(3))\) theory is given by \[\begin{split}&\mathcal{I}_{T(\mathrm{SU}(3))}(\mathbf{w},\mathbf{n}|\mathbf{f}, \mathbf{m}|a,n_{a};x)\\ &=\frac{1}{2!}\sum_{h\in\mathbb{Z}+\mathbf{\epsilon}(\mathbf{m})}\oint \frac{du}{2\pi iu}w_{1}^{h}u^{n_{1}}\sum_{l_{1},l_{2}\in\mathbb{Z}+\mathbf{ \epsilon}(\mathbf{m})}\oint\left(\prod_{\alpha=1}^{2}\frac{dz_{\alpha}}{2\pi iz_{ \alpha}}z_{\alpha}^{n_{2}}\right)w_{2}^{l_{1}+l_{2}}\times\\ &\qquad\mathcal{Z}_{\mathrm{vec}}^{\mathrm{U}(2)}(\{z_{1},z_{2} \};\{l_{1},l_{2}\};x)\times\prod_{\alpha=1}^{2}\prod_{s=\pm 1}\mathcal{Z}_{ \chi}^{\frac{1}{2}}\left(a(uz_{\alpha}^{-1})^{s};s(h-l_{\alpha})+n_{a};x \right)\times\\ &\qquad\prod_{i=1}^{3}\prod_{\alpha=1}^{2}\prod_{s=\pm 1} \mathcal{Z}_{\chi}^{\frac{1}{2}}\left(a(z_{\alpha}f_{i}^{-1})^{s};s(l_{\alpha} -m_{i})+n_{a};x\right)\,\end{split} \tag{4.6}\] where \((\mathbf{w},\mathbf{n})\), \((\mathbf{f},\mathbf{m})\) and \((a,n_{a})\) are (fugacities, background fluxes) for the topological, flavour, and axial symmetries respectively. In the above, \(\mathbf{\epsilon}(\mathbf{m})\) denotes the fractional part of the background fluxes \(m_{i}\). The \(\mathrm{U}(N)\) vector multiplet contribution is \[\mathcal{Z}_{\mathrm{vec}}^{\mathrm{U}(N)}\left(\mathbf{z};\mathbf{n};x\right)=x^{- \sum_{1\leq i<j\leq N}|n_{i}-n_{j}|}\prod_{1\leq i\neq j\leq N}(1-(-1)^{n_{i}- n_{j}}x^{|n_{i}-n_{j}|}z_{i}z_{j}^{-1}).
\tag{4.7}\] Note that the index of the \(T(\mathrm{SU}(3))\) theory is invariant under the mirror symmetry in the following sense: \[\begin{split}&\widehat{\mathcal{I}}_{T(\mathrm{SU}(3))}(\{w_{1},w_{2} \},\{n_{1},n_{2}\}|\{f_{1},f_{2}\},\{m_{1},m_{2}\}|a,n_{a};x)\\ &=\widehat{\mathcal{I}}_{T(\mathrm{SU}(3))}(\{f_{1},f_{2}\},\{m_ {1},m_{2}\}|\{w_{1},w_{2}\},\{n_{1},n_{2}\}|a^{-1},-n_{a};x)\,\end{split} \tag{4.8}\] where we have defined \[\begin{split}&\widehat{\mathcal{I}}_{T(\mathrm{SU}(3))}(\{w_{1},w_{2} \},\{n_{1},n_{2}\}|\{f_{1},f_{2}\},\{m_{1},m_{2}\}|a,n_{a};x)\\ &:=\mathcal{I}_{T(\mathrm{SU}(3))}(\{w_{1}w_{2}^{-1},w_{1}^{-2}w_ {2}^{-1}\},\{n_{1}-n_{2},-2n_{1}-n_{2}\}|\\ &\qquad\qquad\qquad\qquad\{f_{1},f_{2},f_{1}^{-1}f_{2}^{-1}\},\{ m_{1},m_{2},-m_{1}-m_{2}\}|a,n_{a};x)\.\end{split} \tag{4.9}\] The index of the \(T_{3}\) theory is therefore \[\begin{split}&\mathcal{I}_{T_{3}}(\mathbf{w}^{(1)},\mathbf{n}^{(1)}|\mathbf{w}^ {(2)},\mathbf{n}^{(2)}|\mathbf{w}^{(3)},\mathbf{n}^{(3)}|a,n_{a};x)\\ &=\frac{1}{3!}\sum_{r=0}^{2}\ \sum_{m_{1},m_{2}\in\mathbb{Z}+\frac{r}{ 3}}\ \oint\left(\prod_{\alpha=1}^{2}\frac{df_{\alpha}}{2\pi if_{\alpha}}\right) \mathcal{Z}_{\rm vec}^{\rm SU(3)}\left(\mathbf{f};\mathbf{m};x\right)\times\\ &\qquad\prod_{I=1}^{3}\widehat{\mathcal{I}}_{T({\rm SU(3)})}(\mathbf{w }^{(I)},\mathbf{n}^{(I)}|\mathbf{f},\mathbf{m}|a,n_{a};x)\,\end{split} \tag{4.10}\] where the \({\rm SU(3)}\) vector multiplet contribution is given by \[\mathcal{Z}_{\rm vec}^{\rm SU(3)}\left(\{z_{1},z_{2}\};\{n_{1},n_{2}\};x \right)=\mathcal{Z}_{\rm vec}^{\rm U(3)}\left(\{z_{1},z_{2},z_{1}^{-1}z_{2}^ {-1}\};\{n_{1},n_{2},-n_{1}-n_{2}\};x\right). \tag{4.11}\] The index of the theory of our interest (4.4) is then \[\begin{split}&\mathcal{I}_{(4.4)}(a,n_{a};x)\\ &=\frac{1}{(3!)^{3}}\prod_{i=1}^{3}\sum_{n_{1}^{(i)},n_{2}^{(i)} \in\mathbb{Z}}\oint\frac{dw_{1}^{(i)}}{2\pi iw_{1}^{(i)}}\frac{dw_{2}^{(i)}}{2 \pi iw_{2}^{(i)}}\ (w_{1}^{(i)})^{k_{i}(2n_{1}^{(i)}+n_{2}^{(i)})}(w_{2}^{(i)})^{k_{i}(n_{1}^{( i)}+2n_{2}^{(i)})}\times\\ &\qquad\left[\prod_{i=1}^{3}\mathcal{Z}_{\rm vec}^{\rm SU(3)} \left(\mathbf{w}^{(i)};\mathbf{n}^{(i)};x\right)\right]\times\mathcal{I}_{T_{3}}(\mathbf{ w}^{(1)},\mathbf{n}^{(1)}|\mathbf{w}^{(2)},\mathbf{n}^{(2)}|\mathbf{w}^{(3)},\mathbf{n}^{(3)}|a,n_{a};x )\.\end{split} \tag{4.12}\] As before, if the ATT condition (2.6) is not satisfied, we set \(a=1\) and \(n_{a}=0\). Due to the technicality of the computation, let us discuss the results only in certain cases. We first focus on the theories that satisfy the ATT condition and we set \(n_{a}=0\). The contribution from the fluxes \(n_{1}^{(i)},n_{2}^{(i)}=0\) for all \(i=1,2,3\) is \[1+0x+(a^{4}-1)x^{2}+(a^{6}-a^{2}+a^{-2}+a^{-6})x^{3}+(a^{8}-1-a^{-4}+a^{-8})x^ {4}+\ldots. \tag{4.13}\] This index holds when the CS levels are sufficiently high so that the contributions from non-zero fluxes \(n_{1}^{(i)},n_{2}^{(i)}\) appear at a higher order.16 The term \(0x\) implies that there is no \(\mathcal{N}=3\) flavour symmetry current. Therefore, the term \(-x^{2}\) indicates that there is one extra SUSY-current, implying that the IR SCFT has enhanced \(\mathcal{N}=4\) supersymmetry. The term \(a^{4}x^{2}\) corresponds to the marginal operator \(\mathbb{M}_{2}\), defined in (4.3). When the ATT condition is not satisfied, we set \(a=1\) in (4.13). In this case, the index does not provide any evidence for supersymmetry enhancement.
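To make the last statement concrete, one can evaluate the zero-flux contribution (4.13) at \(a=1\) with a couple of lines of sympy (a minimal sketch):

```python
# Sketch: evaluating the zero-flux contribution (4.13) at a = 1, the case in
# which the ATT condition fails and the fugacity must be switched off.
import sympy as sp

a, x = sp.symbols('a x')
I = 1 + 0*x + (a**4 - 1)*x**2 + (a**6 - a**2 + a**-2 + a**-6)*x**3 \
      + (a**8 - 1 - a**-4 + a**-8)*x**4

# At a = 1 the marginal operator a^4 x^2 cancels the extra SUSY-current -x^2,
# so the telltale -x^2 term no longer appears in the series.
print(sp.expand(I.subs(a, 1)))   # -> 2*x**3 + 1 (up to this order)
```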
Footnote 16: For example, the index for theory (4.4) with CS levels \((-1,2,2)\) is \(1+0x+(a^{4}-1)x^{2}+(a^{6}-a^{2}+a^{-2}-3a^{-6})x^{3}+(a^{8}-1-5a^{-4}+2a^{-8} )x^{4}+\ldots\). On the other hand, for CS levels \((-3,6,6)\), the index up to order \(x^{4}\) is given by (4.13).

### Two \(T_{3}\) building blocks

Similarly to (3.1), we can couple two copies of the \(T_{3}\) theory together by gauging a diagonal subgroup of the two \(\text{SU}(3)_{i}\) flavour symmetries (with \(i=1,2,3\)), belonging to different copies of the \(T_{3}\) theories, with CS levels \(k_{i}\): \[\text{[quiver diagram: two $T_{3}$ blocks joined by three $\mathrm{SU}(3)$ gauge nodes with CS levels $k_{1},k_{2},k_{3}$]} \tag{4.14}\] This theory has a \(\mathbb{Z}_{3}^{2}\) one-form symmetry, whose 't Hooft anomaly is given by (4.5). The index of this theory is given by \[\begin{split}&\mathcal{I}_{(4.14)}(a,n_{a};x)\\ &=\frac{1}{(3!)^{3}}\prod_{i=1}^{3}\sum_{n_{1}^{(i)},n_{2}^{(i)} \in\mathbb{Z}}\oint\frac{dw_{1}^{(i)}}{2\pi iw_{1}^{(i)}}\frac{dw_{2}^{(i)}}{2 \pi iw_{2}^{(i)}}\ (w_{1}^{(i)})^{k_{i}(2n_{1}^{(i)}+n_{2}^{(i)})}(w_{2}^{(i)})^{k_{i}(n_{1}^{(i) }+2n_{2}^{(i)})}\times\\ &\quad\left[\prod_{i=1}^{3}\mathcal{Z}_{\text{vec}}^{\text{SU}(3 )}\left(\mathbf{w}^{(i)};\mathbf{n}^{(i)};x\right)\right]\prod_{s=\pm 1}\mathcal{I}_{T_{3}}(\mathbf{w}^{(1)},\mathbf{n}^{(1)}|\mathbf{w}^{(2)},\mathbf{n}^{(2)}|\mathbf{w}^{(3)},\mathbf{n}^{(3)}|a^{s},sn_{a} ;x)\.\end{split} \tag{4.15}\] Once again, the fugacity \(a\) and background magnetic flux \(n_{a}\) for the flavour symmetry should be set to \(1\) and \(0\) respectively if the ATT condition (2.6) is not satisfied. Due to the technicality of the computation, we set \(n_{a}=0\) and we report only the contributions of the zero gauge fluxes \(n_{1}^{(i)},n_{2}^{(i)}=0\): \[1+0x+\big{(}a^{4}+a^{-4}+4-1\big{)}x^{2}+2\big{(}a^{6}+a^{-6}\big{)}x^{3}+ \big{[}2\big{(}a^{8}+a^{-8}\big{)}+3\big{]}x^{4}+\ldots. \tag{4.16}\] Note that this is the index of theory (4.14) with sufficiently large CS levels \(k_{1,2,3}\). As before, the term \(0x\) indicates that there is no \(\mathcal{N}=3\) flavour symmetry current, and so the term \(-1x^{2}\) indicates that there is one extra SUSY-current. The IR SCFT indeed has enhanced \(\mathcal{N}=4\) supersymmetry. The positive terms at order \(x^{2}\) correspond to the following marginal operators: \[\begin{split} a^{\pm 4}:&\mathbb{M}_{2}^{(1)}\,\ \mathbb{M}_{2}^{(2)}\,\\ 4:&\text{tr}_{1}\big{(}X^{(1)}X^{(2)}\big{)}\,\ \text{tr}_{2}\big{(}Y^{(1)}Y^{(2)}\big{)}\,\ \text{tr}_{3}\big{(}Z^{(1)}Z^{(2)}\big{)}\ \text{and}\ \mathcal{Q}^{(1)i_{1}i_{2}i_{3}}\widetilde{\mathcal{Q}}_{i_{1}i_{2}i_{3}}^{(2) }\,\end{split} \tag{4.17}\] where we have used the same notation as in (4.3) with an extra superscript \((I)\) such that \(I=1,2\) to denote the \(I\)-th copy of the \(T_{3}\) theory.

## Acknowledgments

We express our gratitude to Matteo Sacchi and Alessandro Tomasiello for a number of useful discussions and for carefully reading through the manuscript as well as providing us with insightful comments. N.M. thanks the visiting research fellowship of the CNRS and the LPTENS, ENS Paris, where part of this project was conducted.
## Appendix A Theories with four \(T_{2}\) building blocks In this appendix, we consider theories obtained by gauging four copies of the \(T_{2}\) theory in the way depicted below: (A.1) where each line with label \(k\) denotes the gauging with CS level \(k\) of the diagonal \(\mathrm{SU}(2)\) subgroup of the \(\mathrm{SU}(2)\times\mathrm{SU}(2)\) flavour symmetry belonging to a pair of \(T_{2}\) theories. Theory (A.1) admits an _equivalent_ quiver description in terms of the \(\mathrm{USp}(2)_{-k}\times\mathrm{Spin}(4)_{2k}\times\mathrm{USp}(2)_{-k} \times\mathrm{Spin}(4)_{2k}\) circular quiver with a bifundamental half-hypermultiplet corresponding to each line between adjacent gauge nodes. Let us compute the index of our theory, which is given by \[\mathcal{I}_{(A.1)}\left(a,n_{a};x\right)\] \[=\frac{1}{64}\,\sum_{(m_{1},\dots,m_{6})\in\mathbb{Z}^{6}}\,\oint \left(\prod_{b=1}^{6}\frac{dz_{b}}{2\pi iz_{b}}\right)\times\] \[\qquad z_{1}^{-2km_{1}}z_{2}^{4km_{2}}z_{3}^{4km_{3}}z_{4}^{-2km_ {4}}z_{5}^{4km_{5}}z_{6}^{4km_{6}}\prod_{b=1}^{6}\mathcal{Z}_{\rm vec}^{{\rm SU }(2)}\left(z_{b};m_{b};x\right)\prod_{s_{1},s_{2},s_{3}=\pm 1}\times\] (A.3) \[\left[\mathcal{Z}_{\chi}^{\frac{1}{2}}\left(z_{1}^{s_{1}}z_{2}^{s _{2}}z_{3}^{s_{3}}a;s_{1}m_{1}+s_{2}m_{2}+s_{3}m_{3}+n_{a};x\right)\mathcal{Z} _{\chi}^{\frac{1}{2}}\left(z_{2}^{s_{1}}z_{3}^{s_{2}}z_{4}^{s_{3}}a^{-1};s_{1} m_{2}+s_{2}m_{3}+s_{3}m_{4}-n_{a};x\right)\times\right.\] \[\left.\mathcal{Z}_{\chi}^{\frac{1}{2}}\left(z_{4}^{s_{1}}z_{5}^{ s_{2}}z_{6}^{s_{3}}a;s_{1}m_{4}+s_{2}m_{5}+s_{3}m_{6}+n_{a};x\right)\mathcal{Z}_{ \chi}^{\frac{1}{2}}\left(z_{5}^{s_{1}}z_{6}^{s_{2}}z_{1}^{s_{3}}a^{-1};s_{1}m_ {5}+s_{2}m_{6}+s_{3}m_{1}-n_{a};x\right)\right]\,\] where \(a\) and \(n_{a}\) are the fugacity and background magnetic flux for the flavour symmetry that assigns charge \(+1\) to the chiral fields of theories \(T^{(1)}\) and \(T^{(3)}\) and \(-1\) to those of theories \(T^{(2)}\) and \(T^{(4)}\). For simplicity, we will set \(n_{a}=0\) upon computing the series expansion of the index. On the other hand, the index of theory (A.2) can be obtained starting from the one of the \({\rm USp}(2)_{-k}\times{\rm SO}(4)_{2k}\times{\rm USp}(2)_{-k}\times{\rm SO}(4 )_{2k}\) circular quiver theory. 
\[\mathcal{I}_{(A.4)}(\zeta_{1},\zeta_{2},a;x)\] \[=\frac{1}{64}\,\sum_{\mathfrak{m}_{1},\dots,\mathfrak{m}_{4}, \mathfrak{n}_{1},\mathfrak{n}_{2}\in\mathbb{Z}}\,\oint\left(\prod_{b=1}^{4} \frac{dv_{b}}{2\pi iv_{b}}v_{b}^{2k\mathfrak{m}_{b}}\right)\zeta_{ 1}^{\mathfrak{m}_{1}+\mathfrak{m}_{2}}\zeta_{2}^{\mathfrak{m}_{3}+\mathfrak{m }_{4}}\oint\left(\prod_{b=1}^{2}\frac{du_{b}}{2\pi iu_{b}}u_{b}^{-2k\mathfrak{n}_{b}}\right)\times\] \[\qquad\mathcal{Z}_{\rm vec}^{{\rm SO}(4)}\left(v_{1},v_{2}; \mathfrak{m}_{1},\mathfrak{m}_{2};x\right)\mathcal{Z}_{\rm vec}^{{\rm USp}(2)} \left(u_{1};\mathfrak{n}_{1};x\right)\times\] \[\qquad\mathcal{Z}_{\rm vec}^{{\rm SO}(4)}\left(v_{3},v_{4}; \mathfrak{m}_{3},\mathfrak{m}_{4};x\right)\mathcal{Z}_{\rm vec}^{{\rm USp}(2)} \left(u_{2};\mathfrak{n}_{2};x\right)\times\] (A.5) \[\prod_{b=1}^{2}\,\prod_{s_{1},s_{2}=\pm 1}\mathcal{Z}_{\chi}^{\frac{1}{2}} \left(v_{b}^{s_{1}}u_{1}^{s_{2}}a;s_{1}\mathfrak{m}_{b}+s_{2}\mathfrak{n}_{1}; x\right)\mathcal{Z}_{\chi}^{\frac{1}{2}}\left(v_{b}^{s_{1}}u_{2}^{s_{2}}a^{-1};s_{1} \mathfrak{m}_{b}+s_{2}\mathfrak{n}_{2};x\right)\times\] \[\prod_{b=3}^{4}\,\prod_{s_{1},s_{2}=\pm 1}\mathcal{Z}_{\chi}^{\frac{1}{2}} \left(v_{b}^{s_{1}}u_{1}^{s_{2}}a^{-1};s_{1}\mathfrak{m}_{b}+s_{2}\mathfrak{n}_{ 1};x\right)\mathcal{Z}_{\chi}^{\frac{1}{2}}\left(v_{b}^{s_{1}}u_{2}^{s_{2}}a;s _{1}\mathfrak{m}_{b}+s_{2}\mathfrak{n}_{2};x\right)\,,\] where \(\zeta_{1}\) and \(\zeta_{2}\) are the fugacities associated with the \(\mathbb{Z}_{2}\) zero-form magnetic symmetries of the two \({\rm SO}(4)_{2k}\) gauge factors, and the fugacities for the zero-form charge conjugation symmetries are set to unity. In order to obtain the index of the circular quiver (A.2) with Spin gauge groups, we have to sum over both \(\zeta_{1}=\pm 1\) and \(\zeta_{2}=\pm 1\) and divide by four: \[\mathcal{I}_{(A.2)}\left(a;x\right)=\frac{1}{4}\sum_{\zeta_{1},\zeta_{2}=\pm 1 }\mathcal{I}_{(A.4)}\left(\zeta_{1},\zeta_{2},a;x\right).\] (A.6) It is easy to check that the indices (A.3) and (A.6) are equal if we perform the following map between the gauge fugacities and magnetic fluxes of the two theories: \[\begin{array}{ll}z_{1}=u_{1}\,&z_{2}^{2}=v_{1}v_{2}\,&z_{3}^{2}=v_{1}v_{2}^{-1}\,\\ z_{4}=u_{2}\,&z_{5}^{2}=v_{3}v_{4}\,&z_{6}^{2}=v_{3}v_{4}^{-1}\,\\ m_{1}=\mathfrak{n}_{1}\,&2m_{2}=\mathfrak{m}_{1}+\mathfrak{m}_{2}\,&2m_{3}= \mathfrak{m}_{1}-\mathfrak{m}_{2}\,\\ m_{4}=\mathfrak{n}_{2}\,&2m_{5}=\mathfrak{m}_{3}+\mathfrak{m}_{4}\,&2m_{6}= \mathfrak{m}_{3}-\mathfrak{m}_{4}\.\end{array}\] (A.7) It can now be checked that the IR SCFT in question exhibits \(\mathcal{N}=4\) supersymmetry enhancement, as expected from the general prescription described in the main text, by computing the index as a series expansion in the variable \(x\). We report the results up to order \(x^{4}\) for various values of \(k\): \[\begin{array}{ll}k=1:&\mathcal{I}_{(A.1)}(a;x)=\mathcal{I}_{(A.2)}(a;x)\\ &=1+\big{(}2a^{4}+2a^{-4}+3\big{)}x^{2}-4\big{(}a^{2}+a^{-2}\big{)}x^{3}+\\ &\qquad\big{(}6a^{8}+3a^{4}+3a^{-4}+6a^{-8}+7\big{)}x^{4}+\ldots\,\\ k=2:&\mathcal{I}_{(A.1)}(a;x)=\mathcal{I}_{(A.2)}(a;x)\\ &=1+\big{(}2a^{4}+2a^{-4}+3\big{)}x^{2}-4\big{(}a^{2}+a^{-2}\big{)}x^{3}+\\ &\qquad\big{(}3a^{8}+2a^{4}+2a^{-4}+3a^{-8}+4\big{)}x^{4}+\ldots\.\end{array}\] (A.8) Note that for \(k\geq 2\) the indices differ at a higher order than \(x^{4}\) in the expansion. We notice that the coefficient of \(x\) vanishes, meaning that there is no \(\mathcal{N}=3\) flavour current.
The coefficient of \(x^{2}\) should be written as \[2a^{4}+2a^{-4}+4-1\,\] (A.9) where the term \(-1\) is the contribution of the \(\mathcal{N}=3\) extra SUSY-current that leads to \(\mathcal{N}=4\) supersymmetry enhancement. The marginal operators, contributing the positive terms \(2a^{4}+2a^{-4}+4\), can be listed as follows: \[\begin{array}{ll}2a^{4}:&\,\mathrm{tr}(\mu^{(1)\,2})\ \mathrm{and}\ \mathrm{tr}(\mu^{(3)\,2})\,\\ 2a^{-4}:&\,\mathrm{tr}(\mu^{(2)\,2})\ \mathrm{and}\ \mathrm{tr}(\mu^{(4)\,2})\,\\ 4:&\,\mathrm{gauge\ invariant\ combinations\ of\ two\ chiral\ fields}\ Q^{(I)}_{i_{I}j_{I}k_{I}}\ \mathrm{of}\ T^{(I)}_{2}\\ &\qquad\mathrm{and\ two\ chiral\ fields}\ Q^{(J)}_{i_{J}j_{J}k_{J}}\ \mathrm{of}\ T^{(J)}_{2}\,\ \mathrm{with}\ I=1,3\ \mathrm{and}\ J=2,4\.\end{array}\] (A.10) We can gauge the one-form symmetry of the theory (A.1) to obtain the \(SU(2)_{-k}\times\left[SU(2)_{2k}\times SU(2)_{2k}\right]/\mathbb{Z}_{2}\times SU(2) _{-k}\times\left[SU(2)_{2k}\times SU(2)_{2k}\right]/\mathbb{Z}_{2}\) gauge theory, as depicted below. (A.11) It turns out that this theory is equivalent to the circular quiver theory (A.4). The index of theory (A.11) can be obtained from (A.3) by replacing the summation as \[\sum_{(m_{1},\ldots,m_{6})\in\mathbb{Z}^{6}}\quad\longrightarrow\quad\sum_{p=0 }^{1}\zeta_{1}{}^{p}\,\sum_{p^{\prime}=0}^{1}\zeta_{2}{}^{p^{\prime}}\sum_{m_{1 },m_{4}\in\mathbb{Z}^{2}}\,\sum_{m_{2},m_{3}\in\left(\mathbb{Z}+\frac{p}{2} \right)^{2}}\,\sum_{m_{5},m_{6}\in\left(\mathbb{Z}+\frac{p^{\prime}}{2}\right) ^{2}}\,,\] (A.12) where \(\zeta_{1}\) and \(\zeta_{2}\) are the fugacities of the zero-form symmetry of (A.11). The result of this procedure is equal to the index (A.5).17 Footnote 17: Indeed, by taking into account the map (A.7), \(\zeta_{1}\) and \(\zeta_{2}\) are such that \[\zeta_{1}{}^{\mathfrak{m}_{1}\pm\mathfrak{m}_{2}} =\begin{cases}1&\mathfrak{m}_{1}\pm\mathfrak{m}_{2}\in\mathbb{Z}_{ \text{even}}\longleftrightarrow m_{2},m_{3}\in\mathbb{Z}\longleftrightarrow p =0\\ \zeta_{1}&\mathfrak{m}_{1}\pm\mathfrak{m}_{2}\in\mathbb{Z}_{\text{odd}} \longleftrightarrow m_{2},m_{3}\in\frac{\mathbb{Z}_{\text{odd}}}{2} \longleftrightarrow p=1\end{cases}\,,\] \[\zeta_{2}{}^{\mathfrak{m}_{3}\pm\mathfrak{m}_{4}} =\begin{cases}1&\mathfrak{m}_{3}\pm\mathfrak{m}_{4}\in\mathbb{Z}_{ \text{even}}\longleftrightarrow m_{5},m_{6}\in\mathbb{Z}\longleftrightarrow p ^{\prime}=0\\ \zeta_{2}&\mathfrak{m}_{3}\pm\mathfrak{m}_{4}\in\mathbb{Z}_{\text{odd}} \longleftrightarrow m_{5},m_{6}\in\frac{\mathbb{Z}_{\text{odd}}}{2} \longleftrightarrow p^{\prime}=1\end{cases}\,.\] Let us report the results for various values of \(k\) up to order \(x^{4}\). 
For \(k=1\), the expansion of the index up to order \(x^{4}\) reads \[\mathcal{I}_{(A.11)}(\zeta_{1},\zeta_{2},a;x)=\mathcal{I }_{(A.4)}(\zeta_{1},\zeta_{2},a;x)\] \[k=1: 1+\big{[}(2+\zeta_{1}+\zeta_{2}+\zeta_{1}\zeta_{2})\big{(}a^{4}+ a^{-4}\big{)}+3+\zeta_{1}+\zeta_{2}+\zeta_{1}\zeta_{2}\big{]}x^{2}+\] (A.13) \[\big{[}(\zeta_{1}+\zeta_{2}+2\zeta_{1}\zeta_{2})a^{6}+(-4-\zeta_{ 1}-\zeta_{2}+2\zeta_{1}\zeta_{2})\big{(}a^{2}+a^{-2}\big{)}+\] \[(\zeta_{1}+\zeta_{2}+2\zeta_{1}\zeta_{2})a^{-6}\big{]}\,x^{3}+\] \[\big{[}(6+3\zeta_{1}+3\zeta_{2}+3\zeta_{1}\zeta_{2})a^{8}+(3-\zeta _{1}-\zeta_{2}-3\zeta_{1}\zeta_{2})\big{(}a^{4}+a^{-4}\big{)}+\] \[(6+3\zeta_{1}+3\zeta_{2}+3\zeta_{1}\zeta_{2})a^{-8}+7-3\zeta_{1} \zeta_{2}\big{]}\,x^{4}+\ldots\,\] For \(k=2,3\) we get \[k=2: 1+\big{(}2a^{4}+2a^{-4}+3\big{)}x^{2}+\big{[}(\zeta_{1}+\zeta_{2})a^ {6}-4\big{(}a^{2}+a^{-2}\big{)}+(\zeta_{1}+\zeta_{2})a^{-6}\big{]}x^{3}+\] \[\big{[}(3+\zeta_{1}+\zeta_{2}+\zeta_{1}\zeta_{2})a^{8}+(2+\zeta_{1 }\zeta_{2})\big{(}a^{4}+a^{-4}\big{)}+\] \[(3+\zeta_{1}+\zeta_{2}+\zeta_{1}\zeta_{2})a^{-8}+4+\zeta_{1}\zeta _{2}\big{]}\;x^{4}+\ldots\;,\] \[k=3: 1+\big{(}2a^{4}+2a^{-4}+3\big{)}x^{2}-4\big{(}a^{2}+a^{-2}\big{)} x^{3}+\] \[\big{[}(3+\zeta_{1}+\zeta_{2})a^{8}+2\big{(}a^{4}+a^{-4}\big{)}+( 3+\zeta_{1}+\zeta_{2})a^{-8}+4\big{]}x^{4}+\ldots\;.\] For \(k\geq 4\) the fugacities \(\zeta_{1}\) and \(\zeta_{2}\) appear at a higher order than \(x^{4}\). From the expansion of the indices, it is clear that we have two independent \(\mathbb{Z}_{2}\) fugacities \(\zeta_{1}\) and \(\zeta_{2}\) satisfying \({\zeta_{1}}^{2}=\zeta_{2}^{2}=1\). Hence, the elements of \(\{1,\zeta_{1},\zeta_{2},\zeta_{1}\zeta_{2}\}\) corresponding to the possible choices of \(p=\{0,1\}\) and \(p^{\prime}=\{0,1\}\) in (A.12) indicate the presence of the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) zero-form symmetry in theory (A.4) = (A.11), and hence the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) one-form symmetry of theory (A.1) = (A.2). From the point of view of the \(\mathrm{USp}(2)_{-k}\times\mathrm{SO}(4)_{2k}\times\mathrm{USp}(2)_{-k}\times \mathrm{SO}(4)_{2k}\) circular quiver theory (A.4), it is expected, by a similar argument to the one that leads to (2.32), that there is a non-anomalous \(\mathbb{Z}_{2}\) one-form symmetry for \(k\) even. In principle, one can further gauge this one-form symmetry, generalising what we did in (2.36).

## Appendix B Mixed gauge/zero-form monopole operators

In this appendix, we examine the mixed gauge/zero-form monopole operators with _fractional_ magnetic flux for both the Cartan subalgebra of the gauge group and the \(\mathrm{U}(1)_{a}\) flavour symmetry group [58; 64] for theories (2.1) with one \(T_{2}\) building block such that the ATT condition (2.6) is satisfied. We adopt the same method as described in [36] (see also [40]) which relies on the supersymmetric index. In particular, in (2.7), we take \(n_{a}=1/2\) and examine the contribution of the gauge fluxes \(m_{1,2,3}\) satisfying the Dirac quantisation condition, namely \(\pm m_{1}\pm m_{2}\pm m_{3}+n_{a}\in\mathbb{Z}\), i.e. \(\sum_{i}m_{i}\in\mathbb{Z}+\frac{1}{2}\). Let us report the result for various theories with one \(T_{2}\) building block below.
\begin{tabular}{|c|c|c|} \hline CS levels & Gauge fluxes & Contribution to the index (2.7) \\ & \((m_{1},m_{2},m_{3})\) & \\ \hline \((-1,2,2)\) & \(\left(\frac{1}{2},0,0\right)\) & \(0\) \\ & \(\left(0,\frac{1}{2},0\right)\) & \(X_{2}\equiv\left(\frac{1}{2}+\frac{1}{2a^{2}}\right)x+\left(\frac{a^{4}}{2}-1 -\frac{1}{a^{2}}-\frac{1}{2a^{4}}\right)x^{5}+O(x^{7})\) \\ & \(\left(0,0,\frac{1}{2}\right)\) & \(X_{2}\) \\ & \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) & \(0\) \\ \hline \hline \((-2,4,4)\) & \(\left(\frac{1}{2},0,0\right)\) & \(X_{-2}\equiv\frac{x}{2a^{2}}+\left(\frac{a^{2}}{2}-\frac{1}{a^{2}}-\frac{1}{2 a^{4}}\right)x^{5}+O(x^{7})\) \\ & \(\left(0,\frac{1}{2},0\right)\) & \(X_{4}\equiv\left(\frac{a^{2}}{2}+\frac{1}{2}\right)x^{2}+\left(\frac{1}{2}+ \frac{1}{2a^{2}}\right)x^{4}+\left(\frac{a^{6}}{2}-a^{2}-\frac{1}{2}\right)x^{ 6}+O(x^{7})\) \\ & \(\left(0,0,\frac{1}{2}\right)\) & \(X_{4}\) \\ & \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) & \(\left(\frac{a^{3}}{8}+\frac{a}{4}+\frac{1}{8a}\right)x^{3/2}+\left(\frac{1}{8a }+\frac{1}{8a^{3}}\right)x^{7/2}+O(x^{11/2})\) \\ & & \(=\frac{1}{4}\left[\{x^{1/2}(a^{5}+a^{3})+\ldots\}X_{-2}+\{x^{-1/2}a^{-1}+ \ldots\}X_{4}\right]\) \\ \hline \hline \((-2,3,6)\) & \(\left(\frac{1}{2},0,0\right)\) & \(X_{-2}\) \\ & \(\left(0,\frac{1}{2},0\right)\) & \(0\) \\ & \(\left(0,0,\frac{1}{2}\right)\) & \(X_{6}\equiv\left(\frac{a^{4}}{2}+\frac{a^{2}}{2}\right)x^{3}+\left(\frac{a^{2 }}{2}+\frac{1}{2}\right)x^{5}+\left(\frac{a^{8}}{2}-a^{4}+\frac{1}{2}\right)x^ {7}+O(x^{9})\) \\ & \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) & \(0\) \\ \hline \hline \((-4,6,12)\) & \(\left(\frac{1}{2},0,0\right)\) & \(X_{-4}\equiv\left(\frac{1}{2a^{2}}+\frac{1}{2}\right)x^{4}+\left(\frac{1}{2a^{ 4}}+\frac{1}{2a^{2}}\right)x^{6}+\left(\frac{1}{2a^{6}}+\frac{a^{4}}{2}-\frac {3}{2a^{2}}-\frac{3}{2}\right)x^{8}+O(x^{10})\) \\ & \(\left(0,\frac{1}{2},0\right)\) & \(X_{6}\) \\ & \(\left(0,0,\frac{1}{2}\right)\) & \(X_{12}\equiv\left(\frac{a^{10}}{2}+\frac{a^{8}}{2}\right)x^{6}+\left(\frac{a ^{8}}{2}+\frac{a^{6}}{2}\right)x^{8}+\left(\frac{a^{14}}{2}-a^{10}+\frac{a^{6 }}{2}\right)x^{10}+O(x^{12})\) \\ & \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) & \(\left(\frac{a^{11}}{8}+\frac{a^{9}}{4}+\frac{a^{7}}{8}\right)x^{11/2}+\left( \frac{a^{9}}{8}+\frac{a^{7}}{4}+\frac{a^{5}}{8}\right)x^{15/2}+O(x^{19/2})\) \\ \hline \end{tabular} The presence of such mixed gauge/zero-form monopole operators potentially implies a mixed anomaly between the \(\mathbb{Z}_{2}\) one-form symmetry associated to the centre of the SU(2) gauge group and the U(1)\({}_{a}\) symmetry [58, 64]. Observe that this occurs only when the CS level of that SU(2) gauge group is even, but not for odd CS levels. Such a mixed anomaly is characterised by the following 4d anomaly action \(\pi\int_{\mathcal{M}_{4}}w_{i}^{(2)}\cup c_{1}^{a}\) where \(w_{i}^{(2)}\) is the two-form background field for the \(\mathbb{Z}_{2}\) one-form symmetry arising from the centre of the SU(2)\({}_{i}\) gauge group (with \(\sum_{i=1}^{3}w_{i}^{(2)}=0\)) and \(c_{1}^{a}\) is the first Chern class of a background U(1)\({}_{a}\) flavour symmetry bundle. Let us compare the above interpretation with (2.36). Recall that in the latter we gauge the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) one-form symmetry of theory (2.1) with CS levels \((k_{1},k_{2},k_{3})=(-4,8,8)\) and obtain the dual \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) zero-form symmetry associated with the fugacities \(\zeta\) and \(s\).
Observe that only even powers of \(a\) appear in (2.36), and we do not see any indication of the mixed anomaly between either \(\mathbb{Z}_{2}\) factor in the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) zero-form symmetry and the U(1)\({}_{a}\) zero-form symmetry. A natural question that arises is whether this is in contradiction with the above interpretation regarding the mixed anomaly between the \(\mathbb{Z}_{2}\) one-form symmetry and the U(1)\({}_{a}\) flavour symmetry. We leave this for a future investigation. A similar analysis can be performed for theories (3.1) with two \(T_{2}\) building blocks such that the ATT condition (2.6) is satisfied. The only difference is that in the present case the faithful flavour symmetry group is \(\mathrm{SO}(3)_{a}\). This can be seen from the discussion in the previous subsection, where only representations of \(\mathfrak{su}(2)_{a}\) with even Dynkin labels appear in the index. To determine the mixed anomaly involving the flavour symmetry, we fix \(n_{a}=1/2\) in (3.4); this amounts to turning on the second Stiefel-Whitney class \(w_{2}^{a}\) that obstructs the lift from the \(\mathrm{SO}(3)_{a}\) bundle to the \(\mathrm{SU}(2)_{a}\) bundle. The Dirac quantisation condition requires that the gauge fluxes \(m_{1,2,3}\) satisfy \(\pm m_{1}\pm m_{2}\pm m_{3}\pm n_{a}\in\mathbb{Z}\), i.e. \(\sum_{i}m_{i}\in\mathbb{Z}+\frac{1}{2}\). We tabulate the contributions of some of such gauge fluxes in certain examples below. \[\begin{array}{|c|c|c|}\hline\mathrm{CS\ levels}&\mathrm{Gauge\ fluxes}&\mathrm{Contribution\ to\ index}\ (3.4)\\ &(m_{1},m_{2},m_{3})&\\ \hline(-2,3,6)&\left(\frac{1}{2},0,0\right)&X_{-2}\equiv\left(\frac{1}{2}+ \frac{1}{2a^{2}}\right)x^{2}+\left(\frac{a^{2}}{2}+\frac{3}{2a^{2}}+\frac{1}{ 2a^{2}}\right)x^{5}+O(x^{7})\\ &\left(0,\frac{1}{2},0\right)&0\\ &\left(0,0,\frac{1}{2}\right)&X_{6}\equiv\left(\frac{a^{6}}{2}+\frac{a^{4}}{2 }\right)x^{4}+\left(-\frac{a^{4}}{2}-\frac{a^{2}}{2}\right)x^{5}+\left(\frac{a ^{4}}{2}+\frac{3a^{2}}{2}+1\right)x^{6}+O(x^{7})\\ &\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)&\\ \hline\hline(-4,6,12)&\left(\frac{1}{2},0,0\right)&X_{-4}\equiv\left(\frac{1}{2 a^{2}}+\frac{1}{2a^{4}}\right)x^{3}+\left(-\frac{1}{2}-\frac{1}{2a^{2}}\right)x^{4}+ \left(\frac{a^{2}}{2}+1+\frac{1}{2a^{2}}\right)x^{5}+O(x^{6})\\ &\left(0,\frac{1}{2},0\right)&X_{6}\\ &\left(0,0,\frac{1}{2}\right)&X_{12}\equiv\left(\frac{a^{12}}{2}+\frac{a^{10}} {2}\right)x^{7}+\left(-\frac{a^{10}}{2}-\frac{a^{4}}{2}\right)x^{5}+O(x^{9})\\ &\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)&\left(\frac{a^{14}}{8}+\frac {a^{12}}{4}+\frac{a^{10}}{8}\right)x^{7}+\left(-\frac{a^{12}}{8}-\frac{a^{10}} {4}-\frac{a^{8}}{8}\right)x^{8}+O(x^{9})\\ \hline\hline\end{array}\] (B.2) Observe that, for an even CS level, there are mixed gauge/zero-form monopole operators with _fractional_ magnetic flux for both the Cartan subalgebras of the gauge group and the \(\mathrm{SO}(3)_{a}\) flavour symmetry (i.e. \(n_{a}=\frac{1}{2}\)). However, these are absent for odd CS levels. Using the argument in [58, 64], we see that, for an even CS level, there is potentially a mixed anomaly between the \(\mathbb{Z}_{2}\) one-form symmetry arising from the centre of the \(\mathrm{SU}(2)_{i}\) gauge group (whose two-form background field is \(w_{i}^{(2)}\)) and the \(\mathrm{SO}(3)\) flavour symmetry. This is characterised by \(\pi\int_{\mathcal{M}_{4}}w_{i}^{(2)}\cup w_{2}^{a}\), with \(\sum_{i=1}^{3}w_{i}^{(2)}=0\).
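For reference, the flux quantisation used in the two tables above is easy to automate. A minimal sketch, checking for the listed fluxes that the Dirac condition with \(n_{a}=1/2\) is equivalent to \(\sum_{i}m_{i}\in\mathbb{Z}+\frac{1}{2}\):

```python
# Sketch: the Dirac quantisation condition of Appendix B with n_a = 1/2.
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
n_a = half

def admissible(m):
    # +- m1 +- m2 +- m3 + n_a must be an integer for all sign choices
    return all((s1*m[0] + s2*m[1] + s3*m[2] + n_a).denominator == 1
               for s1, s2, s3 in product((1, -1), repeat=3))

for m in [(half, 0, 0), (0, half, 0), (0, 0, half), (half, half, half)]:
    m = tuple(Fraction(v) for v in m)
    # For half-integer fluxes, admissibility is the same as sum(m) in Z + 1/2.
    assert admissible(m) == ((sum(m) - half).denominator == 1)
```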
2304.10600
A Survey of Prevent and Detect Access Control Vulnerabilities
Broken access control is one of the most common security vulnerabilities in web applications. These vulnerabilities are the major cause of many data breach incidents, which result in privacy concern and revenue loss. However, preventing and detecting access control vulnerabilities proactively in web applications could be difficult. Currently, these vulnerabilities are actively detected by bug bounty hunters post-deployment, which creates attack windows for malicious access. To solve this problem proactively requires security awareness and expertise from developers, which calls for systematic solutions. This survey targets to provide a structured overview of approaches that tackle access control vulnerabilities. It firstly discusses the unique feature of access control vulnerabilities, then studies the existing works proposed to tackle access control vulnerabilities in web applications, which span the spectrum of software development from software design and implementation, software analysis and testing, and runtime monitoring. At last we discuss the open problem in this field.
Li Zhong
2023-04-20T18:48:32Z
http://arxiv.org/abs/2304.10600v1
# A Survey of Prevent and Detect Access Control Vulnerabilities

###### Abstract.

Broken access control is one of the most common security vulnerabilities in web applications. These vulnerabilities are the major cause of many data breach incidents, which result in privacy concerns and revenue loss. However, preventing and detecting access control vulnerabilities proactively in web applications can be difficult. Currently, these vulnerabilities are actively detected by bug bounty hunters post-deployment, which creates attack windows for malicious access. Solving this problem proactively requires security awareness and expertise from developers, which calls for systematic solutions. This survey aims to provide a structured overview of approaches that tackle access control vulnerabilities. It first discusses the unique features of access control vulnerabilities, then studies the existing works proposed to tackle them in web applications, which span the spectrum of software development from software design and implementation, through software analysis and testing, to runtime monitoring. Finally, we discuss the open problems in this field.

## 1. Introduction

Nowadays web services have become an important part of our lives and store huge amounts of sensitive and private data of their users. To protect these data, web services need to maintain access control inside the system, typically implemented as authorization checks inside the application code. However, getting this right is not easy. Broken access control has been a severe problem in web services for decades. Missing authorization and improper authorization were ranked 6th and 15th respectively in the 2011 CWE/SANS report of the most dangerous software vulnerabilities (Bang et al., 2021). It has even got worse recently: in 2021, the OWASP ranked broken access control first among the top ten web application risks, with the most occurrences in real-world applications (Li et al., 2021). Several recent security incidents are also highly related to such vulnerabilities, as shown in Table 1. Unfortunately, it is still difficult to prevent and detect such vulnerabilities during the development of web applications. The difficulty comes from multiple aspects. On the one hand, modern web applications are widely written in frameworks that do not natively treat access control as a first-class concern, such as Django (Django, 2021) and Ruby on Rails (Rulyy et al., 2020). As a result, most web applications implement their access control policies in an ad-hoc manner throughout the application code. On the other hand, access control vulnerabilities are hard to test for and detect during development. Unlike other vulnerabilities, e.g. SQL injection or XSS attacks, which have explicit symptoms such as irregular SQL queries or abnormal data flows, access control vulnerabilities usually follow legal control and data flows inside the system. Whether an access is unauthorized is determined by the internal logic of a web application. These factors contribute to the surprising fact that this problem has persisted for a long time and still causes big security incidents today. As the consequences of access control vulnerabilities can be very severe, a lot of works have been proposed to tackle this problem. These works span the development process of a web application and address the problem from different aspects.
Some works help detect such vulnerabilities before systems are deployed, while others monitor runtime execution and flag suspicious accesses to defend against exploitation after deployment. Other works intervene earlier, helping developers prevent access control vulnerabilities in the first place; they either provide centralized access control frameworks (Li et al., 2021; Li et al., 2021; Li et al., 2021), or propose new access control mechanisms based on architecture design, language support, and database support.

In this survey, we provide a structured overview of the existing approaches for preventing and detecting access control vulnerabilities in web applications across the different phases of the development cycle. We mainly focus on vulnerabilities introduced by the developers of web applications whose business logic involves different roles, permissions, and resources. We hope that this survey gives a better understanding of the problem and an overview of the existing solutions proposed for it.

The rest of this survey is organized as follows. In Section 2, we begin with a description of the problem scope, definitions, and the challenges in understanding the problem. In Section 3, we provide an overview of the state-of-the-art approaches that tackle this problem in different phases of software development. In Sections 4, 5, and 6, we respectively discuss the details of tool support in software design and implementation, software analysis and testing, and runtime monitoring. In Section 7, we discuss new opportunities and open problems with regard to access control vulnerabilities.

## 2. Preliminary: Access Control and Access Control Vulnerabilities

### Access Control

In web applications, access control is the mechanism that restricts operations on resources. Software nowadays performs access control through the workflow of authentication and authorization. These two concepts are separate processes and cannot be used interchangeably, so we clarify their definitions here before discussing the definitions and features of access control vulnerabilities.

_Authentication._ Authentication is the process of verifying that someone is who they claim to be. This process can work in many ways, including passwords, one-time pins, biometric verification, etc. These authentication approaches can be combined to make cracking all of them harder, thus increasing confidentiality. Since most web applications use the stateless HTTP/HTTPS protocol, they cannot maintain login state the way the access control mechanism in UNIX/Linux systems does. To make up for this, web applications use security tokens such as cookies and session tokens to maintain the authenticated identity of the current user, avoiding the annoying process of repeated logins.

_Authorization._ Authorization is the process inside a system of deciding what access a user is granted. It typically takes place after authentication. Authorization settings in a web application are usually implemented and maintained by the organization using the system, through staff with special roles such as administrators, or by the core developers. This differs from authentication, where even normal users may have options to change their tokens/secrets.
Therefore, in a web application, the typical workflow is as follows:

* A user logs in and the server returns a generated randomized token, which is transmitted between client and server in subsequent requests as a credential. (Authentication)
* The client includes that token in its requests, identifying itself as the logged-in user. The server then decides whether the permission required by the request is satisfied. (Authorization)
* The server responds to the user with either an error page due to insufficient permission, or other information based on the user's permissions. (Authorization)

However, authentication is not a mandatory precondition for authorization. For example, an unauthenticated user could be authorized to access the public resources of a web application even before identifying himself. There is another exceptional case where the implementation of authentication and authorization is combined and simplified: for example, collaboration platforms like Overleaf and Google Docs embed a long randomized string in the URL, and anyone who obtains the URL can access the content. The authorization process is shortcut as "anyone holding the randomized token embedded in the URL is regarded as having the capability to access the resource". The management of such tokens embedded in URLs is up to the users who receive them.

### Access Control Vulnerabilities

Access control vulnerabilities enable an attacker to access web pages containing sensitive or private data and to perform unauthorized actions. They occur when the intended access control policies are not enforced inside the web application, which can end in severe consequences such as data leakage and Quality of Service (QoS) degradation, and cause financial loss to the service providers. Access control vulnerabilities fall into three categories: vertical, horizontal, and context-dependent.

* **Vertical Access Control Vulnerabilities.** These allow users to illegally access the resources of users with entirely different roles. A typical example is normal users gaining access to resources that should only be accessible to administrators.
* **Horizontal Access Control Vulnerabilities.** These occur when a user can illegally access the resources of another user with the same role. For example, in an email application, a user should only be able to access his own mail; if he can access another user's mail, that is a horizontal access control vulnerability.
* **Context-dependent Access Control Vulnerabilities.** These occur when resources should only be accessible at particular stages within a multi-step process. For example, when a customer buys an item from an e-commerce web application, he should go through the payment step before obtaining an order confirmation page. If that context dependency is broken, the result is a context-dependent access control vulnerability.

There are multiple ways to exploit such vulnerabilities. If the attacker modifies parameters inside his request to make identifiers point to other sensitive resources, this is referred to as parameter manipulation (Wang et al., 2018). If he tries to access a restricted page directly by its URL, this is referred to as forceful browsing (Kang et al., 2018). All these attacks try to find holes in the enforcement of the access control policy and bypass access control. Figure 1 shows such an example.
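To make the parameter-manipulation pattern concrete, here is a minimal sketch of a vulnerable handler and its fix. It is illustrative only: the Flask app, route, and data are hypothetical, not taken from any system surveyed here.

```python
# Hypothetical Flask handler with an identifier-based hole and its fix.
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "dev"  # placeholder secret for the sketch

APPOINTMENTS = {1: {"owner": "alice", "text": "dentist"},
                2: {"owner": "bob", "text": "board meeting"}}

@app.route("/api/appointments/<int:appt_id>", methods=["PUT"])
def update_appointment(appt_id):
    appt = APPOINTMENTS.get(appt_id)
    if appt is None:
        abort(404)
    # Without the following ownership check, any logged-in user could mutate
    # the id in the URL (parameter manipulation) and edit other users' data.
    if appt["owner"] != session.get("user"):
        abort(403)
    appt["text"] = request.json["text"]
    return appt
```

The fix is exactly the kind of per-resource authorization check that, as the following subsections show, is easy to omit when it must be repeated at every endpoint.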
In a real-world case from Open-Xchange (Beng et al., 2018), by mutating the "id" field in the payload of a PUT request, an attacker who is a normal user, indistinguishable from other users, could access private appointments belonging to other users. The vulnerability won a bug bounty award and was fixed soon after the report, which shows the importance of this kind of vulnerability.

### What Causes Access Control Vulnerabilities

Access control vulnerabilities can be introduced in different ways. By the location of their root causes, we can divide them into two types: code logic errors and misconfigurations.

| **Website** | **Description** | **Time** |
| --- | --- | --- |
| Jared and Kay Jewelers (Bened et al., 2018) | Exposed order information including customers' names, addresses, phones, emails, etc. | December 2018 |
| LifeLock (Bened et al., 2018) | Exposed customers' email addresses. | July 2018 |
| Panera Bread (Panera, 2018) | Exposed customers' names, emails, addresses, birthdays, and last four digits of credit cards. | April 2018 |
| Facebook (Bened et al., 2018) | Allowed anyone to delete other people's photos. | February 2015 |
| First American Financial (Fan et al., 2018) | Leaked more than 800 million real estate documents. | June 2021 |
| JustDial (Bened et al., 2018) | Leaked 100 million users' names, emails, phone numbers, etc. | April 2019 |
| AOL (Bened et al., 2018) | Leaked all search queries and reactions of its users. | August 2006 |

Table 1. Security incidents caused by broken access control on real-world websites

_Code Logic Error._ Since access control checks in web applications are usually implemented in an ad-hoc way, it is hard to distinguish concrete code patterns. Judging from how these errors are exploited, however, code logic errors occur in the following cases:

1. Unprotected endpoints: sensitive functionality and resources can be accessed without appropriate protection, as with administrative endpoints like 'https://example.com/admin/'. Making the URL that grants access harder to guess can help, but a cryptic URL does not guarantee security even if it is never shown to low-privileged users inside the application.
2. Identifier-based endpoints: by mutating the identifiers in requests, attackers can gain access to other, unprotected resources, as shown in Figure 1.
3. Multistage endpoints: for a process that involves multiple stages, any step that fails to validate the request sender's identity can result in a potential vulnerability. A malicious user could intercept a request and modify its parameters to perform unauthorized operations (see the sketch at the end of this subsection).

_Misconfiguration._ Web applications can also apply access control at the application platform layer to control which users may perform which HTTP methods (GET, POST, etc.) on specific URL paths. Besides, static files in a web application, usually located within the server's web root, are protected by web server configurations. Since no application-level code intercepts their handling, such static resources can be accessed directly. Vulnerabilities caused by misconfigurations are covered by existing studies on misconfiguration (Krishnan et al., 2017; Krishnan et al., 2017). In this survey we focus mainly on access control vulnerabilities caused by developers' mistakes in implementation.
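As promised above, the following sketch illustrates the multistage-endpoint case. It is a hypothetical Flask example of our own, not drawn from the surveyed systems: the confirmation step re-validates server-side state rather than trusting that the client arrived via the payment step.

```python
# Hypothetical sketch: a multistage checkout whose final step re-checks the
# prerequisite stage on the server instead of trusting the request order.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev"  # placeholder secret for the sketch

@app.route("/checkout/confirm/<order_id>")
def confirm(order_id):
    # A vulnerable version would render the confirmation unconditionally,
    # letting an attacker jump here directly via forceful browsing.
    if order_id not in session.get("paid_orders", []):
        abort(403)  # payment stage was skipped or tampered with
    return f"Order {order_id} confirmed"
```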
### Why Access Control Vulnerabilities Are Hard to Deal With

The difficulty of eliminating access control vulnerabilities from web applications is rooted in their semantics, complexity, and dynamism, and in the lack of systematic support for enforcement and testing.

_Semantics._ Unlike other vulnerabilities such as SQL injection and cross-site scripting (XSS) (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), where attackers need to craft special input to the web application, access control vulnerabilities usually manifest with no such abnormal input. They are difficult to prevent by data sanitization even though they, too, are related to data flow. Besides, the access control policies of a web application are deeply bound to its business logic, and the legality of an access is judged against the developers' intentions, without a fixed ground truth. The semantics of access control make this problem hard to tackle and call for collaboration from developers.

_Complexity._ With the development of the IT industry, today's web applications no longer fit easily into simple access control models like DAC, RBAC, or MAC. Permissions and capabilities are divided at finer granularity and are deeply bound to application logic. In popular web applications like Facebook, the access control policy can even be continuously updated by users through privacy settings (Krishnan et al., 2017; Krishnan et al., 2017), which means capabilities even on the same type of resource must be handled separately. Implementing access control mechanisms correctly is also not an easy task. As (Krishnan et al., 2017) pointed out, untrained developers may overlook underlying assumptions in their implementations. (Krishnan et al., 2017) also observes that not all online service providers follow best practice when using security tokens, sometimes failing to check token ownership. In another case, Orange CERT (Krishnan et al., 2017) used short randomized tokens that could be easily exploited. Due to unawareness and a lack of security training, it is hard for developers to enforce access control mechanisms correctly.

_Dynamism._ New objects and subjects are constantly added to an application, and interactions between users can grant or revoke permissions. The access control policy of a web application is therefore continuously updated, which leaves attackers with information that can lead to exploitation, such as resource identifiers, URL patterns, etc. Besides, one unique characteristic of web applications is that their functionality keeps changing. Old deprecated functionality leaves outdated APIs (Krishnan et al., 2017) and orphaned pages (Krishnan et al., 2017) if it does not get attention, while newly added features can be vulnerable to access control bypass without thorough testing.

_Lack of Systematic Support._ The consequences of broken access control can be very different from those of other broken logic, as we discuss in Section 2.2. Most broken code can be fixed and returned to normal use, whereas broken access control can result in irreversible data leaks that affect the victims even after patches. Due to this special attribute, tool support should lean toward proactive, pre-deployment approaches.
Current popular web frameworks, like Django, Ruby on Rails, and Vue.js, emphasize fast and smooth development of functionality. Access control is treated as part of the functionality implemented by developers rather than as an enforcement concern, and it receives no special treatment in popular web frameworks, which usually follow the software architectural patterns of MVC (Wang et al., 2017), MVP (Mak et al., 2018), or MVVM (Mak et al., 2018).

Figure 1. A request that can trigger an access control vulnerability in Open-Xchange.

Similarly, testing support for access control in web applications is insufficient. In most web applications, there is no dedicated access control testing; instead, access control tests are embedded in other functionality tests as intermediate steps. This attitude hinders thorough testing that covers all the objects, subjects, and operations inside the system, while access control vulnerabilities hide in exactly those corner cases that few tests cover.

## 3. Overview

In this survey, we discuss the state-of-the-art approaches proposed to tackle access control vulnerabilities. Since we target vulnerabilities in application code, we divide these approaches along the typical software development life cycle into stages: software design and implementation, software analysis and testing, and deployment and monitoring. Table 2 and Table 3 give an overview of these categories and the related works in each stage. We hope to help both developers and researchers understand the existing solutions in this field, and to encourage more people to fill in the blanks and push this problem forward.

### Tool Support in Software Design and Implementation

Software design and implementation is the stage when an executable software system is developed. Given the requirements, developers need to identify the components and relationships in the software and realize the design in specific languages and frameworks, ending up with executable software. In this process, design and implementation choices affect the likelihood of introducing access control vulnerabilities into the system. A series of works along this line help eliminate access control vulnerabilities at the root; by the component they target, they can be divided into support at the framework level, database level, and language level. Besides, a special component of software design and implementation, the development environment, can also help in tackling access control vulnerabilities.

_Framework Support._ As mainstream web frameworks like Django and Ruby on Rails do not treat security concerns as first-class considerations, modules and libraries have been proposed that add security goals as extensions to existing frameworks. Many previous works also note the internal drawbacks of the widely used MVC frameworks, and thus propose to enforce access control policies by providing a domain-specific language (DSL) interface alongside the data models. With this enforcement, it is possible to put all data flows under mandatory access control (MAC), which eliminates the access control vulnerabilities that omit security checks entirely. We discuss framework support for preventing access control vulnerabilities in Section 4.1.
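To fix intuition for what such framework-level centralization buys, here is a minimal sketch of the shared idea, with a hypothetical policy table of our own; the surveyed systems expose far richer policy languages.

```python
# Minimal sketch of centralized enforcement: one framework hook consults a
# single policy table, instead of ad-hoc checks scattered across handlers.
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "dev"  # placeholder secret for the sketch

# Hypothetical policy: URL prefix -> roles allowed to reach it.
POLICY = {"/admin": {"admin"},
          "/api": {"user", "admin"},
          "/": {"anonymous", "user", "admin"}}

@app.before_request
def enforce_policy():
    role = session.get("role", "anonymous")
    # The longest matching prefix decides, so /admin is not shadowed by /.
    prefix = max((p for p in POLICY if request.path.startswith(p)), key=len)
    if role not in POLICY[prefix]:
        abort(403)
```

Because the check runs before every request, forgetting it in an individual handler is no longer possible; the remaining risk moves into the policy table itself, which is far easier to audit.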
_Database Support._ As the source and destination of data operations, database-level access control support has been explored to keep access control vulnerabilities from being exploited. One direction at the DBMS layer is to take advantage of existing database features like SQL conditions and database views, while other works design special databases and middleware that enforce access control internally. The access control policy can come either from developers' specifications or from learning and inference over past queries. We discuss database support for preventing access control vulnerabilities in Section 4.2.

_Language Support._ The best way to prevent access control vulnerabilities is to avoid introducing them in the first place. In software development, one of the fundamental considerations is the programming language developers will use. Solutions at the programming language layer have therefore been proposed to enforce access control policies at this early stage. One direction is to enforce information flow control so that data flows must obey the security assertions/attributes attached to data objects. This enforcement can be achieved through programming language concepts and techniques like refinement types, logical attestation, static information flow analysis, etc. We discuss language support for preventing access control vulnerabilities in Section 4.3.

_Development Environment Support._ While all the supports discussed above require additional effort from developers to write access control policies in special languages or APIs, a more approachable way to tackle this problem is to remind developers of potential access control vulnerabilities inside the development environment. This solution is non-intrusive and interactive: it secures developer participation, which matters because access control policy is largely a semantic problem, while demanding less effort. Not every developer is motivated to switch to non-mainstream databases, languages, or frameworks with special access control support, which may lack community support; by contrast, the learning cost of switching to another IDE is low enough to make this approach practical. We discuss development environment support for preventing access control vulnerabilities in Section 4.4.

### Software Analysis and Testing

Aspects of software development beyond programming comprise over 50% of development costs (Han et al., 2017), among which software analysis and testing play an important role. Software analysis discovers facts about a given program, while software testing uses both manually written and automatically generated test cases to evaluate and verify the behavior of the software. Despite the aforementioned prevention efforts, access control vulnerabilities, as human errors, remain inevitable (Zam and Bettettett, 2017). Software analysis and testing serve as the second line of defense.

_Outlier Detection._ According to the Clark-Wilson model (Cark and Wilson, 1999), an access control model can be expressed as a relationship between an authenticated principal (subject), a set of programs (program), and a set of data items to be operated on (object), namely the access control triple. Butler Lampson, in his 1971 paper _Protection_ (Lampson, 1971), likewise proposed the access control matrix as an abstract security model of access control; Figure 2 shows such an example. Intuitively, outliers or deviants in the rows and columns of such access control models can signal violations.
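The matrix view suggests a simple detection heuristic, sketched below on made-up data: a permission that almost none of a subject's peers hold is flagged as a candidate violation. This is only an illustration of the intuition, not any surveyed tool's actual algorithm.

```python
# Illustrative outlier detection over an access control matrix.
from collections import Counter

# Hypothetical observed matrix: subject -> set of (object, operation) cells.
matrix = {
    "alice": {("profile", "read"), ("profile", "write")},
    "bob":   {("profile", "read"), ("profile", "write")},
    "carol": {("profile", "read"), ("profile", "write"), ("admin_panel", "read")},
}

counts = Counter(cell for cells in matrix.values() for cell in cells)
threshold = 0.5 * len(matrix)  # flag cells held by fewer than half the subjects
for subject, cells in matrix.items():
    for cell in cells:
        if counts[cell] < threshold:
            print(f"outlier: {subject} -> {cell}")  # carol -> ('admin_panel', 'read')
```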
Outlier detection relies on the construction of 'inliers', which requires a base criterion. This criterion comes from two sources: developer specifications and policy inference. A number of tools follow this workflow, which we discuss in Section 5.1.

_Automated Testing._ Much previous work on access control policy testing assumes there is a dedicated component that controls access to resources, normally called the policy decision point (PDP), with policies typically written in XACML by administrators, making it more a misconfiguration problem than a code logic problem. In this survey, we do not place such assumptions on web applications, and we therefore discuss related works that test for access control vulnerabilities in web application implementations. These works range from white-box to black-box testing, which we discuss in detail in Section 5.2.

_Formal Method._ Formal methods use rigorous mathematical models to ensure the correct behavior of systems, serving as a complement to system testing; they use formal verification schemes to prove a system's correctness. In the access control scenario, formal methods are used to validate the correctness of the access control implementation in an application, based on security models either proposed by developers or learned from execution traces. We discuss these approaches in Section 5.3.

### Runtime Monitoring

Runtime monitoring and tracking serve as the last line of defense against illegal access. Compared to testing and analysis, these approaches have the advantage of adapting to evolving web application languages and frameworks. While technology trends keep changing, most web applications still rely on the HTTP protocol for transmitting information, so monitoring offers a promising one-size-fits-all solution. Runtime monitoring also adapts to the dynamism of access control policies: the monitoring agent can keep updating the intended policies it validates against. We discuss these monitoring approaches in Section 6.

## 4. Tool Support in Software Design and Implementation

Passe (Passe, 2017) tackles unintended data leaks and unauthorized access by splitting applications into different components, and does so automatically. It applies three design principles:

* Split application code into components based on natural isolation boundaries.
* Apply the principle of least privilege (POLP).
* Automate the POLP split by dynamic analysis.

These principles apply more easily in today's scale-out architectures, because developers are now encouraged to design and implement applications that provide narrowly defined interfaces. Passe leverages this to automatically split applications and isolate their components. It runs the application as a set of processes called 'views' (note that these differ from the 'views' of the MVC architecture) and limits the database queries a view can make, to restrict information leaks. Besides, Passe also captures the data and control-flow relationships between queries and other data in the system. These are learned during a trusted learning phase in which the application runs normally under Passe's monitoring. Passe infers the relationships between database query results and subsequent queries, detecting dependencies between them such as equality or set-membership.
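The following toy sketch shows the flavor of that learning-phase inference on a fabricated query trace, mirroring the friends example discussed next; Passe itself operates on real Django applications and is considerably more involved.

```python
# Toy sketch of dependency inference: compare each query's arguments against
# the results of earlier queries observed in the same request.
def infer_dependencies(trace):
    """trace: list of (query_name, args, results) tuples from one request."""
    deps = []
    for i, (_, args, _) in enumerate(trace):
        for j in range(i):
            prev_name, _, prev_results = trace[j]
            for arg in args:
                if [arg] == prev_results:
                    deps.append((prev_name, "equality"))
                elif arg in prev_results:
                    deps.append((prev_name, "set-membership"))
    return deps

# A friends view: the second query's id must belong to the first's results.
trace = [("friend_ids", ["alice"], [7, 9]),
         ("friend_data", [7], ["profile-7"])]
print(infer_dependencies(trace))  # [('friend_ids', 'set-membership')]
```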
For example, in a social media system, a view that displays all of a user's friends could issue two queries: one fetching the user's friend ids and a second fetching those friends' data, from which Passe learns that the second query's arguments must be members of the first query's results.

Hails, whose model-policy-view-controller (MPVC) architecture is shown in Figure 5, targets the scenario where a platform runs third-party code and developers cannot anticipate what that code would do with users' data holistically. Its components fall into two categories:

* **MP**: data model and policy logic. It interacts with databases and provides an API through which any other component accesses the data it wants, under policy enforcement. As shown in Figure 5, Follower and GitStar are two MPs.
* **VC**: views and controllers. This is the part that interacts with users and provides the various functionalities, developed by third parties. As shown in Figure 5, the apps Bookmark, Git-Wiki, and Code Viewer are VCs. Users interact with them directly through browsers.

To specify the security policy, developers write a domain-specific language alongside the data model. In these security policies, the subjects are named _principals_ and come in four types: users identified by usernames, remote web sites identified by URLs, each VC, and each MP.

### Database Support

Data protection policy is usually implemented at the application layer as checks in application code. However, this approach is error-prone because the checks are spread around the application code and easy to miss. DBMSes do provide some support for specifying fine-grained access control policies via SQL conditions (Selena et al., 2017) and views (Selena et al., 2017; Selena et al., 2017), but these solutions require specialized support within the DBMS. To make database-level solutions more portable, DataLawyer (Selena et al., 2017) and Qapla (Selena et al., 2017) propose to perform the query rewriting in a middleware layer that enforces the access control policies, letting developers express access control policies in an SQL-like language. The difference between them is that Qapla places an extra requirement on the developer-provided policies, which must be indexed by columns to improve efficiency. For example, in a fictitious company, the policy on the names of employees can be expressed as:

name :- EXISTS(SELECT 1 FROM Employees WHERE empID = $user)

All policies can be expressed as SQL WHERE clauses like this example. Qapla intercepts all queries and rewrites them to enforce the access control policy.

### Language Support

While access control checks act as gatekeepers when data are read or written, another way of protecting data against illegal access is to confine data flows to trusted sinks. _Information flow control_ (IFC) achieves this, and many language-based techniques have been explored to enforce it. Jif (Jif, 2016) extends Java with support for information flow control, enforced at both compile time and runtime. Jeeves (Jeeves, 2017) is a Scala library that similarly enforces information flow policies. IFDB (Jeeves, 2017) works alongside decentralized information flow control languages, using tags attached to data to denote data sensitivity. Jacqueline (Jacqueline, 2017), a Python web framework, uses multi-faceted execution to dynamically enforce security policies. LabelFlow (Jaf, 2016) extends the PHP bytecode interpreter to track security labels at runtime. LWeb (LWeb, 2018) is another framework, marrying the LIO Haskell IFC enforcement library to a web framework to enforce label-based information flow policies. Hails, discussed in Section 4.1, also uses language-level confinement and enforces mandatory access control throughout the system.
Security policies, referred to as _labels_ in Hails, are written to specify which principals can read and write which piece of data, for example \(\langle alice\lor bob,\,alice\rangle\). Data labeled with \(\langle S,I\rangle\) can only be sent to a principal \(p\) when \(p\Rightarrow S\); similar rules apply to writers. To maintain these policies, the trusted runtime checks whether they are still satisfied before permitting communication in any thread. Hails spreads its security enforcement across the language level, OS level, and browser level, crucially taking advantage of Haskell's type system, Linux isolation mechanisms, and client-side sandboxing.

Figure 5. An example of the Hails platform.

Storm (Storm, 2018) is a web framework that lets developers build applications in the MVC architecture with compile-time enforcement of security policies. The main idea is to take advantage of refinement types: a type endowed with a predicate that is assumed to hold for any element of that type. Storm's unique insight is to express the security policies on a given piece of data as logical assertions. In this way, access policies are enforced statically at compile time and non-invasively: no labels or special APIs need to be invoked during development. To apply this insight, Storm enhances the data model of MVC into a refined data model annotated with security policies. To implement this, it takes advantage of LiquidHaskell (Luo et al., 2018), a refinement type checker, to verify whether the application code obeys the security policies. The authors prove that the Storm API is secure and cannot leak or corrupt sensitive data.

Storm builds on two foundational blocks: refinement types and compositional IFC. Refinement types decorate an existing type with logical assertions that specify a subset of the values of the original type. By writing pre- and post-conditions, the user can refine the inputs and outputs of functions; with bounded refinements, developers can abstract the policies and invariants of applications. An SMT solver is used to automatically verify that all refinement variables can be instantiated. Compositional IFC is enforced in Storm as follows. Every application is regarded as a set of handlers, and every handler as a sequence of operations (e.g. \(e_{12}=\textbf{do}\;e_{1};e_{2}\), where ; is the sequential composition operator). Each data operation either reads sensitive data with a set of authorized readers, or writes data to a set of observers. Information flow control is then enforced as: if operation \(e_{i}\) reads data with authorized readers \(auth_{i}\), any subsequent operation \(e_{j}\) may only write to observers \(obs_{j}\subseteq auth_{i}\). To prevent path explosion, a two-step compositional approach is applied. First, every primitive operation is assigned a type that contains its authorized readers and observers. Next, the operator ; is assigned a type enforcing that for any two composed computations \(e:\langle auth,obs\rangle\) and \(e^{\prime}:\langle auth^{\prime},obs^{\prime}\rangle\), the output \(e;e^{\prime}:\langle auth^{\prime\prime},obs^{\prime\prime}\rangle\) satisfies:

\[obs^{\prime}\subseteq auth\]
\[auth^{\prime\prime}=auth\cap auth^{\prime}\]
\[obs^{\prime\prime}=obs\cup obs^{\prime}\]

In this way, the IFC constraints are enforced. Storm is the first framework that statically enforces security policies at the language level.
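The composition rule above is easy to state operationally. The sketch below is our own Python rendering of it (Storm itself enforces this in Haskell's type system at compile time): labels are (authorized readers, observers) pairs, and sequencing rejects any write to an observer outside the current set of readers.

```python
# Sketch of the compositional IFC rule: sequencing (auth, obs)-labelled
# operations must satisfy obs' ⊆ auth, auth'' = auth ∩ auth', obs'' = obs ∪ obs'.
def compose(e1, e2):
    auth1, obs1 = e1
    auth2, obs2 = e2
    if not obs2 <= auth1:  # the new observers must be authorized readers
        raise PermissionError("write to an observer outside authorized readers")
    return (auth1 & auth2, obs1 | obs2)

read_alice = ({"alice"}, set())            # reads data readable by alice only
write_to_alice = ({"alice", "bob"}, {"alice"})
write_to_bob = ({"alice", "bob"}, {"bob"})

print(compose(read_alice, write_to_alice))  # ok: ({'alice'}, {'alice'})
try:
    compose(read_alice, write_to_bob)       # bob may not observe alice's data
except PermissionError as err:
    print("rejected:", err)
```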
The advantages of this solution are several. First, unlike runtime checking mechanisms, it imposes no additional run-time overhead. Second, potential violations of security policies are detected at an early stage, before deployment, which aligns with our observation in Section 2 that access control vulnerabilities are better caught before deployment. Third, no additional developer effort on the underlying enforcement mechanism is explicitly required.

The limitation of language support is that it only applies to language implementations with certain features, while most web applications are written and run on mainstream language stacks. Besides, these solutions are not free of additional effort. In fact, in the Hails experiments, developers found it difficult to implement MPs; though a DSL can make the process easier and help developers understand the underlying enforcement, it still requires extra effort compared to the normal development process. In Storm, programmers still need to write a refined models file, a centralized place to specify the security policies.

Aside from information flow control, RESIN (Zhu et al., 2017) is a runtime, prototyped in PHP, that allows developers to specify data flow assertions and associate them with application data. It checks these assertions when data flows across boundaries, such as being written to files or the network.

### Development Environment Support

Zhu et al. (Zhu et al., 2017) propose a hybrid approach to detect access control vulnerabilities based on interactive static analysis. They first conduct a comparative study of six PHP web applications, focusing on Security Sensitive Operations (SSOs), namely database operations. They manually examined and identified the access control checks in the source code of these applications, then summarized the distribution of access control patterns. From this, they observe that access control patterns usually have low repetition, meaning it is relatively hard to construct pattern models and detect them automatically. They also find much noise among code instances of access control logic, which further hurts the accuracy of automatic detection and produces false positives. Therefore, their work seeks developers' input to identify access control models, with relatively little distraction from the developers' normal workflow. Their interactive tool ASIDE identifies SSOs as developers write code in the IDE, then places a yellow notification alongside as a reminder, asking developers to highlight the access control logic so that the warning can be dismissed. Meanwhile, ASIDE uses static analysis to help find incorrect access control logic. For example, as shown in Figure 6, ASIDE identifies SSOs of INSERT into _chat_messages_ and _chat_messages_current_ and the surrounding access control logic. In Figure 7, however, similar SSOs are performed in the function _chat_login_user_ without the protection seen in the previous code, so ASIDE alerts the developers to a potential vulnerability.

## 5. Software Analysis and Testing

### Outlier Detection

Access control vulnerabilities are semantic bugs, so detecting them requires a ground truth for the correct semantics; tools then flag vulnerability candidates as outliers relative to that ground truth. The most direct way to obtain the ground truth is to require developers to express their intended access control policies.
However, this can be a burden to developers, so researchers have come up with workarounds to mitigate the additional effort. MACE (Zhu et al., 2017) only requires annotations on the variables that correspond to user ids, roles, and other session-related variables. It relies on program analysis and symbolic evaluation to identify the _authorization context_ of a web application from these annotations. It then compares the authorization contexts of program points that access the same resource along different execution paths; if the contexts mismatch, there is potentially an access control vulnerability. MACE infers and expresses access control rules as a four-tuple \(\langle U,R,S,P\rangle\), where U is the set of authenticated users, R the set of roles, S the session variables or identifiers, and P the permissions required on the resources. It computes the authorization context for every database query in the application code via control-flow and data-flow analysis: it first builds a source-sink graph based on the data dependencies of the program, then collects the constraints at the sinks to build up the authorization context of each query. In this way, MACE can detect both vertical privilege escalation, where users at a low privilege level can access resources of high-privilege users, and horizontal privilege escalation, where users can access resources of other users at the same privilege level.

Figure 6. Access control on Moodle database SSOs

Figure 7. Access control vulnerability detected in Moodle by ASIDE

SPACE (Security Pattern Checker) (Sutton, 2015) also claims to be a specification-free tool for finding access control bugs. It avoids developer specifications by observing that developers usually apply similar patterns of checks for a given resource type; developers therefore only need to provide a mapping from their resources to SPACE's catalog of access control patterns, and SPACE checks whether each data exposure is allowed. SPACE's catalog consists of seven patterns:

* Ownership.
* Public Objects.
* Authentication.
* Explicit Permission.
* User Profiles.
* Administrators.
* Explicit Roles.

SPACE uses symbolic execution to extract the data exposures from source code, then uses a constraint solver to check whether the constraints associated with each data exposure fall within the scope allowed by the user-given catalog. We illustrate the workflow of SPACE with an example from MediumClone. Figure 8 shows an access control bug in this application. In the UserController class, the edit action shows a profile-editing page, and clicking the button submits the updated profile form, which invokes update. However, the access control filter only applies to show and edit, which means an attacker can bypass the edit page, send the POST request directly to the update handler, and update other users' profiles. This is regarded as an access control bug in MediumClone; the relevant controller code is:

```
class UserController < ApplicationController
  before_filter :signed_in_user, :only => [:show, :edit, :update]
  before_filter :correct_user, :only => [:show, :edit]
  ...
  def show
    @user = User.find(params[:id])
    @posts = find_posts_by_user_id @user.id
    @editing = true if signed_in?
  end

  def edit
    @user = User.find(params[:id])
    @url = '/user/' + params[:id]
  end

  def update
    @user = User.find(params[:id])
    if @user.update_attributes(user_params)
      redirect_to @user, success: 'Editing successful!'
    else
      redirect_to edit_user_path(@user.id), error: 'Editing failed!'
    end
  end
end
```

**Figure 8. An access control bug from MediumClone**

To detect this bug, the user specifies the access control policy to SPACE in the form of a role-based access control (RBAC) model:

```
Space.analyze do
  mapping User: RBACUser, Post: OwnedObject(user: :owns)
end
```

which means only the owner of a user profile can update its information. SPACE then builds the exposure relation of update as (update(user_profile), true, write, user). By substituting in the RBAC rules, SPACE can determine whether illegal access is allowed by this exposure, thus detecting an access control vulnerability.

Some other works push harder to be specification-free: they construct their ground truth from the code itself and detect outliers that deviate from their peers. RoleCast (Sutton, 2015) is such a static analysis tool, relying on no annotations from developers and no security policy specifications. Instead, it tries to catch common patterns of security checks in web applications and applies them to infer missing checks. The architecture of RoleCast is shown in Figure 9. It works in four phases. First, it analyzes interprocedural control flow to gather the context of sensitive events, including the critical variables in conditions; to eliminate impossible candidates, it filters out conditions without abnormal exit branches, since those are likely unrelated to security checks. Second, RoleCast divides program files into groups based on the roles they are attached to, partitioning them by analyzing file contexts so as to minimize the number of files shared between different roles. Third, RoleCast determines, for each role, the set of critical variables to check. Last, RoleCast checks the calling contexts of each sensitive event and reports potentially vulnerable cases matching one of the following patterns: (1) the calling contexts of a security-sensitive operation have no check at all; (2) the role contains a unique context whose consistency cannot be checked; (3) a check is inconsistent with the majority of the other calling contexts.

Another work, by Sun et al. (Sun et al., 2015), also detects access control vulnerabilities without requiring explicit access control specifications from developers. It extracts the _implicit access control assumptions_ from PHP source code, which implicitly indicate the intended accesses of each role in the system. If a role can force-browse to a privileged web page, that page is potentially a vulnerability. The work abstracts a web application system with \(k\) nodes as:

\[P_{r}=(S_{r},Q_{r},E_{r},I_{r},\Pi_{r},N_{r})\]

where \(S_{r}\) is the entry set of the application for role \(r\); \(Q_{r}\) is the set of sensitive application states that a web page \(n_{i}\) holds for role \(r\); \(E_{r}\) is the set of explicit edges between pages; \(I_{r}=\{\langle n_{i},n_{j}\rangle\mid 1\leq i,j\leq k\}\) gives the pages \(n_{j}\) that can be force-browsed to from page \(n_{i}\) under state \(q_{i}\); \(\Pi_{r}\) is the set of all web pages that role \(r\) can navigate to; and \(N_{r}\) is the set of web pages explicitly reachable through \(E_{r}\), starting from \(S_{r}\).
In this way, letting \(a,b\in R\) be two roles in the application, this work defines an access control vulnerability as:

\[n\in N_{a}\wedge n\notin N_{b}\wedge\exists\pi_{b}\in\Pi_{b}\,(n\in\pi_{b})\]

That is, if a page legally accessed by role \(a\) can be force-browsed by role \(b\), but cannot be explicitly reached by \(b\), this page is potentially an access control vulnerability. The workflow of this approach is shown in Figure 10. The only input required from developers is the entry set \(S_{r}\) and the state set \(Q_{r}\). The tool builds sitemaps from the entry set by control flow analysis, finds the branches that a role can trigger, and recursively collects all the PHP files each role can trigger. To resolve infeasible paths, the tool uses Z3 to solve constraints. With the link extractor, it constructs per-role sitemaps. Next, with the collected sets \(N_{a}\) and \(N_{b}\), the comparator infers privileged nodes, i.e. pages that only privileged users should access. The analyzer then emulates forced browsing attempts on these privileged nodes from low-privilege users to determine whether they succeed. Finally, with manual confirmation, the tool outputs the vulnerable nodes.

Aside from static methods, dynamic methods can be applied to construct the access control models of web applications, under the assumption that observed accesses normally follow the access control policy and can thus serve, in reverse, as specifications. Waler [(33)] uses dynamic analysis to infer a set of behavioral specifications, then uses model checking to identify program paths that are likely to break the specifications, reporting them as vulnerabilities. CanCheck [(26)] takes advantage of applications built on the access control library CanCan [(14)], whose centralized modules and explicit _can?_ checks make instrumenting authorization checks easier, to construct access control models from dynamic execution. It then infers formal access control models as a set of first-order logic formulas. If, for some action, the authorization checks on certain operations over a certain object are not implemented as the access control model specifies, CanCheck regards it as an access control bug.

FixMeUp [(56)] goes a step further: it not only detects access control bugs but also generates repairs for them. Its unique observation is that the missing access control checks are usually present elsewhere in the calling context, so their statements can be reused to generate repairs. FixMeUp works as follows. First, it requires a high-level specification of the access control policy as input, obtained either from manual annotation or from program analysis; typically, a developer annotates the source code to mark the correct access control checks, the sensitive operations, and the user role to which the policy applies. Then, FixMeUp generates an access control template (ACT) from the specification: starting from the correct access control checks, it computes backward in an interprocedural manner a representation of the concrete low-level policy, which later also serves as the program transformation template when generating fixes. Next, FixMeUp checks every calling context to verify whether the access control logic matches the ACT.
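To illustrate the template idea on a toy scale, the sketch below inserts a hypothetical access control template at a handler's entry point when the check is absent; FixMeUp itself works interprocedurally on PHP and reuses the application's own statements rather than a canned check.

```python
# Toy sketch of template-based repair: prepend the access control template
# to a sensitive handler when no equivalent check is found in its body.
ACT = "if not current_user.is_admin: abort(403)"  # hypothetical template

def repair(handler_source):
    lines = handler_source.splitlines()
    if not any("is_admin" in line for line in lines[1:]):
        lines.insert(1, "    " + ACT)  # insert the check at the entry point
    return "\n".join(lines)

vulnerable = "def delete_user(uid):\n    db.delete(uid)"
print(repair(vulnerable))
# def delete_user(uid):
#     if not current_user.is_admin: abort(403)
#     db.delete(uid)
```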
In real applications, FixMeUp identifies the relevant statements by looking at the entry points of programs, where access control checks are usually stylized, and generates repairs where checks are missing.

### Automated Testing

Automated testing for access control in web applications is a more difficult task than analysis. The difficulty lies in generating test cases, which involve a series of user interactions, and in the test oracle, which depends heavily on the business logic of the application.

Figure 9. Architecture of RoleCast

Figure 10. Architecture of Sun et al.'s Work

Figure 11. Workflow of CanCheck

For most web applications, where the access control components are not unified in a special format like XACML and the source code is not available, access control testing has to be end-to-end: the tools send out web requests and analyze the responses to decide whether a test passes or fails. AuthScope (Zhou et al., 2017) tackles test case generation by using differential traffic analysis to compare and recognize the fields of interest in web requests, then substitutes those fields in requests and observes the server's response. This work performs a large-scale study based on its detection. To scale the method, the first challenge is passing the authentication of most apps so that the tool can obtain post-authentication requests and responses; this is solved by using the apps' social login interfaces (for example, Facebook login). Second, to generate test cases, they need to identify the fields that can plausibly be guessed; for this, they use Euclidean distance to measure the randomness of identifiers. Third, they must confirm from the responses that a vulnerability has been exploited (i.e. test oracle generation); they again apply differential analysis to the response messages of different accounts and filter out trivial differences such as timestamps.

LogicScope (Zhou et al., 2017) catches logic vulnerabilities in web applications, of which unauthorized access is a major category. It models the logic of a web application as a finite state machine (FSM) over user inputs observed in execution traces in which users follow legal navigation paths. Based on the constructed FSM, it then crafts unexpected inputs for each state and evaluates the corresponding responses to see whether an input reveals a logic vulnerability.

### Formal Method

Ur/Web is a domain-specific language for web application development. It contains a static analysis tool, UrFlow (Zhou et al., 2017), which takes security policy specifications in the form of SQL queries, then uses symbolic evaluation and automated theorem proving to statically check these policies.

Wang et al.'s work (Wang et al., 2017) considers authorization problems in SDKs. In modern applications, developers can rely on online providers for authentication and authorization services. However, using these SDKs securely requires developers to be aware of the assumptions in both the SDKs and the underlying runtime systems, which is difficult for most developers. Their work demonstrates an explication process in which precise definitions of the desired security properties are devised to construct security models of the SDK and the underlying systems. Formal verification is then applied to find counterexamples, from which they either locate the incorrect part of the model or add new assumptions to it.
Through this iterative process, they obtain a set of assumptions and models that can be enforced when the SDK is used by third-party developers, and they advocate this explication process as part of the engineering of SDKs. In their explicated assumptions, the security properties are defined as follows: the granularity of the security model is the session; the basis of the security model is the use of secrets and signed data to identify each user; and the desired security goals are the absence of authentication violations, authorization violations, and association violations. All the assumptions are built on these properties.

## 6. Runtime Monitoring

In Section 4.3, we discussed language support for information flow control, which can be enforced at runtime. Aside from those works, other works try to enforce access control at runtime without special requirements on the existing code base. Nemesis (Nemesis, 2017) integrates authentication information with a developer-specified access control policy to make sure that only the appropriate authenticated users can access a given resource. To trace how authentication information flows, Nemesis exploits the insight that most web applications use similar designs for storing usernames and passwords, and applies Dynamic Information Flow Tracking (DIFT) to track user credentials. It assigns an additional shadow HTTP cookie to an authenticated user so it can follow the subsequent requests sent by the same user. To track the authentication information, Nemesis keeps two taint bits for each data item: a credential bit and a user bit. Besides, Nemesis requires developers to tag credential data retrieved from the database. When a data item tagged with a taint bit is compared to credential data, Nemesis regards it as an authentication operation; if the two operands are equal, it records the client as authenticated. To handle user registration, Nemesis infers it from the SQL INSERT statement.

Figure 12. An example of misusing SDKs. The access token from the Live ID service is wrongly sent to a third-party server, which malicious apps could exploit to steal sensitive information.

Figure 13. Architecture of Nemesis

FlowWatcher (Wang et al., 2017) detects access control vulnerabilities at runtime. It observes that most web applications use similar, simple access control models, which can be described in a rule-based specification called a user-data-access (UDA) policy. This policy specification allows dynamic tracking and monitoring, so violations of the UDA policy are caught as they happen. In this way, an enforced policy can be created and reused across different applications; it contains less code and is easier to implement correctly. Besides, it requires no modifications to the original applications. Applying this approach raises two challenges. First, how can developers easily express the access control policy and keep it updated as new roles and objects are added to the application? This is what UDA policies solve. A UDA policy is specified over two kinds of entities: data objects, and users and groups. Its rules come in two types: 1) definition/removal rules, which update the access control policy when objects are created or deleted; and 2) update rules, which handle updates to existing objects.
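Purely to fix intuition for these two rule types, here is a hypothetical Python rendering of what such rules track; this is our own illustrative syntax, not FlowWatcher's actual rule language.

```python
# Hypothetical rendering of UDA-style rules (illustrative syntax only, not
# FlowWatcher's actual rule language).
uda_policy = {
    # Definition rule: creating a message grants its author access to it.
    "definition": {
        "match": ("POST", "/messages"),        # request that creates the object
        "object_id": "response.body.msg_id",   # where the new object's id appears
        "grant_to": "session.user",            # who may access it afterwards
    },
    # Update rule: sharing a message extends access to the named user.
    "update": {
        "match": ("POST", "/messages/{msg_id}/share"),
        "grant_to": "request.body.recipient",
    },
}
```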
To enforce these rules, FlowWatcher matches an HTTP request to the corresponding user and resource based on its URL, headers, and body. The second challenge is how to track user data efficiently with little performance overhead. FlowWatcher uses shadow policy state to avoid delaying the service: it checks the request URL, the request body, and the response to see whether the UDA policy needs updating. FlowWatcher tracks unique data in the system by matching data values and monitors those values in HTTP responses.

## 7. Open Problems

Access control vulnerabilities are not new: as long as there are data and data accesses inside web applications, there can be access control vulnerabilities. However, with society's growing dependency on web services and the blossoming of the IT industry, the problem is becoming ever more prevalent and severe. The following trends have emerged:

_Heterogeneous Interfaces in Web Applications._ With the development and evolution of web services, today's web applications provide heterogeneous APIs, including GraphQL, RESTful, and XML-RPC APIs, in line with the demand for cross-platform applications. These APIs share similar functionality and the same user data with the main application, so any exploitation of them can result in data leaks as severe as those from the main web application.

_Data Hosted on Third-party Platforms._ Cloud hosting provides a scalable and friendly management solution for web application developers. However, it also introduces new risks of data leakage. Leaks from third-party platforms are not only a configuration problem, as studied by previous work (Wang et al., 2018); interactions with third-party platforms also increase the complexity of the code. In a real-world example from bug bounty programs (Bounty, 2018), a Google API key was leaked in the front-end code of a web application, resulting in access control vulnerabilities.

_Emerging Software Development Styles._ Tech companies are pushing hard to make software development more approachable. Even novice programmers can set up and tailor web applications based on website builders like WordPress, Joomla, Drupal, etc. An exposed access control vulnerability in such software puts many websites at risk through software reuse. Another new trend is serverless computing, where the data flow inside a web application may not be under the developers' full control, making current approaches like information flow control inapplicable in these scenarios.

_Adoption of Machine Learning Models._ Applying machine learning models in web applications means user data not only flows through application code but can also flow into, and leak from, model parameters, which thus become new targets of illegal access attacks that harm users' privacy.

Aside from these new trends that add complexity to the access control vulnerability problem, even its simplest, original form is still far from a perfect solution. Given the importance and severity of access control vulnerabilities, we advocate solving them in the early stages of software development. Since they are in essence semantic bugs, we believe the participation of developers is crucial for preventing and detecting access control vulnerabilities early and thoroughly. The key problem is how to make it developer-friendly to express policies and perceive potential risks:

_Access Control Configurationize._ Access control is more prone to vulnerabilities when it is spread across the code base.
As many previous works propose, a centralized access control module would greatly reduce the chance of access control vulnerabilities and increase the auditability of the access control implementation and policies in a system. With such a centralized design, access control moves toward being a configuration problem, where developers focus on what access control configurations should be placed on the interfaces of a system, and express the access control enforcement in a specific configuration language rather than in the language used to implement the system.

_Improving Access Control UIs._ Another direction for helping developers avoid access control vulnerabilities is to help them see their implementation clearly, so they can easily find the holes in their current design. De facto, access control is still expressed in application code or text files. This kind of expression is abstract and does not help developers much once the access control policies become complex, so building an informative and handy access control UI could offer non-trivial help. Even simply rendering access control in a visual interface is not a simple problem. First, there can be many subjects and objects in a system; plainly displaying them all in an access control matrix does not help developers and instead impedes them from identifying the actual problem. Second, access control UIs need to reflect changes as developers introduce new endpoints and new authorization checks, and extracting these without much additional developer effort is itself a challenge.

_White-box Access Control Testing._ In the existing literature, researchers explore a range of black-box access control testing approaches to identify vulnerabilities. However, black-box testing is insufficient for this problem. A typical web application has orphaned pages and operations that are not publicly exposed; black-box testing has to guess and hit them by chance, which consumes much computational resource. With source code and more information, these hidden parts can be tested more thoroughly without guessing through a random space, and penetration testing can be much more efficient. Currently, white-box testing is carried out by security experts and requires manual assessment of the web application code, which places an additional financial burden on companies and cannot guarantee coverage in a systematic way. We therefore encourage the community to distill the experience of white-box penetration testing and automate this process.

## 8. Conclusions

In this article, we discussed the characteristics of access control vulnerabilities in web applications, the challenges of eliminating them, and the state-of-the-art approaches across the stages of software development for dealing with them. Based on our study of the literature, we discussed the open problems in this field. We hope this report encourages further research toward the elimination of access control vulnerabilities.
2307.09601
Modelling reflected polarised light from close-in giant exoplanet WASP-96b using PolHEx (Polarisation of Hot Exoplanets)
We present the Polarisation of Hot Exoplanets (PolHEx) code for modelling the total flux (F) and degree of linear polarisation (P) of light spectra reflected by close-in, tidally locked exoplanets. We use the output from a global climate model (GCM) combined with a kinetic cloud model of hot Jupiter WASP-96b as a base to investigate the effects of atmospheric longitudinal-latitudinal inhomogeneities on these spectra. We model F and P-spectra as functions of wavelength and planet orbital phase for various model atmospheres. We find different materials and sizes of cloud particles to impact the reflected flux F, and particularly the linear polarisation state P. A range of materials are used to form inhomogeneous mixed-material cloud particles (Al2O3, Fe2O3, Fe2SiO4, FeO, Fe, Mg2SiO4, MgO, MgSiO3, SiO2, SiO, TiO2), with Fe2O3, Fe, and FeO the most strongly absorbing species. The cloud particles near the relatively cool morning terminator are expected to have smaller average sizes and a narrower size distribution than those near the warmer evening terminator, which leads to different reflected spectra at the respective orbital phases. We also find differences in the spectra of F and P as functions of orbital phase for irregularly or spherically shaped cloud particles. This work highlights the importance of including polarisation in models and future observations of the reflection spectra of exoplanets.
Katy L. Chubb, Daphne M. Stam, Christiane Helling, Dominic Samra, Ludmila Carone
2023-07-13T15:19:06Z
http://arxiv.org/abs/2307.09601v2
Modelling reflected polarised light from close-in giant exoplanet WASP-96b using PolHEx (Polarisation of Hot Exoplanets) ###### Abstract We present the Polarisation of Hot Exoplanets (PolHEx) code, for modelling the polarised reflection spectra of close-in exoplanets which are assumed to be tidally locked. We use outputs from global climate models (GCMs) combined with kinetic cloud models of hot Jupiter WASP-96b as a base to investigate the effects of longitudinal-latitudinal inhomogeneities in the atmosphere. We model flux (F) and degree of linear polarisation (P) as a function of wavelength and planet orbital phase for various model atmospheres. We find different materials and sizes of cloud particles to impact the reflected flux, and particularly the polarisation state. A range of materials are used to form inhomogeneous mixed-material cloud particles (\(\text{Al}_{2}\text{O}_{3}\), \(\text{Fe}_{2}\text{O}_{3}\), \(\text{Fe}_{2}\text{SiO}_{4}\), FeO, Fe, \(\text{Mg}_{2}\text{SiO}_{4}\), MgO, \(\text{MgSiO}_{3}\), \(\text{SiO}_{2}\), SiO, \(\text{TiO}_{2}\)), with \(\text{Fe}_{2}\text{O}_{3}\), Fe, and FeO the most strongly absorbing species. We find that a smaller average particle size and a narrower size distribution around the cooler morning terminator region lead to different scattering properties than at the warmer evening terminator region. We also find differences in \(F\) and \(P\) as a function of wavelength and orbital phase for irregularly shaped compared to spherical cloud particles. This work highlights the importance of including the polarisation state in models and future observations of the reflection spectra of transiting exoplanet atmospheres. keywords: exoplanet - atmospheres - polarimetry - spectroscopy - scattering ## 1 Introduction The theoretical groundwork for the scattering properties of atmospheric particles derives from prominent works such as Mie (1908); Rayleigh (1918); Chandrasekhar (1950). It was the inclusion of the polarisation state in the modelling of scattered light, however, that was crucial in enabling the identification of cloud types on Venus during the 1970s by Hansen & Hovenier (1974). Via polarimetry they were able to deduce that the clouds on Venus were most likely formed from sulphuric acid, with a narrow distribution of particle sizes and a mean radius of \(\sim\)1 \(\mu\)m. The method was later utilised further for Venus and other solar system planets (Schmude, 2008; Rossi et al., 2015; McLean et al., 2017). Reflection spectra, particularly those which include polarisation, are a highly powerful and complementary observation technique to transmission and emission spectroscopy (observing the light from a star as a function of wavelength during transit and secondary eclipse of a transiting planet) for revealing additional information about transiting exoplanet atmospheres (Munoz, 2018; Millar-Blanchaer et al., 2018; Fossati et al., 2091). There have been a range of theoretical studies and models of the polarised flux of exoplanet systems, including Seager et al. (2000); Bailey et al. (2018) for close-in giant planets, and Stam (2008); Karalidi & Stam (2012); Fauchez et al. (2017); Rossi & Stam (2017); Groot et al. (2020); Trees & Stam (2022); West et al. (2022) for Earth-like or habitable-zone exoplanets. Observations of polarisation have been proposed in the context of searching for biosignatures on Earth-like planets (Berdyugina et al., 2016; Sparks et al., 2021).
By following the orbital phase of a transiting exoplanet, information on the scattered (reflected) light can be determined via secondary eclipse (when the planet is hidden behind the star) and phase curve mapping (Wong et al., 2021; Heng et al., 2021). Although this technique is the same as that used for measuring emission spectra, emission and reflection spectra can be disentangled from one another because they typically dominate in different wavelength regions, with reflection spectroscopy in the visible/near-IR and emission spectroscopy in the IR. There are a number of studies and tools for modelling reflection spectra (Barstow et al., 2014; MacDonald et al., 2018; Batalha et al., 2019; Kawashima & Rugheimer, 2019), often with a focus on the overall flux and without considering the polarisation state. Transmission spectra, which can be observed across the whole wavelength region from visible to IR (see, for example, Alderson et al. (2023); Feinstein et al. (2023); Ahrer et al. (2023); Rustamkulov et al. (2023)), do allow some information on spectral cloud features to be inferred (Wakeford & Sing, 2015; Mollière et al., 2017; Ormel & Min, 2019; Powell et al., 2019; Samra et al., 2020; Lothringer et al., 2022), in particular at longer wavelengths around 10 \(\mu\)m, where there are typically signatures from vibrational modes (Ormel & Min, 2019; Bouwman et al., 2023). However, the extent of the information that can be inferred from these observations is limited. Including the polarisation state in reflection spectroscopy, on the other hand, makes the technique particularly sensitive to detailed cloud properties, such as material and size distribution. Being sensitive to different components of an atmosphere makes polarised reflection spectroscopy a very complementary technique to transmission and emission spectroscopy. There are a number of ground-based telescopes which can measure the polarisation of observed light, such as HARPSpol (Piskunov et al., 2011), CRIRES+/VLT (Dorn et al., 2023), SPHERE/VLT (de Boer et al., 2020), ZIMPOL/VLT (Gisler et al., 2004), ESPaDOnS (Donati et al., 2006), WIRC+Pol (Tinyanont et al., 2019), PEPSI (Strassmeier et al., 2015), and HIPPI-2 (Bailey et al., 2020). Some of these have been pointed towards exoplanets (Berdyugina et al., 2007, 2011; Bott et al., 2018; Bailey et al., 2021) and brown dwarfs (Millar-Blanchaer et al., 2020); however, there is still some discussion over the reliability and interpretation of the exoplanet observations (Bott et al., 2016, 2018). There are already some plans to include polarimeters on future space-based instruments (Takahashi et al., 2017), such as LUVOIR/POLLUX (Bouret et al., 2018) and the Nancy Grace Roman Space Telescope (Groff et al., 2021). In order to further motivate the implementation and facilitate the design of such instruments, it is important to have detailed and accurate theoretical models of the real systems that they are likely to observe, which is the motivation behind the present study. In this work we present a code for modelling the polarised reflected flux of close-in transiting exoplanet atmospheres called PolHEx. PolHEx is based on the adding-doubling radiative transfer algorithm of de Haan et al. (1987), which has been built upon over the years for application to exoplanet atmospheres (Stam et al., 2006; Stam, 2008). Versions of the code have been used for modelling exoplanet atmospheres by many studies, such as Stam et al. (1999, 2000, 2004, 2006); Stam (2008); de Kok et al.
(2011); Karalidi et al. (2012); Karalidi & Stam (2012); Karalidi et al. (2013); Fauchez et al. (2017); Rossi & Stam (2017); McLean et al. (2017); Palmer et al. (2017); Trees & Stam (2019); Groot et al. (2020); Meinke et al. (2022); Trees & Stam (2022). There is a publicly available code written in Python and Fortran called PyMieDAP1 (Rossi et al., 2018), which shares much of the functionality and origins with the code in the present study. PolHEx has been specifically tailored for modelling close-in hot exoplanets, which are assumed to be tidally locked. This allows us to directly link longitude/latitude atmospheric variation to orbital phase, and gives a simple way of specifying inhomogeneities in the atmosphere. We use atmospheric climate and kinetic cloud models of hot gas giant exoplanet WASP-96b from Samra et al. (2023) as a base atmosphere to study the impact of inhomogeneous atmospheric composition on reflected flux (\(F\)) and degree of linear polarisation (\(P\)) using PolHEx. These kinetic cloud models build on a global climate model (GCM) of WASP-96b which was produced using expeRT/MITgcm (Carone et al., 2020; Baeyens et al., 2021). Footnote 1: [https://gitlab.com/loic.cg.rossi/pymiedap.git](https://gitlab.com/loic.cg.rossi/pymiedap.git) The paper is structured as follows. Section 2 outlines the relevant theory behind the modelling techniques used in this paper, with details on the PolHEx code itself given in Section 3. Section 4 summarises the atmospheric properties of hot gas giant exoplanet WASP-96b which are used in this work, based on the outputs of GCM and kinetic cloud models. This includes the molecular and cloud composition, pressure-temperature profiles and longitude-latitude grid. Section 5 then details the different theoretical models we have set up based on these atmospheric properties. Here, a number of inhomogeneous (i.e. varying in terms of longitude and latitude) and homogeneous (no variation in longitude or latitude) atmospheric models are considered. The results of these models are presented in Section 6, followed by discussion in Section 7. We present our conclusions in Section 8. ## 2 Theory ### Atmospheric scattering When an oscillating plane electromagnetic (EM) wave emitted from a host star encounters particles (molecular, atomic, cloud) in an orbiting exoplanet's atmosphere, the wave will interact with the charged components of the atmospheric particles. This interaction causes the charged components such as electrons to oscillate with the same frequency as the incident wave, which in turn induces new EM waves that propagate out in all directions from the particle. If a new wave which is propagating in the same direction as the incident beam is out of phase with the incident beam, then they will interfere with one another, leading to a change of direction of the incident wave. This process is known as scattering (Mishchenko et al., 2002; Hovenier et al., 2004). In general, the polarisation state of the stellar EM wave will change during a scattering process. The refractive index \(m=n+ik\) of a material used to form cloud particles gives information on the scattering properties of that particle. This needs to be combined with the wavelength and the particle size and shape in order to determine how light is scattered. The real part \(n\) of the refractive index represents the phase velocity (rate of propagation) in the material, and the imaginary part \(k\) the absorption of incoming radiation by the material.
Each of the atmospheric particles which produce new secondary EM waves due to interaction with the incoming stellar wave will have an impact on the other particles around it. If the number of particles is small, the secondary wave contribution can be neglected, which leads to the single scattering approximation. If, however, the atmosphere contains many particles, then the scattering of light which has already been scattered by another particle needs to be taken into account. This is known as multiple scattering, a process which tends to reduce the degree of linear polarisation of the outgoing radiation in comparison to single scattering. In wavelength regions where absorption is high (for example, due to a strongly absorbing molecule or atom), multiple scattering effects are reduced. PolHEx includes multiple scattering effects in its adding-doubling radiative transfer routine (de Haan et al., 1987; Stam et al., 2006). ### Flux and polarisation state In order to model scattered radiation from a planetary atmosphere, we define flux and polarisation using the flux vector (Chandrasekhar, 1950; Hovenier & van der Mee, 1983; Hovenier et al., 2004; Stam et al., 2006), as a function of wavelength \(\lambda\) and orbital phase \(\alpha\) of the planet orbiting a star: \[\pi\mathbf{F}(\lambda,\alpha)=\pi\begin{pmatrix}F(\lambda,\alpha)\\ Q(\lambda,\alpha)\\ U(\lambda,\alpha)\\ V(\lambda,\alpha)\end{pmatrix}. \tag{1}\] Here, the Stokes parameters are defined with \(F\) as the total flux, \(Q\) and \(U\) describing the linear polarisation (defined with respect to a reference plane), and \(V\) the circular polarisation. The units of all four Stokes parameters are W m\({}^{-2}\) Hz\({}^{-1}\). The Stokes parameters are used to describe the flux and polarisation state of the stellar radiation which is scattered towards us, the observer, by the exoplanet's atmosphere. The degree of linear polarisation is defined as: \[P(\alpha,\lambda)=\frac{\sqrt{Q(\alpha,\lambda)^{2}+U(\alpha,\lambda)^{2}}}{F(\alpha,\lambda)}. \tag{2}\] We define the planetary scattering plane as the plane through the centres of the star and planet, and the observer. This is the plane with respect to which \(Q\) and \(U\) are defined when integrating over the planetary disk. Our models assume that the rotation axis of the transiting planet is perpendicular to this plane (they are tidally locked), and that the observer views the system edge-on, i.e. the inclination is 90\({}^{\circ}\). We also assume that the planet is symmetric about the equator, which means that \(U=0\) when integrated over the planetary disk (Hovenier, 1970). This leads to a simplification for the degree of polarisation \(P\) in our case: \[P(\alpha,\lambda)=-\frac{Q(\alpha,\lambda)}{F(\alpha,\lambda)}. \tag{3}\] For positive values of \(P\), the light is polarised perpendicular to the planetary scattering plane, and for negative values of \(P\), the light is polarised parallel to the planetary scattering plane. We chose this convention to ensure \(P\) is positive for a clear atmosphere (i.e. only scattering from gaseous species), where the scattered light will be polarised perpendicular to the planetary scattering plane (Stam, 2008). It is beneficial to keep the sign of \(P\) rather than use the absolute value, so that information on the direction of polarisation in relation to the scattering plane is kept.
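The sign conventions of Eqs (2)-(3) are easy to mishandle in practice. The following minimal Python sketch (our own illustration, not part of PolHEx) computes the signed and unsigned degree of linear polarisation from given Stokes parameters:

```python
import numpy as np

def degree_of_polarisation(F, Q, U=0.0, signed=True):
    """Degree of linear polarisation from Stokes parameters.

    signed=True evaluates Eq (3), P = -Q/F, valid when U = 0 (a planet
    symmetric about the equator); signed=False returns the unsigned
    magnitude sqrt(Q^2 + U^2)/F of Eq (2). In this convention P > 0
    means polarisation perpendicular to the planetary scattering plane.
    """
    F, Q, U = (np.asarray(a, dtype=float) for a in (F, Q, U))
    if signed:
        return -Q / F
    return np.sqrt(Q**2 + U**2) / F

# A clear (Rayleigh-scattering) atmosphere near quadrature has Q < 0 in this
# convention, giving P > 0 (the numbers here are purely illustrative):
print(degree_of_polarisation(F=1.0, Q=-0.3))   # -> 0.3
```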
In our case \(F\) is the observed flux from the planet, which is composed both of stellar flux reflected by the planet's atmosphere (and surface, if there were one), and of thermally produced flux from the planet itself, i.e.: \[F(\alpha,\lambda)=F_{\mathrm{reflected}}(\alpha,\lambda)+F_{\mathrm{thermal}}(\alpha,\lambda). \tag{4}\] In the IR wavelength region, flux from the planet is expected to be dominated by emission, whereas in the visible it is expected to be dominated by reflected stellar flux. We only consider the degree of polarisation of reflected flux in this study and assume thermal flux is negligible, as we focus on the wavelength region 0.5 - 1 \(\mu\)m where the reflected flux will dominate. Therefore, in our case, \(F\) in Eqs 1 - 3 is really just \(F_{\mathrm{reflected}}\). ### Scattering by spherical particles We use Mie theory in order to describe how radiation is scattered as a function of scattering angle \(\Theta\) for spherical particles (Mie, 1908). For this we use a scattering matrix of the form (see, for example, Hovenier (1970); Hovenier et al. (2004)): \[\mathbf{F}(\Theta)=\begin{pmatrix}\alpha_{1}(\Theta)&\beta_{1}(\Theta)&0&0\\ \beta_{1}(\Theta)&\alpha_{2}(\Theta)&0&0\\ 0&0&\alpha_{3}(\Theta)&\beta_{2}(\Theta)\\ 0&0&-\beta_{2}(\Theta)&\alpha_{4}(\Theta)\\ \end{pmatrix}. \tag{5}\] Such a single scattering matrix can be computed for spherical particles of a defined size distribution using PolHEx. However, we do not use these matrix elements directly in the radiative transfer part of the code. Instead we expand them into generalised spherical functions (de Rooij & van der Stap, 1984) and use the coefficients of this expansion. The degree of linear polarisation \(P\), as defined in Eq 3, is related to the scattering matrix elements of Eq 5 by \(P=\frac{-\beta_{1}(\Theta)}{\alpha_{1}(\Theta)}\). The first matrix element, \(\alpha_{1}(\Theta)\), is known as the phase function or scattering function. It would be the only element of \(\mathbf{F}(\Theta)\) needed if polarisation were to be ignored. The elements of this scattering matrix are functions of the scattering angle \(\Theta\). Typically, Legendre polynomials are used in radiative transfer models to expand the phase function if only total flux is being considered, but generalised spherical functions are more appropriate for the full Stokes vector, which includes the polarisation state (Kuščer & Ribarič, 1959). Full details on the expansion of Mie scattering matrices into spherical functions which is used in PolHEx can be found in de Rooij & van der Stap (1984).
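As a concrete illustration of Eq 5 and the relation \(P=-\beta_{1}/\alpha_{1}\) for spheres, the sketch below computes the single-scattering degree of polarisation of a small silicate-like grain using the third-party miepython package. This is our own example under stated assumptions: the package, the refractive index and the particle size are illustrative choices, not part of the PolHEx pipeline, and sign conventions differ between Mie codes.

```python
# pip install miepython   (third-party Mie code; an assumption of this sketch)
import numpy as np
import miepython

m = 1.57 - 2.99e-5j            # MgSiO3-like index; miepython takes k as a negative imaginary part
x = 2.0 * np.pi * 0.25 / 0.55  # size parameter, Eq (6): r = 0.25 um grain at 0.55 um

theta = np.radians(np.arange(0.0, 181.0))
S1, S2 = miepython.mie_S1_S2(m, x, np.cos(theta))  # amplitude scattering functions

# For spheres, alpha_1 is proportional to (|S1|^2 + |S2|^2)/2 and beta_1 to
# (|S2|^2 - |S1|^2)/2, so that P = -beta_1/alpha_1 becomes:
P = (np.abs(S1)**2 - np.abs(S2)**2) / (np.abs(S1)**2 + np.abs(S2)**2)
print(P[90])   # single-scattering P at a 90 degree scattering angle
```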
### Scattering by non-spherical particles Accurate scattering matrices (equivalent to Eq. 5) for radiation scattered by non-spherical (irregularly shaped) particles can be obtained using various methods. These include the Discrete Dipole Approximation (DDA) (Yurkin & Hoekstra, 2011) and the T-matrix method (Mishchenko et al., 2002, 2017). Although accurate, these methods can take a considerable amount of computational time, especially for particles with a large size parameter (as determined by Eq. 6). Other, more efficient methods have been developed, such as the Distribution of Hollow Spheres (DHS) method (Min et al., 2003, 2005). In this method, which has been employed by various studies such as Samra et al. (2020), the optical properties of a collection of particles with random orientations are approximated by the optical properties of a collection of basic shapes, i.e. spherical particles with varying amounts of vacuum inside. It has been shown by Min et al. (2003) to recreate the measured absorption cross-sections of small crystalline forsterite particles well. We use the publicly available code optool2 (Dominik et al., 2021) to compute scattering matrices of irregularly shaped particles. It is derived from codes by Min et al. (2005) (DHS model for irregular grains) and Tazaki & Tanaka (2018) (scattering by fractal dust aggregates). Optool allows a material to be defined at input, with specified refractive indices, size distribution, wavelength, and degree of irregularity. This degree of irregularity is defined using a parameter called \(f_{\mathrm{max}}\), which ranges from 0 for spherical particles to 1 for very irregular particles (although computationally \(f_{\mathrm{max}}\) needs to stay just below 1, e.g. 0.98). ### Rayleigh scattering by molecules Rayleigh scattering is essentially Mie scattering in the limit of a very small size parameter \(x\), which is related to wavelength \(\lambda\) and particle size (radius) \(r\) by: \[x=\frac{2\pi r}{\lambda}. \tag{6}\] Rayleigh scattering will occur due to the small gaseous (molecular and atomic) particles in our model atmospheres. Incident radiation (in this case from the planet's host star) induces a dipole moment in the particle, which is proportional to the incident electric field, with a proportionality constant known as the polarisability. This polarisability can be isotropic or non-isotropic. If a particle which is small compared to the wavelength (i.e. \(x\ll 1\) in Eq. 6) has isotropic polarisability, then Rayleigh scattering without depolarisation occurs (Hovenier et al., 2004). If the particle has anisotropic polarisability, as is the case for H\({}_{2}\) (Kolos & Wolniewicz, 2004) for example, then Rayleigh scattering with depolarisation occurs. This can be quantified using the depolarisation factor \(\delta\), which appears in the equation for the scattering matrix (as a function of scattering angle \(\Theta\)) for anisotropic Rayleigh gaseous molecular particles (Rayleigh, 1918; Chandrasekhar, 1950; Hansen & Travis, 1974): \[P_{m}(\Theta)=\Delta\begin{pmatrix}\frac{3}{4}(1+\cos^{2}\Theta)&-\frac{3}{4}(\sin^{2}\Theta)&0&0\\ -\frac{3}{4}(\sin^{2}\Theta)&\frac{3}{4}(1+\cos^{2}\Theta)&0&0\\ 0&0&\frac{3}{2}\cos\Theta&0\\ 0&0&0&\Delta^{{}^{\prime}}\frac{3}{2}\cos\Theta\end{pmatrix}+(1-\Delta)\begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix} \tag{7}\] where: \[\Delta=\frac{1-\delta}{1+\frac{\delta}{2}}, \tag{8}\] \[\Delta^{{}^{\prime}}=\frac{1-2\delta}{1-\delta}. \tag{9}\] The average molecular scattering cross-section \(\sigma_{m}\) per particle (over all scattering angles \(\Theta\)) for anisotropic Rayleigh molecules (Hansen & Travis, 1974) is: \[\sigma_{m}=\frac{8\pi^{3}}{3}\frac{(n^{2}-1)^{2}}{\lambda^{4}N^{2}}\frac{6+3\delta}{6-7\delta}, \tag{10}\] with \(\sigma_{m}\propto\lambda^{-4}\), meaning that molecular scattering is much stronger at shorter (bluer) wavelengths. Here, \(n\) is the real part of the refractive index of the gas, and \(N\) is the number of molecules per unit volume (which depends on the gas temperature, \(T_{\rm gas}\)). We use a depolarisation factor \(\delta\) of 0.02 for H\({}_{2}\) (Penndorf, 1957; Hansen & Travis, 1974), as this is the main gaseous component of our model WASP-96b atmosphere.
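The anisotropy factors of Eqs (8)-(9) and the cross-section of Eq (10) translate directly into code. In the sketch below (our own illustration) we evaluate them for H\(_2\) with \(\delta=0.02\); the refractive index and number density used are generic reference-condition values, not quantities taken from this work.

```python
import numpy as np

def rayleigh_factors(delta):
    """Anisotropy factors Delta and Delta' of Eqs (8)-(9)."""
    Delta = (1.0 - delta) / (1.0 + delta / 2.0)
    Delta_prime = (1.0 - 2.0 * delta) / (1.0 - delta)
    return Delta, Delta_prime

def rayleigh_cross_section(wavelength_cm, n_gas, N_per_cm3, delta):
    """Average molecular scattering cross-section per particle, Eq (10), cgs units."""
    return (8.0 * np.pi**3 / 3.0
            * (n_gas**2 - 1.0)**2 / (wavelength_cm**4 * N_per_cm3**2)
            * (6.0 + 3.0 * delta) / (6.0 - 7.0 * delta))

# H2 with delta = 0.02; n_gas and N below are illustrative standard-condition values
print(rayleigh_factors(0.02))
print(rayleigh_cross_section(0.55e-4, n_gas=1.00013, N_per_cm3=2.687e19, delta=0.02))
```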
### Integrated reflection spectra across the planet The local meridian plane for a particular longitude/latitude position on the exoplanet contains both the local zenith and the direction of propagation of the light. For a given orbital phase, the flux vectors from various points on the planet (as defined by longitude and latitude) are integrated over the illuminated part of the planetary disk which is visible from the observer's point of view. The Stokes vector after being reflected by the planetary atmosphere and arriving at the observer can be described (as a function of wavelength \(\lambda\) and orbital phase \(\alpha\)) by: \[\mathbf{F}(\lambda,\alpha)=\frac{r^{2}}{d^{2}}\frac{R^{2}}{D^{2}}\frac{1}{4} \mathbf{S}(\lambda,\alpha)\pi\mathbf{B}_{0}(\lambda), \tag{11}\] where \(r\) is the planetary radius, \(d\) the distance between the planet and the observer, \(R\) the stellar radius, \(D\) the star-planet distance, and \(\mathbf{S}\) the planetary scattering matrix, which describes the light scattered and reflected towards the observer by the planet's atmosphere (Stam et al., 2004). We use c.g.s. units for the distances, and the scattering matrices are unit-less. \(\mathbf{B}_{0}\) is the Stokes column vector: \[\begin{pmatrix}B_{0}\\ 0\\ 0\\ 0\end{pmatrix}, \tag{12}\] with \(\pi B_{0}\) the stellar surface flux (units of erg s\({}^{-1}\) cm\({}^{-2}\)). The stellar surface flux is considered unpolarised when integrated over the stellar disk (Kemp et al., 1987). In this work we only compute the planetary scattering matrix, \(\mathbf{S}\) (\(\lambda\),\(\alpha\)) (Stam et al., 2004): \[\mathbf{S}(\lambda,\alpha)=\begin{pmatrix}a_{1}(\lambda,\alpha)&b_{1}(\lambda,\alpha)&0&0\\ b_{1}(\lambda,\alpha)&a_{2}(\lambda,\alpha)&0&0\\ 0&0&a_{3}(\lambda,\alpha)&b_{2}(\lambda,\alpha)\\ 0&0&-b_{2}(\lambda,\alpha)&a_{4}(\lambda,\alpha)\end{pmatrix}. \tag{13}\] We set all other terms in Equation 11 to 1, and thus our outputs labelled \(F\) are really the \(a_{1}\) (\(\lambda\),\(\alpha\)) element of the planetary scattering matrix. These values can be scaled given the parameters of the system; we choose not to do so here and just look at comparative values, as our main aim is to assess differences between different model atmospheres. \(P\) is a relative measure and so does not need to be scaled. Further details on the planetary scattering matrix can be found in Rossi et al. (2018), including the expansion as a Fourier series and the choice of Gaussian abscissae, which are also relevant to PolHEx. We find 80 Gaussian abscissae to be sufficiently accurate for our computations. ### Geometric and Bond albedo The Bond albedo, \(A_{B}\), of a planet is essentially the efficiency with which the planet reflects incoming stellar radiation. It therefore determines how much energy from stellar radiation is absorbed and available for transport around the planet (see, for example, Chubb & Min (2022)). It is usually assumed that the incoming stellar radiation is initially unpolarised. Adaptations to PolHEx could be made if the incoming stellar radiation were not initially unpolarised. The geometric albedo is defined as the ratio of the reflected flux at \(\alpha\) = 0\({}^{\circ}\) compared to that of a Lambertian (isotropically reflecting) flat disk of the same cross-sectional area. Unlike the Bond albedo, it is possible for the geometric albedo to be larger than 1. It can be observed just before and during secondary eclipse for a transiting exoplanet. Although both the Bond and geometric albedo are wavelength-dependent, they are typically measured and averaged over a band pass (see, for example, Krenn et al. (2023)). If we create a planet with no atmosphere and a purely reflecting surface (i.e. a surface albedo of 1), then the geometric albedo (\(F\) at \(\alpha\) = 0\({}^{\circ}\)) will be \(\frac{2}{3}\) for all wavelengths.
This is because of the definition of the phase function of a spherical planet with a Lambertian, perfectly reflecting surface and no atmosphere (Stam et al., 2006): \[a_{1}(\Theta)=\frac{8}{3\pi}(\sin\Theta-\Theta\cos\Theta), \tag{14}\] and for \(\alpha=0^{\circ}\), i.e. \(\Theta=180^{\circ}\), \(a_{1}(\Theta)\) becomes \(\frac{8}{3}\). Here, \(a_{1}(\Theta)\) is the (1,1)-element of the planetary scattering matrix \(\mathbf{S}\) (\(\lambda\),\(\alpha\)). The geometric albedo \(A_{G}\) can be found as an output of PolHEx via: \[A_{G}=\frac{1}{4}a_{1}(\Theta=180^{\circ}), \tag{15}\] which thus becomes \(\frac{8}{3}\times\frac{1}{4}=\frac{2}{3}\) for a planet with a purely reflecting surface and no atmosphere. Equation 15 can be used to compute the geometric albedo (as a function of wavelength) for any model atmosphere setup with PolHEx, via the planetary scattering matrix which is computed as part of the code.
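Equations (14)-(15) are easy to verify numerically; the short sketch below (our own illustration) recovers the geometric albedo of \(\frac{2}{3}\) for a Lambertian planet with no atmosphere.

```python
import numpy as np

def a1_lambertian(theta):
    """Phase function of a spherical Lambertian planet with no atmosphere, Eq (14)."""
    return 8.0 / (3.0 * np.pi) * (np.sin(theta) - theta * np.cos(theta))

def geometric_albedo(a1_backscatter):
    """Geometric albedo from the (1,1) element of the planetary scattering matrix, Eq (15)."""
    return 0.25 * a1_backscatter

theta = np.pi   # scattering angle of 180 degrees, i.e. orbital phase alpha = 0 degrees
print(geometric_albedo(a1_lambertian(theta)))   # -> 0.6666..., i.e. 2/3
```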
## 3 The Polarisation of Hot Exoplanets (PolHEx) code The Polarisation of Hot Exoplanets (PolHEx) code is used throughout this study. This code is based on the adding-doubling radiative transfer algorithm of de Haan et al. (1987), adapted for and used to model polarised flux due to exoplanetary atmospheres by Stam et al. (2006); Stam (2008). Variations of the code have been used in many studies, such as Stam et al. (1999, 2000, 2004, 2006); Stam (2008); de Kok et al. (2011); Karalidi et al. (2012); Karalidi & Stam (2012); Karalidi et al. (2013); Fauchez et al. (2017); Rossi & Stam (2017); McLean et al. (2017); Palmer et al. (2017); Trees & Stam (2019); Groot et al. (2020); Meinke et al. (2022); Trees & Stam (2022). A version of the code was used in a benchmark against another radiative transfer code, ARTES, in Karalidi et al. (2012). There is a publicly available code written in Python and Fortran called PyMieDAP3 (Rossi et al., 2018), which shares much of the functionality and origins with PolHEx. Footnote 3: [https://gitlab.com/loic.cg.rossi/pymiedap.git](https://gitlab.com/loic.cg.rossi/pymiedap.git) Figure 1 gives a summary of the stages of the PolHEx code. Either the MIE routine within PolHEx or an external code can be used to compute scattering matrices (as in Eq. 5) for a particle size distribution of cloud/aerosol particles, resulting in 6 independent matrix elements as functions of the scattering angle. The scattering matrix elements are expanded in generalised spherical functions (see Section 2.3) before being passed on to be used in the adding-doubling radiative transfer (DAP) part of the PolHEx code. The atmosphere is also built as input to the radiative-transfer part of the code, as described in Section 4 for our WASP-96b model atmospheres. Molecular absorption and scattering are both included here. The output of the radiative-transfer part of the code gives information on the scattering properties of the atmosphere. The matrix elements which describe this local reflection of the planet (in our case due to the atmosphere only) are then expanded as a Fourier series which can be read into the PIX part of the PolHEx code. If an external code is used to compute the scattering matrix elements, then an extra step is required for their expansion (this is done by the SCA component of the code, as labelled in Figure 1). The pressure-dependent atmospheric parameters are set up in DAP for a number of atmospheric layers. The geometry of the system is set up in this part of the code, including the atmospheric composition as a function of longitude and latitude, and the orbital phase angle \(\alpha\). Different Fourier coefficients are read in for different regions of the planet, which allows for the computation of locally reflected Stokes vectors for each longitude/latitude grid point which is observable for the defined geometry. Of course this requires more computation, so some compromise needs to be made between degree of complexity and feasible computations. The contributions of reflected flux from each longitude/latitude grid point at a defined orbital phase angle \(\alpha\) are combined together to give a set of reflected Stokes vectors for the visible planetary disk. The default of PolHEx is to compute \(\alpha\) from 0 to 360\({}^{\circ}\) in steps of 10\({}^{\circ}\), with 0\({}^{\circ}\) being when the planet is directly behind the star, 90\({}^{\circ}\) when the morning terminator is face-on, 180\({}^{\circ}\) when it is directly in front (mid-transit), and 270\({}^{\circ}\) when the evening terminator is face-on to the observer. The outputs from the final part of PolHEx, PIX, as labelled in Figure 1, are either \(F\) and \(P\) as a function of wavelength \(\lambda\) for a fixed orbital phase \(\alpha\), or \(F\) and \(P\) as a function of orbital phase \(\alpha\) for a fixed wavelength \(\lambda\). ## 4 The atmosphere of WASP-96b WASP-96b is a hot gaseous exoplanet with a mass of 0.48 \(\pm\) 0.03 M\({}_{\rm Jup}\) and a mean radius of 1.2 \(\pm\) 0.06 R\({}_{\rm Jup}\). It orbits close to its host star, with a semi-major axis of 0.045 AU and an orbital period of 3.4 days (Hellier et al., 2014). Our base planetary atmosphere setup in this study is derived from the output of a global circulation model (GCM) of WASP-96b which was produced using expeRT/MITgcm (Carone et al., 2020; Baeyens et al., 2021) and combined with a kinetic cloud modelling routine (Helling et al., 2019, 2021) in Samra et al. (2023). We explore how varying certain parameters, such as the materials used to form cloud particles, impacts not only the reflected flux from the planet, but also the degree of polarisation of that flux, using the Stokes parameters defined in Section 2. We utilise the output provided as a result of Samra et al. (2023) for six longitude- and latitude-dependent atmospheric regions, which we label A...F, as defined in Figure 2. For each atmospheric region A...F, we build an atmosphere with 44 plane-parallel atmospheric layers, with the pressures and temperatures for each (from Samra et al. (2023)) taken from Figure 3 (left). We assign the volume mixing ratios (VMRs) of molecules, the material volume fractions of the clouds, and the optical depth of clouds for each region, as described below. We only consider the parts of the planet which are both illuminated and visible to the observer at any given phase. There is no clearly defined surface on hot gaseous exoplanets, so we choose to consider atmospheric layers down to 6 bar. There we set the albedo to 0, i.e. there is no surface reflection.
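To make the atmosphere construction concrete, the sketch below shows one possible container for the per-region, 44-layer input described above. The class and field names are hypothetical illustrations of the bookkeeping, not PolHEx's actual input format.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    pressure_bar: float                          # layer pressure
    temperature_K: float                         # gas temperature at this pressure
    vmr: dict = field(default_factory=dict)      # species -> volume mixing ratio
    cloud_tau: float = 0.0                       # cloud optical depth of the layer
    mean_radius_um: float = 0.0                  # mean cloud particle radius

@dataclass
class Region:
    name: str                                    # one of "A".."F"
    layers: list = field(default_factory=list)   # 44 layers, ordered top to bottom

# e.g. a cloud-free upper layer of region B (the numbers are placeholders)
region_B = Region("B", [Layer(1e-5, 1200.0, {"H2O": 4e-4, "CO": 6e-4})])
print(region_B.layers[0].pressure_bar)
```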
### Gaseous composition The molecular volume concentrations \(\frac{n_{i}}{n_{\rm tot}}\) (the number of molecules of a given species per unit volume divided by the total number of molecules in that unit volume) at each of the six atmospheric regions A...F defined in Figure 2 are taken from the models of Samra et al. (2023). For simplicity, and because most of the molecular VMRs are very similar between different regions, we only use the two terminator regions (B and D) for modelling the molecular composition, which are given in Figure 4. We create a model atmosphere using only the most abundant species (H\({}_{2}\)O (Polyansky et al., 2018), CO (Li et al., 2015), H\({}_{2}\)S (Azzam et al., 2016), CH\({}_{4}\) (Yurchenko et al., 2017), Na (Kramida et al., 2013; Allard et al., 2019), and K (Kramida et al., 2013; Allard et al., 2016)). The remainder of the atmosphere is comprised of H\({}_{2}\) and He, in assumed solar abundances. We take absorption cross-sections (in units of \(\frac{\rm cm^{2}}{\rm molecule}\)) computed using ExoCross (Yurchenko et al., 2018) as part of the ExoMolOP database (Chubb et al., 2021), with the line list for each as specified above. In general we do not expect to see spectral features from these species at abundances below around \(1\times 10^{-6}\) (Gasman et al., 2022); however, we include the atoms Na and K because of their very strong resonance doublet features. Such features have been observed in the atmosphere of WASP-96b using the Very Large Telescope (VLT) (Nikolov et al., 2018). We bin the combined cross-sections which include all these species down to a small number of wavelength values (58), ensuring sufficient sampling around the prominent spectral features. These absorption features are most apparent in our clear (i.e. cloud-free) atmospheric models, but are also important in our cloudy models in order to explore the scattering behaviour within and outside the regions of the absorption features. ### Cloud composition A variety of different species are predicted by Samra et al. (2023) to form the clouds in a WASP-96b-like atmosphere. Building an inhomogeneous atmosphere of WASP-96b allows us to investigate: * different cloud compositions * different cloud particle sizes and distributions * clouds at different layers of the atmosphere We then explore these further using model homogeneous atmospheres, largely based on either atmospheric region B (around the evening terminator) or D (around the morning terminator). We set up an atmosphere using PolHEx, which allows for a user-defined number of atmospheric layers, each with its own pressure, temperature, gaseous abundances, and cloud layer. For the cloud layer at each pressure level, expansion coefficients of the scattering matrix are read in, with clouds composed of different materials already pre-mixed, as explained in Section 4.2.2. The scattering properties of the cloud particles can either be computed using the internal PolHEx-MIE computation, which uses Mie theory and therefore only deals with spherical particles, or taken from an external source. We use the optool code4 (Dominik et al., 2021) (see Section 2.4) for exploring the impact of irregularly shaped rather than spherical particles in the atmosphere. We check for consistency with the MIE computations of PolHEx for the case of a spherical particle for each set of refractive indices and size distribution, and find identical results. Footnote 4: [https://github.com/cdominik/optool](https://github.com/cdominik/optool) #### 4.2.1 Materials used for clouds Table 1 gives a summary of the different materials used to form the clouds used in the models of this work, along with the reference of the source used for their refractive indices as a function of wavelength.
The real \(n\) and imaginary \(k\) parts of the refractive indices (sometimes known as optical constants) as a function of wavelength can be seen in Figure 5. It is worth noting that the Fe-bearing species Fe, FeO, and Fe\({}_{2}\)O\({}_{3}\) have the highest values of the imaginary component \(k\), which signifies that they are highly absorbing. The same species, along with TiO\({}_{2}\), have relatively high values of the real part of the refractive index \(n\), which indicates a high phase velocity (rate of propagation), which impacts scattering. We note that some typical spectral features which are driven by the imaginary component of the refractive index occur at longer wavelengths than those shown here. There are a number of databases which can be used to search for optical constants such as the refractive indices of materials used to form clouds. For example, there is the Database of Optical Constants for Cosmic Dust5, the Aerosol Refractive Index Archive (ARIA)6, the HITRAN2020 database (Gordon et al., 2022)7, and Texas A&M University dust 2020 (TAMUdust2020) (Saito et al., 2021), which compiles the optical properties of irregularly shaped aerosol particles. The Amsterdam-Granada Light Scattering Database (Muñoz et al., 2012)8 provides primarily laboratory-measured scattering matrices as a function of scattering angle at specific wavelengths for many different species, also including irregularly shaped aerosol particles. Measured size distributions are also provided, along with refractive indices. Such measurements are very useful for comparing to computations of scattering matrices. Figure 1: A simplified summary of the PolHEx code setup. Either the MIE routine within PolHEx or an external code can be used for computing the scattering matrices of cloud or aerosol particles. Use of an external code output requires an extra step (SCA) to expand these matrix elements (as functions of scattering angle) into spherical functions, and to format the output for input into the adding-doubling radiative transfer routine (DAP) part of PolHEx. The pressure-dependent atmospheric parameters are set up in DAP. Here we compute for each different atmospheric region, before combining different regions of the planet together in the PIX component of the code. The outputs from PIX are either \(F\) and \(P\) as a function of wavelength \(\lambda\) for a fixed orbital phase \(\alpha\), or \(F\) and \(P\) as a function of orbital phase \(\alpha\) for a fixed wavelength \(\lambda\). Figure 2: An illustration of how the dayside of WASP-96b is divided into different atmospheric compositions in this study. The letter denoting the composition is linked to the output of StaticWeather (Samra et al., 2023) for WASP-96b for the following longitude (\(\phi_{\rm long}\)) / latitude (\(\lambda_{\rm lat}\)) points: A = 0\({}^{\circ}\) / 0\({}^{\circ}\), B = 90\({}^{\circ}\) / 0\({}^{\circ}\), C = 45\({}^{\circ}\) / 0\({}^{\circ}\), D = -90\({}^{\circ}\) / 0\({}^{\circ}\), E = -45\({}^{\circ}\) / 0\({}^{\circ}\), F = 0\({}^{\circ}\) / 86\({}^{\circ}\). The blue stars indicate the location of these longitude / latitude points. A larger grid (64 \(\times\) 64) is used in this work but with the same proportions covered by each atmosphere type as in this diagram. Figure 3: Left: pressure-temperature profiles for the 6 atmospheric regions A to F, as labelled. Right: average particle size for the same 6 atmospheric regions A to F.
The optical properties of potential condensates in exoplanetary atmospheres have also been compiled in various works, such as Kitzmann & Heng (2017); Min et al. (2020); Lietzow & Wolf (2022). Footnote 8: [https://www.ia.csic.es/scattering/list/index.html](https://www.ia.csic.es/scattering/list/index.html) #### 4.2.2 Mixing materials to form clouds using effective medium theory In realistic scenarios, and as demonstrated in Samra et al. (2023) for WASP-96b, we expect clouds to be formed not of just one single material, but of several different species. In order to model clouds formed of different materials, we use effective medium theory to compute the complex refractive indices which result from the different materials combined (Mishchenko et al., 2016). Each material has its own real \(n\) and imaginary \(k\) part of the refractive index, which varies with wavelength, as illustrated by Figure 5. We take into account the material volume fractions of the different cloud materials for each longitude, latitude and pressure layer (as illustrated in Figure 6; here, the volume fractions are only proportions of the total cloud composition and do not take molecular abundances into account), along with the refractive index for each material as a function of wavelength, in order to get a mixed refractive index as a function of wavelength and pressure layer for each atmospheric region (Figure A1). We use the Bruggeman mixing rule (Bruggeman, 1935), as was used in Samra et al. (2022); Samra (2022): \[\sum_{s}\frac{V_{s}}{V_{\rm tot}}\frac{\epsilon_{s}-\epsilon_{\rm eff}}{ \epsilon_{s}+2\epsilon_{\rm eff}}=0 \tag{16}\] Here, \(\epsilon_{s}\) is the dielectric constant of each individual condensate material which makes up the inhomogeneous cloud particles. \(\epsilon_{s}\) is related to the refractive index by \(\epsilon_{s}=(n+ik)^{2}\). We solve Eq. 16 iteratively using Mathematica (Wolfram, 2022) to get the combined (or effective) dielectric constant \(\epsilon_{\rm eff}\), from which the effective refractive index, with real part \(n_{\rm eff}\) and imaginary part \(k_{\rm eff}\), follows. We compute these effective refractive indices as a function of longitude, latitude, pressure layer, and wavelength, and use them as input into our PolHEx WASP-96b models.
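While we solve Eq (16) with Mathematica, an equivalent damped fixed-point iteration is straightforward to write, as in the Python sketch below (our own illustration; the 50/50 mix of two Table 1 average indices is just an example input).

```python
import numpy as np

def bruggeman_effective_index(m_list, v_fracs, n_iter=500):
    """Iteratively solve the Bruggeman mixing rule, Eq (16), at one wavelength.

    m_list : complex refractive indices m_s = n_s + i*k_s of the materials
    v_fracs: volume fractions V_s / V_tot (should sum to 1)
    Returns the effective refractive index m_eff = n_eff + i*k_eff.
    """
    eps = np.asarray(m_list, dtype=complex) ** 2        # eps_s = (n + ik)^2
    f = np.asarray(v_fracs, dtype=float)
    eps_eff = np.sum(f * eps)                           # volume average as a first guess
    for _ in range(n_iter):
        # Eq (16) rearranged: eps_eff = [sum f eps/(eps+2 eps_eff)] / [sum f/(eps+2 eps_eff)]
        num = np.sum(f * eps / (eps + 2.0 * eps_eff))
        den = np.sum(f / (eps + 2.0 * eps_eff))
        eps_eff = 0.5 * eps_eff + 0.5 * num / den       # damped update for robustness
    return np.sqrt(eps_eff)

# Illustrative 50/50 mix of an MgSiO3-like and an Fe-like grain (Table 1 averages)
print(bruggeman_effective_index([1.57 + 2.99e-5j, 2.66 + 3.64j], [0.5, 0.5]))
```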
### Optical depth of clouds Strong spectral features of Na and K have been observed in the transmission spectra of WASP-96b using the Very Large Telescope (VLT), the Hubble Space Telescope (HST) and the Spitzer space telescope (Nikolov et al., 2018, 2022). The conclusion of Nikolov et al. (2018) was that the atmosphere must be cloud-free in order for the line-wings of the atomic absorption features to be visible. The GCM and kinetic cloud models of Samra et al. (2023), however, find that it would be very unlikely for WASP-96b to be cloud-free. Samra et al. (2023) therefore explored how their models could better match up with the observations, with one of the candidate processes being reduced atmospheric vertical mixing, which would cause clouds to settle deeper in the atmosphere than originally predicted. In this work we choose the lower-altitude cloud layer in our inhomogeneous model atmospheres, in order to be more consistent with the observations of Nikolov et al. (2018). We do also demonstrate an inhomogeneous model where we place the cloud layer higher up in the atmosphere (to become optically thick at \(1\times 10^{-4}\) bar), as a comparison (see Table 2 for a summary of the different models computed in this study). We assume that within each atmospheric layer the density \(N_{\rm cloud}\) of the materials (both molecules/atoms and clouds) remains constant. This means the optical depth \(\tau\) of a given layer of length \(l\), due to clouds composed of a variety of materials with combined extinction coefficient \(k_{\rm ext}\), can be deduced from: \[\tau=k_{\rm ext}N_{\rm cloud}l. \tag{17}\] We use cgs units in our code, with \(l\) in cm, \(N_{\rm cloud}\) in \(\frac{\rm g}{\rm cm^{3}}\), and \(k_{\rm ext}\) in \(\frac{\rm cm^{2}}{\rm g}\). Optical depth \(\tau\) is unitless. The extinction coefficient \(k_{\rm ext}\) is sometimes called the attenuation cross section \(\sigma_{\rm cloud}\). \(k_{\rm ext}\) is the sum of the scattering \(k_{\rm scat}\) and absorption \(k_{\rm abs}\) coefficients: \[k_{\rm ext}=k_{\rm scat}+k_{\rm abs}. \tag{18}\] The single scattering albedo (ssa) can be found from these: \[\mathrm{ssa}=\frac{k_{\mathrm{scat}}}{k_{\mathrm{ext}}}. \tag{19}\] Figure 5: The real (left) and imaginary (right) part of the refractive index for the species used in this work, with references for each in Table 1. The real part of the refractive index indicates the phase velocity (rate of propagation), which relates to scattering, whereas the imaginary part relates to the material's absorption properties. The (0) in the legend refers to \(k\) being zero across all wavelengths shown for that species. #### 4.3.1 Particle size distribution Although various size distributions are possible for the Mie computations within PolHEx (de Rooij & van der Stap, 1984), we choose to use a simple Gaussian distribution (Samra et al., 2020) for the size distribution of cloud particles in each atmospheric layer and region that we consider. We note that other particle size distributions are discussed in various works such as Samra (2022) and warrant further exploration in the future. The average particle size \(r_{B}\) as a function of pressure layer for each atmospheric region A...F is illustrated by Figure 3 (right). We assume a Gaussian distribution around each of these average particle sizes, with a standard deviation an order of magnitude smaller than the average particle size. It can be seen from Figure 3 (left) that the evening terminator region (B) is warmer than the morning terminator region (D). This generally corresponds to a smaller average particle size for the cooler morning terminator in comparison to a larger average particle size for the warmer evening terminator.
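Equations (17)-(19) amount to a few lines of code per layer; the sketch below (our own illustration, with placeholder numbers rather than model values) evaluates the cloud optical depth and single scattering albedo of one layer.

```python
def layer_optical_depth(k_ext_cm2_per_g, rho_cloud_g_per_cm3, path_cm):
    """Cloud optical depth of one layer, Eq (17), in cgs units."""
    return k_ext_cm2_per_g * rho_cloud_g_per_cm3 * path_cm

def single_scattering_albedo(k_scat, k_abs):
    """Single scattering albedo, Eqs (18)-(19)."""
    return k_scat / (k_scat + k_abs)

# Placeholder layer values, chosen only to show the bookkeeping
tau = layer_optical_depth(3.0e3, 1.0e-11, 5.0e6)
print(tau, single_scattering_albedo(2.5e3, 0.5e3))   # -> 0.15 0.8333...
```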
### Geometry of WASP-96b's transit PolHEx is set up with the hot exoplanets assumed to be tidally locked to their host stars. This means that the same face of the planet is always facing the star. This simplifies the description of which part of the planet is visible to the observer, with the convention of phase \(\alpha=0^{\circ}\) for the dayside of the planet facing the observer and \(\alpha=180^{\circ}\) for the nightside of the planet facing the observer. Phases of 90\({}^{\circ}\) and 270\({}^{\circ}\) correspond to the morning and evening terminators directly facing the observer, respectively. An illustration of how the phase angles are defined is given in Figure 7, which gives a face-down perspective of the geometry. Only the dayside part of the planet (\(-\frac{\pi}{2}\leq\phi_{\mathrm{long}}\leq\frac{\pi}{2}\), \(-\frac{\pi}{2}\leq\lambda_{\mathrm{lat}}\leq\frac{\pi}{2}\)) will give non-zero reflected Stokes vectors. Under our assumption of a tidally locked planet we can therefore assign atmospheric types based on regions of longitude and latitude, and these definitions will hold for all phase angles. The illuminated part of the planet which is visible to an observer of course changes as a function of orbital phase, as illustrated in Figure 8 (a simple geometric sketch of this bookkeeping is given below). ## 5 Different atmospheric setups The atmosphere is divided into _nlong_ = 64 longitude (\(\phi_{\mathrm{long}}\)) and _nlat_ = 64 latitude (\(\lambda_{\mathrm{lat}}\)) points. Each of these grid points is assigned an atmospheric type, computed from the radiative transfer adding-doubling part of the code. For simplicity we have divided WASP-96b up into six atmospheric regions A...F, as described in Section 4 and illustrated in Figure 2. The locally reflected Stokes vectors are computed for each longitude-latitude grid point and then integrated over the visible and illuminated part of the planetary disk, in order to get the total reflected Stokes parameters for a given orbital phase (as in Figures 7 and 8). Enough latitude and longitude grid points need to be used such that the Stokes vector does not vary significantly between adjacent grid points. The method of assigning grid points ensures that the planet is well sampled around the terminator and polar regions. An adequate number of grid points for convergence was found to be 64 \(\times\) 64. Each atmosphere type consists of 44 altitude layers. The following parameters (see Section 4 for details) are varied for each atmosphere type (region in longitude and latitude space) and atmospheric layer (altitude or pressure layer): * **gas temperature** (K) as a function of gas pressure (bar) * **molecular/atomic number densities (cm\({}^{-3}\))** as a function of pressure * **size distribution** of cloud particles for that given pressure layer * **complex refractive index** as a function of pressure, computed using effective medium theory to mix different materials together * the wavelength-dependent **optical depth** of clouds as a function of pressure layer The different model setups that we compute in this work are summarised in Table 2, along with the figure(s) which demonstrate the results of these models. In general we produce figures for the total flux \(F\) (\(\lambda\), \(\alpha\)) and degree of linear polarisation \(P\) (\(\lambda\), \(\alpha\)) for model atmospheres, either as a function of wavelength (between 0.5 - 1 \(\mu\)m) for \(\alpha\) = 90\({}^{\circ}\) or \(\alpha\) = 270\({}^{\circ}\), or as a function of orbital phase (between 0 - 360\({}^{\circ}\)) for a selection of wavelengths. Here, we use the term homogeneous in terms of longitude and latitude; there is variation with altitude in all models we call homogeneous. \begin{table} \begin{tabular}{l l l l l} \hline Species & Name & \(n\) & \(k\) & Source \\ \hline Al\({}_{2}\)O\({}_{3}\) (crystalline) & Corundum & 1.76 & 0 & Palik (2012) \\ Fe\({}_{2}\)O\({}_{3}\) (solid) & Hematite & 2.79 & 0.22 & Triaud (2005)\({}^{a}\) \\ Fe\({}_{2}\)SiO\({}_{4}\) (crystalline) & Fayalite & 1.85 & \(1.16\times 10^{-3}\) & Unpublished\({}^{b}\) \\ FeO (amorphous) & Wustite & 2.43 & 0.55 & Henning et al. (1995) \\ Fe (metallic) & Iron & 2.66 & 3.64 & Palik (2012) \\ Mg\({}_{2}\)SiO\({}_{4}\) (amorphous) & Forsterite & 1.61 & \(1.22\times 10^{-4}\) & Jäger et al. (2003) \\ MgO (cubic) & Magnesium oxide & 1.74 & \(6.76\times 10^{-8}\) & Palik (2012) \\ MgSiO\({}_{3}\) (amorphous) & Enstatite & 1.57 & \(2.99\times 10^{-5}\) & Dorschner et al. (1995) \\ SiO\({}_{2}\) (crystalline) & Quartz & 1.54 & 0 & Palik (2012) \\ SiO (noncrystalline) & Silicon oxide & 1.93 & \(6.61\times 10^{-3}\) & Palik (2012) \\ TiO\({}_{2}\) (rutile) & Rutile & 2.54 & \(2.40\times 10^{-4}\) & Zeidler et al. (2011) \\ \end{tabular} \end{table} Table 1: Sources used for the refractive indices of the species used to form cloud particles in this work. Average values across the wavelength range we consider (0.5 - 1 \(\mu\)m) are given. All species are in solid phase.
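To make the phase-dependent visibility bookkeeping of Sections 4.4 and 5 concrete, the following Python sketch (our own illustration, with assumed angle conventions; not PolHEx code) flags which longitude-latitude grid points are both illuminated and visible at a given orbital phase.

```python
import numpy as np

def visible_and_illuminated(phi_long_deg, lat_deg, alpha_deg):
    """Mask of grid points that are both illuminated and visible at phase alpha.

    Assumes a tidally locked planet with the substellar point at longitude 0,
    alpha = 0 deg when the dayside faces the observer and alpha = 180 deg at
    mid-transit; these conventions mirror the text but are our assumptions.
    """
    phi = np.radians(phi_long_deg)
    lat = np.radians(lat_deg)
    alpha = np.radians(alpha_deg)
    illuminated = np.cos(phi) * np.cos(lat) > 0.0        # dayside hemisphere
    visible = np.cos(phi - alpha) * np.cos(lat) > 0.0    # observer-facing hemisphere
    return illuminated & visible

# A 64 x 64 grid as in Section 5: at alpha = 90 deg only about one quarter of
# the sphere (half of the dayside, around one terminator) passes both tests.
phi, lat = np.meshgrid(np.linspace(-180.0, 180.0, 64), np.linspace(-90.0, 90.0, 64))
print(visible_and_illuminated(phi, lat, 90.0).mean())   # fraction of grid points kept
```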
Figure 7: An illustration of how we define the phase angle \(\alpha\), with a face-down view of the orbiting planet-star system. We assume tidally locked planets, so the rotation period of the planet on its axis is the same as the orbital rotation period, both in the anti-clockwise direction in this diagram. The direction of observation is indicated, with the cooler morning terminator being face-on at \(\alpha=90^{\circ}\) and the warmer evening terminator being face-on at \(\alpha=270^{\circ}\). Figure 8: An illustration of how the planet disk would look from the perspective of the observer (if it could be resolved) as a function of phase. The cooler morning terminator at \(\alpha=90^{\circ}\) and the warmer evening terminator at \(\alpha=270^{\circ}\) are highlighted as the phases we focus on in this paper. ## 6 Results for WASP-96b ### Inhomogeneous model atmospheres: \(F\) and \(P\) as a function of wavelength for \(\alpha\) = 90\({}^{\circ}\) and \(\alpha\) = 270\({}^{\circ}\) Figure 9 gives \(F\) and \(P\) as a function of wavelength for different inhomogeneous model atmospheres (i.e. those which vary as a function of longitude \(\phi_{\text{long}}\) and latitude \(\lambda_{\text{lat}}\)), at orbital phases of 90 and 270\({}^{\circ}\). The full inhomogeneous atmosphere (orange and turquoise dashed lines) has a cloud layer that becomes optically thick at \(1\times 10^{-2}\) bar (see the discussion at the start of Section 4). We also include models where the cloud layer becomes optically thick higher in the atmosphere, at \(1\times 10^{-4}\) bar. The flux as a function of wavelength is relatively low in both scenarios, but the degree of linear polarisation is markedly different. We also show completely clear (no cloud) models in Figure 9, which are nearly identical for both \(F\) and \(P\) at phases 90 and 270\({}^{\circ}\). This illustrates that it is the cloud particles (in particular their refractive properties and size distributions) which are causing the differences in reflected flux and degree of polarisation between the different model atmospheres. We explore further why these differences occur, using homogeneous model atmospheres and the scattering properties of different species, in Sections 6.3 and 6.4. Similar to the models of Jupiter-like exoplanets in Stam et al. (2004), the clear atmosphere in Figure 9 has a general trend of decreasing \(F\) with \(\lambda\), due to a decrease in the molecular scattering optical thickness with \(\lambda\). \(P\) has a corresponding general increase with \(\lambda\), due to less multiple scattering taking place at longer wavelengths because of the lower molecular scattering cross-section. Multiple scattering typically lowers the degree of polarisation \(P\) of reflected light. The regions of increased molecular and atomic absorption can clearly be seen in the clear spectra of Figure 9 (left).
The most prominent absorption features occur around 0.6 \(\mu\)m and just under 0.8 \(\mu\)m, due to the strong resonance transition doublets of Na and K (Allard et al., 2016, 2019). In these regions the strong absorption also causes less multiple scattering to take place. This absorption thus leads to low \(F\) and high \(P\). In the models described so far, it is assumed that all cloud particles are spherical, with scattering properties computed using Mie theory. We also compare to a full inhomogeneous atmosphere with the same properties as in the dashed lines of Figure 9, but with irregularly shaped instead of spherical particles used for modelling the particles at pressure layers of 0.01 bar and above. We use the optool code9 (Dominik et al., 2021) (see Section 2.4). The value of \(f_{\text{max}}\) indicates the irregularity of the particles, with 0 a sphere (red) and higher values being more irregular. In the dotted lines of Figure 9 the difference between inhomogeneous atmospheres with spherical and very irregular particles (\(f_{\text{max}}\) = 0.8) can be seen, particularly for the degree of linear polarisation \(P\) at \(\alpha\) = 90\({}^{\circ}\) (face-on to the morning terminator). Footnote 9: [https://github.com/cdominik/optool](https://github.com/cdominik/optool) ### Inhomogeneous model atmospheres: \(F\) and \(P\) as a function of orbital phase for selected wavelengths In Figure 10, we plot \(F\) (left) and \(P\) (right) as a function of orbital phase for selected wavelengths between 0.5 - 1 \(\mu\)m for the full inhomogeneous model atmosphere (with clouds becoming optically thick at \(1\times 10^{-2}\) bar, relating to the dashed lines in Figure 9). The inhomogeneity can be clearly seen from the lack of symmetry either side of 180\({}^{\circ}\) for both \(F\) and \(P\). Figure 11 gives the same output but for a clear atmosphere, which appears symmetric about 180\({}^{\circ}\). It can be seen that \(P\) peaks at 90 and 270\({}^{\circ}\), due to Rayleigh scattering by the atoms and molecules in the atmosphere. Figure 12 gives the inhomogeneous model atmospheres with irregular rather than spherical particles at pressure layers of 0.01 bar and above. ### Homogeneous model atmospheres: \(F\) and \(P\) as a function of wavelength for \(\alpha\) = 90\({}^{\circ}\) In order to explore why the inhomogeneous atmosphere looks as it does in \(F\) and \(P\), in Figure 13 we show a series of models of homogeneous atmospheres (i.e. atmospheres which do not vary as a function of longitude and latitude, but only with altitude) of types A...F (where A...F are the six atmospheric regions as defined in Figure 2). The models of Figure 13 can be compared to the dashed lines in Figure 9, which give the equivalent inhomogeneous atmosphere. There, the \(\alpha\) = 90\({}^{\circ}\) model has contributions from atmospheric regions D, E, A, and F (i.e. the morning side of the planet), while the \(\alpha\) = 270\({}^{\circ}\) model has contributions from regions B, C, A and F (the evening side). We note from Figure 13 that atmospheric regions B and E are very similar to one another, as are A and C. This is due to the hot spot being shifted away from the sub-stellar point and towards the warmer evening terminator, which is a result of strong equatorial winds driven by the tidally-locked nature of the planet. In order to now focus on one atmospheric type at a time, we focus on region B (around the evening terminator) and region D (around the morning terminator).
Figures 14 and 15 show \(F\) and \(P\) for a set of homogeneous atmospheres, using the atmospheric setup for region B (Figure 14) and region D (Figure 15). The model with mixed-composition clouds is shown (labelled as all species), along with the same setup but with single materials only used to form the clouds. It can be seen from Figures 14 and 15 that the model WASP-96b atmosphere is largely dominated by the optical properties of the Fe-bearing species which are used to form the mixed-composition clouds. For Figure 14 in particular, the flux as a function of wavelength for the type B atmosphere with mixed-composition clouds (see Section 4.2) is nearly identical to that of the same model atmosphere but with FeO used to form all cloud particles, although the two differ in \(P\). The examples shown in Figures 14 and 15 are for clouds made purely of Al\({}_{2}\)O\({}_{3}\), Fe\({}_{2}\)O\({}_{3}\), FeO, Mg\({}_{2}\)SiO\({}_{4}\), or MgO. It can be seen that the optical properties of Fe\({}_{2}\)SiO\({}_{4}\) lead it to share more similarities with the silicate and oxide species than with the iron species. Atmospheres with clouds of such weakly absorbing species have a high single scattering albedo across all wavelengths (see Figure 16), so a significant fraction of the incoming stellar light is reflected out, some of it towards the observer, before it can be absorbed. As for the inhomogeneous atmospheres, we also explore homogeneous atmospheres using irregularly shaped instead of spherical cloud particles. Figure 17 shows \(F\) (left) and degree of linear polarisation \(P\) (right) for a model homogeneous planet based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\) assuming atmospheric type B only, but with varying irregularity of the cloud particles. The value of \(f_{\text{max}}\) indicates the irregularity of the particle, with 0 a sphere (red) and higher values being more irregular. The effect of using irregular instead of spherical particles can be clearly seen. Figures 18 - 20 give an indication of how the particle size distribution affects \(F\) and \(P\) as a function of wavelength. All three figures give \(F\) and \(P\) as a function of wavelength at \(\alpha\) = 90\({}^{\circ}\) for model homogeneous atmospheres of type B. Figure 18 uses a Gaussian particle size distribution with average particle size 0.25 \(\mu\)m and standard deviation 2.5 \(\times\) 10\({}^{-4}\)\(\mu\)m for all models, while Figure 19 uses a Gaussian particle size distribution with average particle size 0.1 \(\mu\)m and standard deviation 0.01 \(\mu\)m for all models. Both present 11 different models, each with a single species used to form the clouds, as labelled. \begin{table} \begin{tabular}{l c} \hline \hline Description & Associated figure \\ \hline **Inhomogeneous model atmospheres: \(F\) and \(P\) as a function of wavelength for \(\alpha\) = 90\({}^{\circ}\) and \(\alpha\) = 270\({}^{\circ}\)** & \\ WASP-96b setup including atmosphere types A\(\ldots\)F, with and without clouds, and varying optical depth & Fig. 9 \\ WASP-96b setup including atmosphere types A\(\ldots\)F, for irregularly shaped particles above 0.01 bar & Fig. 9 \\ \hline **Inhomogeneous model atmospheres: \(F\) and \(P\) as a function of orbital phase for selected wavelengths** & \\ WASP-96b setup including atmosphere types A\(\ldots\)F with clouds of mixed composition & Fig. 10 \\ WASP-96b setup including atmosphere types A\(\ldots\)F with no clouds (clear) & Fig. 11 \\
WASP-96b setup including atmosphere types A\(\ldots\)F, for irregularly shaped particles above 0.01 bar & Fig. 12 \\ \hline **Homogeneous model atmospheres: \(F\) and \(P\) as a function of wavelength for \(\alpha\) = 90\({}^{\circ}\)** & \\ A series of 6 model atmospheres, of types A\(\ldots\)F, each with mixed species used to form clouds & Fig. 13 \\ A series of models of atmosphere type B, each with one single species used to form the clouds & Fig. 14 \\ A series of models of atmosphere type D, each with one single species used to form the clouds & Fig. 15 \\ A series of 5 model atmospheres of type B, with varying irregularity of cloud particles & Fig. 17 \\ 11 models of type B, with one single species used to form clouds and 0.25/2.5 \(\times\) 10\({}^{-4}\)\(\mu\)m size distribution & Fig. 18 \\ 11 models of type B, with one single species used to form clouds and 0.1/0.01 \(\mu\)m size distribution & Fig. 19 \\ Models of type B, each with Mg\({}_{2}\)SiO\({}_{4}\) only used to form clouds and various Gaussian size distributions & Fig. 20 \\ \hline **Homogeneous model atmospheres: \(F\) and \(P\) as a function of orbital phase for selected wavelengths** & \\ Mixed cloud composition type B, compared to models with one single species used to form the clouds & Fig. A3/A4 \\ \hline \hline \end{tabular} \end{table} Table 2: A summary of the different model atmospheres computed in this study. Note that \(\alpha\) = 90\({}^{\circ}\) and \(\alpha\) = 270\({}^{\circ}\) are identical in the cases of longitudinally/latitudinally homogeneous atmospheres, so only \(\alpha\) = 90\({}^{\circ}\) is shown in those cases.

Figure 9: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for our model **inhomogeneous** (i.e. varying as a function of longitude and latitude) WASP-96b atmosphere, assuming different properties for atmospheric regions \(A\) to \(F\), split as illustrated by Figure 2. The optical depth reaches unity due to clouds at mid (dashed orange and turquoise lines) and high (solid lines labelled optically thick) altitude in the atmosphere. A comparison to the same model setup (same pressure-temperature profiles and molecular compositions) but with completely clear atmospheres is shown. The clear atmospheres are nearly identical so cannot easily be distinguished here. The dotted lines are for model atmospheres which reach optical depth unity at mid altitude but with irregularly shaped particles (\(f_{\rm max}\) = 0.8) from 0.01 bar and above. We consider orbital phases \(\alpha\) of 90 and 270\({}^{\circ}\).

Figure 11: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) as a function of orbital phase \(\alpha\) and at various wavelengths, for the inhomogeneous WASP-96b model atmosphere with a clear (no-cloud) atmosphere. Vertical lines at 90 and 270\({}^{\circ}\) are shown for reference.

Figure 12: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) as a function of orbital phase \(\alpha\) and at various wavelengths, for the inhomogeneous WASP-96b model atmosphere with irregular instead of spherical particles for pressure layers of 0.01 bar and above.

Figure 10: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) as a function of orbital phase \(\alpha\) and at various wavelengths, for the inhomogeneous WASP-96b model atmosphere.
Figure 20 focuses on model atmospheres each with only Mg\({}_{2}\)SiO\({}_{4}\) used to form the clouds, but this time with varying parameters used for the Gaussian size distributions.

### Homogeneous model atmospheres: \(F\) and \(P\) as a function of orbital phase for selected wavelengths

Figures A3 and A4 give the phase curves (i.e. the variation of \(F\) or \(P\) with orbital phase) of different homogeneous atmospheres for selected wavelengths. All panels assume atmosphere type B, with the majority of the panels showing atmospheres with clouds made up of a single species (Al\({}_{2}\)O\({}_{3}\), Fe\({}_{2}\)O\({}_{3}\), FeO, Mg\({}_{2}\)SiO\({}_{4}\), MgO) only. These are the same models as in Figure 14. Phase curves for a homogeneous planet with atmosphere type B but containing clouds made up of mixed species (as described in Section 4.2.2) are shown for comparison. It can be seen that different materials give different signatures, particularly when looking at the degree of linear polarisation \(P\). Note the different scales on the y-axis; those materials which are less reflective, such as FeO and Fe\({}_{2}\)O\({}_{3}\), are also generally more highly polarising than the other, more reflective materials.

### Geometric albedo

As introduced in Section 2.7, the geometric albedo \(A_{G}\) as a function of wavelength can be found from the reflected flux at \(\alpha\) = 0\({}^{\circ}\). \(A_{G}\) is plotted in Figure 21 for the full inhomogeneous atmosphere setup, and for the homogeneous atmosphere setup of type B. The latter either includes mixed species used to form the clouds, or only a single species used to form the clouds (Al\({}_{2}\)O\({}_{3}\), Fe\({}_{2}\)O\({}_{3}\), FeO, Mg\({}_{2}\)SiO\({}_{4}\), MgO). The geometric albedo of a population of around 20 hot gaseous exoplanets has been measured by studies such as Angerhausen et al. (2015) and Esteves et al. (2015), with the finding that the majority have albedos typically less than 0.15 in the Kepler bandpass (0.42 - 0.91 \(\mu\)m). Two notable exceptions are HAT-P-7b, with a measured geometric albedo of 0.23 (Heng and Demory, 2013), and Kepler-7b, with 0.25 (Heng et al., 2021). In our models, it can be seen that the highly absorbing Fe-bearing species FeO causes the geometric albedo to be very low for both the full inhomogeneous atmosphere and the homogeneous mixed-species atmosphere of type B. If only species with properties similar to silicates and oxides, like Al\({}_{2}\)O\({}_{3}\), Fe\({}_{2}\)O\({}_{3}\), Mg\({}_{2}\)SiO\({}_{4}\), MgO, were included in the atmosphere, then the geometric albedo (which can be measured from observations) would be much higher. We note that it is known that there can be errors in calculated wavelength-dependent planetary phase functions and albedos due to treating light as a scalar and not as a vector, i.e. by neglecting polarisation (Stam and Hovenier, 2005). The impact of cloud materials on the measured geometric albedo warrants further investigation.

## 7 Discussion of results

### Impact of effective refractive index of materials used to form clouds

As previously mentioned, the imaginary part of the refractive index \(k\) of cloud particles relates to absorption, while the real part \(n\) relates to scattering. Materials considered in this study which have high values of \(k\) and relatively lower values of \(n\) (see Figure 5), and thus low single scattering albedos (Figure 16), are all Fe-bearing species (Fe, FeO, Fe\({}_{2}\)O\({}_{3}\)).
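To make the link between \((n,k)\) and the single scattering albedo concrete, the short sketch below computes ssa = \(Q_{\rm sca}/Q_{\rm ext}\) for a homogeneous sphere using Mie theory. It relies on the open-source miepython package purely for illustration (the study's own Mie calculations are described in Section 2), and the optical constants used here are representative placeholders rather than the tabulated values used in this paper.

```python
import numpy as np
import miepython  # pip install miepython

def single_scattering_albedo(n, k, radius_um, wavelength_um):
    """ssa = Q_sca / Q_ext for a homogeneous sphere (Mie theory)."""
    m = complex(n, -k)                           # miepython convention: Im(m) <= 0
    x = 2.0 * np.pi * radius_um / wavelength_um  # size parameter
    qext, qsca, qback, g = miepython.mie(m, x)
    return qsca / qext

# Illustrative (not tabulated) optical constants at 0.55 um:
# a weakly absorbing silicate-like grain vs a strongly absorbing Fe-bearing one.
print(single_scattering_albedo(1.6, 1e-4, radius_um=0.25, wavelength_um=0.55))
print(single_scattering_albedo(2.4, 0.5, radius_um=0.25, wavelength_um=0.55))
```

The weakly absorbing case returns an ssa close to unity, while the strongly absorbing case returns a noticeably lower value, mirroring the contrast between the silicate/oxide and Fe-bearing species in Figure 16.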
Theoretical atmospheres composed of such species, as illustrated by Figure 18, have relatively lower \(F\) across all wavelengths. Figure 16 gives some insights into the scattering behaviour of particles formed from different materials as a function of wavelength. Fe\({}_{2}\)O\({}_{3}\), for example, has a single scattering albedo which varies significantly as a function of wavelength. The impact of Fe-bearing materials forming clouds in our model exoplanet atmospheres can be seen in Figures 14 and 15. The left panel of each shows \(F\) as a function of wavelength for single species compared to the full setups (including mixed cloud species) for homogeneous atmosphere setups of type B or D, respectively. It can be seen that the models which include either the mixed cloud species or FeO only have much lower \(F\) across all wavelengths than those which include only Al\({}_{2}\)O\({}_{3}\) or Mg\({}_{2}\)SiO\({}_{4}\). This highlights the effect that Fe-bearing materials can have on the reflected flux, and therefore observed albedo, of hot transiting gas giants. Fe and Fe\({}_{2}\)O\({}_{3}\) behave in a similar way to FeO, but interestingly Fe\({}_{2}\)SiO\({}_{4}\) behaves in a similar way to the silicates or oxides, due to its lower imaginary part of the refractive index \(k\). It can be seen that the refractive indices of the morning and evening terminators in Figure A1 (top and middle) are similar for some pressure layers, but differ around \(1\times 10^{-3}\) bar in particular. The refractive indices at the sub-stellar point are very similar to those at the evening terminator for all pressure layers shown. The imaginary component of the refractive index \(k\) is higher in the hotter evening region than in the cooler morning region. From Figure 6 it can be seen that this is likely due to the large proportion of clouds formed from Fe extending higher into the atmosphere. Figure 13 (left) demonstrates that the hotter regions of the atmosphere, which generally have a higher imaginary component of the refractive index \(k\) (from Figure A1), also have lower \(F\) in comparison to the cooler regions. The imaginary component is slightly higher at shorter wavelengths, which leads to a general trend of increasing flux with wavelength, as shown by Figure 13 (left). The shape of \(F\) and \(P\) as a function of wavelength for homogeneous atmospheres composed of atmospheric region B, D, or A only (using the clouds formed from mixed materials for each region) can be understood from the single scattering properties of the materials used to form the clouds in these regions at various pressure layers, as shown in Figure A5. Here the single scattering matrix elements \(F_{11}\) and \(P\) = \(\frac{F_{12}}{F_{11}}\) are plotted as a function of wavelength (see details in Section 7.2).

### Impact of clouds on the degree of linear polarisation

Different atmospheric layers are probed within (higher in the atmosphere) and outside (lower in the atmosphere) the atomic and molecular absorption features. If clouds are present in the atmosphere, then different cloud layers are thus probed within and outside the absorption features. Light scattered by gas particles is strongly polarised at a scattering angle of 90\({}^{\circ}\), as can be seen from Figure 11. This scattering angle is largely relevant for orbital phases of \(\alpha\) = 90\({}^{\circ}\) or \(\alpha\) = 270\({}^{\circ}\).
We therefore plot the single scattering matrix elements \(F_{11}\) and \(P\) = \(\frac{F_{12}}{F_{11}}\) at 90\({}^{\circ}\) as a function of wavelength \(\lambda\), in order to gain some insight into why the models with mid-altitude cloud layers in Figure 9 show the behaviour they do in \(P\) as a function of \(\lambda\) for \(\alpha\) = 90\({}^{\circ}\) or 270\({}^{\circ}\). This is demonstrated by Figure 13, showing the contribution to \(P\) from different atmospheric regions A... F, and Figure A5, showing the single scattering properties of the mixed-material cloud particles used to form each layer in regions B (evening), D (morning), and A (sub-solar). Figure A6 shows the single scattering properties of selected different materials, and so does not depend on the atmospheric setup of WASP-96b, such as the pressure-temperature profile. The only difference between atmosphere regions \(B\) (upper panels) and \(D\) (lower panels) in this case is the size distributions of the particles, which are based on the size distributions at 0.01 bar for these regions. Figure A6 therefore clearly highlights the impact that the size distribution alone can have on the scattering properties of a cloud material. The atmosphere containing mixed-composition clouds in Figure 14 has a low value of \(P\) between 0.7 and 0.8 \(\mu\)m. The behaviour around the absorption feature, where \(P\) dips just before and after the peak, is indicative of different atmospheric levels being probed here. Particles with different material compositions and size distributions are present at different levels in the atmosphere.

### Close-in planets

Our focus is on characterising close-in transiting exoplanets. Although we do not explicitly take it into account here, we note that deviations of \(F\) and \(P\) are expected at extreme orbital phases \(\alpha\). This has been explored before by, for example, Palmer et al. (2017). They show the variation of \(F\) and \(P\) as a function of distance from the host star for hot Jupiter exoplanets around solar-type stars. Situations where the angular size of the host star in an exoplanet's sky is non-negligible are also investigated in Palmer (2019). They define close-in planets, which start to be affected largely at the extreme orbital phase angles, as those closer than 0.05 AU, although the exact distance also depends on the sizes of the planet and star. With a semi-major axis of 0.045 AU (Hellier et al., 2014), WASP-96b is very close to this cutoff and thus has the potential to be affected by the geometry, although noticeably less than even closer-in planets at 0.005 or 0.01 AU, as demonstrated by Palmer et al. (2017). Palmer (2019) find the flux to be particularly affected at extremes of orbital phase angle. Kostogryz et al. (2017) investigate the difference in flux and polarisation curves for transiting exoplanets in the cases of either a plane-parallel or a spherical stellar atmosphere used in the models. They find that for most known transiting systems the plane-parallel approximation can be safely used, as the results of the two approaches differ only very slightly.

Figure 14: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for model homogeneous planets based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\) assuming atmospheric type B (around the evening terminator) only. Some examples of clouds made of single materials only are shown. \(F\) and \(P\) for a homogeneous atmosphere with the full atmospheric setup for region B are shown for comparison (i.e. with clouds formed from mixed materials). In this case all size distributions of different layers for the single-species cases are the same as in the mixed-composition atmosphere.
Figure 13: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for 6 model homogeneous planets, each assuming a full atmosphere covered by one atmospheric region (A to F), as labelled. Region A covers the region around the sub-solar point, B the hotter evening terminator, and D the cooler morning terminator.

We therefore do not expect our resulting spectra as a function of wavelength, which we typically take at orbital phases of \(\alpha\) = 90 or 270\({}^{\circ}\), to be significantly affected by such effects, but for our results which show the variation of \(F\) or \(P\) with orbital phase at set values of wavelength, some caution should be exercised at the extreme values of phase (close to 0\({}^{\circ}\) and 180\({}^{\circ}\)). This also applies to our figures of geometric albedo as a function of wavelength, although we do expect the difference to be small and the general trends to hold. In all our models the geometry causes \(F\) to go to 0 at 180\({}^{\circ}\), but in reality some scattered flux could be expected at such a phase angle for very close-in exoplanets. There are other aspects of an exoplanet's orbit which we do not consider in the present study. For example, there is an expectation that close-in exoplanets are more impacted by tidal deformation than those further out, and the rotation rate of these planets affects their oblateness. Palmer (2019) also investigate such effects and show, for example, that increasing the oblateness of a planet increases the amount of scattering at high atmospheric altitudes, which is typically expected to lead to an increase in the maximum degree of polarisation. We assume the inclination of our model planets is 90\({}^{\circ}\). For reference, the inclination of WASP-96b has been measured close to 90\({}^{\circ}\), at 85.6\({}^{\circ}\) (Hellier et al., 2014).

### Temperature dependence of optical properties

Some studies have been done regarding the temperature dependence of the optical properties of various species, such as olivine and enstatite (Zeidler et al., 2015), and corundum, spinel, and alpha-quartz (Zeidler et al., 2013), although these studies are generally focused on longer wavelengths than in the current study. Yang & Zhan (2020) found that the optical properties of forsterite (Mg\({}_{2}\)SiO\({}_{4}\)) undergo a blue shift with increasing pressure.

Figure 16: Single scattering albedo (ssa = \(\kappa_{\rm scat}/\kappa_{\rm total}\)) of various species used in this study. The species with high values of the imaginary part of the refractive index \(k\) (Fe, Fe\({}_{2}\)O\({}_{3}\), FeO) have lower values of single scattering albedo due to them being highly absorbing in comparison to the other species. Atmospheres with large mixing ratios of these iron-bearing species have lower flux (due to high absorption) and relatively low degree of linear polarisation. The species in the legend with a (1) after their name are those with \(k\) = 0 and thus ssa = 1 for all wavelengths shown.

Figure 15: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for model homogeneous planets based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\) assuming atmospheric type D (around the morning terminator) only. Some examples of clouds made of single materials only are shown. \(F\) and \(P\) for a homogeneous atmosphere with the full atmospheric setup for region D are shown for comparison (i.e. with clouds formed from mixed materials). In this case all size distributions of different layers for the single-species cases are the same as in the mixed-composition atmosphere.
Figure 19: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for a model homogeneous atmosphere based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\), using a different single material to form the clouds for each. A Gaussian size distribution is used for atmosphere type B, with a standard deviation \(\sigma\) of 0.01 \(\mu\)m around an average particle size of 0.1 \(\mu\)m.

Figure 17: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for a model homogeneous planet based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\) assuming atmospheric type B only, but with varying irregularity of the cloud particles. The value of \(f_{\rm max}\) indicates the irregularity of the particle, with 0 a sphere (red) and higher values being more irregular.

Figure 18: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for a model homogeneous atmosphere based on WASP-96b at \(\alpha\) = 90\({}^{\circ}\), using a different single material to form the clouds for each. A Gaussian size distribution is used for atmosphere type B, with a standard deviation \(\sigma\) of \(2.5\times 10^{-4}\)\(\mu\)m around an average particle size of 0.25 \(\mu\)m.

More studies exploring the temperature dependence of materials used to form clouds would be beneficial to future work.

### Porosity of cloud particles

As shown by the predictions in Samra et al. (2023), the atmosphere of WASP-96b will also be impacted by the porosity of the material used to form the cloud particles. Although we do not investigate it here, it will be of interest to look into how different degrees of porosity of the cloud particles affect the flux and polarisation signal of a model planet such as WASP-96b.

## 8 Conclusion

WASP-96b is a relatively homogeneous planet, in terms of temperature contrast and differences in composition between the morning and evening terminators, when compared to some other, hotter gas giant planets which have been modelled using global climate models coupled with kinetic cloud modelling (Helling et al., 2021; Samra et al., 2023). The planet is expected to be relatively warm throughout, and thus expected to have clouds throughout the majority of the atmosphere. The molecular composition is relatively consistent across the morning and evening sides of the planet, with the exception of CH\({}_{4}\). We find that the degree of polarisation of the reflected flux in particular is highly dependent on the types of clouds and the properties of the materials used to form them, and therefore highlights even subtle differences between the morning and evening sides of the planet (i.e. when the planet is at 90 or 270\({}^{\circ}\) phase angle). We have investigated the effect of using irregularly shaped particles as opposed to assuming perfectly spherical particles in our models, and find there is a considerable difference in the modelled polarisation signals, in particular for an orbital phase of 90\({}^{\circ}\) (face-on to the cooler morning terminator). This highlights the importance of using more physically realistic models of cloud particles in situations where the particles are expected to be fluffier or more irregularly shaped.
The exact shape of aerosol particles in such exotic atmospheres is not necessarily known at present, but it is an avenue which warrants future exploration. In either case, knowing the scattering properties of different shapes of particles will be an advantage when fitting models to observed spectra in the future. In general, we demonstrate in this study, using the PolHEx code, that measuring the polarisation state of the reflected flux of hot, close-in transiting planets can be expected to give detailed insights into their atmospheres. It is an extremely complementary tool to analysing transmission and emission spectroscopy observations of the same exoplanets, as it is sensitive to different components of an exoplanet atmosphere, most notably the material composition of clouds and their size distribution.

Figure 21: Geometric albedo \(A_{G}\) as a function of wavelength for the full inhomogeneous setup including mixed-material clouds, the clear inhomogeneous setup, and model homogeneous planets assuming atmospheric type B (around the evening terminator) only. The homogeneous type B atmospheres are shown either with clouds formed from mixed materials (all species) or with some examples of single materials only.

Figure 20: Reflected flux \(F\) (left) and degree of linear polarisation \(P\) (right) for model homogeneous atmospheres based on WASP-96b at \(\alpha=90\)°, each using the single material Mg\({}_{2}\)SiO\({}_{4}\) to form the clouds. Gaussian size distributions are used, with the average particle size (in units of \(\mu\)m) and standard deviation \(\sigma\) as labelled.

## Acknowledgements

This project has received funding from STFC, under project number ST/V000861/1.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2303.15907
Group rings of three-manifold groups
Let $G$ be the fundamental group of a three-manifold. By piecing together many known facts about three manifold groups, we establish two properties of the group ring $\mathbb{C}G$. We show that if $G$ has rational cohomological dimension two, then $\mathbb{C}G$ is coherent. We also show that if $G$ is torsion-free, then $G$ satisfies the Strong Atiyah Conjecture over $\mathbb{C}$ and hence that $\mathbb{C}G$ satisfies Kaplansky's Zero Divisor Conjecture.
Dawid Kielak, Marco Linton
2023-03-28T11:53:16Z
http://arxiv.org/abs/2303.15907v3
# The Atiyah conjecture for three-manifold groups ###### Abstract. We show that finitely generated fundamental groups of three-manifolds satisfy the Strong Atiyah Conjecture over the complex numbers. This implies that when the group is additionally torsion-free, then its complex group ring satisfies the Kaplansky Zero-divisor Conjecture. As an application, we give a very short proof of a significant generalisation of a recent result of Shalen dealing with the minimal index of freedom of three-manifold groups. ## 1. Introduction If \(K\) is a field and \(G\) is a torsion-free group, then the well-known Kaplansky Zero-divisor Conjecture predicts that the group ring \(KG\) has no non-trivial zero-divisors. An affirmative answer has been established for many classes of groups: locally indicable groups [10, Theorem 12], elementary amenable groups [11, Theorem 1.4], free-by-elementary amenable groups [12, Theorem 1.3] and left orderable groups [13, Proposition 6]. If a torsion-free group \(G\) satisfies the Strong Atiyah Conjecture over \(\mathbb{C}\), then its group ring \(\mathbb{C}G\) embeds in a skew-field and thus cannot contain any non-trivial zero-divisors - for the statement of the conjecture and related background see the book of Luck [11, Section 10]. Linnell established the Strong Atiyah Conjecture over \(\mathbb{C}\) for torsion-free groups within a large class \(\mathcal{C}\) of groups [12, Theorem 1.5]. The class \(\mathcal{C}\) is defined to be the smallest class of groups containing all free groups that is closed under directed unions and extensions by elementary amenable groups. The Strong Atiyah Conjecture has been shown to hold for many three-manifold groups by Friedl-Luck [10]. However, known results do not extend beyond fundamental groups of compact three-manifolds with empty or toroidal boundary. In this article, we show that all three-manifold groups lie in the class \(\mathcal{C}\). In order to do this, we prove a strong algebraic result for lower-dimensional three-manifold groups, stated below. Recall that a group is free-by-cyclic if it admits a homomorphism to a cyclic group with a possibly infinitely generated free kernel. **Theorem 1.1**.: _Fundamental groups of three-manifolds lie in Linnell's class \(\mathcal{C}\). Moreover, if \(G\) is such a group with \(\operatorname{cd}_{\mathbb{Q}}(G)<3\) then \(G\) is locally virtually free-by-cyclic, where \(\operatorname{cd}_{\mathbb{Q}}\) denotes the rational cohomological dimension._ After the proof of Theorem 1.1, we present an example of a torsion-free three-manifold group that is locally virtually free-by-cyclic, but not virtually free-by-cyclic itself, demonstrating that Theorem 1.1 is, in a sense, sharp. A useful fact that seems to have escaped being mentioned in the literature is that Linnell's class \(\mathcal{C}\) is closed under free products; we establish this fact in Section 2. Beyond that, we use standard three-manifold techniques (prime decomposition, the Sphere and Loop Theorems), virtual-fibring theorems, the Compact Core Theorem of Scott [12] (a version of which was proved independently by Shalen, see [12, Footnote \(\dagger\)]), and the work of Friedl-Luck [13]. **Corollary 1.2**.: _Let \(G\) be the fundamental group of a three-manifold._ 1. _If_ \(G\) _is finitely generated or torsion-free, then_ \(G\) _satisfies the Strong Atiyah Conjecture over_ \(\mathbb{C}\)_._ 2. 
_If_ \(G\) _is torsion-free, then_ \(\mathbb{C}G\) _has no non-trivial zero divisors and hence the Kaplansky Zero-divisor Conjecture holds for_ \(\mathbb{C}G\)_._ Corollary 1.2 answers two questions of Aschenbrenner-Friedl-Wilton [1, Question 7.2.6, (1) & (2)]. In a recent article [10], Shalen studied the _minimal index of freedom_\(\operatorname{miof}(G)\) of finitely generated fundamental groups of three-manifolds. Given a finite generating set \(S\) of \(G\), the _index of freedom_ of \(S\) is defined to be the cardinality of the maximal subset of \(S\) that freely generates a free subgroup of \(G\); \(\operatorname{miof}(G)\) is then the minimal index of freedom among all finite generating sets of \(G\). Using representation varieties, Shalen proved the following. **Theorem 1.3** ([10, Theorem B]).: _Let \(M\) be an orientable hyperbolic \(3\)-manifold, and let \(G\) be a finitely generated subgroup of \(\pi_{1}(M)\). The Euler characteristic \(\chi(G)\) satisfies_ \[-\chi(G)<\operatorname{miof}(G).\] Note that \(\chi(G)\) is always defined for groups as above. In Theorem 4.1, we show that the geometric hypothesis on \(M\) can be dropped entirely, and the theorem is true for all finitely generated torsion-free three-manifold groups. ### Acknowledgements This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 850930). The authors would like to thank Adele Jackson for insightful conversations on Seifert-fibred spaces, and Peter Shalen for comments on an earlier version of this article. ## 2. Linnell's class \(\mathcal{C}\) In this section we show that Linnell's class \(\mathcal{C}\) is closed under free products, but is not closed under direct sums. We first remind the reader of the definition of elementary amenable groups. Denote by \(\mathcal{EA}_{0}\) the class of groups that are abelian or finite. For each ordinal \(\alpha\), we define \(\mathcal{EA}_{\alpha}\) to consist of extensions of groups in \(\mathcal{EA}_{\beta}\) for \(\beta<\alpha\), and all directed unions of groups, each of which lies in some \(\mathcal{EA}_{\beta}\) with \(\beta<\alpha\). The union of all the classes \(\mathcal{EA}_{\alpha}\) is precisely the class of elementary amenable groups. We may similarly stratify Linnell's class \(\mathcal{C}\). Denote by \(\mathcal{C}_{0}\) the class of free groups. For each ordinal \(\alpha\), let \(\mathcal{C}_{\alpha}\) consist of all elementary amenable extensions of groups in \(\mathcal{C}_{\beta}\) for \(\beta<\alpha\), and all directed unions of groups, each of which lies in some \(\mathcal{C}_{\beta}\) with \(\beta<\alpha\). It is clear that the class \(\mathcal{C}\) is precisely the union of the classes \(\mathcal{C}_{\alpha}\) taken over all ordinals \(\alpha\). It is easy to see that every \(\mathcal{EA}_{\alpha}\) and every \(\mathcal{C}_{\alpha}\) are closed under taking subgroups. **Proposition 2.1**.: _Linnell's class \(\mathcal{C}\) is closed under arbitrary free products._ Proof.: Let \(\alpha\) be an ordinal. Consider a free product \(*_{i\in I}G_{i}\) where for all \(i\), \(G_{i}\in\mathcal{C}_{\alpha}\). We claim that \(*_{i\in I}G_{i}\in\mathcal{C}\). The proof is by transfinite induction. Since free products of free groups are free, the base case holds. Now consider an ordinal \(\alpha>0\), and suppose that the claim is true for all ordinals \(\beta\) with \(\beta<\alpha\). 
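For example, every free-by-cyclic group \(G\) lies in \(\mathcal{C}_{1}\): the defining extension \[1\to F\to G\to C\to 1\] has free kernel \(F\in\mathcal{C}_{0}\) and cyclic, hence elementary amenable, quotient \(C\). In particular, the fundamental group of a closed orientable surface of genus at least one lies in \(\mathcal{C}_{1}\), since its infinite cyclic cover is an open surface and fundamental groups of open surfaces are free.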
Let \(J\subseteq I\) be such that for every \(i\not\in J\) we have \[1\to N_{i}\to G_{i}\to A_{i}\to 1\] with \(N_{i}\in\mathcal{C}_{\beta_{i}}\) where \(\beta_{i}<\alpha\), and with \(A_{i}\) elementary amenable (and possibly trivial), and for every \(i\in J\) we have \(G_{i}=\bigcup_{j\in J_{i}}G_{i,j}\) with \(G_{i,j}\in\mathcal{C}_{\beta_{i,j}}\) for some \(\beta_{i,j}<\alpha\). For \(\beta<\alpha\) let \(G_{i,\beta}\) denote the union of the subgroups \(G_{i,j}\) such that \(\beta_{i,j}<\beta\); if no such subgroup exists, we take \(G_{i,\beta}\) to be the trivial group. By definition, \(G_{i,\beta}\in\mathcal{C}_{\beta}\). Consider the homomorphism \[*_{i\in I}G_{i}\to\bigoplus_{i\in I-J}A_{i}\] obtained from the homomorphisms above in the obvious way, and with groups \(G_{i}\) for \(i\in J\) lying in the kernel. Note that the image is a subgroup of a direct sum of elementary amenable groups, and hence an elementary amenable group itself. By the Kurosh Subgroup Theorem, the kernel \(K\) is a free product of conjugates of the groups \(N_{i}\), conjugates of the groups \(G_{i}\) with \(i\in J\), and a free group \(F\). Take \(\beta<\alpha\). Let \(K_{\beta}\) denote the free product of \(F\) with the conjugates of the groups \(N_{i}\) that lie in \(\mathcal{C}_{\beta}\), and with conjugates of groups \(G_{i,\beta}\) for \(i\in J\). By the inductive hypothesis, \(K_{\beta}\) lies in \(\mathcal{C}\). Note that \(K=\bigcup_{\beta<\alpha}K_{\beta}\), and hence \(K\) lies in \(\mathcal{C}\). But then \(*_{i\in I}G_{i}\) lies in \(\mathcal{C}\) as well, being an extension of \(K\) by an elementary amenable group. We finish by observing that every free product of groups in \(\mathcal{C}\) is a free product of groups lying in \(\mathcal{C}_{\alpha}\) for some \(\alpha\), since \(\mathcal{C}\) is the union of the classes \(\mathcal{C}_{\alpha}\), and ordinals are closed under taking unions. We remark here that Schick proved in [10] (see also [10]) that class \(\mathcal{D}\), a class containing the torsion-free groups from \(\mathcal{C}\), is closed under free products. However, the argument relies on class \(\mathcal{D}\) being closed under direct sums, which class \(\mathcal{C}\) is not by the following lemma. **Lemma 2.2**.: _If \(G=A\times B\in\mathcal{C}\), then \(A,B\in\mathcal{C}\) and at least one of \(A\) or \(B\) is elementary amenable._ Proof.: Class \(\mathcal{C}\) is closed under taking subgroups, so certainly \(A,B\in\mathcal{C}\). Suppose now that \(A\) and \(B\) are not elementary amenable. As elementary amenable groups are closed under directed unions and extensions, it follows that both \(A\) and \(B\) must contain a non-abelian free subgroup. In particular, \(G\) contains a copy of \(F_{2}\times F_{2}\). Let \(\alpha\) be the first ordinal such that \(\mathcal{C}_{\alpha}\) contains a group \(K\) containing a copy of \(F_{2}\times F_{2}\). If \(K\) is a directed union of groups, each of which lies in some \(C_{\beta}\) with \(\beta<\alpha\), then since \(F_{2}\times F_{2}\) is finitely generated, it appears as a subgroup of one of the groups in this directed union, contradicting minimality of \(\alpha\). If \(K\) is an extension of a group \(N\) in \(C_{\beta}\), for \(\beta<\alpha\), by an elementary amenable group \(Q\), then we claim that \(N\) must also contain a copy of \(F_{2}\times F_{2}\). Indeed let \(L\) and \(R\) be the kernels of the induced maps from the two \(F_{2}\) factors to \(Q\). 
As \(Q\) is elementary amenable and non-trivial normal subgroups of non-abelian free groups are non-abelian free, we must have that \(L\) and \(R\) are non-abelian free groups. Since \(L\times R\leqslant N\), the claim follows. But this also contradicts the minimality of \(\alpha\). So no group in \(\mathcal{C}\) can contain \(F_{2}\times F_{2}\), a contradiction. Recall that a group has property \(\mathrm{FAb}\) if its finite-index subgroups have finite abelianisations. Property \(\mathrm{FAb}\) is a consequence of Property \(T\). It is immediate that \(\mathrm{FAb}\) passes to finite-index subgroups and overgroups. **Lemma 2.3**.: _If \(G\) is a finitely generated group with \(\mathrm{FAb}\) lying in \(\mathcal{C}\) then every elementary amenable quotient of \(G\) is finite. Therefore, \(G\) is itself finite._ Proof.: We will argue by contradiction. Take \(\alpha\) to be the smallest ordinal such that there exists \(Q\in\mathcal{EA}_{\alpha}\) that is infinite and fits into a short exact sequence \[1\to N\to G\to Q\to 1\] with \(G\) a finitely generated group in \(\mathcal{C}\) with property FAb. Since \(Q\) is finitely generated and infinite, it cannot be abelian as \(G\) has FAb. Hence we see that \(\alpha\neq 0\) and so \(Q\) is an extension \[1\to Q_{0}\to Q\to Q_{1}\to 1\] with \(Q_{0}\) and \(Q_{1}\) lying in \(\mathcal{EA}_{\beta}\) for some \(\beta<\alpha\). Thus \(G\) maps onto \(Q_{1}\) with kernel an extension of \(N\) by an elementary amenable group. This forces the kernel to lie in \(\mathcal{C}\), and hence \(Q_{1}\) is finite by minimality of \(\alpha\). But then a finite-index subgroup of \(G\) fits into an exact sequence with kernel \(N\) and quotient \(Q_{0}\). By minimality of \(\alpha\), the group \(Q_{0}\) is finite as well. This proves that \(Q\) was also finite, a contradiction. We will now prove the last claim. Again, this will be done by contradiction. Suppose that \(\gamma\) is the smallest ordinal such that \(\mathcal{C}_{\gamma}\) contains an infinite, finitely generated group \(G\) with FAb. Since non-trivial free groups do not have FAb, and since \(G\) is finitely generated, the group \(G\) must fit into a short exact sequence \[1\to N\to G\to Q\to 1\] with \(N\) in \(\mathcal{C}_{\delta}\) for some \(\delta<\gamma\), and with \(Q\) elementary amenable. But then \(Q\) must be finite, and hence \(N\) has property FAb. This contradicts the minimality of \(\gamma\). ## 3. Three-manifolds The first two statements in the following result appear as [1, Lemma 14 & Theorem 15]. The third is a straightforward application of the Kurosh Subgroup Theorem. **Proposition 3.1**.: 1. _A free product of any number of free-by-_ \(\mathbb{Z}\) _groups is free-by-_ \(\mathbb{Z}\)_._ 2. _A free product of finitely many virtually free-by-cyclic groups is virtually free-by-cyclic._ 3. _A free product of finitely many virtually torsion-free groups is virtually torsion-free._ The following is a summary of well-known facts about three-manifold groups. **Proposition 3.2**.: _Let \(M\) be a connected three-manifold and let \(G\) be its fundamental group. 
If \(G\) is finitely generated, then there is a finitely generated free group \(F\) and finitely many compact, connected, orientable, aspherical three-manifolds \(M_{1},\ldots,M_{n}\), each with a (possibly trivial) incompressible boundary, such that a finite-index subgroup of \(G\) is isomorphic to the free product \(F*(*_{i=1}^{n}\pi_{1}(M_{i}))\)._ Proof.: By replacing \(M\) with its orientation double cover, we may assume that \(M\) is orientable. The Compact Core Theorem allows us to assume that \(M\) is compact. Since an orientable prime non-irreducible three-manifold is homeomorphic to \(S^{2}\times S^{1}\), which has fundamental group \(\mathbb{Z}\), we may combine the Prime Decomposition Theorem with the Loop Theorem (in the form of [1, Lemma 1.4.2]) to obtain a finitely generated free group \(F\) and finitely many compact, connected, irreducible and orientable three-manifolds \(M_{1},\ldots,M_{k}\) with (non-spherical and possibly empty) incompressible boundaries, such that \(G\cong F\ast\bigl{(}\ast_{i=1}^{k}\pi_{1}(M_{i})\bigr{)}\). If \(G\) is torsion-free, then each \(M_{i}\) would also be aspherical by the Sphere Theorem. If \(G\) is not torsion-free, then it suffices to show that \(G\) is virtually torsion-free as then we can apply the above argument to an appropriate finite sheeted cover of \(M\). Since each \(M_{i}\) is either aspherical or covered by \(S^{3}\), it follows that each \(\pi_{1}(M_{i})\) is either torsion-free or finite. Since a finite free product of virtually torsion-free groups is virtually torsion-free by Proposition 3.1, it follows that \(G\) is virtually torsion-free. **Theorem 3.3**.: _Let \(M\) be a connected three-manifold and let \(G\) be its fundamental group. If \(G\) is finitely generated and \(\operatorname{cd}_{\mathbb{Q}}(G)<3\), then \(G\) is virtually free-by-cyclic._ Proof.: By Propositions 3.1 and 3.2, we may assume that \(M\) is compact, orientable, aspherical and with incompressible boundary. We also may assume that \(M\) is not contractible. Since \(G\) is non-trivial, there is some embedded curve \(\gamma\colon S^{1}\hookrightarrow M\) which is not contained in an embedded three-ball and does not intersect the boundary. Since \(\operatorname{cd}_{\mathbb{Q}}(G)<3\) and \(M\) is aspherical, it follows that \(M\) must have non-empty boundary. There is a tubular neighbourhood \(T\subset M\) of \(\gamma(S^{1})\) whose boundary does not intersect the boundary of \(M\). Denote by \(N\) the compact three-manifold obtained from \(M\) by removing \(T\) and by \(\nu\colon N\hookrightarrow M\) the canonical inclusion. We claim that \(\pi_{1}(M)\) is a subgroup of \(\pi_{1}(M\cup_{\partial M}N)\) and that \(M\cup_{\partial M}N\) remains irreducible. We first show that \(\pi_{1}(M)\) is a subgroup \(\pi_{1}(M\cup_{\partial M}N)\). We have a canonical inclusion \(M\hookrightarrow M\cup_{\partial M}N\) and a map \(M\cup_{\partial M}N\to M\) defined by the identity on \(M\) and \(\nu\) on \(N\). As the composition of these two maps is the identity, we see that \(\pi_{1}(M)\) is a subgroup of \(\pi_{1}(M\cup_{\partial M}N)\). We now show that \(M\cup_{\partial M}N\) is irreducible. First note that as the image of \(\gamma\) is not contained in an embedded ball, it follows that \(N\) is also irreducible. Suppose that \(M\cup_{\partial M}N\) is not irreducible. Then there is a sphere \(S^{2}\), embedded in the interior of \(M\cup_{\partial M}N\), that does not bound a ball. After an isotopy, we may assume that \(S^{2}\) is transverse to \(\partial M\). 
Thus, \(S^{2}\cap\partial M\) is either empty or consists of embedded circles. If the intersection is non-empty, take some innermost circle \(S^{1}\subset S^{2}\cap\partial M\). Then the two-disc \(D\) it bounds in \(S^{2}\) must embed in \(M\) or \(N\). As \(M\) has incompressible boundary, it follows that we may isotope \(D\) through \(\partial M\) and reduce the number of components in \(S^{2}\cap\partial M\). Continuing in this way, we see that we may isotope \(S^{2}\) so that \(S^{2}\cap\partial M\) is empty. If \(S^{2}\cap\partial M\) is empty, then \(S^{2}\) is contained in \(M\) or \(N\). As \(M\) and \(N\) are both irreducible, it follows that \(M\cup_{\partial M}N\) is also irreducible. As subgroups of virtually free-by-cyclic groups are virtually free-by-cyclic, it is enough to show that the fundamental group of \(M\cup_{\partial M}N\) is virtually free-by-cyclic. If \(M\cup_{\partial M}N\) is a hyperbolic three-manifold, then it is virtually fibred by [20, Theorem 17.14] and [1, Theorem 1.1]. If \(M\cup_{\partial M}N\) is a graph manifold, it is virtually fibred by [21]. In all other cases, it is virtually fibred by [21, Corollary 1.3]. As \(M\cup_{\partial M}N\) has non-empty boundary, it virtually fibres over the circle with fibre a surface with boundary. Thus, \(G\) is virtually free-by-cyclic. We need one final result due to Friedl-Luck and then we will be ready to prove our main theorem. **Theorem 3.4** (Friedl-Luck [19, Theorem 3.2(3)]).: _Let \(M\) be a closed, connected, orientable and aspherical three-manifold. The fundamental group \(G\) of \(M\) lies in Linnell's class \(\mathcal{C}\)._ Proof of Theorem 1.1.: Let \(M\) be a connected three-manifold with fundamental group \(G\). Suppose first that \(G\) is finitely generated. Since class \(\mathcal{C}\) is closed under extensions by finite groups and under free products by Proposition 2.1, we may assume that \(M\) is compact, orientable, aspherical and with incompressible boundary by Proposition 3.2. If \(M\) has empty boundary, we are done by Theorem 3.4. Otherwise, we are done by Theorem 3.3, since \(M\) is homotopy equivalent to its spine, which is an aspherical two-complex, and hence \(\operatorname{cd}_{\mathbb{Q}}(G)<3\). Finally, suppose that \(\pi_{1}(M)\) is not finitely generated. Let \(M_{0}\to M_{1}\to\ldots\to M\) be a sequence of covers such that \(\pi_{1}(M_{i})\) is finitely generated for all \(i\) and such that \(\bigcup_{i\geqslant 0}\pi_{1}(M_{i})=\pi_{1}(M)\). Since we showed above that \(\pi_{1}(M_{i})\) is in \(\mathcal{C}\) for all \(i\), the group \(\pi_{1}(M)\) is also in \(\mathcal{C}\). If \(\operatorname{cd}_{\mathbb{Q}}(G)<3\), then the same is true for every subgroup of \(G\). Hence \(G\) is locally virtually free-by-cyclic by Theorem 3.3. Proof of Corollary 1.2.: Linnell [19, Theorem 1.5] proved that all groups in \(\mathcal{C}\) with a uniform bound on cardinalities of torsion subgroups satisfy the Strong Atiyah Conjecture over \(\mathbb{C}\). It is very easy to see that torsion-free groups satisfying this conjecture do not have non-trivial zero-divisors, see [12, Lemma 10.15]. If \(G\) is not torsion-free, but is finitely generated, Proposition 3.2 implies that \(G\) is virtually torsion-free. Being virtually torsion-free gives a bound on the size of torsion subgroups. This finishes the proof. ### An example We now present an example of a locally virtually free-by-cyclic three-manifold group that is not virtually free-by-cyclic, showing that Theorem 1.1 is sharp. 
A straightforward example would be a three-manifold whose fundamental group has no bound on the order of its torsion elements. Instead, we construct an example that is Seifert fibred with base orbifold of infinite type and whose fundamental group is torsion-free. We first require a couple of facts about Seifert fibred spaces. The reader is directed towards [11, Section 10] for the necessary background. Let \(M\) be a compact orientable three-manifold and let \(M\to S\) be a Seifert fibration. If \(M\) fibres over the circle, then \(M\) admits a foliation by compact surfaces. Note that if \(M\) has non-trivial boundary, then we may glue solid tori along the boundary and extend this foliation. So now we may apply work of Eisenbud-Hirsch-Neumann [1, Theorems 3.4 & 6.1] to conclude that if \(S\) has genus at least two, then such a foliation must be homotopic to a foliation transverse to the fibres of the Seifert fibration and moreover exists only if \(e(M)=0\), where \(e(M)\) denotes the Euler number of the Seifert fibration. Recall that the Euler number of a Seifert fibred space with boundary is only defined modulo the integers. Now let \(M_{n}\) denote the orientable Seifert fibred space with Euler number \(e(M_{n})=1/n\) whose base orbifold is a genus two surface with two boundary components and a single cone point. The Euler characteristic of \(M_{n}\) is zero, so any finite-index subgroup of \(\pi_{1}(M_{n})\) that is free-by-cyclic must be (finitely generated free)-by-cyclic by work of Feighn-Handel [10]. In particular, by Stallings' Fibration Theorem [12], the corresponding cover of \(M_{n}\) must fibre over the circle. Since any finite-degree cover of \(M_{n}\) induces a finite-degree orbifold cover of the base orbifold, we see that the minimal degree of a cover of \(M_{n}\) with Euler number zero is \(n\). So by the previous paragraph, the minimal degree of a cover of \(M_{n}\) that fibres over the circle is \(n\). Consider the three-manifold \[M=M_{2}\cup_{T}M_{3}\cup_{T}\dots\] where \(T\) is one of the boundary tori of \(M_{i}\). If \(\pi_{1}(M)\) were virtually free-by-cyclic, then this would imply that every \(M_{n}\) admits a finite cover of uniformly bounded index that fibres over the circle. But this cannot happen by the previous paragraph and so \(\pi_{1}(M)\) is not virtually free-by-cyclic; it is locally virtually free-by-cyclic by Theorem 1.1. ## 4. Shalen's theorem In this section we use the Atiyah conjecture, and show how the \(L^{2}\)-technology gives an alternative route to the following generalisation of Theorem 1.3. The key tool we use is the \(L^{2}\)-Freiheitssatz of Peterson-Thom [11], which may be applied to any torsion-free group satisfying the Atiyah conjecture. **Theorem 4.1**.: _Let \(G\) be a finitely generated torsion-free fundamental group of a three-manifold. We have_ \[-\chi(G)<\operatorname{miof}(G).\] Proof.: Since \(G\) is finitely generated, it is the fundamental group of a compact three-manifold by the Compact Core Theorem. Prime decomposition allows us to see that the Euler characteristic \(\chi(G)\) is well defined. By Corollary 1.2, the group \(G\) satisfies the Strong Atiyah conjecture. By combining standard properties of \(L^{2}\)-Betti numbers (see [12, Theorem 1.35]) with the work of Lott-Luck [13, Theorem 0.1], we see that \(b_{1}^{(2)}(G)=-\chi(G)\). For every finite generating set of \(G\), the \(L^{2}\)-Freiheitssatz of Peterson-Thom [11, Corollary 4.7] allows us to find a subset freely generating a free group of rank \(b_{1}^{(2)}(G)+1\). 
So \[-\chi(G)=b_{1}^{(2)}(G)<b_{1}^{(2)}(G)+1\leqslant\operatorname{miof}(G).\qed\]
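For instance, if \(G=F_{n}\) is a free group of rank \(n\geqslant 1\) (realised as the fundamental group of a handlebody), then \(-\chi(G)=n-1\) and Theorem 4.1 gives \(\operatorname{miof}(F_{n})\geqslant n\). Since a free basis is a generating set whose index of freedom is exactly \(n\), we get \(\operatorname{miof}(F_{n})=n\), so the inequality of Theorem 4.1 cannot be improved in general.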
2306.16408
Coagulation-Fragmentation Equilibrium for Charged Dust: Abundance of Submicron Grains Increases Dramatically in Protoplanetary Disks
Dust coagulation in protoplanetary disks is not straightforward and is subject to several slow-down mechanisms, such as bouncing, fragmentation and radial drift to the star. Furthermore, dust grains in UV-shielded disk regions are negatively charged due to collisions with the surrounding electrons and ions, which leads to their electrostatic repulsion. For typical disk conditions, the relative velocities between micron-size grains are small and their collisions are strongly affected by the repulsion. On the other hand, collisions between pebble-size grains can be too energetic, leading to grain fragmentation. The aim of the present paper is to study a combined effect of the electrostatic and fragmentation barriers on dust evolution. We numerically solve the Smoluchowski coagulation-fragmentation equation for grains whose charging occurs under conditions typical for the inner disk regions, where thermal ionization operates. We find that dust fragmentation efficiently resupplies the population of small grains under the electrostatic barrier. As a result, the equilibrium abundance of sub-micron grains is enhanced by several orders of magnitude compared to the case of neutral dust. For some conditions with fragmentation velocities $\sim 1$ m s$^{-1}$, macroscopic grains are completely destroyed.
Vitaly Akimkin, Alexei V. Ivlev, Paola Caselli, Munan Gong, Kedron Silsbee
2023-06-28T17:55:52Z
http://arxiv.org/abs/2306.16408v1
# Coagulation-Fragmentation Equilibrium for Charged Dust: Abundance of Submicron Grains Increases Dramatically in Protoplanetary Disks

###### Abstract

Dust coagulation in protoplanetary disks is not straightforward and is subject to several slow-down mechanisms, such as bouncing, fragmentation and radial drift to the star. Furthermore, dust grains in UV-shielded disk regions are negatively charged due to collisions with the surrounding electrons and ions, which leads to their electrostatic repulsion. For typical disk conditions, the relative velocities between micron-size grains are small and their collisions are strongly affected by the repulsion. On the other hand, collisions between pebble-size grains can be too energetic, leading to grain fragmentation. The aim of the present paper is to study a combined effect of the electrostatic and fragmentation barriers on dust evolution. We numerically solve the Smoluchowski coagulation-fragmentation equation for grains whose charging occurs under conditions typical for the inner disk regions, where thermal ionization operates. We find that dust fragmentation efficiently resupplies the population of small grains under the electrostatic barrier. As a result, the equilibrium abundance of sub-micron grains is enhanced by several orders of magnitude compared to the case of neutral dust. For some conditions with fragmentation velocities \(\sim 1\,\mathrm{m}\,\mathrm{s}^{-1}\), macroscopic grains are completely destroyed.

Protoplanetary disks (1300) -- Circumstellar dust (236) -- Interstellar dust (836) -- Young stellar objects (1834) -- Circumstellar disks (235) -- Dust physics (2229)

Vitaly Akimkin, Alexei V. Ivlev, Paola Caselli, Munan Gong, Kedron Silsbee

## 1 Introduction

Dust coagulation from the sub-micron to the centimeter-size range is the first and key step on the stairs of the bottom-up planet formation scenario (Drazkowska et al., 2022). The top-down planet formation via the gravitational instability (Boss, 1997; Nayakshin, 2010) is affected by the grain size distribution as well, as dust opacity controls the temperature. Thus, understanding the microphysics of grain collisions provides a foundation for both branches of the planet formation theory. The rate and outcome of grain collisions are defined by their relative velocities. For small grains (\(\lesssim 1\,\mathrm{\mu m}\)), the Brownian motion dominates. When (and if) dust grows further, the turbulence-induced velocities come into play and pave the way for dust coagulation to the pebble-size range (Birnstiel et al., 2016). However, the road from sub-micron grains to pebbles has at least two barriers hindering or even blocking the dust growth: the electrostatic and the fragmentation barriers. The first barrier operates in the micron-size range, where the grain collisions are not energetic enough to overcome the Coulomb repulsion (Okuzumi, 2009; Akimkin et al., 2020). The second barrier occurs due to too energetic collisions of pebble-size grains, which result in fragmentation rather than sticking (Brauer et al., 2008). While the mitigation of the electrostatic barrier requires high turbulence-induced collision velocities, mitigation of the fragmentation barrier requires the opposite (Okuzumi et al., 2011), and it is currently unclear how these two barriers can be simultaneously overcome.
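To give a feel for the numbers behind the first barrier, the sketch below (an illustrative estimate of ours, assuming a material density of 3 g cm\({}^{-3}\) and a gas temperature of 300 K, which are not parameters adopted from this paper) evaluates the Brownian relative velocity \(v_{\rm B}=\sqrt{8k_{\rm B}T/(\pi\mu)}\) for equal-size grains:

```python
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant, erg/K
RHO = 3.0            # assumed grain material density, g cm^-3

def grain_mass(a_cm):
    """Mass of a compact spherical grain of radius a_cm."""
    return 4.0 / 3.0 * np.pi * RHO * a_cm**3

def v_brownian(a1_cm, a2_cm, T=300.0):
    """Brownian relative velocity v_B = sqrt(8 k_B T / (pi * mu)), cm/s."""
    m1, m2 = grain_mass(a1_cm), grain_mass(a2_cm)
    mu = m1 * m2 / (m1 + m2)  # reduced mass
    return np.sqrt(8.0 * K_B * T / (np.pi * mu))

for a_um in (0.1, 1.0, 10.0):
    a_cm = a_um * 1e-4
    print(f"a = {a_um:5.1f} um : v_B = {v_brownian(a_cm, a_cm):.3e} cm/s")
```

The steep decline of \(v_{\rm B}\) with grain size shows why Brownian motion can only drive the growth of the smallest grains, leaving the turbulence-induced velocities to continue the growth towards pebbles.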
Grain charge in astrophysical environments is defined by a variety of positive and negative charging currents, such as collisions with electrons and ions, photoelectric emission, and triboelectric charging (see short summary in Weingartner (2004)). Photoelectric emission due to the ultraviolet radiation in the disk upper layers leads to predominantly positive grain charges there (Peder sen & Gomez de Castro, 2011; Akimkin, 2015). In UV-shielded regions, the plasma charging dominates and grains become negatively charged on average (Draine & Sutin, 1987). The acquired charge is expected to be high enough for the total electrostatic prevention of collisions between micron-size grains (Okuzumi, 2009). In the absence of efficient gas ionization, the average dust charge can stay near zero and both positive and negative grains are present in equal amounts. In this case the Coulomb interaction may instead facilitate dust growth. This channel was shown to be effective for the triboelectrically charged millimeter size grains in very deep regions of protoplanetary disks shielded from the cosmic rays (Steinpilz et al., 2020). If gas ionization cannot be neglected, the plasma charging dominates and the overall effect of grain charge is detrimental to the dust growth. Despite the apparent importance of the grain charge among the basic physical factors affecting dust evolution, this topic is currently weakly explored. A huge amount of work is done on self-consistent modelling of coagulation and fragmentation for neutral dust (Brauer et al., 2008; Estrada et al., 2016; Lombart & Laibe, 2021; Stammler & Birnstiel, 2022; Lebreuilly et al., 2023). Concerning charged dust evolution, the focus so far has been on modeling the effect of electrostatic interactions on coagulation (Okuzumi et al., 2011; Akimkin et al., 2020; Xiang et al., 2020), completely omitting the role of fragmentation. In this paper, we propose a self-consistent approach to model coagulation and fragmentation of charged dust, which allows us to study the interplay between the electrostatic and fragmentation barriers. ## 2 Model To evaluate the effect of dust charge on its evolution, we solve numerically the Smoluchowski coagulation-fragmentation equation locally for a set of physical conditions. The electrostatic interaction affects at least two factors important for the outcome of grain collision: the collision cross section and impact velocity. Both factors are primarily controlled by long-range charge-charge (Coulomb) interaction, while short-range (induced dipole) interaction is only important in specific cases (see Appendices A and B). The collision cross section deviates from the geometrical one depending on the ratio between electrostatic and collision energies of two grains. Normally, in the UV-shielded regions of protoplanetary disks, the plasma charging dominates over other charging currents and grains acquire same-sign negative charges and thus repel each other. For typical strengths of turbulence-induced velocities in protoplanetary disks (corresponding to the Shakura-Sunyaev turbulence parameter \(\alpha_{\rm t}\lesssim 10^{-3}\)), the Coulomb repulsion overcomes the kinetic energy for the collisions of small grains (\(\lesssim 1\,\mu\)m). If the collision is electrostatically 'allowed', as in the case of small-grown and grown-grown grain collisions, the second factor emerges. The actual impact velocity \(v_{\rm imp}\) at the grain contact changes in the comparison with the relative velocity defined at infinity \(v_{\rm rel}\). 
While the Coulomb repulsion decelerates approaching grains, at small distances the repulsion switches to attraction due to the induced dipole interaction. Depending on conditions, the impact velocity can be either lower or higher than the initial relative velocity \(v_{\rm rel}\) at infinity. The Smoluchowski coagulation-fragmentation equation for the mass distribution \(f(m),{\rm cm}^{-3}\,{\rm g}^{-1}\) (or the size distribution \(f(a),{\rm cm}^{-4}\)) can be rewritten in a discrete form. This leads to a system of \(n\) nonlinear ordinary differential equations (Brauer, 2009): \[\begin{split}\frac{{\rm d}N_{k}}{{\rm d}t}&=\frac{ 1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}N_{i}N_{j}K_{ij}\left[p_{ij}C_{ijk}+(1-p_{ij} )F_{ijk}\right]\\ &-N_{k}\sum_{i=1}^{n}N_{i}K_{ik},\end{split} \tag{1}\] where \(N_{k}=f(m_{k})\Delta m_{k}\) is the number density of grains within the grain mass bin \(k\). Upon each collision between grains with masses \(m_{i}\) and \(m_{j}\), they undergo coagulation with probability \(p_{ij}\) and fragmentation with probability \((1-p_{ij})\). Coagulation increases the grain number density in the \(k\)-th bins around the mass \(m_{i}+m_{j}\). Fragmentation produces a power-law spectrum of fragments and thus repopulates a range of mass bins with \(k\leq\max\{i,j\}\). The coefficients \(C_{ijk}\) and \(F_{ijk}\) are the mass weights defining how the total mass of colliding grains \(m_{i}+m_{j}\) is redistributed to the \(k\)-th bin after their coagulation and fragmentation, respectively (see Brauer (2009) and Birnstiel (2011) for more details). The rate of collisions is determined by the coagulation kernel \(K_{ij}\), \({\rm cm}^{3}\,{\rm s}^{-1}\), defined as the product of the relative velocity at infinity \(v_{\rm rel}\) and the collision cross section for grains with radii \(a_{i}\) and \(a_{j}\): \[K_{ij}=W_{ij}\pi(a_{i}+a_{j})^{2}v_{\rm rel}(a_{i},a_{j}). \tag{2}\] Here the collision efficiency factor \(W_{ij}\) accounts for the electrostatic interaction, including both the charge-charge (Coulomb) term and the charge-dipole term. The Coulomb term can be accounted for in a simple analytical form: \[W_{0}=\max\left\{1-\frac{U_{\rm C}}{E_{\rm kin}},0\right\}, \tag{3}\] where \(U_{\rm C}=Q_{i}Q_{j}/(a_{i}+a_{j})\) is the Coulomb repulsion energy between grains with charges \(Q_{i}\) and \(Q_{j}\) at the point of their contact, and \(E_{\rm kin}=\mu_{ij}v_{\rm rel}^{2}/2\) is their initial kinetic energy, with \(\mu_{ij}=m_{i}m_{j}/(m_{i}+m_{j})\) the reduced mass of the grains. The induced dipole term can be taken into account numerically (see Appendix A). Analysis shows that this term affects the collisional cross section in a narrow range of grain sizes and has limited impact on the resulting size distribution. Therefore, we include only the Coulomb contribution to the collision efficiency factor, i.e., we take \(W_{ij}=W_{0}\). Equation 3 is exact for monoenergetic collisions. For the Brownian relative velocities, the collision efficiency factor \(W_{0}\) takes the following form (Okuzumi et al., 2011): \[W_{0}=\exp\left(-\frac{U_{\rm C}}{k_{\rm B}T}\right). \tag{4}\] In this case, the _average_ collision velocity is set by the Brownian velocity \(v_{\rm B}=\sqrt{8k_{\rm B}T/\pi\mu}\). We use the above correction for collisions dominated by the Brownian motion (\(v_{\rm B}/v_{\rm rel}>1/2\)) and Equation 3 otherwise. 
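To make the kernel construction concrete, the following minimal Python sketch (our own illustration, not the paper's code; CGS units and variable names are assumptions) evaluates the Coulomb-only collision efficiency factor, switching between Equations 3 and 4 according to the \(v_{\rm B}/v_{\rm rel}>1/2\) criterion quoted above:

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def collision_efficiency_W0(a_i, a_j, Q_i, Q_j, m_i, m_j, v_rel, T):
    """Coulomb-only collision efficiency factor (Equations 3 and 4).

    Inputs in CGS units: radii [cm], charges [statC], masses [g],
    relative velocity at infinity [cm/s], temperature [K].
    """
    mu = m_i * m_j / (m_i + m_j)                 # reduced mass
    U_C = Q_i * Q_j / (a_i + a_j)                # Coulomb energy at contact
    v_B = np.sqrt(8.0 * K_B * T / (np.pi * mu))  # Brownian velocity
    if v_B / v_rel > 0.5:
        # Brownian-dominated collisions: thermally averaged factor (Eq. 4)
        return np.exp(-U_C / (K_B * T))
    # Monoenergetic (turbulence-dominated) collisions (Eq. 3)
    E_kin = 0.5 * mu * v_rel**2
    return max(1.0 - U_C / E_kin, 0.0)
```

The kernel of Equation 2 would then be assembled as `W0 * pi * (a_i + a_j)**2 * v_rel`.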
Such an approach allows us to account for the collisional velocity distribution in the Brownian motion-dominated regime, while keeping the monoenergetic recipe for the turbulence-induced collisions (as their underlying velocity dispersion depends on the nature of the turbulence and is therefore poorly constrained). For sufficiently high gas ionization rates, ensuring an electron abundance higher than the grain abundance, the equilibrium charge depends only on the grain radius, gas temperature \(T\) and the dominant ion mass \(m_{\rm ion}\): \[Q=\min\left\{-q(m_{\rm ion})\left(\frac{a}{0.1\,\mu{\rm m}}\right)\left(\frac{ T}{100\,{\rm K}}\right),-1\right\}e_{\rm p}, \tag{5}\] where \(e_{\rm p}=4.8\times 10^{-10}\,{\rm statC}\) is the proton charge and \(q\approx 2.3\) for HCO\({}^{+}\) or N\({}_{2}\)H\({}^{+}\) ions (see, e.g., the left panel of Figure 1 in Ivlev et al. (2016), for \(m_{\rm ion}=29\) amu). The efficient 'image' attraction (Draine and Sutin, 1987) ensures that even the smallest grains acquire at least one electron charge, so we set the grain charge to be at least \(-e_{\rm p}\). This recipe for the grain charge can be violated in low-ionization, high-density regions (see Section 4), where the limited amount of electrons reduces the charging efficiency. This case becomes more complicated numerically and requires additionally solving the ionization and charging balance equations to account for possible dust-ion and dust-dust plasma regimes (Ivlev et al., 2016). On top of that, in the dust-ion and dust-dust plasma, the kernel of the Smoluchowski equation \(K_{ij}\) starts to depend on the dust number densities \(N_{i},N_{j}\) as well, as grains compete over a limited amount of free electrons, so that their charges and, hence, collisional cross sections depend on the local number density of all surrounding grains. These effects were considered in our previous paper studying pure coagulation of charged dust (Akimkin et al., 2020). In the present paper, we add dust fragmentation into consideration, but restrict ourselves to the disk regions with high ionization fractions for simplicity. To ensure efficient dust charging and the validity of Equation 5, we apply our simulations to the inner regions of protoplanetary disks, where thermal ionization of alkali elements (Desch and Turner, 2015) and/or strong external ionization by the stellar X-rays allow ionization fractions of \(x_{\rm e}\gtrsim 10^{-10}\). These regions are important for understanding rocky planet formation (Dzyurkevich et al., 2010; Drazkowska et al., 2013; Flock et al., 2016), especially at the early gravitationally unstable phases characterized by high accretion rates, luminosity outbursts, and efficient viscous gas heating (Vorobyov et al., 2018). Our implementation of dust fragmentation is similar to the approach described in Brauer (2009) and Birnstiel (2011), with several differences. First, instead of rearranging the sums in Equation 1, we use quadruple precision for floating-point numbers to alleviate possible rounding errors and ensure mass conservation in the numerical scheme. 
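As a concrete reading of Equation 5, the following minimal sketch (our own transcription; CGS units assumed, with the radius converted from the 0.1 μm normalization) returns the equilibrium grain charge:

```python
E_P = 4.8e-10  # proton charge [statC]

def grain_charge(a_cm, T, q=2.3):
    """Equilibrium grain charge in an electron-rich plasma (Equation 5).

    a_cm: grain radius [cm]; T: gas temperature [K]; q ~ 2.3 for
    HCO+ or N2H+ ions. Returns the charge in statC (always <= -e_p).
    """
    z = -q * (a_cm / 1.0e-5) * (T / 100.0)  # charge number; 0.1 um = 1e-5 cm
    return min(z, -1.0) * E_P  # both arguments negative: keep at least -e_p
```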
The second, more important modification is a correction of the actual impact velocity for the charge-charge and charge-dipole interactions between approaching grains: \[v_{\rm imp}=v_{\rm rel}\sqrt{1-\frac{U(a_{1}+a_{2})}{E_{\rm kin}}}, \tag{6}\] where \(U\) is the electrostatic potential energy \[U(r)=\frac{Q_{1}Q_{2}}{r}-\frac{1}{2r^{2}}\left(\frac{\epsilon-1}{\epsilon+2} \right)\left(\frac{a_{1}^{3}Q_{2}^{2}}{r^{2}-a_{1}^{2}}+\frac{a_{2}^{3}Q_{1}^ {2}}{r^{2}-a_{2}^{2}}\right) \tag{7}\] at the point of contact of two grains, \(r=a_{1}+a_{2}\). Here \(\epsilon\) is the dielectric constant, which is \(\approx 7\) for silicate materials (Shannon et al., 1991). Generally, as \(U(r)\) has both the repulsive Coulomb and attractive dipole components, the impact velocity can be smaller or larger than the relative velocity at infinity \(v_{\rm rel}\), depending on the choice of grain sizes and charges. For physical conditions in the inner disk regions, the impact velocity is typically smaller than \(v_{\rm rel}\), which softens the fragmentation conditions (see Appendix B). The impact velocity is set to zero if \(U(a_{1}+a_{2})>E_{\rm kin}\). We consider Brownian motion and turbulence with a Kolmogorov spectrum as the dominant sources of grain relative velocities \(v_{\rm rel}\). For the latter, we use analytical formulas from Gong et al. (2021), which define collisional velocities for different grain size regimes ("tiny", "small", and "big", depending on their stopping times). The turbulence is characterized by the largest eddy's size \(L\) and velocity \(v_{L}\), which scale with the disk vertical scale height \(H\), local sound speed \(c_{\rm s}\), and \(\alpha_{\rm t}\)-parameter as \(L=\sqrt{\alpha_{\rm t}}H\) and \(v_{L}=\sqrt{\alpha_{\rm t}}c_{\rm s}\) (Cuzzi et al., 2001). As the turbulence-induced relative velocities between same-size grains in the "tiny" regime vanish (Gong et al., 2021), we assume that colliding grains are at least 10% different in size. This allows us to account for the non-zero size dispersion of grains from the same mass bin (Sato et al., 2016; Bate, 2022). ### Treatment of fragmentation and erosion We adopt the recipe from Stammler and Birnstiel (2022) to compute dust fragmentation and erosion. In this approach, grains experience fragmentation if their masses are similar (\(\max(m_{1},m_{2})/\min(m_{1},m_{2})<10\)) or erosion in the opposite case. Fragmentation produces a power-law spectrum of fragments from some minimal fragment size \(a_{\rm min}\) up to the size of the largest grain among the two colliding ones. In the erosion process, a smaller (projectile) grain excavates some mass from the larger one; then the sum of the excavated mass (set to the mass of the projectile by default) and the projectile mass is distributed between \(a_{\rm min}\) and the projectile size with a power-law spectrum. The mass of the target grain is reduced by the amount of the excavated mass and redistributed between neighbouring mass bins according to the Podolak algorithm, which is also used for the coagulation treatment (Brauer, 2009). We use a power-law slope of \(-3.5\) for the fragment size distribution (both for fragmentation and erosion), which translates to a slope of \(-1.83\) in the corresponding mass distribution. The critical impact velocity required for fragmentation and erosion, \(v_{\rm frag}\), is assumed to be the same for both processes and all grain sizes. 
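A minimal transcription of the impact-velocity correction of Equations 6 and 7 (ours, not the paper's code; CGS units assumed) may look as follows:

```python
def potential_energy(r, a1, a2, Q1, Q2, eps=7.0):
    """Charge-charge plus induced-dipole interaction energy (Equation 7),
    CGS units; eps is the grain dielectric constant (~7 for silicates)."""
    coulomb = Q1 * Q2 / r
    dipole = -0.5 / r**2 * (eps - 1.0) / (eps + 2.0) * (
        a1**3 * Q2**2 / (r**2 - a1**2) + a2**3 * Q1**2 / (r**2 - a2**2)
    )
    return coulomb + dipole

def impact_velocity(v_rel, a1, a2, Q1, Q2, m1, m2, eps=7.0):
    """Impact velocity at grain contact (Equation 6); set to zero when the
    electrostatic barrier exceeds the initial kinetic energy."""
    mu = m1 * m2 / (m1 + m2)
    E_kin = 0.5 * mu * v_rel**2
    U_contact = potential_energy(a1 + a2, a1, a2, Q1, Q2, eps)
    if U_contact > E_kin:
        return 0.0
    return v_rel * (1.0 - U_contact / E_kin) ** 0.5
```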
The fragmentation/erosion probability \((1-p_{ij})\) is set depending on \(v_{\rm imp}\) and \(v_{\rm frag}\): it is zero if \(v_{\rm imp}<0.8v_{\rm frag}\), unity for \(v_{\rm imp}>v_{\rm frag}\), and changes linearly with \(v_{\rm imp}\) from 0 to 1 in between. We discuss potential consequences of a different erosion treatment in Section 4. The initial size distribution \(f_{\rm ini}(a)\) is assumed to be a power law with the slope \(-3.5\) and a maximum grain size of \(0.5\,\mu\)m. The minimum grain size is varied from \(0.005\,\mu\)m to \(0.05\,\mu\)m in the simulations presented below; the minimum size of fragments \(a_{\rm min}\) is set to the minimal grain size in the initial distribution. To simulate higher fragmentation velocities in monomer-monomer collisions, we do not allow grain fragmentation/erosion if both grains come from the size range of the initial distribution. The canonical dust-to-gas mass ratio of \(\rho_{\rm d}/\rho_{\rm g}=0.01\) and a mean molecular weight of \(2.3\,\)amu are assumed. We work within a compact growth approximation, assuming silicate grains with an effective material density \(\rho_{\rm s}=1.6\) g cm\({}^{-3}\), which corresponds to a loose random packing of monodisperse silicate spheres with a volume fraction between \(0.5\) and \(0.6\) (Torquato et al., 2000). Our simulations are local and ignore possible sedimentation and radial drift. The impact of dust porosity and global dynamics is discussed in Section 4. We utilize a logarithmic grain size grid with 500 bins, which covers the range from \(10^{-7}\) to \(2\,\)cm. To integrate the system of Equations 1 we use the semi-implicit mid-point rule for stiff systems of ODEs by Bader and Deuflhard (1983), implemented in the stifbs procedure from Press et al. (1992). ## 3 Results A part of the electrostatic barrier problem lies in the initial conditions. If all grains in the initial distribution fall into the size range where the Coulomb repulsion energy \(U_{\rm C}=Q_{1}Q_{2}/(a_{1}+a_{2})\) at grains' contact exceeds the initial collision kinetic energy \(E_{\rm kin}=\mu_{12}v_{\rm rel}^{2}/2\), dust coagulation cannot start at all. Such conditions are likely met in protoplanetary disks. In Figure 1 we show an example of the typical electrostatic-to-kinetic ratio \(U_{\rm C}/E_{\rm kin}\) and impact-to-fragmentation energy ratio \((v_{\rm imp}/v_{\rm frag})^{2}\) for similar-sized grains, assuming for our reference model the gas volume density \(\rho_{\rm g}=10^{-12}\,\)g cm\({}^{-3}\), the temperature \(T=1000\,\)K, the turbulence \(\alpha\)-parameter \(\alpha_{\rm t}=3\times 10^{-4}\), and the fragmentation velocity \(v_{\rm frag}=10\,\)m s\({}^{-1}\) at a location \(r=1\,\)au around a solar-mass star.

Figure 1: Electrostatic and fragmentation barriers for the collisions of similar size grains (\(a_{2}/a_{1}=1.1\)) for \(\rho_{\rm g}=10^{-12}\,\)g cm\({}^{-3}\), \(T=1000\,\)K, \(\alpha_{\rm t}=3\times 10^{-4}\), \(v_{\rm frag}=10\,\)m s\({}^{-1}\). The impact velocity \(v_{\rm imp}\) is given by Equation 6. Grains within the Coulomb barrier are not able to coagulate with each other, but can coagulate with grains above the barrier. Fragmentation and erosion processes repopulate the distribution at smaller sizes.

The orange shading indicates the size range with electrostatically 'forbidden' coagulation: the Coulomb barrier has a maximum at sub-micron grain sizes, corresponding to a transition from Brownian to turbulence-induced motion. 
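For illustration, the barrier ratio \(U_{\rm C}/E_{\rm kin}\) plotted in Figure 1 can be estimated in the Brownian-dominated regime by combining the sketches above (this assumes `np`, `K_B` and `grain_charge()` from the earlier snippets are in scope; turbulence-induced velocities would require the Gong et al. (2021) formulas, which we do not reproduce here):

```python
def brownian_barrier_ratio(a1, a2, T, rho_s=1.6):
    """U_C/E_kin for two compact silicate grains colliding at the Brownian
    velocity v_B = sqrt(8 k_B T / (pi * mu)); ratios > 1 mark the
    electrostatically 'forbidden' zone for head-on encounters."""
    m1 = 4.0 / 3.0 * np.pi * rho_s * a1**3
    m2 = 4.0 / 3.0 * np.pi * rho_s * a2**3
    mu = m1 * m2 / (m1 + m2)
    Q1, Q2 = grain_charge(a1, T), grain_charge(a2, T)
    U_C = Q1 * Q2 / (a1 + a2)
    E_kin = 0.5 * mu * (8.0 * K_B * T / (np.pi * mu))  # = 4 k_B T / pi
    return U_C / E_kin
```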
While dust grains from within the orange region cannot coagulate with each other, their coagulation with larger grains is possible. Hence, the initial electrostatic barrier is easy to overcome by introducing large (\(\gtrsim 10\,\mu\)m) seed grains, which may appear, for example, due to the radial drift from cold high-turbulence disk regions with a weak electrostatic barrier (Okuzumi et al., 2011). In our simulations, we artificially allow dust to coagulate at the initial stages by ignoring its charge for the first \(10^{3}\,\)yr, and then we switch on the charge and follow the charged dust evolution up to \(10^{6}\,\)yr. In Figure 2 we demonstrate the solution of the Smoluchowski equation for the same conditions as in Figure 1. The upper and lower panels show the contribution of different grain sizes to the mass and total surface area, respectively. The black lines correspond to the coagulation-fragmentation equilibrium of artificially neutral grains (reached around \(10^{3}\,\)yr). When charging is switched on at later stages (red lines), the balance between coagulation and fragmentation changes drastically, leading to a profound dust redistribution toward the smallest sizes. Due to a much larger amount of small fragments, erosion becomes more efficient. In the case of neutral dust, the smallest fragments do not contribute to the erosion process significantly, as they are quickly lost due to Brownian coagulation. A small kink seen in the distributions around \(2\times 10^{-3}\) cm reflects a transition to the regime of "tiny" grains in the turbulence-induced velocities (Birnstiel et al., 2011; Gong et al., 2021). To study the sensitivity of the resulting size distributions to the physical conditions and model parameters, we run additional models that differ from the reference model in Figure 2 by one of the four parameters: turbulence strength \(\alpha_{\rm t}\), gas temperature \(T\), fragmentation velocity \(v_{\rm frag}\), and minimum size of fragments \(a_{\rm min}\). The results are presented in Figure 3. Some changes seen in the distributions are expected and mostly quantitative. For example, the right boundary of the size distribution depends on the fragmentation limit \(a_{\rm frag}\), which scales with the physical parameters as \[a_{\rm frag}\propto\rho_{\rm g}v_{\rm frag}^{2}T^{-1/2}\alpha_{\rm t}^{-1} \tag{8}\] (see, e.g., Eq. (34) in Birnstiel et al. (2016), assuming \(\rho_{\rm g}\sim\Sigma_{\rm g}/H\sim\Sigma_{\rm g}/T^{1/2}\)). However, a remarkable interplay between the electrostatic and fragmentation barriers can be seen from the upper left panel: it shows a higher amount of small grains for larger values of the turbulence \(\alpha\)-parameter, despite the fact that stronger turbulence helps in overcoming the electrostatic barrier. While high turbulent velocities are indeed required to overcome the barrier during the initial phase of dust growth, stronger turbulence backfires when grains reach the fragmentation barrier and start to replenish the small grain ensemble. Larger values of \(\alpha_{\rm t}\) imply a lower fragmentation limit \(a_{\rm frag}\) and, hence, a narrower range of grain sizes that are not affected by either barrier. These 'intermediate' grains are crucial collision partners for smaller grains under the electrostatic barrier, so their deficiency leads to a stronger pile-up of sub-micron dust. 
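Since the following models vary these parameters around the reference model, the proportionality of Equation 8 can be used to rescale the fragmentation limit; a small sketch of ours (the reference values are those of the reference model above, with \(v_{\rm frag}\) in cm s\({}^{-1}\), and the reference limit `a_frag_ref` itself must be supplied, e.g., read off Figure 2):

```python
def a_frag_scaled(a_frag_ref, rho_g, v_frag, T, alpha_t,
                  rho_g_ref=1e-12, v_frag_ref=1e3, T_ref=1000.0,
                  alpha_ref=3e-4):
    """Fragmentation-limited grain size rescaled from a reference model
    using the proportionality of Equation 8 (CGS units)."""
    return (a_frag_ref
            * (rho_g / rho_g_ref)
            * (v_frag / v_frag_ref) ** 2
            * (T / T_ref) ** -0.5
            * (alpha_t / alpha_ref) ** -1)
```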
Dependence of \(f(a)\) on changes in the gas density (not plotted) is very similar to that for varying \(\alpha_{\rm t}\), but acting in the opposite direction, in accordance with the above scaling \(a_{\rm frag}\propto\rho_{\rm g}/\alpha_{\rm t}\).

Figure 2: Dust size distribution at the coagulation-fragmentation equilibrium for neutral grains (black lines) and charged grains (red lines), computed for the conditions of Figure 1. The upper and lower panels display the normalized distributions of grain mass and geometrical cross section, respectively. The orange shading shows the range of grain sizes where \(U_{\rm C}/E_{\rm kin}>1\) for similar-size collisions, the blue shading indicates the sizes where \(v_{\rm imp}>0.8v_{\rm frag}\). The distributions in the lower panel are normalized by the initial dust cross section (per unit volume), \(\sigma_{\rm ini}=6.6\times 10^{-10}\,{\rm cm}^{2}\,{\rm cm}^{-3}\).

The dependencies on the gas temperature and the minimum fragment size are shown in the right panels of Figure 3. Lower temperatures lead to less efficient plasma charging and, thus, to a weaker electrostatic barrier. Therefore, the number density of sub-micron grains under the barrier decreases with the temperature as well. However, one should be cautious in extrapolating this trend to the outer cold disk regions, as several important factors such as dust fractality/porosity and electron depletion are not properly considered in our simulations (see Section 4). The dependence on the minimum fragment size shows two different shapes of the size distributions. For the small and reference values of \(a_{\rm min}\) (\(5\times 10^{-7}\) and \(1\times 10^{-6}\) cm, plotted by the blue and red lines, respectively), the left peaks of the distributions are smooth, while for the large value of \(a_{\rm min}=5\times 10^{-6}\) cm the peak is very sharp. All three values of \(a_{\rm min}\) fall into the range of a strong electrostatic barrier, where the Coulomb repulsion energy exceeds the _average_ collision energy (\(U_{\rm C}>E_{\rm kin}\)). However, given the Maxwellian velocity distribution of grains, the collision efficiency factor \(W_{0}\) calculated using Equation 4 is non-zero and the smallest fragments can occasionally coagulate. This is not true for the large value of \(a_{\rm min}=5\times 10^{-6}\) cm, because the electrostatic barrier is very high in this case (\(U_{\rm C}/E_{\rm kin}\sim 100\)).

Figure 3: Dependence of the dust size distribution on the model parameters. The red lines in all panels represent the reference model plotted in Figure 2, the blue and green lines illustrate the effect of variation of one of the parameters, as indicated in the legends. All distributions are for \(10^{6}\,\)yr.

The sharpness of the peak is then enhanced by erosion, as a positive feedback loop exists between the amount of small grains and the erosion rates in the absence of efficient coagulation. Such a sharp peak is certainly a result of the used fragmentation/erosion prescription, and is therefore expected to change if more realistic size distribution models are employed for fragments (instead of the used power-law distribution with fixed boundaries). Probing the characteristic range of \(1-30\) m s\({}^{-1}\) for the critical fragmentation velocity (Blum and Wurm, 2008; Schrapler et al., 2018) reveals a qualitatively new behavior. 
For the fragmentation velocity \(v_{\rm frag}=1\) m s\({}^{-1}\) (shown by the green line in the bottom left panel of Figure 3), we observe drastic changes, resulting in a very low abundance of the grown dust by the end of the simulations at \(10^{6}\) yr. This case represents an example of imbalanced fragmentation and erosion of the grown dust. For most of the cases shown in Figure 3, the smallest fragments can collide with grown grains with impact velocities \(v_{\rm imp}\) smaller than \(v_{\rm frag}\), which leads to their coagulation. However, if the fragmentation velocity is low, the smallest fragments (as soon as they are able to overcome the strong electrostatic barrier) collide only with \(v_{\rm imp}>v_{\rm frag}\). Such collisions lead to erosion and replenish the small-grain population, and therefore small fragments accumulate until the grown dust is mostly destroyed. To further highlight this regime of the coagulation-fragmentation imbalance, in Figure 4 we show how the size distribution evolves for a higher gas density of \(\rho_{\rm g}=5\times 10^{-11}\) g cm\({}^{-3}\), a temperature of \(T=1300\) K, and the fragmentation velocity \(v_{\rm frag}=1\) m s\({}^{-1}\). Without grain charging, the equilibrium size distribution for such a density is reached within \(10^{3}\) yr (shown by the black line). Once the charging is switched on at \(10^{3}\) yr, small grains stop coagulating (see Appendix B), but their replenishment due to fragmentation and erosion of grown dust continues, and vice versa: grains with sizes within the fragmentation barrier start depleting severely, while the resulting fragments remain trapped under the electrostatic barrier. ### Important implications The presented solutions of the Smoluchowski equation suggest that the electrostatic barrier problem cannot simply be solved by introducing large seed particles, as small particles under the barrier are being constantly replenished due to fragmentation and erosion. We show that the mass contained in small grains can be comparable to or even exceed the mass of grown dust, while the surface area is completely dominated by small grains. The predicted drastic enhancement in the abundance of small grains may have profound consequences for various processes occurring in protoplanetary disks. Along with obvious implications for the grain surface chemistry, this may also significantly affect the observational appearance of the disks. For example, shallow slopes of the dust millimeter opacity do not only require the presence of grown dust, but also a reduced fraction of small grains (see, e.g., figure 2 in Pavlyuchenkov et al., 2019). In the extreme case illustrated in Figure 4, the sharp peak of "recycled" grains kept under the electrostatic barrier is indistinguishable from the initial distribution when observed at millimeter wavelengths, since most of the grains are in the Rayleigh limit. Our analysis shows that the evolution of charged dust should typically lead to size distributions characterized by steeper opacity slopes (larger indices \(\beta\)), as compared to the case of uncharged grains. We note that variations in the spectral index \(\alpha\) are affected by several factors, which, in addition to the dust growth, also include temperature and optical depth effects (Tazzari et al., 2016; Woitke et al., 2016; Liu, 2019; Zamponi et al., 2021; Maureira et al., 2022), and are therefore harder to predict. Figure 4: An example showing the size distribution of charged dust which is out of the coagulation-fragmentation equilibrium. 
The results are computed for \(\rho_{\rm g}=5\times 10^{-11}\) g cm\({}^{-3}\), \(T=1300\) K, \(\alpha_{\rm t}=3\times 10^{-4}\), \(v_{\rm frag}=1\) m s\({}^{-1}\). The black line shows the coagulation-fragmentation equilibrium for neutral dust, reached at \(\sim 10^{3}\) yr. The grain charging is switched on at this point, and then bigger grains start vanishing due to uncompensated fragmentation and erosion (as depicted by the red lines). The orange and blue shadings show the respective size ranges for the electrostatic and fragmentation barriers (assuming same-size collisions). ## 4 Discussion The simulations shown in the previous section reveal two key effects of dust charging on its evolution. First, the amount of sub-micron grains becomes dramatically higher than in the case of neutral grains. Second, imbalanced destruction of grown dust can occur if fragmentation velocities are \(\sim 1\,\mathrm{m\,s^{-1}}\). At the same time, there are several poorly constrained factors and simplifying assumptions that may affect the resulting dust size distributions and shift the balance between small and grown dust in both directions. In this section, we discuss several aspects that may be important for a consistent treatment of charged dust evolution in protoplanetary disks. ### Electron depletion Equation 5 breaks down if electrons are no longer the dominant carriers of the negative charge, i.e., if most of it is carried by dust (which, according to the above results, is concentrated around \(a\approx 200-500\,\mathrm{\AA}\)). For this reason, in the present paper we apply our model to the inner disk regions of \(\lesssim 1\,\mathrm{au}\), where such a requirement is likely satisfied. As a rough estimate of the minimum amount of free electrons sufficient to charge the smallest grains up to at least one electron charge, one can use the condition \(n_{\mathrm{e}}>n_{\mathrm{d}}\), where \(n_{\mathrm{e}}\) is the number density of electrons and \(n_{\mathrm{d}}\) is the dust number density (assuming the mass is dominated by grains with \(a\approx a_{\mathrm{min}}\)). This yields the following sufficient condition for the minimum value of the ionization fraction \(x_{\mathrm{e}}=n_{\mathrm{e}}/n_{\mathrm{g}}\): \[\begin{split} x_{\mathrm{e}}&>2\times 10^{-10} \left(\frac{a_{\mathrm{min}}}{300\,\mathrm{\AA}}\right)^{-3}\left(\frac{\rho_{ \mathrm{d}}/\rho_{\mathrm{g}}}{0.01}\right)\\ &\times\left(\frac{\rho_{\mathrm{s}}}{1.6\,\mathrm{g\ cm^{-3}}} \right)^{-1}\left(\frac{\mu_{\mathrm{g}}}{2.3\ \mathrm{amu}}\right),\end{split} \tag{9}\] which is typically satisfied for the inner disk regions, where the electron abundance is governed by efficient thermal ionization of alkali elements (Desch & Turner, 2015). Electrons in the outer, low-ionization regions of protoplanetary disks can be strongly depleted if a large amount of small grains is present (Ivlev et al., 2016). In such cases, the smallest \(\lesssim 100\,\mathrm{\AA}\) fragments can become neutral and therefore free to coagulate until the resulting grains become large enough to acquire (at least) one electron charge. Due to a strong dependence of the depletion condition on the grain size (see Equations 16 and 27 in Ivlev et al., 2016), this is likely to occur for the sizes within the electrostatic barrier. 
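For reference, Equation 9 is straightforward to evaluate; a minimal sketch (ours), with the normalizations taken directly from the equation (300 Å = \(3\times10^{-6}\) cm):

```python
def x_e_min(a_min_cm, dust_to_gas=0.01, rho_s=1.6, mu_g=2.3):
    """Minimum ionization fraction satisfying the n_e > n_d condition
    (Equation 9); a_min in cm, rho_s in g/cm^3, mu_g in amu."""
    return (2e-10
            * (a_min_cm / 3e-6) ** -3
            * (dust_to_gas / 0.01)
            * (rho_s / 1.6) ** -1
            * (mu_g / 2.3))
```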
The suggestion that, under electron depletion, the smallest fragments can coagulate until they reach the electrostatic barrier is supported by our earlier modelling of charged dust coagulation in the presence of electron depletion (Akimkin et al., 2020), demonstrating that the electrostatic barrier efficiently limits the dust growth under conditions of the MRI-dead zones. We expect that accounting for the lower charging efficiency in the outer disk regions would shift the left peak of the dust distribution to larger sizes, still keeping large amounts of small fragments under the electrostatic barrier. ### Erosion efficiency In our simulations, the critical velocity \(v_{\mathrm{er}}\) above which the erosion occurs is set equal to \(v_{\mathrm{frag}}\). In fact, the erosion experiments for projectile sizes \(a_{\mathrm{p}}\) between 1 and \(100\,\mathrm{\mu m}\) (Schrapler et al., 2018) show that the critical erosion velocity scales as \(v_{\mathrm{er}}\sim a_{\mathrm{p}}^{0.62}\), i.e., erosion is achieved more easily for smaller projectiles. Also, the erosion efficiency (the ratio of the small-fragment mass to the initial mass of the projectile) is set equal to 2 for any collision that leads to erosion, with no dependence on \(v_{\mathrm{imp}}\). However, experimental and theoretical studies suggest that the erosion efficiency increases linearly with \(v_{\mathrm{imp}}\) (Schrapler et al., 2018), and therefore more energetic collisions excavate more mass. Nevertheless, with both factors underestimating the erosion efficiency, we still obtain an anomalously high abundance of small grains. Details of erosion are less important for models ignoring grain charging because of the fast coagulation of small grains and, hence, their low abundance. While it is unknown whether the scaling relations for the erosion efficiency and critical velocity hold also for sub-micron grains, accounting for the two above factors will likely lead to higher amounts of small dust and relaxed conditions for the imbalanced fragmentation. ### Dust porosity The very possibility of overcoming the electrostatic barrier relies on a sufficient strength of non-thermal collision velocities of micron-size grains. Small porous grains are better coupled to the gas than compact grains of the same mass, i.e., the former have lower turbulence-induced velocities. Thus, dust porosity hampers charged dust coagulation and widens the mass range of grains affected by the electrostatic barrier (Okuzumi et al., 2011; Akimkin et al., 2020). Our simulations are done within the simplifying compact growth approximation, so they underestimate the strength of the electrostatic barrier. ### Turbulence spectrum Our modelling assumes the Kolmogorov turbulence with a slope of the kinetic energy spectrum of \(p=-5/3\). Shallower spectra for the Iroshnikov-Kraichnan cascade (\(p=-3/2\), see Iroshnikov, 1964; Kraichnan, 1965), or for the turbulence seen in magneto-hydrodynamical simulations (\(p=-4/3\), see Grete et al., 2021; Gong et al., 2020) are more favorable for early coagulation and may help to overcome the electrostatic barrier (Gong et al., 2021). On the other hand, within a certain range of turbulent scales the slopes for pure hydrodynamic instabilities (such as the subcritical baroclinic instability or the vertical shear instability) are steeper than the Kolmogorov one (Klahr & Bodenheimer, 2003; Manger et al., 2020). Thus, deviations from the Kolmogorov turbulence can shift the balance between small and grown dust in both directions. 
### Space- or time-dependent turbulence strength Overcoming the electrostatic barrier requires strong turbulence and/or the presence of grown grains. Avoiding dust destruction at the fragmentation barrier and quenching the replenishment of small dust under the electrostatic barrier requires the opposite. A viable way to overcome both barriers may lie in dealing with them sequentially rather than simultaneously. Grain coagulation in high-turbulence outer disk regions, followed by drift to dead zones, is one promising avenue to explore (Okuzumi et al., 2011). Another possibility is time-dependent non-thermal sources of grain collisional velocities. Early gravitationally unstable disk phases may provide conditions to overcome the electrostatic barrier and to form certain amounts of grown dust. Subsequent global disk evolution towards a less turbulent state may help to alleviate the fragmentation barrier and drag the remaining small grains towards the next steps in the bottom-up planet formation scenario. These possibilities are worth studying in the future with next-generation dust evolution models that take into account global disk evolution (Pignatale et al., 2019; Laune et al., 2020; Akimkin et al., 2020; Vericel et al., 2021; Bate, 2022). ## 5 Conclusions Electrostatic interaction between dust grains is a frequently overlooked factor in contemporary models of dust evolution in protoplanetary disks. The Coulomb repulsion of same-charged grains is likely to be an effective barrier blocking the initial dust coagulation of micron-size grains (Okuzumi, 2009; Okuzumi et al., 2011). One possibility to trigger the initial coagulation is to introduce large (\(\gtrsim 10\,\mu\)m) seed particles, which can coagulate with smaller particles and thus pull them out of the size range corresponding to the electrostatic barrier. In this paper, we perform a combined analysis of electrostatic interactions between dust grains and their fragmentation, and develop a self-consistent approach to model these two processes. For this purpose, we numerically solve the Smoluchowski coagulation-fragmentation equation for a set of physical conditions relevant for the inner regions of protoplanetary disks with ionization fractions \(\gtrsim 10^{-10}\). Our conclusions can be summarized as follows: 1. Coagulation-fragmentation equilibrium for charged dust is characterized by dramatically higher abundances of sub-micron particles than in the case of neutral dust. The reason for that is a continuous replenishment of the small-grain population due to fragmentation of macroscopic grains. Without charging, the sub-micron grains quickly coagulate owing to their Brownian motion, which is not the case if charging is taken into account. 2. For the studied cases with fragmentation velocities \(v_{\rm frag}\sim 1\) m s\({}^{-1}\), the coagulation-fragmentation balance between smaller and larger grains cannot be reached. This is because the electrostatic barrier inhibits almost all low-velocity collisions that may lead to coagulation, while the majority of high-velocity collisions, which are able to overcome the barrier, are destructive for sufficiently small \(v_{\rm frag}\). The resulting lack of mass supply to larger grains causes their gradual destruction on timescales shorter than the typical disk lifetimes. 
Further studies of charged dust evolution, focusing on the outer disk regions and searching for ways to overcome the combined effect of electrostatic and fragmentation barriers, may include a more detailed analysis of erosion and charging efficiencies, dust porosity, turbulence properties, and global disk evolution. We thank the anonymous referee for their constructive suggestions. VA was supported by the RSF grant 22-72-10029. ## Appendix A Effect of electrostatic interaction on collisional cross sections The collision cross section between two charged grains cannot be written analytically if both repulsive (Coulomb) and attractive (induced dipole) terms are present. Here we describe the algorithm used to find the collision efficiency factor \(W\) for the cross section numerically: \[\sigma=\pi(a_{1}+a_{2})^{2}W.\] (A1) The electrostatic potential energy as a function of the radial distance \(r\) between two spherical particles is (Landau & Lifshitz 1960, §3; Draine & Sutin 1987): \[U(r)=\frac{Q_{1}Q_{2}}{r}-\frac{1}{2r^{2}}\left(\frac{\epsilon-1}{\epsilon+2} \right)\left(\frac{a_{1}^{3}Q_{2}^{2}}{r^{2}-a_{1}^{2}}+\frac{a_{2}^{3}Q_{1}^{ 2}}{r^{2}-a_{2}^{2}}\right).\] (A2) This function is non-monotonic: the Coulomb repulsion of same-charged dust grains, dominating at larger distances, changes to attraction at smaller distances due to the induced dipole terms. We assume a conservative value of the dielectric constant \(\epsilon=7\) for silicate materials (Shannon et al., 1991). The dynamics of two colliding grains is described by the effective potential energy, which takes into account the centrifugal component at non-zero impact parameter \(b\) (Landau & Lifshitz 1969, §14): \[U_{\rm eff}(r,b)=U(r)+K_{\infty}\left(\frac{b}{r}\right)^{2},\] (A3) where \(K_{\infty}=\mu_{12}v_{\rm rel}^{2}/2\) is the kinetic energy of the collision at infinity. The algorithm to calculate the cross section consists of two steps. Step 1 is required to determine if collisions are possible for zero impact parameter; if yes, then the collision cross section is calculated at step 2. 1. Finding the location \(r_{\rm max}\) of the maximum of \(U(r)\) for \(r\geq a_{1}+a_{2}\). If the dipole component is sufficiently strong, the maximum is located at \(r_{\rm max}>a_{1}+a_{2}\), derived from the condition \[\frac{dU}{dr}=0.\] (A4) Otherwise, if the derivative is negative at \(r\geq a_{1}+a_{2}\), the maximum is achieved at \(r_{\rm max}=a_{1}+a_{2}\). If \(U(r_{\rm max})>K_{\infty}\), then grains cannot collide and \(W=0\). 2. Calculating the cross section. If \(U(r_{\rm max})\leq K_{\infty}\), there is a critical impact parameter \(b_{*}\) for which \(U_{\rm eff}(r,b_{*})=K_{\infty}\), i.e., for \(b<b_{*}\) grains overcome the potential barrier and collide. Thus, the collision cross section is equal to \(\pi b_{*}^{2}\), and \[W=\frac{b_{*}^{2}}{(a_{1}+a_{2})^{2}}.\] (A5) The desired value of \(b_{*}\) is a root of the equation \[U_{\rm eff}(r_{\rm max}(b),b)=K_{\infty},\] (A6) where \(r_{\rm max}(b)\) is, in turn, a root of \[\frac{\partial U_{\rm eff}(r,b)}{\partial r}=0.\] (A7) For a given \(b\), if the derivative is negative at \(r\geq a_{1}+a_{2}\), then \(r_{\rm max}(b)=a_{1}+a_{2}\). The above algorithm is formulated for like-charged grains, i.e., for \(Q_{1}Q_{2}>0\). If one of the grains is neutral, the interaction is determined by the attractive dipole term and the potential energy \(U(r)\) is negative for any \(r\). In this case, step 1 can be skipped and step 2 yields \(W\geq 1\). 
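A possible numerical transcription of this two-step algorithm is sketched below (our own sketch, not the authors' code; it reuses the `potential_energy` helper from the earlier snippet, and the bounded-minimization and root-bracketing strategy is an assumption of ours):

```python
from scipy.optimize import brentq, minimize_scalar

def U_eff(r, b, K_inf, a1, a2, Q1, Q2):
    """Effective potential with the centrifugal term (Equation A3)."""
    return potential_energy(r, a1, a2, Q1, Q2) + K_inf * (b / r) ** 2

def r_of_max(b, K_inf, a1, a2, Q1, Q2):
    """Location of the maximum of U_eff for r >= a1 + a2 (Eqs. A4, A7)."""
    r0 = a1 + a2
    res = minimize_scalar(lambda r: -U_eff(r, b, K_inf, a1, a2, Q1, Q2),
                          bounds=(r0, 1e3 * r0), method="bounded")
    interior_max = -res.fun
    return res.x if interior_max > U_eff(r0, b, K_inf, a1, a2, Q1, Q2) else r0

def collision_efficiency(a1, a2, Q1, Q2, mu, v_rel):
    """Two-step algorithm for W = b*^2 / (a1 + a2)^2 (Eqs. A5, A6)."""
    K_inf = 0.5 * mu * v_rel ** 2
    r0 = a1 + a2
    barrier = lambda b: (U_eff(r_of_max(b, K_inf, a1, a2, Q1, Q2),
                               b, K_inf, a1, a2, Q1, Q2) - K_inf)
    # Step 1: head-on (b = 0) barrier test.
    if barrier(0.0) > 0.0:
        return 0.0
    # Step 2: bracket and solve for the critical impact parameter b*.
    b_hi = r0
    while barrier(b_hi) < 0.0:
        b_hi *= 2.0  # the centrifugal term makes barrier(b) > 0 eventually
    b_star = brentq(barrier, 0.0, b_hi)
    return (b_star / r0) ** 2
```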
We note that the analysis becomes more complicated for \(Q_{1}Q_{2}<0\), as Equation A7 may then have two roots, in which case \(W\) should be computed for the root corresponding to a maximum. However, the latter regime is usually realized in the upper atmospheres of disks and is therefore irrelevant for our analysis. In Figure 5 we show the difference between the pure Coulomb collision efficiency factor \(W_{0}\), calculated analytically using Equation 3, and the full factor \(W\) with dipole terms, calculated with the above algorithm. The physical parameters correspond to our reference model. The collision efficiency factor with dipole terms, \(W\), is always larger than or equal to that with the Coulomb term only, \(W_{0}\). The difference between the numerical and analytical approaches becomes noticeable at intermediate values of \(W\), reaching a maximum of \(\approx 0.15\). However, such a difference is observed in a relatively narrow range of grain sizes and has marginal impact on the resulting dust size distributions. In the present paper we utilize the analytical approach, as it allows a simple recipe to account for the grain velocity dispersion (Equations 3 and 4). ## Appendix B Effect of electrostatic interaction on impact velocities The relative velocity between two charged grains changes as they approach each other. As the electrostatic interaction includes both the long-range Coulomb repulsion and the short-range induced-dipole attraction (Equation 7), the actual impact velocity at the moment of grain collision, \(v_{\rm imp}\) (Equation 6), can be smaller or larger than the initial relative velocity at infinity, \(v_{\rm rel}\). The ratio \(v_{\rm imp}/v_{\rm rel}\) depends on the grain charges and sizes. For the inner disk conditions considered in this paper, the Coulomb term dominates and \(v_{\rm imp}\) for charged grains is generally smaller than that for neutral grains of the same sizes. However, the opposite situation is possible for outer (colder) disk regions. In Figure 6 we show how the electrostatic interaction affects the impact velocities between small grains with \(a_{1}=0.01\,\mu\)m and other grains. The left panel corresponds to our reference model with \(v_{\rm frag}=10\,{\rm m\,s^{-1}}\) (Figures 1 and 2), while the right panel is for the model with the lower fragmentation velocity of \(1\,{\rm m\,s^{-1}}\) (Figure 4), where the grown grains are being efficiently destroyed. The solid red lines showing \(v_{\rm imp}\) change to dashed where the collision efficiency \(W\) becomes equal to zero. As a result, no coagulation-fragmentation equilibrium is reached for the case depicted in the right panel, as the smallest fragments can only collide with \(v_{\rm imp}>v_{\rm frag}\).
2310.13296
Trotterization in Quantum Theory
Trotterization in quantum mechanics is an important theoretical concept in handling the exponential of noncommutative operators. In this communication, we give a mathematical formulation of the Trotter Product Formula, and apply it to basic examples in which the utility of Trotterization is evident. Originally, this article was completed in December 2020 as a report under the mentorship of Esteban Cárdenas for the University of Texas at Austin Mathematics Directed Reading Program (DRP). However, the relevance of Trotterization in reducing quantum circuit complexity has warranted the release of a revised and more formal version of the original. Thus, we present a mathematical perspective on Trotterization, including a detailed sketch of a formal proof of the Trotter Product Formula.
Grant Kluber
2023-10-20T06:02:52Z
http://arxiv.org/abs/2310.13296v2
# Trotterization in Quantum Theory ###### Abstract Trotterization in quantum mechanics is an important theoretical concept in handling the exponential of noncommutative operators. In this communication, we give a mathematical formulation of the Trotter Product Formula, and apply it to basic examples in which the utility of Trotterization is evident. Originally, this article was completed in December 2020 as a report under the mentorship of Esteban Cardenas for the University of Texas at Austin Mathematics Directed Reading Program (DRP). However, the relevance of Trotterization in reducing quantum circuit complexity has warranted the release of a revised and more formal version of the original. Thus, we present a mathematical perspective on Trotterization, including a detailed sketch of a formal proof of the Trotter Product Formula. ###### Contents * 1 Introduction * 2 The Trotter Product Formula * 3 Applications and Implications * 4 Conclusion ## 1 Introduction One of the major goals of quantum mechanics is finding solutions, called wavefunctions/eigenfunctions, to the time-independent Schrodinger wave equation. For a given time-independent Hamiltonian operator \(\hat{H}\) on a Hilbert space \(\mathcal{H}\), the Schrodinger equation is given by \[\hat{H}\left|\Psi\right\rangle=E\left|\Psi\right\rangle \tag{1}\] where \(\left|\Psi\right\rangle\in\mathcal{H}\) is an eigenfunction and \(E\in\mathbb{R}\) is an energy eigenvalue. Alone, this equation only yields the energy values and the stationary states of the physical system. To get the time-dependent eigenvector \(\left|\Psi(t)\right\rangle\), one needs the unitary operator \(U(t)\): \[U(t)=e^{-i\hat{H}t/\hbar}, \tag{2}\] \[\left|\Psi(t)\right\rangle=U(t)\left|\Psi\right\rangle=e^{-i\hat{ H}t/\hbar}\left|\Psi\right\rangle, \tag{3}\] where \(\hbar=h/2\pi\) is Planck's reduced constant. In practice, it can be hard to compute this operator exponential, so let us focus on the simplest example: When the (finite-dimensional in this case) Hamiltonian operator \(\hat{H}\) is given by the diagonal matrix \[\hat{H}=\begin{pmatrix}E_{1}&0&\cdots&0\\ 0&E_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&E_{n}\end{pmatrix}. \tag{4}\] Substituting this into Equation 1, the eigenfunctions are given by the standard basis \[\left|\psi_{1}\right\rangle=\begin{pmatrix}1\\ 0\\ \vdots\\ 0\end{pmatrix},\quad\left|\psi_{2}\right\rangle=\begin{pmatrix}0\\ 1\\ \vdots\\ 0\end{pmatrix},\quad\cdots,\quad\left|\psi_{n}\right\rangle=\begin{pmatrix}0 \\ 0\\ \vdots\\ 1\end{pmatrix} \tag{5}\] with corresponding eigenvalues \(E_{1},\cdots,E_{n}\), respectively. From this, an arbitrary stationary state \(\left|\Psi\right\rangle\) of the entire quantum system is simply a complex (\(c_{i}\in\mathbb{C}\)) linear combination: \[\left|\Psi\right\rangle=\sum_{i=1}^{n}c_{i}\left|\psi_{i}\right\rangle,\quad \sum_{i=1}^{n}\left|c_{i}\right|^{2}=1. \tag{6}\] This is the time-independent solution. To get to the full time-dependent solution \(\left|\Psi(t)\right\rangle\), we simply need to compute \(U(t)\). This is made easy by the fact that the exponential of a diagonal matrix is just the diagonal matrix of element exponentials. Concretely, \[U(t) =\exp\left[\begin{pmatrix}-iE_{1}t/\hbar&0&\cdots&0\\ 0&-iE_{2}t/\hbar&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&-iE_{n}t/\hbar\end{pmatrix}\right]\] \[=\begin{pmatrix}e^{-iE_{1}t/\hbar}&0&\cdots&0\\ 0&e^{-iE_{2}t/\hbar}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&e^{-iE_{n}t/\hbar}\end{pmatrix}. 
\tag{7}\] Therefore, the time-dependent state \(\left|\Psi(t)\right\rangle\) is given by \[\left|\Psi(t)\right\rangle =U(t)\left|\Psi\right\rangle\] \[=\sum_{i=1}^{n}c_{i}e^{-iE_{i}t/\hbar}\left|\psi_{i}\right\rangle. \tag{8}\] This calculation was simple because the Hamiltonian \(\hat{H}\) was diagonal, but for the general case we need to use the series definition of exponentiation. Given a (potentially infinite-dimensional) self-adjoint (Hermitian) operator \(S\) defined on \(\mathcal{H}\), its exponential is defined by \[e^{S}=\sum_{n=0}^{\infty}\frac{S^{n}}{n!}. \tag{9}\] When \(S\) is finite-dimensional, simply diagonalizing the matrix is sufficient to compute \(e^{S}\), and therefore solve the eigenvalue problem. However, when \(S\) is infinite dimensional, the problem becomes much more complicated. To help reduce this complexity, we can consider decomposing an operator into a sum. If we have another self-adjoint operator \(T\) defined on \(\mathcal{H}\) such that \([S,T]=0\) (i.e., \(ST=TS\)), then the exponential of the sum splits: \[e^{S+T}\xi=e^{S}e^{T}\xi,\quad\forall\xi\in\mathcal{H} \tag{10}\] We can actually prove this in a relatively straight-forward manner from the definition of the operator exponential by applying the Binomial Theorem: \[e^{S+T} =\sum_{n=0}^{\infty}\frac{(S+T)^{n}}{n!}\] \[=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\binom{n}{k}S^{n-k }T^{k}\] \[=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\frac{n!}{(n-k)!k!} S^{n-k}T^{k}\] \[=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{(n-k)!k!}S^{n-k}T^{k}.\] With this result, notice that every possible product of \(S^{m}\) with \(T^{n}\) occurs for \(m,n\in\mathbb{Z}^{+}\cap\{0\}\). Thus, rewrite the sum as follows: \[\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{(n-k)!k!}S^{n-k}T^{k} =\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{1}{m!n!}S^{m}T^{n}\] \[=\left(\sum_{m=0}^{\infty}\frac{1}{m!}S^{m}\right)\left(\sum_{n=0 }^{\infty}\frac{1}{n!}T^{n}\right)\] \[=e^{S}e^{T}.\] In the case where \(S\) and \(T\) do not commute, this argument fails because the Binomial Theorem no longer applies. It seems like there's no way to generalize this argument for the noncommutative case. For example, in the binomial expansion of \((S+T)^{3}\) with \([S,T]\neq 0\), \(STS\neq S^{2}T\), and so it's impossible to collect terms on the left and right sides of the overall sum. A somewhat surprising result called the Trotter Product Formula is needed for noncommutative operators, which in a strong sense approximates the above splitting of the exponential (Equation 10). This is what we explore for the remainder of this communication. Before the next section, it's important to bring up one prerequisite: unitary evolution groups. Put simply, a unitary evolution group is a group of unitary operators \(G(t)\) for \(t\in\mathbb{R}\) given by a homomorphism of \((\mathbb{R},+)\) (i.e., for all \(s,t\in\mathbb{R}\), \(G(s+t)=G(s)G(t)\)). An important notion is the infinitesimal generator \(T\) of a unitary evolution group, given pointwise by \[T\xi=i\lim_{h\to 0}\frac{G(h)-I}{h}\xi. \tag{11}\] It turns out that (with some technical regularity of the mapping \(t\to G(t)\), namely weak measurability) \(T\) is self-adjoint (by Stone's Theorem on one-parameter unitary groups). Furthermore, it also turns out that any self-adjoint operator \(T\) corresponds to the (strongly/pointwise continuous) unitary evolution group \(U_{T}(t)=e^{-itT}\). This correspondence will be useful in formally proving the Trotter Product Formula. 
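Before turning to the formal statement, a quick numerical illustration (our own addition, not part of the original report) of why the splitting of Equation 10 holds exactly for commuting matrices but fails for noncommuting ones:

```python
import numpy as np
from scipy.linalg import expm

# Commuting example: diagonal S and T, so e^{S+T} = e^S e^T exactly.
S = np.diag([1.0, 2.0])
T = np.diag([3.0, -1.0])
print(np.allclose(expm(S + T), expm(S) @ expm(T)))  # True

# Noncommuting example: Pauli matrices X and Z, with [X, Z] != 0.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
print(np.linalg.norm(expm(X + Z) - expm(X) @ expm(Z)))  # nonzero: splitting fails
```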
One further remark before the proof: the domain for \(S\) and \(T\) was always taken to be the entire Hilbert space. This is because if an operator \(S\) is bounded and self-adjoint, then its dense domain \(D(S)\) can be extended to all of \(\mathcal{H}\) by constructing a new unique extended operator \(\overline{S}\) from \(S\) for which \(\overline{S}=S\) on \(D(S)\). In general, domains only matter for unbounded operators, which we consider in the next section in the statement of the Trotter Product Formula proof. ## 2 The Trotter Product Formula The following proof is adapted from de Oliveira's textbook presentation [1]. **Claim**.: _Let \(S\) and \(T\) be (potentially unbounded) self-adjoint operators on \(\mathcal{H}\). Then, for every \(t\in\mathbb{R}\) and for all \(\xi\in D(S+T)=D(S)\cap D(T)=\mathcal{D}\),_ \[\lim_{n\to\infty}\left\|e^{-it(S+T)}\xi-\left(e^{-i\frac{t}{n}S}e^{-i \frac{t}{n}T}\right)^{n}\xi\right\|=0. \tag{12}\] _In other words, the following strong (pointwise) operator limit holds:_ \[\operatorname*{s-lim}_{n\to\infty}\left(e^{-i\frac{t}{n}S}e^{-i\frac{t}{n}T} \right)^{n}=e^{-it(S+T)}.\] Proof.: Let \(h\in\mathbb{R}\) such that \(h\neq 0\) and \(\xi\in\mathcal{D}\); then, define \(u_{h}(\xi)\) as \[u_{h}(\xi)=\frac{1}{h}\left(e^{-ihS}e^{-ihT}-e^{-ih(S+T)}\right)\xi.\] Now, this can be rewritten as \[u_{h}(\xi)=\frac{e^{-ihS}-I}{h}\xi+e^{-ihS}\frac{e^{-ihT}-I}{h}\xi-\frac{e^{- ih(S+T)}-I}{h}\xi\] where \(I\) is the identity operator on \(\mathcal{H}\). Now, we can use the fact that these resemble unitary evolution group generators (Equation 11). Taking the limit \(h\to 0\) for the first and third terms yields \(-iS\xi\) and \(i(S+T)\xi\), respectively. For the second term, the Dominated Convergence Theorem [2] applies, so that it yields \(-iT\xi\) as \(h\to 0\). Thus, \(\lim_{h\to 0}u_{h}(\xi)=0\). Since \(u_{h}\) is linear and bounded, and since the operator \(S+T\) is closed (here we use that \(S+T\) is self-adjoint on \(\mathcal{D}\), as required for \(e^{-it(S+T)}\) to be defined, and every self-adjoint operator is closed), one can show through the Uniform Boundedness Principle, as in de Oliveira's work [1], that \[\lim_{h\to 0}\sup_{|s|<|t|}\|u_{h}(\xi_{s})\|=0, \tag{13}\] where \(\xi_{s}\) is given by the unitary evolution group, \(\xi_{s}=e^{-is(S+T)}\xi\). Because the unitary evolution group is strongly continuous, it follows that \(J_{\xi,t}=\{\,\xi_{s}\mid|s|\leq|t|\,\}\) is compact, and hence totally bounded, under the graph norm of \(S+T\). Because \(J_{\xi,t}\) is totally bounded, it can be covered by a finite number of open balls. This leads to an interpolation argument: any \(\xi_{s}\in J_{\xi,t}\) lies inside one of those balls, and from that we can upper bound \(\|u_{h}(\xi_{s})\|\) by a term that vanishes as \(h\to 0\). Thus, since \(\xi_{s}\in J_{\xi,t}\) is arbitrary, this bound also applies to the \(\xi_{s}\) that maximizes \(\|u_{h}(\xi_{s})\|\). Now that we have shown this, we need to relate \(u_{h}(\xi)\) with the difference in Equation 12. 
For bounded operators \(A,B\) and any \(n\in\mathbb{Z}^{+}\), one can show that \[A^{n}-B^{n}=\sum_{j=0}^{n-1}A^{j}(A-B)B^{n-1-j}.\] Substituting \(A=e^{-it(S+T)/n}\) (so that \(A^{n}=e^{-it(S+T)}\)) and similarly with \(B=e^{-i\frac{t}{n}S}e^{-i\frac{t}{n}T}\), \[\left(e^{-it(S+T)/n}\right)^{n}-\left(e^{-itS/n}e^{-itT/n}\right)^{n}\] \[=\sum_{j=0}^{n-1}\left(e^{-itS/n}e^{-itT/n}\right)^{j}\left(e^{-it (S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\left(e^{-it(S+T)/n}\right)^{n-1-j}\] Therefore, we can take the norm and apply it to an arbitrary \(\xi\in\mathcal{D}\). Applying the triangle inequality along with the fact that the norm of unitary operators is unity, \[\left\|\sum_{j=0}^{n-1}\left(e^{-itS/n}e^{-itT/n}\right)^{j}\left( e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\left(e^{-it(S+T)/n}\right)^{n-1-j}\xi\right\|\] \[\leq\sum_{j=0}^{n-1}\left\|\left(e^{-itS/n}e^{-itT/n}\right)^{j} \left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\left(e^{-it(S+T)/n}\right)^{n -1-j}\xi\right\|\] \[\leq\sum_{j=0}^{n-1}\left\|\left(e^{-itS/n}e^{-itT/n}\right)^{j} \right\|\left\|\left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\left(e^{-it(S+ T)/n}\right)^{n-1-j}\xi\right\|\] \[=\sum_{j=0}^{n-1}\left\|\left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n} \right)\left(e^{-it(S+T)/n}\right)^{n-1-j}\xi\right\|\] Now, the only dependence we have on \(j\) is in the last term of the product. If we define \(s_{j}=t(n-1-j)/n\), then that term becomes \(e^{-is_{j}(S+T)}\). Because \(\left|s_{j}\right|<\left|t\right|\), \(\left\{s_{j}\right\}\) is a subset of the interval \(\left[-\left|t\right|,\left|t\right|\right]\). Therefore, we upper bound the previous expression with a supremum over the entire interval: \[\sum_{j=0}^{n-1}\left\|\left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n} \right)e^{-is_{j}(S+T)}\xi\right\|\] \[\leq n\sup_{\left|s\right|<\left|t\right|}\left\|\left(e^{-it(S+T )/n}-e^{-itS/n}e^{-itT/n}\right)e^{-is(S+T)}\xi\right\|\] \[=n\sup_{\left|s\right|<\left|t\right|}\left\|\left(e^{-it(S+T)/n}- e^{-itS/n}e^{-itT/n}\right)\xi_{s}\right\|\] If we let \(h=\left|t\right|/n\), then the last expression becomes \[\frac{\left|t\right|}{h}\sup_{\left|s\right|<\left|t\right|} \left\|\left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\xi_{s}\right\|\] \[=\left|t\right|\sup_{\left|s\right|<\left|t\right|}\left\|\frac{1} {h}\left(e^{-it(S+T)/n}-e^{-itS/n}e^{-itT/n}\right)\xi_{s}\right\|\] \[=\left|t\right|\sup_{\left|s\right|<\left|t\right|}\left\|u_{h}( \xi_{s})\right\|.\] Therefore, as \(n\to\infty\), \(h\to 0\) so that \[\left(e^{-i\frac{t}{n}S}e^{-i\frac{t}{n}T}\right)^{n}\xi\xrightarrow{}e^{-it(S+T)} \xi,\quad\xi\in\mathcal{D}.\] ## 3 Applications and Implications Given a finite-dimensional \(n\times n\) (complex) Hermitian matrix \(A\), its matrix exponential can be computed exactly by diagonalizing. By the complex spectral theorem for finite-dimensional matrices, \(A\) can be decomposed as \(A=UDU^{\dagger}\), where \(U\) is a unitary matrix, \(D\) is a diagonal matrix, and \((\cdot)^{\dagger}\) denotes the conjugate transpose. Using the fact that \(A^{m}=(UDU^{\dagger})^{m}=UD^{m}U^{\dagger}\) for any \(m\in\mathbb{Z}^{+}\), it turns out that \(e^{A}\) is given by \[e^{A}=Ue^{D}U^{\dagger}. \tag{14}\] However, when the dimensionality \(n\) is very large, diagonalizing \(A\) can be computationally difficult. Therefore, an approximation is needed to compute matrix exponentials in general instead of using Equation 14 directly. 
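For completeness, Equation 14 is straightforward to realize numerically when diagonalization is affordable; a short sketch of ours:

```python
import numpy as np
from scipy.linalg import expm

def expm_hermitian(A):
    """Matrix exponential of a Hermitian matrix via diagonalization
    (Equation 14): A = U D U^dagger  =>  e^A = U e^D U^dagger."""
    eigvals, U = np.linalg.eigh(A)       # spectral decomposition
    return (U * np.exp(eigvals)) @ U.conj().T

# Quick self-check against SciPy's general-purpose expm:
A = np.array([[1.0, 1j], [-1j, 2.0]])    # Hermitian test matrix
print(np.allclose(expm_hermitian(A), expm(A)))  # True
```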
As a very simple example, we can approximate the matrix exponential \(e^{A+B}\) by Trotterization for noncommuting matrices \(A\) and \(B\). Using the series definition of the matrix exponential for sufficiently large \(N\), \[e^{A/N}\approx I+\frac{A}{N},\quad e^{B/N}\approx I+\frac{B}{N}.\] As a consequence of the Trotter Product Formula, \[e^{A+B}\approx\left(e^{A/N}e^{B/N}\right)^{N}\approx\left[\left(I+\frac{A}{N} \right)\left(I+\frac{B}{N}\right)\right]^{N}=\left[I+\frac{A}{N}+\frac{B}{N}+ \frac{AB}{N^{2}}\right]^{N}\approx\left[I+\frac{A+B}{N}\right]^{N}.\] Matrix multiplication is still \(O(n^{3})\), but it is generally faster and more parallelizable than diagonalization. Furthermore, this can be implemented for large \(N=2^{m}\) using a repeated squaring algorithm [3]. Another interesting observation is that the approximation is commutative with respect to the inputs \(A\) and \(B\). As a final thought, note the similarities between this equation and the limit definition of \(e^{x}\) for \(x\in\mathbb{R}\): \[e^{x}=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}.\] For a more practical application of this principle, specifically a Trotterized Hamiltonian, see Whitfield et al. (2012) [4]. Trotterization is also useful for analyzing operators that individually have "nice" operator exponentials, but when summed together have a complicated operator exponential. A good example of this is the unitary evolution group \(e^{it(\hat{p}^{2}/2m+\hat{x})}\) corresponding to the Hamiltonian operator \(\hat{H}=\frac{\hat{p}^{2}}{2m}+\hat{x}\) (i.e., \(V(x)=x\)). It is difficult to compute this exponential directly, but the exponential of \(it\hat{p}^{2}/2m\) individually corresponds to the free Schrodinger kernel (which is well-understood mathematically), and the exponential of \(it\hat{x}\) individually corresponds to a momentum translation. Thus, the exponential of the sum can be reduced to terms that are much easier to analyze. ## 4 Conclusion Thus, we have proved the Trotter Product Formula in its full generality. Additionally, we have discussed its motivations from a practical perspective within quantum mechanics. This combination is powerful because it means that Trotterization can be applied both to practical quantum mechanics challenges, such as computing quantum circuits, and to theoretical challenges, such as reducing quantum circuit complexity. In some cases, the Trotter Product Formula admits a simpler form. When operators \(A,B\) (on some \(\mathcal{H}\)) are anti-commutative, such that \(AB=-BA\), then \(e^{A+B}\) can be calculated using a generalization of the Binomial Theorem to q-commutative (\(q=-1\)) algebras. A detailed reference and derivation for this can be found in Scurlock (2020) [5]. This can be practically applied to, for example, Pauli gate operations. A detailed work utilizing this result is Zhao & Yuan (2021) [6].
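As a closing numerical illustration (our own addition, not part of the original report), the convergence asserted by the Trotter Product Formula, with the \(N\)-th power formed by repeated squaring for \(N=2^{m}\) as suggested above, can be observed directly for Pauli matrices:

```python
import numpy as np
from scipy.linalg import expm

# Noncommuting Hermitian matrices (Pauli X and Z).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, -1.0]])

exact = expm(A + B)
for m in range(0, 11, 2):
    N = 2 ** m
    step = expm(A / N) @ expm(B / N)
    # Form step**N by repeated squaring: N = 2^m means m squarings.
    for _ in range(m):
        step = step @ step
    print(N, np.linalg.norm(step - exact))  # error decays roughly as 1/N
```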
2303.03156
A Parallel Monte-Carlo Tree Search-Based Metaheuristic For Optimal Fleet Composition Considering Vehicle Routing Using Branch & Bound
Autonomous mobile robots enable increased flexibility of manufacturing systems. The design and operating strategy of such a fleet of robots requires careful consideration of both fixed and operational costs. In this paper, a Monte-Carlo Tree Search (MCTS)-based metaheuristic is developed that guides a Branch & Bound (B&B) algorithm to find the globally optimal solution to the Fleet Size and Mix Vehicle Routing Problem with Time Windows (FSMVRPTW). The metaheuristic and exact algorithms are implemented in a parallel hybrid optimization algorithm where the metaheuristic rapidly finds feasible solutions that provide candidate upper bounds for the B&B algorithm. The MCTS additionally provides a candidate fleet composition to initiate the B&B search. Experiments show that the proposed approach results in significant improvements in computation time and convergence to the optimal solution.
T. M. J. T. Baltussen, M. Goutham, M. Menon, S. G. Garrow, M. Santillo, S. Stockar
2023-03-06T14:19:36Z
http://arxiv.org/abs/2303.03156v4
A Parallel Monte-Carlo Tree Search-Based Metaheuristic For Optimal Fleet Composition Considering Vehicle Routing Using Branch & Bound ###### Abstract Autonomous mobile robots enable increased flexibility of manufacturing systems. The design and operating strategy of such a fleet of robots requires careful consideration of both fixed and operational costs. In this paper, a Monte-Carlo Tree Search (MCTS)-based metaheuristic is developed that guides a Branch & Bound (B&B) algorithm to find the globally optimal solution to the Fleet Size and Mix Vehicle Routing Problem with Time Windows (FSMVRPTW). The metaheuristic and exact algorithms are implemented in a parallel hybrid optimization algorithm where the metaheuristic rapidly finds feasible solutions that provide candidate upper bounds for the B&B algorithm. The MCTS additionally provides a candidate fleet composition to initiate the B&B search. Experiments show that the proposed approach results in significant improvements in computation time and convergence to the optimal solution. **Keywords: Fleet composition, Vehicle Routing, Branch & Bound, Monte-Carlo Tree Search, Metaheuristic** ## I Introduction In the industrial sector, reconfigurable manufacturing systems are increasingly being adopted because of their ability to scale and diversify production by supporting the adaptability of process controls, functions, and operations [1]. A key enabler is the added production flexibility provided by the adoption of fleets of autonomous mobile robots (AMRs) that move material within a plant [2]. In particular, multi-load AMRs enhance efficiency by picking up and dropping off multiple items in a single mission [3]. The design of such a fleet is a strategic problem and involves considerable capital investment [4]. Therefore, all costs related to the acquisition and operation should be considered. Although [5] and [6] have recently shown the relevance of combining vehicle routing and component design of the vehicles in the fleet, the combined vehicle routing and fleet composition has generally received insufficient attention [4]. In this paper, the Vehicle Routing Problem with Time Windows (VRPTW), with capacity constraints on the cargo mass, volume, and vehicle range, is used to obtain operational costs. The VRPTW combined with the heterogeneous fleet composition problem is called the Fleet Size and Mix Vehicle Routing Problem with Time Windows (FSMVRPTW). This problem accommodates a heterogeneous fleet and considers both fixed and operational costs [4]. Fleet composition optimization problems are typically posed as a capacitated VRPTW where the fleet size can be varied [7]. Exact algorithms that guarantee optimality for this combinatorial optimization problem structure it as a tree exploration problem and solve it using Branch & Bound (B&B) methods [7]. However, due to the \(\mathcal{NP}\)-hard nature of the problem, the application of exact algorithms is restricted to small problem instances [8]. Real-life VRPTW applications are considerably larger in scale [8] and finding the optimal solution to such a problem is computationally expensive. Therefore, most VRPTWs are solved using metaheuristic methods due to their ability to find near-optimal solutions in a limited time [7, 8]. However, such approximate methods do not provide guarantees on the optimality of the solution [7]. Hybrid optimization methods can improve the performance and efficiency of the optimizer by combining the strengths of metaheuristics and exact algorithms.
Successful metaheuristics provide a balance between exploration and exploitation of the search space [9]. As such, Monte-Carlo Tree Search (MCTS) is a reinforcement learning algorithm that balances this exploration and exploitation and is well suited to large-scale combinatorial optimization problems [7, 10, 11]. In fact, MCTS has already been used in the literature as a metaheuristic that guides a CPLEX solver toward the optimal solution [12]. Moreover, it is frequently hybridized with other optimization algorithms [11]. MCTS has been found to obtain state-of-the-art results in resource allocation problems (RAP) [13] and in single vehicle instances of the VRPTW, called Travelling Salesperson Problems with Time Windows (TSPTW) [14]. It has also been used to solve VRP problems with variable fleet sizes [13, 14, 15]. However, MCTS has not yet been used to solve FSMVRPTWs that permit different types of vehicles. The first contribution of this paper is the development of an exact incremental B&B algorithm for the FSMVRPTW. This algorithm employs a divide and conquer approach where the VRPTW is partitioned into an RAP that first assigns tasks to each robot using a parallel B&B algorithm, and then finds the optimal sequence in which the assigned tasks are completed by solving a nested TSPTW, using another B&B algorithm. The second contribution is a hybrid MCTS-based metaheuristic (UCT-MH) that uses the Upper Confidence bounds applied to Trees algorithm [16] in the fleet composition levels to guide its search and solves the nested TSPTW using a B&B algorithm. The third novelty presented in this paper is the hybrid optimization framework where the UCT-MH guides the incremental B&B to find the optimal solution to the FSMVRPTW. When possible, this B&B is initialized with a fleet composition identified by the rapid search space exploration enabled by the UCT-MH. Additionally, the best solutions found by the UCT-MH update the upper bound used by the incremental B&B, which allows sub-optimal solutions to be pruned earlier. The performance of the proposed method is verified on various real-life case studies. Results show a significant reduction in computation time when the incremental B&B algorithm is guided by the proposed UCT-MH, especially for large problem sizes. ## II Problem Formulation & Methodology Consider a manufacturing plant with a known layout that comprises various spatial constraints, and a set of material handling tasks \(\mathcal{T}\). Each task involves picking up certain cargo items at inventory locations and dropping them off at their designated drop-off locations within defined time windows. The objective of the optimization is to find the optimal fleet of multi-load capacitated AMRs that completes all the defined tasks \(\mathcal{T}\) while minimizing fixed and operational costs. Let the set \(\mathcal{H}:=\{1,2,...,h\}\) identify \(h\) different AMR types available, each with specific traveling speeds, energy efficiency, cargo capacity, driving range, charge time, etc. Let \(k_{i}\leq k_{i}^{max}:i\in\mathcal{H}\) denote the number of each type of AMR that forms a fleet so that any fleet composition can be fully defined by a vector \(\mathbf{k}\in\mathbb{N}_{0}^{h}\). This fleet is associated with a fixed cost \(J^{f}(\mathbf{k})\) composed of purchase costs, depreciation, etc., that can be captured by \(J^{f}(\mathbf{k})=\mathbf{c}^{\top}\mathbf{k}\) for some \(\mathbf{c}\in\mathbb{R}^{h}\).
For completing all the tasks in \(\mathcal{T}\), the operational cost \(J^{o}(\mathbf{k})\) can be any combination of relevant metrics to be minimized, such as energy, slack time, number of turns, asset depreciation, etc. [17, 18, 19]. The total cost to be minimized is: \[\min_{\mathbf{k}}J=\mathbf{c}^{\top}\mathbf{k}+J^{o}(\mathbf{k}) \tag{1}\] The fleet operational cost \(J^{o}(\mathbf{k})\) is posed as an RAP that finds the optimal partition of tasks to be assigned to AMRs that minimizes the total operational cost. If the total number of robots in the heterogeneous fleet \(\mathbf{k}\) is given by \(m=\sum_{i=1}^{h}k_{i}\), every robot in this fleet can be identified by \(r\in\mathcal{R}_{k}:=\{1,2,...,m\}\). Let the set \(\mathcal{T}_{r}\subseteq\mathcal{T}\) denote the tasks assigned to robot \(r\) by the partitioning of \(\mathcal{T}\), denoted by \(\mathfrak{T}:=\{\mathcal{T}_{r}:r\in\mathcal{R}_{k}\}\), meaning \(\bigcup_{r\in\mathcal{R}_{k}}\mathcal{T}_{r}=\mathcal{T}\) and \(\forall r,s\in\mathcal{R}_{k}:r\neq s,\mathcal{T}_{r}\bigcap\mathcal{T}_{s}=\varnothing\). The optimal partition of task set \(\mathcal{T}\) minimizes \(J^{o}(\mathbf{k})\) in Eq. (2). \[J^{o}(\mathbf{k})=\min_{\mathfrak{T}}\sum_{r\in\mathcal{R}_{k}}J^{r}(r,\mathcal{T}_{r}) \tag{2}\] The minimum operational cost \(J^{r}(r,\mathcal{T}_{r})\) for each robot in fleet \(\mathbf{k}\) depends on the AMR type, and is also affected by the sequence in which task locations are visited, as it is possible for the robot to pick up multiple items before dropping them off, so long as each pick-up location is visited before the corresponding drop-off. The objective function and constraints that yield \(J^{r}(r,\mathcal{T}_{r})\) are defined in Eq. (3-15). Let robot \(r\) of the fleet be assigned \(n_{r}=|\mathcal{T}_{r}|\) tasks. The sets of pick-up and drop-off locations are defined as \(\mathcal{V}_{\mathcal{P}}:=\{1,2,...,n_{r}\}\) and \(\mathcal{V}_{\mathcal{D}}:=\{n_{r}+1,n_{r}+2,...,2n_{r}\}\) respectively, so that an item picked up at location \(i\) must be dropped off at location \(n_{r}+i\). The origin and final destination locations of the robot are identified by \(\{0,2n_{r}+1\}\). Let \(\mathcal{V}:=\{\mathcal{V}_{\mathcal{P}}\cup\mathcal{V}_{\mathcal{D}}\cup\{0,2n_{r}+1\}\}\) be the set of all locations in a graph representation \(\mathcal{G}:=(\mathcal{V},\mathcal{A})\), where \(\mathcal{A}:=\{(i,j)\in\mathcal{V}\times\mathcal{V}\}\) is the arc set. Between every pair of nodes \((i,j)\in\mathcal{A}\), the operational costs \(D_{ij}\in\mathbb{R}^{+}\), energy consumed \(\delta e_{ij}\in\mathbb{R}^{+}\) and travel time \(\delta t_{ij}\in\mathbb{R}^{+}\) are pre-computed before initializing the optimization by solving a path planning problem between every two locations \(i,j\in\mathcal{V}:(i,j)\in\mathcal{A}\). \[J^{r}(r,\mathcal{T}_{r})=\min_{x_{ij}\,\forall(i,j)\in\mathcal{A}}\sum_{i=0}^{2n_{r}}\sum_{j=1}^{2n_{r}+1}D_{ij}x_{ij} \tag{3}\] s.t.
\[x_{ij}\in\{0,1\}\ \forall(i,j)\in\mathcal{A} \tag{4}\] \[\sum_{j=1}^{n_{r}}x_{0j}=1 \tag{5}\] \[\sum_{i=0}^{2n_{r}}x_{il}=\sum_{j=1}^{2n_{r}+1}x_{lj}=1,\quad\forall l\in\{\mathcal{V}\setminus\{0,2n_{r}+1\}\} \tag{6}\] \[\sum_{i=n_{r}+1}^{2n_{r}}x_{i,2n_{r}+1}=1 \tag{7}\] \[z_{j}=\begin{cases}z_{i}-\delta e_{ij}&\text{if }x_{ij}=1\wedge z_{i}-\delta e_{ij}>0\ \ \forall(i,j)\in\mathcal{A}\\ 1-\delta e_{0j}&\text{if }x_{ij}=1\wedge z_{i}-\delta e_{ij}\leq 0\ \ \forall(i,j)\in\mathcal{A}\end{cases} \tag{8}\] \[z_{0}=1;\ 0\leq z_{i}\leq 1\ \forall i\in\mathcal{V} \tag{9}\] \[T_{ij}=\begin{cases}\delta t_{ij}&\text{if }x_{ij}=1\wedge z_{i}-\delta e_{ij}>0\ \ \forall(i,j)\in\mathcal{A}\\ \delta t_{0i}+(1-z_{i}-\delta e_{i0})p^{-1}+\delta t_{0j}&\text{if }x_{ij}=1\wedge z_{i}-\delta e_{ij}\leq 0\ \ \forall(i,j)\in\mathcal{A}\end{cases} \tag{10}\] \[x_{ij}=1\to t_{i}+s_{i}+T_{ij}\leq t_{j}\ \forall(i,j)\in\mathcal{A} \tag{11}\] \[t_{i}+s_{i}+T_{i,n+i}\leq t_{n+i}\ \forall i\in\mathcal{V} \tag{12}\] \[e_{i}\leq t_{i}\leq l_{i}\ \forall i\in\mathcal{V} \tag{13}\] \[x_{ij}=1\to y_{j}=y_{i}+q_{j}\ \forall(i,j)\in\mathcal{A} \tag{14}\] \[y_{0}=0;\ 0\leq y_{i}\leq Q_{r}\ \forall i\in\mathcal{V} \tag{15}\] The binary flow variable \(x_{ij}=1\) signifies that the robot uses directed arc \((i,j)\in\mathcal{A}\). Constraints related to the robot starting from the depot \(0\), visiting every location once, and terminating the sequence at \(2n_{r}+1\) are enforced by Eq. (4-7). The battery states of charge \(z_{j}\) in Eq. (8-10) are updated as the robot goes about its mission. Whenever the battery is depleted, the robot heads to the depot, where it is fully charged at a constant recharging rate \(p\). The variable \(T_{ij}\) in Eq. (10-12) updates the travel time between locations \(i\) and \(j\) based on whether a recharge is required between the two locations. Time variables \(t_{i}\) in Eq. (11-13) denote the arrival time of the robot at location \(i\in\mathcal{V}\). Each location is associated with a time \(s_{i}\) for material handling and a time window \([e_{i},l_{i}]\), which represents the earliest and latest time at which material handling can start. Cargo constraints are captured in Eq. (14, 15), where payload variables \(y_{i}\) capture the cargo mass being carried by the robot as it leaves location \(i\in\mathcal{V}\). Each robot \(r\) has a cargo capacity limitation of \(Q_{r}\), and each location \(i\in\mathcal{V}\) is associated with a cargo load \(q_{i}\in\mathbb{R}\) such that \(q_{i}+q_{n+i}=0\). Volumetric constraints are modeled similarly.
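To illustrate how the time-window and capacity constraints above prune candidate sequences, a simplified feasibility check for a single robot's visiting sequence might look as follows (a sketch with hypothetical data structures; the battery constraints of Eq. (8-10) are omitted for brevity):

```python
def sequence_feasible(seq, t_travel, service, windows, load, capacity):
    """Check time windows (Eq. 11-13) and cargo capacity (Eq. 14-15)
    for a candidate visiting sequence of one robot.
    seq      : list of location indices, starting at the depot 0
               (service time at the depot is assumed to be 0)
    t_travel : dict (i, j) -> travel time between locations i and j
    service  : dict i -> material handling time s_i
    windows  : dict i -> (e_i, l_i), earliest/latest handling start
    load     : dict i -> cargo change q_i at location i
    """
    t, y = 0.0, 0.0  # current handling-start time and payload
    for i, j in zip(seq, seq[1:]):
        t = t + service[i] + t_travel[(i, j)]  # arrival time at j (Eq. 11)
        e_j, l_j = windows[j]
        t = max(t, e_j)                        # wait if arriving early
        if t > l_j:                            # time window violated (Eq. 13)
            return False
        y += load[j]                           # payload update (Eq. 14)
        if not 0 <= y <= capacity:             # capacity bound (Eq. 15)
            return False
    return True
```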
### _Exact Algorithm: Incremental Branch & Bound_ The incremental B&B systematically partitions the search space into subsets that are arranged in a tree structure. The root of the tree is the original problem and the leaves of the tree are its individual candidate solutions. Between the root and the leaves are intermediate nodes that represent subproblems obtained by recursively partitioning the original problem through a process called branching. B&B algorithms are used to solve these sub-problems. The order in which these subproblems are examined is determined by a best-first selection criterion, i.e. exploitation, which first explores the subproblem with the cheapest cost. For minimization problems, the upper bound is the incumbent solution, which is the cheapest candidate solution to the original problem found at a leaf node. The upper bound is continuously updated as the tree is explored, and is used to prune sub-optimal branches without recursively evaluating their solutions up to the leaf node. Thus, as the algorithm searches from the root to the leaves, branching is conducted only if the cost at the node is lower than that of the incumbent solution, so that branching can potentially improve on the incumbent. Following this process, the B&B algorithm recursively decomposes the original problem until further branching is futile because the solution cannot be improved, or until the original problem has been solved because every feasible branch has been evaluated. The \(\mathcal{NP}\)-hard RAP problem described by Eq. 2 is solved by the B&B algorithm implemented in a parallel framework that uses \(p\) processing cores, as shown in Fig. 1, where robots in the fleet are identified by subscript \(r\in\mathcal{R}:=\{1,2,...,m\}\). Thus, by splitting the arborescence at some task assignment level and assigning the emanating sub-trees to the available processors, several subproblems are explored simultaneously. During each processor's exploration, updated incumbent solutions are instantaneously made available to every processor in an asynchronous information sharing method using a shared work pool. For each processor, this RAP B&B algorithm is implemented by a recursive function to minimize memory and computational requirements as the tree is explored. Further, since the computation time of B&B algorithms increases with the number of feasible branches at each node, the fleet is initiated with a smaller candidate fleet \(\mathbf{f}^{1}\in\mathbb{N}_{0}^{h}\) than the maximal fleet \(\mathbf{k}^{max}\in\mathbb{N}_{0}^{h}\). After evaluating the total cost of this candidate fleet, the number of robots is incrementally raised until further increments do not reduce the total cost or additional robots remain idle. For each fleet increment, only RAP subproblems that include at least one of the newly added robots are evaluated, since other solutions are guaranteed to have been evaluated already. For \(h\) different AMR types available, the fleet is initiated with a candidate fleet \(\mathbf{f}^{1}\leq\mathbf{k}^{max}\), which is chosen based on problem parameters and prior experience so that feasible solutions exist. The RAP of fleet \(\mathbf{f}^{1}\) is then solved using the described parallel B&B algorithm, and its minimum total cost \(J^{1}\) is found, which utilizes robots \(\mathbf{k}^{1}\in\mathbb{N}_{0}^{h}:\mathbf{k}^{1}\leq\mathbf{f}^{1}\). In the increment step, only robot types \(j\) that satisfy \(\mathbf{k}^{1}_{j}=\mathbf{f}^{1}_{j}\) are incremented by 1 for the next candidate fleet \(\mathbf{f}^{2}\). These increments are conducted so long as both \(J^{i+1}\leq J^{i}\) and \(\exists\,j:\mathbf{k}^{i}_{j}=\mathbf{f}^{i}_{j}\). The optimal fleet that completes all tasks while minimizing total cost is then \(\mathbf{k}^{i}\) when \(J^{i+1}>J^{i}\) or when \(\nexists\,j:\mathbf{k}^{i+1}_{j}=\mathbf{f}^{i+1}_{j}\), i.e., the additional robots were idle. At each instance that the RAP subproblem is solved at a node in the arborescence shown in Fig. 1, the TSPTW problem defined by Eq. (3-15) is solved to find the cost at that node. This TSPTW problem is solved using recursive Algorithm 1, which employs another B&B to find the optimal sequence of task completion for each robot.
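The fleet-increment loop just described can be summarized in a short sketch; the helper `solve_rap` is hypothetical and stands in for the parallel RAP B&B, so this is an illustration rather than the authors' implementation:

```python
import numpy as np

def incremental_fleet_search(f1, k_max, solve_rap):
    """Increment loop sketch: solve_rap(f) is assumed to return the
    minimum total cost J for candidate fleet f (robots per AMR type)
    together with the robots k actually used, with k <= f elementwise."""
    f = np.array(f1)
    k_max = np.array(k_max)
    J, k = solve_rap(f)
    while True:
        # Types whose allocated robots are all used and can still grow.
        saturated = (k == f) & (f < k_max)
        if not saturated.any():          # some added robots were idle
            return k, J
        f_next = f + saturated           # add one robot of each such type
        J_next, k_next = solve_rap(f_next)
        if J_next > J:                   # increment no longer reduces cost
            return k, J
        f, J, k = f_next, J_next, k_next
```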
In summary, the B&B incrementally increases the fleet size while minimizing the total cost \(J\) of Eq. 1, which includes operational cost \(J^{o}(\mathbf{k})\) of Eq. 2 found using the RAP B&B algorithm and the cost \(J^{r}(r,\mathcal{T}_{r})\) of Eq. (3-15) found using the TSPTW B&B algorithm.

Fig. 1: Parallel implementation of the RAP B&B algorithm for the Resource Allocation Problem formulation.

### _Metaheuristic: Monte-Carlo Tree Search_ Each iteration of MCTS involves four steps [20]: _Selection:_ at every node \(v\) in the arborescence, the tree policy selects the next node \(v^{\prime}\). This node selection is initiated at the root node \(v_{0}\) and is used for navigation until the leaf node \(v_{l}\) is reached. _Expansion:_ at the leaf node \(v_{l}\), a random action is taken to expand the tree. _Simulation:_ a Monte-Carlo simulation is performed starting from the expansion node to complete the solution. _Backpropagation:_ the cost/reward of the expansion and simulation is propagated back to the root node \(v_{0}\). The Upper Confidence bounds applied to Trees algorithm [16] was the first variant and formal introduction of MCTS. The proposed metaheuristic (UCT-MH) uses this algorithm to guide the exact incremental B&B algorithm to the optimal solution. While in typical VRPTWs the fleet size is a free variable [7], the proposed metaheuristic selects a fleet size \(m\) and a composition \(\mathbf{k}\) in the fleet sizing and composition stages and tries to solve the VRPTW optimally, fully utilizing that composition. By doing so, the algorithm finds an estimate of the expected total cost associated with a particular fleet size and composition. This estimate serves as a measure for the quality of that branch and can be used by the MCTS to navigate the search. MCTS is most effective as a heuristic at the early stages of the decision problem [12]. Moreover, for smaller problem instances, B&B algorithms are often more suitable than MCTS [14]. As such, the proposed hybrid MCTS algorithm aims to utilize the strengths of the different algorithms and combine them into an effective hybrid MCTS-based metaheuristic. Although MCTS was originally designed to solve Markov Decision Processes, without loss of generality, MCTS can be used to solve a design problem by formulating it as a deterministic Markov Decision Process [11]. The optimization problem is modeled as a 3-tuple \(\langle S,A,g\rangle\), where \(S\) is a set of states, \(A\) is a set of actions and \(g(s,a):S\times A\rightarrow[0,g_{max}]\) is a scalar cost function for taking action \(a\) at state \(s\). The state \(s(v)\) contains the parameters that follow from the decisions up to node \(v\). At the root node \(v_{0}\), the fleet size \(m\) is determined by action \(a_{0}\), where \(g_{1}(s_{0}(v_{0}),a_{0}):=0\), since the fleet cost is determined by its composition. Subsequently, the fleet composition \(\mathbf{k}\) is determined by \(a_{1}\in A_{1}(m)\), with fixed cost \(g_{2}(s_{1}(m),a_{1})=J^{f}(\mathbf{k})\). Fig. 2 provides a schematic overview of the problem and the proposed metaheuristic.
Figure 2: Overview of the multi-stage design problem, with the FSMVRPTW (red) and the nested VRPTW (blue), and the proposed UCT-MH Algorithm.

At the fleet sizing and composition stages, the UCT-MH utilizes the UCB1 tree policy [16] for the selection step at node \(v\) of the search tree: \[\text{UCB1}(v)=\operatorname*{arg\,max}_{v^{\prime}\in\text{children of }v}\frac{Q(v^{\prime})}{N(v^{\prime})}+\sqrt{\frac{2\ln N(v)}{N(v^{\prime})}} \tag{16}\] Here, \(Q(v^{\prime})\) is the total reward of all plays through child node \(v^{\prime}\), \(N(v^{\prime})\) denotes the number of visits of child node \(v^{\prime}\), and \(N(v)\) is the number of visits of the parent node \(v\). The policy function depends on the quality of the node being considered as well as the number of evaluations of that node, balancing the exploration and exploitation of the search space [20]. In order to apply the UCB1 policy and have a proper balance between exploration and exploitation, the problem is transformed such that the stage reward \(R_{i}(v)\in[0,1]\) [16]: \[R_{i}(v^{\prime})=1-\frac{g_{i}(v^{\prime})}{g_{max}} \tag{17}\] where \(R_{i}(v^{\prime})\) is the reward of the transition from state \(s_{i-1}(v)\) to state \(s_{i}(v^{\prime})\) and \(v^{\prime}\in\text{children of }v\). It follows that \(Q(v^{\prime})\) is the sum of all rewards of all \(N(v^{\prime})\) plays through node \(v^{\prime}\) back to the root node \(v_{0}\): \[Q(v^{\prime})=\sum_{i=1}^{N(v^{\prime})}R_{i}(v^{\prime})+R_{i}(v)+...+R_{i}(v_{0}) \tag{18}\] Considering that the number of permutations of the RAP is exponential with the number of tasks, it is deemed sufficient to determine the task assignment by a random rollout \((\xi_{1},...,\xi_{n})\). In order to prevent any bias toward another fleet size, it is ensured that the full fleet size is utilized, i.e. each AMR in the fleet will have at least one assignment. The assigned tasks do not have any associated costs/rewards. Since many of the TSPTW instances encountered are small problem instances, it is advantageous to use the same recursive B&B algorithm for TSPTW as described in Section II-A to find the optimal sequence in which the assigned tasks are completed by each robot. Each TSPTW B&B is terminated after a one-second time cap since the metaheuristic is not aimed at local convergence. Considering the best-first order of exploration, this still finds reasonably good estimates for the operational cost \(\tilde{J}^{o}(k)\). The cost obtained through the rollout of the RAP and the TSPTW is backpropagated through the tree and assigned to \(Q(v)\) at each node \(v\) associated with a particular fleet size or composition. This is in turn used by the UCB1 policy function to determine the decisions in the next iteration. As a result, at the root node, the term \(\frac{Q(v^{\prime})}{N(v^{\prime})}\) in (16) is proportional to the total mean cost-to-go for a given fleet size or composition at node \(v^{\prime}\). As the total number of plays at the root node \(N(v_{0})\) grows to infinity, the UCB1 function converges to the expected value of the total cost for a given fleet size.
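For concreteness, a minimal sketch of the UCB1 selection rule of Eq. (16) and the reward normalization of Eq. (17) could read as follows (the node objects are hypothetical; this is an illustration, not the authors' code):

```python
import math

def ucb1_select(children, N_parent, c=math.sqrt(2)):
    """UCB1 tree policy of Eq. (16): each child is assumed to carry its
    cumulative reward Q and visit count N. With c = sqrt(2), the bonus
    term equals sqrt(2 ln N(v) / N(v')). Unvisited children are explored
    first to avoid division by zero."""
    def score(child):
        if child.N == 0:
            return float("inf")
        return child.Q / child.N + c * math.sqrt(math.log(N_parent) / child.N)
    return max(children, key=score)

def stage_reward(g, g_max):
    """Normalized stage reward of Eq. (17), mapping a stage cost g in
    [0, g_max] to a reward in [0, 1]."""
    return 1.0 - g / g_max
```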
### _Hybrid Optimization: Guiding B&B with the UCT-MH_ The hybrid optimization framework utilizes the search results of the UCT-MH to guide the exact incremental B&B. Multiple processors are allocated to the B&B algorithm that systematically navigates the tree to solve the problem exactly. Meanwhile, one processor is dedicated to running the UCT-MH, which efficiently samples the entire design space to get an estimate of the associated costs. Considering the parallelization overhead of the parallelized B&B algorithm, it can be expected that the UCT-MH already finds a fleet composition candidate \(\mathbf{\hat{k}}\) by the time the B&B is initiated. If such a composition is available, then it is used as the candidate fleet \(\mathbf{f}^{1}=\mathbf{\hat{k}}\) that initializes the B&B algorithm. Moreover, whenever the guiding UCT-MH finds a new best solution, it provides this solution with its associated cost to the guided B&B by adding it to the pooled best cost shown in Fig. 1. This information is used to preemptively prune sub-optimal branches and guide the B&B toward the optimal fleet size and composition, thereby reducing the search space and computation time. ## III Results ### _Computational Experiments_ To study the performance of the proposed hybrid algorithm, the guiding UCT-MH and the guided B&B are compared against the standalone incremental B&B. Four real-life case studies are conducted in MATLAB 2022a at the Ohio Supercomputer Center [21]. For each experiment, a set of \(n\) tasks is defined, each consisting of items of known mass, volume, pick-up and drop-off locations and respective time windows. The fleet size is limited to \(m_{max}\), equally distributed over \(h=3\) different AMR types. Each algorithm is run for a limited time \(t_{max}\) after which the incumbent solutions are compared. Two smaller problems are studied in detail to illustrate the behavior of the UCT-MH in Fig. 3-4. The best-found cost by each algorithm is summarized for all case studies in Table I. ### _Case Studies_ #### III-B1 \(n=10\) and \(m_{max}=6\) Figure 3(a) shows the UCT-MH exploration of the various fleet sizes, where the mean of the cost-to-go starts to converge and the algorithm gains more confidence in particular solutions as the number of evaluations increases. The guiding UCT-MH finds that \(m=6\) is the best candidate and dedicates more visits to these branches, as shown in Fig. 3(b). As a result, the guided B&B quickly focuses on local convergence (Fig. 3(c)). As the entire search space is explored, this solution is the guaranteed global optimum. #### III-B2 \(n=20\) and \(m_{max}=21\) In Fig. 4(a) several patterns are observed. While small fleet sizes yield infeasible solutions, larger fleet sizes initially show a transient behavior due to the stochastic exploration. The largest fleet sizes always yield feasible solutions, irrespective of the lower-level decisions. Here, an increase in fleet size results in an incremental increase of the mean cost-to-go, which is associated with the fleet cost. Remarkably, Fig. 4(b) shows that the standalone B&B is initially faster; however, as the guided B&B already starts from a good candidate branch, the underlying TSPTW is expected to be more difficult to solve. Consequently, the guided B&B discards suboptimal fleets and focuses on local convergence, thereby reducing the overall computation time of the guided B&B. ### _Discussion_ The time taken to initialize the parallel B&B algorithm is sufficient for the guiding UCT-MH to find a strong candidate fleet that warm starts the guided B&B. The UCT-MH provides a reduction of computation time ranging from \(38.3\%\) up to \(86.5\%\). The local convergence of the UCT-MH is dependent on the problem size due to the time cap imposed at the TSPTW level.
As seen in Table I, for a higher number of tasks where the TSPTW is larger, the gap with the best-known solution is greater (\(\sim 40\%\)). However, the guided B&B is able to close this gap since it conducts local searches systematically. Further, for the case with 100 tasks, the standalone B&B was unable to find any feasible solution in 24 hours while the UCT-MH provided multiple solutions through its efficient stochastic exploration of the design space. ## IV Conclusions In this paper, a hybrid optimization algorithm was developed that uses a Monte-Carlo Tree Search-based metaheuristic (UCT-MH) to guide an exact incremental Branch & Bound algorithm, which solves a real-life Fleet Size and Mix Vehicle Routing Problem with Time Windows. The UCT-MH yields a significant improvement in the computation time and convergence of the B&B by constantly sharing the expected optimal fleet composition as well as the upper bound on the cost. Although in this study MCTS was only employed at the fleet sizing and composition level, future research needs to determine to what depth MCTS can be effective. Moreover, modifications to the selection policy as well as bi-directional communication between the UCT-MH and the B&B algorithm could further improve computation times. ## Acknowledgments This research was supported by the Ford Motor Company as part of the Ford-OSU Alliance Program.
2308.01433
COVID-VR: A Deep Learning COVID-19 Classification Model Using Volume-Rendered Computer Tomography
The COVID-19 pandemic presented numerous challenges to healthcare systems worldwide. Given that lung infections are prevalent among COVID-19 patients, chest Computer Tomography (CT) scans have frequently been utilized as an alternative method for identifying COVID-19 conditions and various other types of pulmonary diseases. Deep learning architectures have emerged to automate the identification of pulmonary disease types by leveraging CT scan slices as inputs for classification models. This paper introduces COVID-VR, a novel approach for classifying pulmonary diseases based on volume rendering images of the lungs captured from multiple angles, thereby providing a comprehensive view of the entire lung in each image. To assess the effectiveness of our proposal, we compared it against competing strategies utilizing both private data obtained from partner hospitals and a publicly available dataset. The results demonstrate that our approach effectively identifies pulmonary lesions and performs competitively when compared to slice-based methods.
Noemi Maritza L. Romero, Ricco Vasconcellos, Mariana R. Mendoza, João L. D. Comba
2023-08-02T21:13:10Z
http://arxiv.org/abs/2308.01433v1
# COVID-VR: A Deep Learning COVID-19 Classification Model Using Volume-Rendered Computer Tomography ###### Abstract The COVID-19 pandemic presented numerous challenges to healthcare systems worldwide. Given that lung infections are prevalent among COVID-19 patients, chest Computer Tomography (CT) scans have frequently been utilized as an alternative method for identifying COVID-19 conditions and various other types of pulmonary diseases. Deep learning architectures have emerged to automate the identification of pulmonary disease types by leveraging CT scan slices as inputs for classification models. This paper introduces COVID-VR, a novel approach for classifying pulmonary diseases based on volume rendering images of the lungs captured from multiple angles, thereby providing a comprehensive view of the entire lung in each image. To assess the effectiveness of our proposal, we compared it against competing strategies utilizing both private data obtained from partner hospitals and a publicly available dataset. The results demonstrate that our approach effectively identifies pulmonary lesions and performs competitively when compared to slice-based methods. COVID-19 Deep Learning Classification Models Computer Tomography Volume Rendering ## 1 Introduction The COVID-19 pandemic began spreading extensively in early 2020, resulting in the unfortunate loss of millions of lives worldwide. Accurate and timely diagnosis of COVID-19 is crucial, especially during periods of high demand that can strain healthcare systems. The gold standard method for diagnosing COVID-19 is the reverse transcriptase-polymerase chain reaction test (RT-PCR) [1]. However, RT-PCR can be time-consuming and yield false-negative results [2, 3]. An alternative approach involves analyzing chest images obtained from X-rays or Computer Tomography (CT) scans [4, 5]. Due to their higher resolution, CTs can lead to more precise diagnostics. During the diagnosis process, radiologists search for characteristic patterns of lung lesions associated with Ground-Glass Opacity (GGO) [6]. GGO appears as a gray or hazy region of increased attenuation in the lung. In the case of COVID-19 infection, GGO exhibits specific features such as its location (peripheral or bilateral), shape (rounded), and appearance (multifocal or closer to opaque lung tissues known as consolidations). For instance, on top of Figure 1, the GGOs corresponding to COVID-19 are depicted as light magenta areas within the lungs. A limitation of CT-based diagnosis is that it consumes around 20 minutes of a radiologist's time. To automate this task, machine learning solutions have emerged, such as Deep Learning (DL) architectures based on Convolutional Neural Networks (CNNs) that use CT slices as input images [5, 7]. A slice-based architecture is a common solution since it reflects the local approach radiologists take when looking at individual slices of a CT. Numerous proposals have been presented in the literature, and we refer the reader to a systematic literature review of the most relevant systems [3]. Despite the vast amount of research conducted on this topic, the scientific community is continuously striving to improve the accuracy and effectiveness of existing diagnostic models. In this work, we propose an alternative that follows a global approach of looking at images of the entire lung. We propose to generate volume-rendering images that use transparency to capture the inner structures of the lung, taken from different angles to overcome possible occlusions.
The bottom of Figure 1 displays volume rendering images produced from different angles, in which GGOs are associated with darker regions in the lung.

Figure 1: COVID-19 lesions in CT slices and volume renderings of the lung: (top) axial, coronal, and sagittal CT slices, where ground-glass opacities associated with COVID-19 lesions correspond to light magenta areas inside the lungs; (bottom) volume renderings from different angles reveal ground-glass opacities associated with COVID-19 lesions as darker regions inside the lungs.

Our approach, COVID-VR, is based on a 3D Volume Rendering classification architecture consisting of a pipeline composed of three main modules. The first module receives as input a chest CT and performs a segmentation step that removes material outside the lung. The second module receives a segmented CT and generates volume rendering of the lung from different angles. The third module comprises a ResNet architecture [8] that receives volume rendering images and outputs a classification in two or three classes. We tested COVID-VR using the COVID-CT-MD [9]1 public dataset and a private dataset obtained from partner hospitals in Porto Alegre (Brazil). We present classification results comparing our architecture against three competing strategies, showing that our approach is competitive and can potentially be an alternative for COVID-19 classification from CTs. Footnote 1: [https://github.com/ShahinSHH/COVID-CT-MD](https://github.com/ShahinSHH/COVID-CT-MD) ## 2 Related Work This section summarizes related works relevant to our proposal. First, we review deep learning-based models described for COVID-19 diagnosis and the main candidates for comparing the results of our work. Next, we review the use of volume rendering for a better understanding of COVID-19. ### DL-based models for COVID-19 diagnosis from CTs DL techniques are widely utilized for the classification of CT images. To develop an effective classification system, it is crucial to comprehend the patterns associated with each class. Guidelines for classifying COVID-19 based on CT images can be found in Radiology standards. The Radiological Society of North America (RSNA) and the British Society of Thoracic Imaging (BSTI) have provided similar classification schemes for COVID-19, consisting of four categories. The RSNA categories are COVID-19 typical, indeterminate, atypical, and negative for pneumonia. The typical classification corresponds to patients with lesions associated with ground-glass opacities, such as those seen in Figure 1. The indeterminate classification suggests the absence of typical features, plus multifocal, diffuse, perihilar, nonperipheral, nonrounded, or unilateral ground-glass opacity with or without consolidation. Atypical classification denotes the absence of the previous classifications' features and the presence of isolated consolidation without ground-glass opacities or discrete small nodules. The negative classification indicates no features of pneumonia. Even though both standards describe four classes, the first models to appear in the literature were either binary (COVID-19 or non-COVID-19) or ternary (COVID-19, normal, or Community-Acquired Pneumonia (CAP)). The normal and CAP classes are similar to the negative and indeterminate classes of RSNA. One way to navigate the extensive literature on the topic is to read the survey papers that summarize the proposed systems. The survey paper by Hassan et al.
[3] is one of the most complete references on the topic, offering a categorization of CT-based diagnosis methods (classification, segmentation, and detection), frameworks and relevant findings, and a list of open-source datasets. In total, they summarize 114 studies. We summarize below the work related to our proposal that uses CT images as inputs to classification models [10, 11, 12, 13, 14, 15, 16, 17, 18]. DeCoVNet [11] uses a weakly-supervised 3D deep convolutional network to predict the probability of COVID-19 following a binary classification approach. In the first step, the authors used a U-NET to segment the lung in the volume and create a 3D binary mask. The CT and its 3D lung mask are sent to DeCoVNet, which consists of three stages: 3D convolution, 3D residual blocks, and a progressive classifier. Ternary models soon followed binary models to allow the separation of COVID-19 from normal cases and other types of CAPs. An example is COVNet [12], which uses a 3D deep learning framework that takes 3D slices as input, generates features, combines them using a max-pooling operation, and finally generates a probability score for each class. Like DeCoVNet, COVNet also relies on a pre-processing step that uses a U-NET to create a segmented mask for the lung. Amyar et al. [16] propose a ternary model that uses a shared encoder for the classification and lesion segmentation tasks without requiring labeled segmentation data. Deep-chest [17] uses a VGG-CNN model that supports classification in four classes: COVID-19, pneumonia, lung cancer, and normal cases. Serte and Demirel [18] proposed a binary 3D classification model that fuses image-level predictions to classify CT volumes. Silveira et al. [19] compute omnidirectional images from the center of the lungs and use them as inputs to the DL classification model. Our approach is similar in the sense that it uses volume rendering images of the lungs, but from an external point of view. The ICASSP 2021 competition brought an opportunity to compare methods by providing a public dataset of CTs (COVID-CT-MD [9]) and a contest task of classifying CTs into three possible classes: normal, COVID-19, or CAP. The six best solutions were presented at the ICASSP conference [20, 21, 22, 23, 24, 25]. The first place (team TheSaviours [20]) has an accuracy of 90%. They describe a two-stage CNN model for detecting COVID-19 and Community-Acquired Pneumonia (CAP). The first stage is designed to detect infections (COVID-19 or CAP). The second stage performs the classification into three classes (CAP, COVID-19, and Normal). The second place (team IIT Delhi [21]) has an accuracy of 88.89%. Their method follows a three-level approach. In the first level, they use a slice-level classifier that performs feature extraction from all the slices of the CT to learn different sizes of infection. The next level performs patient-level classification, using four classifiers to distinguish between infected and normal slices. The last level performs an ensemble-learning step that combines the scores of the previous level classifiers. The third place (team LLSCP [22]) has an accuracy of 87.78%. They use a multi-stage progressive learning approach composed of a 3D Resnet module, an ensemble binary classifier, and a final combining stage. The remaining entries achieved 85.56% [23], 81.11% [24] and 80% [25].
For comparison purposes, we chose TheSaviours [20] (ICASSP 2021 winner), DeCoVNet [11], and COVNet [12] approaches due to their outstanding performance and the public availability of code or pre-trained models. ### Volume Rendering Applications in COVID-19 Volume Rendering combines opacity mapping and lighting effects through multiple rendering techniques [26, 27, 28]. It is widely used in medical image visualization [29] to analyze internal spatial relationships between structures, but few studies report its use in the COVID-19 context. Tang et al. [30] report one of the first efforts for visualizing COVID-19 pneumonia, displaying lung lesions in a color coronal image and a tridimensional volume rendering of the lungs, bronchus, and trachea from a 54-year-old patient. Li et al. [31] show the advantage of 3D volume rendering to detect the extent of small pulmonary vessel microangiopathy and alveolar damage from an autopsy of a COVID-19 patient. COVID-view [32] describes a system for COVID-19 diagnosis that combines a classification system with explainable visualizations of activations over 2D slices and volume rendering of the lungs. They use Maximum Intensity Projection (MIP) to create projections of volumes that help identify vascular structures. In our work, we use Volume Rendering instead of MIP; how MIP would perform in our system remains a question for future work. The COVID-view user interface allows users to modify the transfer functions used for volume rendering by choosing from a preset transfer function or by creating their own. They also use a coronal clipping tool that allows better inspection of the inner structures inside the lungs. However, their proposed classification model does not rely on volume rendering images but uses 2D slices. Similarly, COVID [33] proposes a classification model based on 2D slices and a virtual reality platform to explore 3D reconstructed lungs and segmented infected lesions caused by COVID-19. The system relies on software such as Blender to fix problems with the models and Unity to produce the final renderings and support the interaction through VR devices. ## 3 CT Datasets This study used COVID-19-related CT datasets from three different sources. The first source is the _Public_ dataset of CT images prepared for the ICASSP 2021 competition. The other two sources are from partner hospitals in Brazil that provided CT scans for COVID-19 and non-COVID-19 cases. These datasets are referred to as _Private_ datasets. CTs for all datasets are given in the DICOM format. This section describes the details of the creation and statistics of these datasets. ### COVID-CT-MD Public Dataset The COVID-CT-MD public dataset [9] was released for the SPGC-ICASSP competition and adopted for model development in other works (_e.g.,_ [20, 21]), thus allowing the comparison of our approach against state-of-the-art methods. The dataset comprises 307 CT scans with patient-level annotations divided into three classes: confirmed positive COVID-19 cases, normal cases, and CAP cases. The COVID-19 cases were collected from February to April 2020, while CAP and normal cases were collected from April 2018 to December 2019 and January 2019 to May 2020, respectively. Although slice-level annotations are available for some patients, we did not explore them in our approach.
The labeled CT scans used for model training and validation are based on a stratified random split: 30% of these CT scans are randomly selected as the validation set, and the remaining are used as the train set. Patients are adults recruited in the Babak Imaging Center, Iran, and exam labeling was conducted by experienced radiologists [9]. Table 1 shows the number of CT scans per class. In addition to the train/validation set, the SPGC-ICASSP competition released the SPGC-COVID Test Set [34], with four independent test sets for model evaluation. Three sets were used to calculate the competition results and are applied for performance comparison of our work with previous approaches. Each test dataset contains 30 CT scans, as described by Heidarian et al. [34]. The first set (Test Set 1) comprises COVID-19 and Normal cases (15 and 15, respectively), obtained from the same image center as the train/validation set. The second set (Test Set 2) contains the three classes, COVID-19, Normal, and CAP, with 10 cases for each one, obtained from another imaging center (Tehran Heart Center, Iran) using a different scanner and scanner parameters. The third set (Test Set 3) contains the three classes, COVID-19, Normal, and CAP, with ten samples for each one, collected in the same imaging center as Test Set 1. Test Set 2 differs from the others because it includes patients with a history of cardiovascular diseases and surgeries. The distribution of samples per class for the SPGC-COVID Test Sets is given in Table 1. ### Private Datasets from Partner Hospitals The private datasets were retrospectively obtained from patients admitted to two Brazilian hospitals in Porto Alegre: a private institution, Hospital Moinhos de Vento (HMV), and a public institution, Hospital de Clinicas de Porto Alegre (HCPA). The Research Ethics Committees of the participating hospitals in Brazil approved the study, and informed consent was waived due to the study's retrospective nature2. All data were anonymized by providers to ensure patient privacy. The HMV dataset contains 284 CT scans from patients admitted between March and May 2020, whereas the HCPA dataset comprises 105 CT scans collected from March to June 2020. Both datasets have patient-level annotations by expert radiologists, following the RSNA standard [35]. As previously explained (Section 2), the RSNA standard classifies images into four classes: Typical appearance, Indeterminate appearance, Atypical appearance, and Negative for pneumonia. The first category corresponds to exams showing CT features frequently seen in patients with COVID-19 pneumonia (_e.g.,_ bilateral, peripheral, and multifocal ground-glass opacities), thus representing our class of interest (_i.e.,_ positive). The distribution of CT scans among classes is shown in Table 2. ## 4 The COVID-VR Classification Architecture In this section, we describe COVID-VR, an end-to-end pipeline for the analysis of CT images, from lung segmentation to patient-level classification associated with the presence of COVID-19 based on volume rendering images (Figure 2). ### Lung Segmentation The first step is to convert the input CT images (Figure 2A) from the DICOM format to the NIfTI (Neuroimaging Informatics Technology Initiative) format. We use the ALTIS system [36] to obtain isometric volumes across patients, which requires interpolating and resizing images with a slice spacing of 1mm in all dimensions. Several methods for lung segmentation were considered, such as UBC [37], Lungmask [38], and P-HNN [39].
P-HNN was selected after recommendations by radiologists who compared the results of these methods.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Class**} & \multicolumn{3}{c}{**Train/Validation**} & \multicolumn{3}{c}{**Test**} \\ & \multicolumn{3}{c}{**(307)**} & \multicolumn{3}{c}{**(90)**} \\ \hline & F & M & Total & F & M & Total \\ \hline COVID-19 & 63 & 108 & 171 & 9 & 26 & 35 \\ Normal & 36 & 40 & 76 & 15 & 20 & 35 \\ CAP & 26 & 34 & 60 & 7 & 13 & 20 \\ \hline \hline \end{tabular} \end{table} Table 1: COVID-CT-MD (Public dataset): number of CTs and distribution by class.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Class**} & \multicolumn{3}{c}{**HMV**} & \multicolumn{3}{c}{**HCPA**} \\ & \multicolumn{3}{c}{**(284)**} & \multicolumn{3}{c}{**(105)**} \\ \hline & F & M & Tot. & F & M & Tot. \\ \hline Typical & 34 & 58 & 92 & 13 & 17 & 30 \\ Negative & 54 & 38 & 92 & 11 & 4 & 15 \\ Indeterm. & 32 & 35 & 67 & 16 & 14 & 30 \\ Atypical & 16 & 17 & 33 & 13 & 17 & 30 \\ \hline \hline \end{tabular} \end{table} Table 2: Private dataset: number of CTs and distribution by class.

Figure 2: The pipeline of COVID-VR has two stages. Stage 1 prepares data to obtain the input images for model development. In Stage 2, DL models are trained to distinguish among the classes of interest, and their outputs are combined to obtain a final patient-level classification.

Lung segmentation is performed with the pre-trained model of P-HNN, using a probability mask with a threshold of 75% or above to consider a voxel as part of the lung (Figure 2B). It is important to observe that our proposal does not require highly accurate segmentation results, only enough accuracy to discard additional volume around the lungs. Therefore, we did not perform additional testing to check the accuracy of the segmentation. The images generated with volume rendering rely on transfer functions that associate different opacity values, which can compensate for small errors in the segmentation algorithm. ### Generating Volume Rendering Images This step generates volume rendering images from the lung segmented in the previous stage (Figure 2C). There are two main problems that we need to address to configure the volume rendering algorithm. The first one is the specification of a transfer function (TF) that reveals the internal structures of the lung and lesions associated with COVID-19. The second problem is choosing the camera positions to generate the volume rendering images. Ideally, we want to generate images from different points of view that better capture the lungs and lesions. Both issues are described in this section. #### 4.2.1 Transfer Function Selection Volume rendering relies on a transfer function (TF) specification that defines a mapping from input values to color, transparency, and opacity. The input values in the segmented lungs are expressed in Hounsfield units (HU), a dimensionless scale obtained from a linear transformation of the measured absorption/attenuation of the X-ray beam. Although there is no universal standard for defining HU intervals for the lungs and GGOs, related works [40, 41] report values of around -700 HU for the lung and in the interval from -700 HU to -300 HU for the GGOs. We use this information to customize TFs. We use the MITK framework [42] to conduct tests with different TFs and to create the volume rendering images from the 3D volume of the segmented lungs. MITK has a user interface that allows for building customized transfer functions.
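The paper builds its transfer functions interactively in MITK; purely as an illustration of what such a color/opacity mapping looks like in code, the sketch below uses VTK (a different toolkit) with HU breakpoints chosen for exposition only, not the actual TF1-TF6 definitions:

```python
import vtk

# Illustrative HU breakpoints only; not the paper's TF definitions.
# Opacity: keep air transparent, make lung tissue semi-transparent and
# the GGO range (roughly -700 to -300 HU) progressively more visible.
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000.0, 0.00)   # air
opacity.AddPoint(-750.0, 0.05)    # healthy lung parenchyma
opacity.AddPoint(-500.0, 0.30)    # ground-glass opacity range
opacity.AddPoint(-300.0, 0.60)    # toward consolidation
opacity.AddPoint(0.0, 0.00)       # ignore soft tissue and above

# Color: separate lung texture from lesions with distinct hues.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000.0, 0.0, 0.0, 0.0)
color.AddRGBPoint(-700.0, 0.8, 0.8, 0.9)  # lung: light gray-blue
color.AddRGBPoint(-500.0, 0.9, 0.6, 0.3)  # GGO: orange
color.AddRGBPoint(-300.0, 0.6, 0.1, 0.1)  # denser lesion: dark red

volume_property = vtk.vtkVolumeProperty()
volume_property.SetScalarOpacity(opacity)
volume_property.SetColor(color)
volume_property.ShadeOn()  # lighting helps convey 3D structure
```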
Figure 3: Volume rendering of the lung using customized transfer functions. TF1 shows the boundary of the lungs but misses internal structures. TF2-TF4 show internal structures but use similar colors for both GGOs (lesions) and the lung. TF5-TF6 better separate lungs and lesions.

In our tests, we explored more than 60 different TFs. Figure 3 shows volume rendering images using six different TFs (TF1 to TF6) defined experimentally. Although they were all applied to the same CT exam, the rendered images differ significantly. TF1 highlights the outer layer of the lungs but misses internal structures. TF2 to TF6 reveal internal lesions in different ways. TF2 is defined within the lower range of values in the lung and ground-glass opacity (_i.e.,_ [-700, -300]), mapping only HU values in this interval. This function conserves a particular spatial distribution of the ground-glass opacity present in the lung but misses details on the lung texture and regions without any lesions. TF3 aims to replicate the behavior of 2D CTs that vary in a single color scale (usually gray-scale), mapping original values in the range of [-750, -200] with variations applied to the brightness. TF3 manages to preserve the lung as a three-dimensional image without losing critical information on the internal lesions caused by COVID-19, such as ground-glass opacity. TF4 to TF6 are obtained by including more colors to help differentiate the lung textures and features of the different classes. TF4 shows a thin delineation of the lung inner layer and a prominent highlighting of the bronchi, which in the presence of COVID-19 may show changes such as bronchial wall thickening [43]. For TF5 and TF6, we observe a more precise differentiation of ground-glass opacities from other characteristics, which may be interesting for the classification model as this feature is the most common imaging finding in COVID-19 patients [43]. Thus, by exploring a wide range of mapping combinations for color and opacity, each TF can highlight different regions or features of interest when applied to CT scans, such as external surfaces of the lungs as shown in TF1, arteries as shown in TF3, and ground-glass opacity as in TF5 and TF6. #### 4.2.2 Choosing the Viewing Camera Positions Volume rendering images need to capture the lungs and lesions. An important issue is the placement of the viewing camera in a location outside the 3D model of the lung. We explored placing camera positions looking at the main planes (axial, coronal, and sagittal). We discarded the sagittal plane because it renders one lung in front of the other. Due to transparency, the volume rendering of overlapping lungs might lead to combined patterns that could confuse the classification model. For the axial and coronal planes, we rendered images starting from a position orthogonal to each plane and rotated the camera position in small angular increments to capture the lungs from slightly different angles. We tested different angle increments and relied on the results from the classification model to fine-tune our choices. We captured images at increments of \(\pm\)1.2\({}^{\circ}\) along the Y- and X-axes until reaching a maximum of 12.0\({}^{\circ}\). There are 21 images for each of the Y- and X-axes, leading to 42 different images per view (_i.e.,_ coronal or axial), for a total of 84 images per CT exam. The resolution of each image was defined to be 448\(\times\)448px (Figure 2D) due to the constraints posed by the classification model network.
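The angle grid just described is simple to enumerate; a small illustrative sketch of the 84 camera configurations generated per exam:

```python
import numpy as np

# Camera sweep described above: 21 angles per axis, from -12.0 to +12.0
# degrees in 1.2-degree steps (including 0), for both base views.
angles = np.linspace(-12.0, 12.0, 21)
views = []
for base_view in ("axial", "coronal"):
    for axis in ("x", "y"):
        for a in angles:
            views.append((base_view, axis, round(float(a), 1)))

print(len(views))  # 84 rendered images per CT exam (42 per view)
```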
Examples of the resulting images from different angles are shown in Figure 4.

Figure 4: Volume rendering images used as inputs to the classification model. The first row represents images extracted for the axial plane. The third image in this row shows the (0,0) position in which the camera points to the axial view. From this position, we capture images every 1.2\({}^{\circ}\) towards the horizontal and vertical axis directions, going from -12\({}^{\circ}\) to +12\({}^{\circ}\) on each axis. The second row shows the same process for the coronal plane.

### Classification Models COVID-VR is composed of deep neural network (DNN) models developed using the Tensorflow framework (Figure 2E). In this section, we detail the architecture and training of the models, as well as their evaluation. #### 4.3.1 Model Architecture and Training We use transfer learning to improve models for COVID-19 classification. We used a pre-trained Convolutional Neural Network (CNN) model as the backbone for the architecture, such as VGG [44], ResNet [8], DenseNet [45], or EfficientNet [46]. Figure 2E shows the ResNet101 as the backbone network, represented in orange. We add a sequence of deep learning modules to this backbone to help model training. For example, following the backbone, we add a global average pooling layer, a 20% dropout layer to avoid overfitting, two fully-connected dense layers, a new 20% dropout layer, and a batch normalization layer. The final module is a dense layer with a Sigmoid activation function for the binary classification task and a Softmax activation function for the ternary classification task. The network architecture was selected empirically through several preliminary experiments. For model compilation, we use the Adam optimizer with a learning rate of \(2e-5\), with a binary (categorical) Cross-entropy loss function for binary (ternary) models. Model training and validation were conducted using stratified 5-fold cross-validation (CV) for the private datasets. We follow the competition guidelines for the public dataset and use a random 70%/30% split from the COVID-CT-MD dataset [9] to generate the train and validation sets, respecting original class distributions among splits. The model is further evaluated with three independent test sets provided by the SPGC-ICASSP competition [34], as previously described (Section 3.1). Additionally, to increase the size and variety of the dataset at the training step and reduce the chances of overfitting, we use data augmentation methods in the training partition, such as rotations up to 15\({}^{\circ}\), zoom (\(\pm\)5%), rescaling by 1/255, and width and height shifts of up to 10%. We train an individual DNN for each defined plane, either axial or coronal, based on the image views extracted from it. Moreover, despite the use of patient-level annotations for the supervised learning task, the output of our DNNs is a classification per image view. Thus, the batch of image views obtained for a given CT scan (_e.g.,_ 42 axial views or 42 coronal views) is classified by the corresponding network, specialized in either the axial or coronal view, generating a class label for each image analyzed. We obtain a distribution of class votes for each image batch, which passes through a consensus-extraction module to generate a patient-level prediction.
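A minimal TensorFlow/Keras sketch of the per-view model described above; the widths of the two dense layers are not specified in the text, so the values below are illustrative assumptions:

```python
import tensorflow as tf

def build_model(num_classes=1, dense_units=256):
    """Sketch of the Section 4.3.1 architecture: ResNet101 backbone,
    global average pooling, 20% dropout, two dense layers, 20% dropout,
    batch normalization, and a sigmoid (binary) or softmax (ternary) head."""
    backbone = tf.keras.applications.ResNet101(
        include_top=False, weights="imagenet", input_shape=(448, 448, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.Dense(dense_units, activation="relu")(x)
    x = tf.keras.layers.Dense(dense_units, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    activation = "sigmoid" if num_classes == 1 else "softmax"
    out = tf.keras.layers.Dense(num_classes, activation=activation)(x)
    model = tf.keras.Model(backbone.input, out)
    loss = ("binary_crossentropy" if num_classes == 1
            else "categorical_crossentropy")
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
                  loss=loss, metrics=["accuracy"])
    return model
```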
The classification approach is defined based on two submodels, each predicting the COVID-19 diagnosis using images obtained from a specific view of the reconstructed 3D volume, axial or coronal. We generate a patient-level consensus distribution based on the distribution of class votes received from each submodel by summing up the number of votes per class. For example, in the ternary classification task shown in Figure 2F, the 42 images generated from the axial view are classified by the corresponding submodel with the following distribution: 6 votes for class COVID-19, 21 votes for class CAP, and 15 votes for class Normal. On the other hand, the submodel trained for coronal view images assigns 36 votes for class COVID-19, three votes for class CAP, and three votes for class Normal. The ensemble-based solution generated from the sum of the submodels' votes results in 42 images classified as COVID-19, 24 images classified as CAP, and 18 images classified as Normal. Figure 2G shows the final model, which predicts a single label for the input CT scan as the class with the maximum number of votes across both views. In this example, the CT would be classified as COVID-19 by our approach.

#### 4.3.2 Model Evaluation

We use the following evaluation metrics to assess the performance of our COVID-19 models: accuracy (Acc), sensitivity (Sens, also called recall), specificity (Spec), precision (Prec), F1-score (_i.e.,_ the harmonic mean between recall and precision), and the area under the Receiver Operating Characteristic (ROC) Curve (AUC score). Given the multiclass nature of the ternary model, we adopted the micro- and macro-averages for all metrics except accuracy.

## 5 Results

This section reports the evaluation of the models proposed in COVID-VR. We first compare various backbone networks and transfer functions using the public dataset (COVID-CT-MD). We use the insights from these results in subsequent experiments. Next, we compare the classification performance of the models with state-of-the-art approaches for detecting COVID-19-related pneumonia in the public dataset. Finally, we discuss the performance of the models developed from the private datasets and compare both ternary and binary classification models. Since there is a class imbalance in the datasets, we focus our discussion on the micro-average metric for the ternary models. Full results are provided in Table 5 and Table 8.

### Selection of the Backbone Network

We tested several CNN architectures as the backbone network to perform CT scan classification with the models described in Section 4.3, including the ResNet [8], DenseNet [45], VGG [44], and EfficientNet [46] families. We explored different depths for each architecture (_i.e.,_ ResNet50, ResNet101, DenseNet121, DenseNet201, EfficientNet-B0, EfficientNet-B1, EfficientNet-B6, VGG16, and VGG19) with fixed training and validation sets contained in the COVID-CT-MD dataset. A comparison of the classification performance of the best model from each network family is given in Table 3. We use a fixed transfer function (TF6) to generate volume rendering images in all experiments. The choice of the transfer function is explained in the next section. The performance results correspond to the ternary classification models using the train and validation sets from the public dataset (_i.e.,_ COVID-CT-MD dataset). We show the performance for the COVID-19 class and the overall performance for the three classes. F1 and AUC scores are obtained with the micro-average. Additional results per class can be found in Table 5.
Our ternary model achieved the best results using the ResNet101 network as its backbone. The validation accuracy and F1-score were the highest among the best models for each network family. Both accuracy and F1-score for the overall classification in a ternary approach were 90.8%, approximately two percentage points above VGG16, which presented the second-best performance (Table 3). Regarding the AUC score, DenseNet121 achieved the highest mark of 96.5 compared to 95.4 for ResNet101. Nonetheless, observing the classification results for the COVID-19 class (Table 3), we note that ResNet101 showed the best performance for all metrics. Moreover, it achieved the best performance for CAP cases and competitive performance for Normal cases (see Table 5). Thus, we chose ResNet101 as the backbone network for COVID-VR, using it for all the subsequent experiments.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline & Metrics & VGG16 & DenseNet121 & EfficientNet-B2 & ResNet101 \\ \hline \multirow{3}{*}{Overall} & Acc & 88.8\% & 87.8\% & 86.7\% & **90.8\%** \\ & F1 & 88.6\% & 87.7\% & 86.7\% & **90.8\%** \\ & AUC & 95.6 & **96.5** & 95.1 & 95.4 \\ \hline \multirow{4}{*}{COVID-19} & Sens & **89.1\%** & 85.5\% & 83.6\% & **89.1\%** \\ & Spec & 88.4\% & **93.0\%** & 90.7\% & **93.0\%** \\ & Prec & 90.7\% & 94.0\% & 92.0\% & **94.2\%** \\ & F1 & 89.9\% & 89.5\% & 87.6\% & **91.6\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of distinct backbone network architectures in COVID-VR. Model training and validation use the train and validation sets from the COVID-CT-MD public dataset for the ternary classification task (COVID-19 vs. CAP vs. Normal).

### Selection of the Transfer Function

We compared the transfer functions discussed in Section 4.2.1 to decide which one we would use to generate volume rendering images. We used the results of the ternary classification model for this purpose, with the train and validation sets from the public dataset. Table 4 summarizes the results. TF4 and TF5 were omitted due to the similar performance obtained with TF3 and TF2, respectively. TF6 presented the best accuracy and overall F1-score performance. Although TF6 does not have the highest sensitivity, it had a balanced performance in detecting COVID-19 cases. In addition, TF6 achieved the best sensitivity for CAP and Normal cases. Figure 5 compares the ROC curves obtained for distinct TFs. Although there is a small difference between the AUC scores for TF6 and TF2, the ROC curve for TF6 achieves the highest sensitivity for a 10% false-positive rate. Therefore, we chose TF6 as the standard transfer function. Besides the visual detection of different features highlighted by each TF, this test proved relevant, as switching the TF while keeping the same model architecture results in considerable accuracy variation.

\begin{table} \begin{tabular}{c c c c c c} \hline & Metrics & TF1 & TF2 & TF3 & TF6 \\ \hline \multirow{3}{*}{Overall} & Acc & 79.6\% & 85.7\% & 87.8\% & **90.8\%** \\ & F1 & 78.7\% & 84.5\% & 87.6\% & **90.8\%** \\ & AUC & 91.8 & **96.3** & 94.7 & 95.4 \\ \hline \multirow{4}{*}{COVID-19} & Sens & 89.1\% & **96.4\%** & 89.1\% & 89.1\% \\ & Spec & 72.1\% & 72.1\% & 88.4\% & **93.0\%** \\ & Prec & 80.3\% & 81.5\% & 90.7\% & **94.2\%** \\ & F1 & 84.5\% & 88.3\% & 89.9\% & **91.6\%** \\ \hline \end{tabular} \end{table} Table 4: Comparison of distinct transfer functions in COVID-VR. Model training and validation use the train and validation sets from the COVID-CT-MD public dataset for the ternary classification task (COVID-19 vs. CAP vs. Normal).

Figure 5: ROC curves for transfer functions TF1, TF2, TF3, and TF6.

### Classification Performance for the Public Dataset

We conducted experiments to evaluate COVID-VR for the ternary classification model (COVID-19 vs. CAP vs. Normal) in the validation set and the three test sets from the COVID-CT-MD dataset (here unified in a single test set). All models use the ResNet101 backbone network and TF6 for volume rendering. We compared COVID-VR against the winner of the competition (TheSavoiours) [20], and two state-of-the-art methods, DeCoVNet [11] and COVNet [12]. Table 5 summarizes the results.

COVID-VR achieved the highest overall performance in the validation set. COVID-VR accuracy was 90.8% in contrast to 77.6% obtained with COVNet, which ranked second. In terms of AUC score, COVID-VR achieved 95.4 while COVNet achieved 89.8. COVID-VR also had the best predictive power for COVID-19 cases, reaching high and balanced sensitivity and specificity values. When comparing the CAP and Normal cases, COVID-VR had a 100% sensitivity and 94% specificity for the Normal class and 84.2% sensitivity and 97.5% specificity for the CAP class - in both cases surpassing the competing strategies. COVID-VR achieved an accuracy of 86.7% in the test set, while TheSavoiours correctly classified 90.0% of the test instances. COVID-VR was the best method for detecting COVID-19 cases (94.3% sensitivity), keeping high specificity (92.7%) and F1-score (91.7%). In contrast, TheSavoiours achieved the highest F1-score for Normal and CAP cases. The COVID-VR model is trained with labels at the patient level, while the approach presented by TheSavoiours [20] trains a model by exploring labels at the slice level. We reach similar results despite using a coarser-grained annotation of CT scans. Figure 6 compares the micro-average ROC curves for the ternary classification models using the validation set (left) and the test set (right). COVID-VR has the best performance in the validation set, notably improving the true positive rate (_i.e.,_ sensitivity) for false-positive rates ranging from 0 to 0.3. Considering the test set, the performance of the TheSavoiours model improves relative to the validation set and surpasses COVID-VR (_i.e.,_ 97.4 vs. 95.7) in the AUC score. Nonetheless, we highlight that COVID-VR had the most stable performance between validation and test sets, despite the clinical and technical differences introduced in the CT images from the SPGC-COVID Test Set [34]. Finally, we note that COVID-VR had an accuracy close to those reported by the top-ranked teams of the competition, such as IITDehli [21] with 88.9%, LLSCP [22] with 87.8%, and UniSheff_EEE [23] with 85.56%. The results for these approaches were not included in the table due to the lack of public code to reproduce the experiments.

\begin{table} \begin{tabular}{|c|l|c c|c c c c|c c c c|c c c c|} \cline{3-16} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{General} & \multicolumn{4}{c|}{COVID-19} & \multicolumn{4}{c|}{Normal} & \multicolumn{4}{c|}{CAP} \\ \cline{3-16} \multicolumn{2}{c|}{} & Accu. & Kappa & Preci. & Sensi. & Speci. & F-score & Preci. & Sensi. & Speci. & F-score & Preci. & Sensi. & Speci. & F-score \\ \hline \multirow{6}{*}{\begin{tabular}{c} Transfer \\ Function \\ Comparison \\ \end{tabular} } & TF 1 & 79.6 & 63.7 & 80.3 & 89.1 & 72.1 & 84.5 & 71.4 & 83.3 & 89.2 & 76.9 & **100.0** & 47.4 & **100.0** & 64.3 \\ & TF 2 & 85.7 & 74.0 & 81.5 & **96.4** & 72.1 & 88.3 & **95.7** & 91.7 & **98.6** & 93.6 & 90.0 & 47.4 & 98.7 & 62.1 \\ & TF 3 & 87.8 & 79.2 & 90.7 & 89.1 & 88.4 & 89.9 & 85.2 & 95.8 & 94.6 & 90.2 & 82.4 & 73.7 & 96.2 & 77.8 \\ & TF 4 & 88.8 & 80.8 & 90.9 & 90.9 & 88.4 & 90.9 & 85.2 & 95.8 & 94.6 & 90.2 & 87.5 & 73.7 & 97.5 & 80.0 \\ & TF 5 & 85.7 & 75.4 & 87.5 & 89.1 & 83.7 & 88.3 & 82.1 & 95.8 & 93.2 & 88.5 & 85.7 & 63.2 & 97.5 & 72.7 \\ & TF 6 & **90.8** & **84.6** & **94.2** & **89.1** & **93.0** & **91.6** & 85.7 & **100.0** & 94.6 & **92.3** & 88.9 & **84.2** & 97.5 & **86.5** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Architecture \\ Comparison \\ \end{tabular} } & VGG16 & 88.8 & 81.0 & 90.7 & **89.1** & 88.4 & 89.9 & **85.9** & **100.0** & **95.9** & **94.1** & 82.4 & 73.7 & 96.2 & 77.8 \\ & DenseNet121 & 87.8 & 79.7 & 94.0 & 85.5 & **93.0** & 89.5 & 82.8 & **100.0** & 93.2 & 90.6 & 78.9 & 89.9 & 94.9 & 78.9 \\ & EfficientNet-B2 & 86.7 & 78.0 & 92.0 & 83.6 & 90.7 & 87.6 & 85.7 & **100.0** & 94.6 & 92.3 & 75.0 & 78.9 & 93.7 & 76.9 \\ & ResNet101 & **90.8** & **84.6** & **94.2** & **89.1** & **93.0** & **91.6** & 85.7 & **100.0** & 94.6 & 92.3 & **88.9** & **84.2** & **97.5** & **86.5** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Method \\ Comparison \\ (Validation) \\ \end{tabular} } & DeCoVNet & 67.3 & 44.9 & 74.5 & 74.5 & 67.4 & 74.5 & 71.4 & 41.7 & **94.6** & 52.6 & 51.7 & 78.9 & 82.3 & 62.5 \\ & COVNet & 77.6 & 63.2 & 85.4 & 74.5 & 83.7 & 79.6 & 66.7 & 83.3 & 86.5 & 74.1 & 75.0 & 78.9 & 93.7 & 76.9 \\ & TheSavoiours & 74.5 & 58.8 & 86.7 & 70.9 & 86.0 & 78.0 & 51.4 & 75.0 & 77.0 & 61.0 & **88.9** & **84.2** & **97.5** & **86.5** \\ & COVID-VR & **90.8** & **84.6** & **94.2** & **89.1** & **93.0** & **91.6** & **85.7** & **100.0** & **94.6** & **92.3** & **88.9** & **84.2** & **97.5** & **86.5** \\ \hline \multirow{4}{*}{ \begin{tabular}{c} Method \\ Comparison \\ (Test) \\ \end{tabular} } & DeCoVNet & 52.2 & 30.3 & 48.8 & 57.1 & 61.8 & 52.6 & 77.8 & 20.0 & 96.4 & 31.8 & 50.0 & **100.0** & 71.4 & 66.7 \\ & COVNet & 67.8 & 50.0 & 60.0 & 77.1 & 67.3 & 67.5 & 74.1 & 57.1 & 87.3 & 64.5 & 77.8 & 70.0 & 94.3 & 73.7 \\ & TheSavoiours & **90.0** & **84.6** & **90.9** & 85.7 & **94.5** & 88.2 & 89.2 & **94.3** & 97.1 & **90.0** & **90.0** & 90.0 & **90.0** & **94.4** \\ & COVID-VR & 86.7 & 79.7 & 89.2 & **94.3** & 92.7 & **91.7** & **96.4** & 77.1 & **98.2** & 85.7 & 72.0 & 90.0 & **90.0** & 80.0 \\ \hline \end{tabular} \end{table} Table 5: Ternary classification results by class in the public COVID-CT-MD dataset. Performance metrics are in percentage.

Figure 6: Micro-average ROC curves for the ternary classification task using the public dataset: validation set (left) and test set (right).

### Classification Performance for the Private Dataset

We analyzed the performance of the methods for a binary classification task, COVID-19 vs. non-COVID-19, using the private datasets. The experiments considered two definitions for the negative class: in the first, we merged the Negative, Indeterminate, and Atypical classes into a unique non-COVID-19 class, and in the second, we only considered the original negative class (_i.e.,_ Negative for pneumonia) as the classifier's non-COVID-19 class. In both cases, the Typical classification was considered the positive class (_i.e.,_ COVID-19). Performance assessment was based on a 5-fold CV, with the same fold configuration for COVID-VR, DeCoVNet, and COVNet. COVID-VR obtained the best results in all metrics, with an accuracy of 92.2% and an AUC score of 95.6 for this binary classification task (Table 6). In comparing the COVID-19 vs. Normal classification, COVID-VR obtained accuracy and F1-score similar to COVNet, but with higher sensitivity and AUC scores (Table 7). The ROC curves are given in Figure 7, which shows that in both cases COVID-VR has the best AUC scores.

\begin{table} \begin{tabular}{c|c c c} \hline Metrics & COVID-VR & DeCoVNet & COVNet \\ \hline Acc & **92.2\%** & 87.8\% & 89.4\% \\ Sens & **83.6\%** & 78.7\% & 83.6\% \\ Spec & **96.2\%** & 92.0\% & 92.0\% \\ F1 & **87.2\%** & 80.3\% & 83.3\% \\ AUC & **95.6** & 89.2 & 93.1 \\ \hline \end{tabular} \end{table} Table 6: COVID-19 vs. Others task results. Training and validation use the private (HMV+HCPA) datasets.

\begin{table} \begin{tabular}{c|c c c} \hline Metrics & COVID-VR & DeCoVNet & COVNet \\ \hline Acc & **96.1\%** & 92.5\% & **96.1\%** \\ Sens & **96.7\%** & 91.0\% & 95.9\% \\ Spec & 95.3\% & 94.3\% & **96.2\%** \\ F1 & **96.3\%** & 92.9\% & **96.3\%** \\ AUC & **98.6** & 96.3 & 97.7 \\ \hline \end{tabular} \end{table} Table 7: COVID-19 vs. Normal task results. Training and validation use the private (HMV+HCPA) datasets.

Figure 7: ROC curves for COVID-19 against the Others class (left) and the Normal class (right) in the private dataset.

### Visual Explanation of Activations using Grad-CAM

We generated Grad-CAM images [47] to explain the learning of the COVID-VR model. Figure 8 shows the last convolutional layer activation heatmaps for the COVID-VR ternary model (COVID-19, Normal, and CAP) over the axial and coronal views of a COVID-19 patient. Grad-CAM uses the default jet colormap. Since the COVID-VR model classifies 42 images (rotated by small angles) per patient and view, each image generates an activation map for each class, illustrated in Figure 8 as thumbnail images. The axial and coronal views show the mean of the activation maps for each class from the central point of view of the camera. The Grad-CAM heatmaps show what most influenced the classification. The COVID-19 class corresponds to red areas with lesions in the axial and coronal views. The Normal class activates almost the entire image, avoiding one of the main lesions highlighted in the COVID-19 activation map in the axial and coronal views, as well as other visible lesions in the coronal view. Almost the entire lung is activated in the CAP class, but less intensely than in the Normal class.

## 6 Discussion

In summary, the main results showed that the COVID-VR architecture reached an accuracy of 90.8% and an F1-score of 90.8% in ternary classification using the COVID-CT-MD [9] public dataset. The binary classification of COVID-19 vs. others (Negative, Indeterminate, and Atypical CT images) achieved an accuracy of 92.2% with an F1-score of 87.2%. Finally, COVID-VR had an accuracy of 96.1% with an F1-score of 96.3% in the binary classification COVID-19 vs. Normal (Negative) task, using the COVID-19 class as the positive class, on the private datasets provided by partner hospitals. The experiments suggest that COVID-VR achieves the goal of learning to recognize typical COVID-19 patterns in chest CT images in comparison to other competing strategies. The COVID-VR model can help specialists in the COVID-19 diagnosis by performing a binary classification that identifies or discards typical cases of COVID-19.
Although the TheSavoiours model leads the performance for the ternary classification task, COVID-VR has competitive results and does not require labeling lesions in individual slices. In summary, COVID-VR reveals the potential of using images rendered from the exterior of the lungs in comparison to the slices used in traditional approaches. To allow our proposal to be reproduced and compared against other proposals, we made the COVID-VR model available at [https://github.com/covid-vr/covid-vr-docker](https://github.com/covid-vr/covid-vr-docker).

## 7 Conclusion and Future Work

In this work, we introduced COVID-VR, a novel 3D Volume Rendering classification architecture designed for classifying pulmonary diseases using volume-rendering images of the lungs. The architecture consists of three main modules: segmentation, volume rendering, and classification. The segmentation module removes non-lung material from the input chest CT scan, while the volume rendering module generates lung images from various angles. Unlike slice-based approaches that rely on images from specific CT slices (axial, coronal, or sagittal), the volume rendering technique offers a comprehensive view of the entire lung in each image, overcoming potential occlusions. Transparency is used to render the inner structures of the lung, and images are generated from angles that capture different views of the lung. Finally, the classification module employs a ResNet architecture to classify the volume rendering images into two or three classes (COVID-19, CAP, and Normal).

Figure 8: Grad-CAM visual activations for COVID-VR using a COVID-19 patient from the COVID-CT-MD dataset. The first column displays the volume rendering image. The second column shows heatmaps indicating which areas of the input image most activated the model for the class COVID-19, leading it to correctly classify that patient. In contrast, the third and fourth columns show the heatmaps of the activation maps for the Normal and CAP classes, respectively. The first and last rows show thumbnails of all the activation maps.

To evaluate the effectiveness of our approach, we conducted experiments using the publicly available COVID-CT-MD dataset and a private dataset from partner hospitals. The classification results are compared against competing strategies, demonstrating that COVID-VR achieves competitive classification results without requiring the labeling of lesions in individual slices. We recognize that there is still room for improvement in the COVID-VR model. One area to explore further is the generation of transfer functions using deep learning methods. Additionally, we aim to investigate the application of volume-rendered images in other classification scenarios, expanding the potential of our approach beyond pulmonary diseases.

## Acknowledgments

This work was partially financed by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, FAPERGS 20/2551-0000254-3 and CNPq 140313/2017-6.
2301.02431
Equilibrium Spacetime Correlations of the Toda Lattice on the Hydrodynamic Scale
We report on molecular dynamics simulations of spacetime correlations of the Toda lattice in thermal equilibrium. The correlations of stretch, momentum, and energy are computed numerically over a wide range of pressure and temperature. Our numerical results are compared with the predictions from linearized generalized hydrodynamics on the Euler scale. The system size is N = 3000, 4000 and time t = 600, at which ballistic scaling is well confirmed. With no adjustable parameters, the numerically obtained scaling functions agree with the theory within a precision of less than 3.5%.
Guido Mazzuca, Tamara Grava, Thomas Kriecherbauer, Kenneth T-R McLaughlin, Christian B. Mendl, Herbert Spohn
2023-01-06T09:31:00Z
http://arxiv.org/abs/2301.02431v1
# Equilibrium Spacetime Correlations of the Toda Lattice on the Hydrodynamic Scale

###### Abstract

We report on molecular dynamics simulations of spacetime correlations of the Toda lattice in thermal equilibrium. The correlations of stretch, momentum, and energy are computed numerically over a wide range of pressure and temperature. Our numerical results are compared with the predictions from linearized generalized hydrodynamics on the Euler scale. The system size is \(N=3000,4000\) and time \(t=600\), at which ballistic scaling is well confirmed. With no adjustable parameters, the numerically obtained scaling functions agree with the theory within a precision of less than \(3.5\%\).

## 1 Introduction

A central goal of Statistical Mechanics is to explore the structure of equilibrium correlations for observables of physical interest. These could be static correlations, but more ambitiously also correlations in spacetime. An interesting, but very fine-tuned, class of Hamiltonians are integrable many-body systems, either classical or quantum. This choice restricts us to systems in one dimension. Then, generically, static correlations have exponential decay whether the model is integrable or not. However, the dynamics of correlations is entirely different. In nonintegrable chains correlations propagate as a few narrow peaks at constant speed which then show characteristic sub-ballistic broadening. On the other hand, for integrable models correlations still spread ballistically but now with a broad spectrum of velocities. Such behaviour was confirmed through a molecular dynamics (MD) simulation of the Ablowitz-Ladik model [32], an integrable discretization of the nonlinear Schrödinger equation. A further confirmation came from the simulation of the Toda chain [22]. On the theoretical side, the 2016 construction of generalized hydrodynamics (GHD) was an important breakthrough [3, 6]. This theory provides a powerful tool through which, at least in principle, the precise form of the spectrum of correlations can be predicted. With such a development, MD simulations can also be viewed as probing the validity of GHD. From the side of condensed matter physics, integrable quantum models have received considerable attention. Because of size limitations, simulations of macroscopic profiles are preferred. But time correlations have also been studied through DMRG simulations [4, 5, 8, 34]. In recent years, attention has been given to the spacetime spin-spin correlation of the XXZ model at half-filling and at the isotropic point [10, 20, 25]. The same quantity has also been investigated for a discrete classical chain with 3-component spins of unit length and interactions such that the model is integrable [7]. A comparable situation occurs for the classical sinh-Gordon equation, which is integrable as a nonlinear continuum wave equation and possesses an integrable discretization, see [2] for MD simulations for equilibrium time correlations of the discrete model. In our contribution we study the correlations of the Toda chain in thermal equilibrium through MD simulations and compare with predictions from GHD. We will comment on the connection to [22] in the last section. To make our article reasonably self-contained, we first discuss the Landau-Lifshitz theory for nonintegrable chains. This theory provides the connection between spacetime correlations and linearized hydrodynamics. For the Toda chain, the theory has to be extended so as to accommodate an infinite number of conserved fields.
We report on MD simulations of the Toda chain and compare with linearized GHD. ## 2 Landau-Lifshitz theory The dynamics of the Toda chain is governed by the Hamiltonian \[H=\sum_{j\in\mathbb{Z}}\big{(}\tfrac{1}{2}p_{j}^{2}+\exp(-(q_{j+1}-q_{j})) \big{)}, \tag{1}\] where \((q_{j},p_{j})\in\mathbb{R}^{2}\) are position and momentum of the \(j\)-th particle [43, 44]. Introducing the \(j\)-th stretch (free volume) through \(r_{j}=q_{j+1}-q_{j}\), the equations of motion read \[\frac{d}{dt}r_{j}=p_{j+1}-p_{j}\,,\qquad\frac{d}{dt}p_{j}=-\mathrm{e}^{-r_{j}} +\mathrm{e}^{-r_{j-1}},\qquad j\in\mathbb{Z}. \tag{2}\] By tradition, one introduces coefficients for the range and strength of the interaction potential through \((g/\gamma)\exp(-\gamma(q_{j+1}-q_{j}))\). However, by a suitable change of spacetime scales, the form (2) can be regained, see the discussion in Section 5. The Toda hamiltonian has no free parameters. Since the equilibrium measure for (1) is of product form, static correlations are easily accessible. Time correlations are more challenging, see [36, 37] for early attempts. A novel approach has been developed, now known as GHD. The guiding idea is to first identify the hydrodynamic equations for the Toda chain, which by necessity are a set of nonlinear coupled hyperbolic conservation laws. Given such an input one can construct the corresponding Landau-Lifshitz theory [13, 24], as based on linearized GHD. Before entering into details, it will be useful to first recall the Landau-Lifshitz theory for a chain with a generic interaction potential, denoted by \(V\) (for the Toda lattice \(V(x)=\mathrm{e}^{-x}\)), see [38] and references listed therein. Thus in (1) the interaction term reads \(V(q_{j+1}-q_{j})\) and the equations of motion become \[\frac{d}{dt}r_{j}=p_{j+1}-p_{j}\,,\qquad\frac{d}{dt}p_{j}=V^{\prime}(r_{j})-V^{ \prime}(r_{j-1}).\] To define spacetime correlations we first have to specify the random initial data modelling thermal equilibrium. By Galileian invariance one restricts to the case of zero average momentum. Then the Gibbs states are characterized by the inverse temperature \(\beta>0\) and a parameter \(P\) such that the physical pressure equals \(P/\beta\). For simplicity, we will also refer to \(P\) as pressure. The allowed range of \(P\) depends on \(V\). If \(V\) diverges faster than \(|x|\) for \(|x|\to\infty\), then \(P\in\mathbb{R}\). For the Toda lattice \(P>0\) because of the one-sided divergence of the exponential. In thermal equilibrium \(\{(r_{j},p_{j}),j\in\mathbb{Z}\}\) are a collection of i.i.d. random variables with single site probability density \[Z_{0}(P,\beta)^{-1}\exp\bigl{(}-\beta\bigl{(}\tfrac{1}{2}p_{0}^{2}+V(r_{0}) \bigr{)}-Pr_{0}\bigr{)}. \tag{3}\] Here \(Z_{0}(P,\beta)\) is the normalizing partition function. Note that, with our convention, \(P\) and \(\beta\) appear linearly in the exponent. Expectations with respect to such i.i.d. random variables are denoted by \(\langle\cdot\rangle_{P,\beta}\). We also shorten the notation for the covariance through \(\langle X_{1}X_{2}\rangle_{P,\beta}^{\mathrm{c}}=\langle X_{1}X_{2}\rangle_{P, \beta}-\langle X_{1}\rangle_{P,\beta}\langle X_{2}\rangle_{P,\beta}\), where the particular random variables \(X_{1},X_{2}\) will be obvious from the context. For general \(V\), the conserved fields are stretch, momentum, and energy with densities \[\vec{Q}(j)=\bigl{(}r_{j},p_{j},e_{j}\bigr{)},\qquad e_{j}=\tfrac{1}{2}p_{j}^{ 2}+V_{j}, \tag{4}\] using as shorthand \(V_{j}=V(r_{j})\). 
\(\vec{Q}\) is a three-vector with components labeled by \(n=0,1,2\). The static space correlator is defined through \[\mathcal{C}_{m,n}(j)=\langle Q_{m}(j)Q_{n}(0)\rangle_{P,\beta}^{\mathrm{c}} \tag{5}\] and the static susceptibility by summing over space, \[\mathcal{C}_{m,n}=\sum_{j\in\mathbb{Z}}\langle Q_{m}(j)Q_{n}(0)\rangle_{P,\beta}^{\mathrm{c}},\] \(m,n=0,1,2\). Since the underlying measure is product, only the \(j=0\) term is nonvanishing and \[\mathcal{C}=\begin{pmatrix}\langle r_{0}r_{0}\rangle_{P,\beta}^{\mathrm{c}}&0&\langle r_{0}e_{0}\rangle_{P,\beta}^{\mathrm{c}}\\ 0&\langle p_{0}p_{0}\rangle_{P,\beta}^{\mathrm{c}}&0\\ \langle r_{0}e_{0}\rangle_{P,\beta}^{\mathrm{c}}&0&\langle e_{0}e_{0}\rangle_{P,\beta}^{\mathrm{c}}\end{pmatrix},\] the zero entries resulting from \(\langle p_{0}\rangle_{P,\beta}=0\), \(\langle p_{0}^{3}\rangle_{P,\beta}=0\), and \(r_{0},p_{0}\) being independent random variables. Later on we will need the statistics of the conserved fields on the hydrodynamic scale. More precisely, for smooth test functions \(f\), we consider the random field \[\vec{\xi}_{\epsilon}(f)=\sqrt{\epsilon}\sum_{j\in\mathbb{Z}}f(\epsilon j)\bigl{(}\vec{Q}(j)-\langle\vec{Q}(0)\rangle_{P,\beta}\bigr{)}.\] Then, by the central limit theorem for independent random variables, \[\lim_{\epsilon\to 0}\vec{\xi}_{\epsilon}(f)=\int_{\mathbb{R}}\mathrm{d}xf(x)\vec{u}(x),\] where the limit field \(\vec{u}(x)\) is a Gaussian random field on \(\mathbb{R}\) with mean zero, \(\mathbb{E}(\vec{u}(x))=0\), and covariance \[\mathbb{E}(u_{m}(x)u_{n}(x^{\prime}))=\mathcal{C}_{m,n}\delta(x-x^{\prime}), \tag{6}\] in other words, \(\vec{u}(x)\) is Gaussian white noise with correlated components. Microscopically, spacetime correlations are defined by evolving one of the observables to time \(t\), which yields \[\mathcal{S}_{m,n}(j,t)=\langle Q_{m}(j,t)Q_{n}(0,0)\rangle^{\mathrm{c}}_{P,\beta}. \tag{7}\] Note that the Gibbs measure is spacetime stationary and thus without loss of generality both arguments in \(Q_{n}\) in (7) can be taken as \((0,0)\). To understand the structure of \(\mathcal{S}_{m,n}\) one has to rely on approximations. For the long time ballistic regime a standard scheme is the Landau-Lifshitz theory, which views \(Q_{n}(0,0)\) as a small perturbation of the initial Gibbs measure at the origin. This perturbation will propagate and is then probed by the average of \(Q_{m}\) at the spacetime point \((j,t)\). For large \((j,t)\) the microscopic dynamics is approximated by the Euler equations, but only in their linearized version since the perturbation is small. More concretely, the approximate theory will be a continuum field \(\vec{u}(x,t)\) over \(\mathbb{R}\times\mathbb{R}\), which is governed by \[\partial_{t}\vec{u}(x,t)+\mathcal{A}\partial_{x}\vec{u}(x,t)=0\,, \tag{8}\] with random initial conditions as specified in (6). The \(3\times 3\) matrix \(\mathcal{A}\) is constant, i.e. independent of \((x,t)\). To explain the structure of \(\mathcal{A}\) requires some further effort. We refer to [38] for more details and proofs of the key identities. From the equations of motion one infers that to each density \(Q_{n}(j,t)\) there is a current density \(J_{n}(j,t)\) such that \[\frac{d}{dt}Q_{n}(j,t)+J_{n}(j+1,t)-J_{n}(j,t)=0.\] Explicitly, the current densities are \[\vec{J}(j)=-(p_{j},V^{\prime}_{j-1},p_{j}V^{\prime}_{j-1}), \tag{9}\] where we adopted the convention that omission of the time argument \(t\) means time \(0\) fields.
One then defines the static current-conserved field correlator \[\mathcal{B}_{m,n}(j)=\langle J_{m}(j)Q_{n}(0)\rangle^{\mathrm{c}}_{P,\beta}, \tag{10}\] and the corresponding susceptibility \[\mathcal{B}_{m,n}=\sum_{j\in\mathbb{Z}}\langle J_{m}(j)Q_{n}(0)\rangle^{\mathrm{c}}_{P,\beta}.\] Despite its asymmetric looking definition, \[\mathcal{B}_{m,n}=\mathcal{B}_{n,m}. \tag{11}\] As a general property, Euler equations are built on thermally averaged currents. Linearizing them with respect to the average fields yields \[\mathcal{A}=\mathcal{B}\mathcal{C}^{-1}.\] Here \(\mathcal{B}\) appears when differentiating the average currents with respect to the chemical potentials and \(\mathcal{C}^{-1}\) when switching from intensive to extensive variables. By construction \(\mathcal{C}=\mathcal{C}^{\mathrm{T}}\) and \(\mathcal{C}>0\), in addition \(\mathcal{B}=\mathcal{B}^{\mathrm{T}}\) according to (11). Hence \[\mathcal{A}=\mathcal{C}^{1/2}\mathcal{C}^{-1/2}\mathcal{B}\mathcal{C}^{-1/2}\mathcal{C}^{-1/2},\] which ensures that \(\mathcal{A}\) has real eigenvalues and a complete set of left-right eigenvectors. Anharmonic lattices are symmetric under time reversal, which implies the eigenvalues \(\vec{c}=(-c,0,c)\), with \(c>0\) the isentropic speed of sound. We denote the right, resp. left, eigenvectors of \(\mathcal{A}\) by \(|\psi_{\alpha}\rangle\) and \(\langle\tilde{\psi}_{\alpha}|\), \(\alpha=0,1,2\). With this input the solution to (8) with initial conditions (6) reads \[\mathcal{S}^{\mathrm{LL}}_{m,n}(x,t)=\mathbb{E}\big{(}u_{m}(x,t)u_{n}(0,0)\big{)}=(\delta(x-\mathcal{A}t)\,\mathcal{C})_{m,n}=\sum_{\alpha=0}^{2}\delta(x-c_{\alpha}t)(|\psi_{\alpha}\rangle\langle\tilde{\psi}_{\alpha}|\,\mathcal{C})_{m,n} \tag{12}\] with \(m,n=0,1,2\). There are three \(\delta\)-peaks, the heat peak standing still and two sound peaks propagating in opposite directions with speed \(c\). Specifying \(m,n\), each peak has a signed weight which depends on \(\mathcal{C}\) and the left-right eigenvectors of \(\mathcal{A}\). The Landau-Lifshitz theory asserts that the microscopic correlator \[\mathcal{S}_{m,n}(j,t)\simeq\mathcal{S}^{\mathrm{LL}}_{m,n}(x,t)\] for \(j=\lfloor xt\rfloor\), \(\lfloor\cdot\rfloor\) denoting the integer part, with \(t\) sufficiently large. The reader might be disappointed by the conclusion. But with such basic information the fine-structure of the peaks can be investigated, in particular their specific sub-ballistic broadening and corresponding scaling functions [31, 38, 39]. When turning to the Toda lattice, the conservation laws are now labeled by \(n=0,1,...\) and thus \(\mathcal{A},\mathcal{B},\mathcal{C}\) become infinite dimensional matrices. The corresponding Landau-Lifshitz theory has been worked out in [40]. As to be discussed in the following section, with appropriate adjustments Eq. (12) is still valid.
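For a generic anharmonic chain the computation just outlined involves only \(3\times 3\) matrices, so the diagonalization is a few lines of linear algebra. A minimal sketch (Python/NumPy; the static inputs \(B\) and \(C\) are assumed to have been computed from the equilibrium averages) reads:

```python
import numpy as np

def euler_peaks(B, C):
    """Diagonalize A = B C^{-1} through the symmetric matrix
    C^{-1/2} B C^{-1/2}; returns the real peak velocities c_alpha
    and the weight matrices |psi_alpha><psi~_alpha| C of Eq. (12)."""
    lam, U = np.linalg.eigh(C)                    # C = C^T > 0
    Chalf = (U * np.sqrt(lam)) @ U.T              # C^{1/2}
    Cmhalf = (U / np.sqrt(lam)) @ U.T             # C^{-1/2}
    c, V = np.linalg.eigh(Cmhalf @ B @ Cmhalf)    # same spectrum as A
    psi = Chalf @ V                               # right eigenvectors of A
    psi_tilde = Cmhalf @ V                        # left eigenvectors of A
    weights = [np.outer(psi[:, a], psi_tilde[:, a]) @ C for a in range(len(c))]
    return c, weights
```

The symmetric reduction \(\mathcal{C}^{-1/2}\mathcal{B}\mathcal{C}^{-1/2}\) used in the sketch is precisely what guarantees the real velocities in the decomposition above.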
## 3 Toda lattice, linearized generalized hydrodynamics

The conservation laws of the Toda lattice are obtained from a Lax matrix [11, 26]. For this purpose, we first introduce the Flaschka variables \[a_{j}=\mathrm{e}^{-r_{j}/2}.\] Then the equations of motion become \[\frac{d}{dt}a_{j}=\tfrac{1}{2}a_{j}(p_{j}-p_{j+1}),\quad\frac{d}{dt}p_{j}=a_{j-1}^{2}-a_{j}^{2}. \tag{13}\] The Lax matrix, \(L\), is defined by \[L_{j,j}=p_{j},\qquad L_{j,j+1}=L_{j+1,j}=a_{j},\] \(j\in\mathbb{Z}\), and \(L_{i,j}=0\) otherwise. Clearly \(L=L^{\mathrm{T}}\). The conserved fields are labelled by nonnegative integers and have densities given by \[Q_{0}(j)=r_{j},\qquad Q_{n}(j)=(L^{n})_{j,j}\,, \tag{14}\] with \(n\geq 1\). Note that \(Q_{n}(j)\) is local in the sense that it depends only on the variables with indices in the interval \([j-n,j+n]\). An explicit expression for these quantities is given in [15]. For the current densities one obtains \[J_{0}(j)=-p_{j},\qquad J_{n}(j)=(L^{n}L^{\downarrow})_{j,j},\quad n=1,2,...\,, \tag{15}\] where \(L^{\downarrow}\) is the lower triangular part of \(L\). Then under the Toda dynamics \[\frac{d}{dt}Q_{n}(j,t)+J_{n}(j+1,t)-J_{n}(j,t)=0,\] which is the \(n\)-th conservation law in local form. The first items in the list are stretch and momentum, for which our current definitions agree with those in (4), (9). However, for \(n=2\) one obtains \((L^{2})_{0,0}=p_{0}^{2}+a_{-1}^{2}+a_{0}^{2}\) and \((L^{2}L^{\downarrow})_{0,0}=a_{-1}^{2}(p_{-1}+p_{0})\), which differs from (4), (9) on two accounts. First there is the trivial factor of 2. In our numerical plots we use the physical energy density \(e_{j}\). The second point is more subtle. Densities are not uniquely defined, since one can add a difference of some local function and its shift by one. When summing a particular choice for the density over some spatial interval, the result differs from another choice of the density by a boundary term only. Thus the bulk term will have a correction of order \(1/(\mbox{length of interval})\), which does not affect the hydrodynamic equations. For the currents the difference can be written as a total time derivative, which is again a boundary term when integrating over some time interval. In this section we adopt the conventions (14) and (15), since the analysis heavily relies on the Lax matrix. Beyond \(n=2\), while the fields no longer have a name, they still have to be taken into account in a hydrodynamic theory. The infinite volume static field-field correlator is defined as in (5) and the current-field correlator as in (10). In particular, \(B=B^{\rm T}\). Of course, \(C,B\) are now matrices in the Hilbert space of sequences indexed by \(\mathbb{N}_{0}\), i.e. the space \(\ell_{2}(\mathbb{N}_{0})\). To distinguish \(3\times 3\) matrices from their infinite dimensional counterparts, for the latter we use standard italic symbols. The spacetime correlator of the Toda lattice is defined by \[S_{m,n}(j,t)=\langle Q_{m}(j,t)Q_{n}(0,0)\rangle_{P,\beta}^{\rm c} \tag{16}\] and we plan to construct its Landau-Lifshitz approximation. In essence this amounts to an analysis of \[\left({\rm e}^{At}C\right)_{m,n},\qquad A=BC^{-1}. \tag{17}\] While we are mainly interested in the physical fields corresponding to the indices \(m,n=0,1,2\), for the operator in (17) an understanding of the infinite dimensional matrices is required. Starting from the basics, the free energy of the Toda lattice is given by \[F_{\rm eq}(P,\beta)=\log\sqrt{\beta/2\pi}+P\log\beta-\log\Gamma(P).\] In particular, the average stretch, \(\nu\), is determined through \[\nu(P,\beta)=\partial_{P}F_{\rm eq}(P,\beta)=\langle Q_{0}(0)\rangle_{P,\beta}=\log\beta-\psi(P), \tag{18}\] with \(\psi\) the digamma function. Expectations of higher order fields can be written as moments of a probability measure denoted by \(\nu\rho_{\mathsf{p}}\), \[\kappa_{n}=\langle Q_{n}(0)\rangle_{P,\beta}=\int_{\mathbb{R}}{\rm d}w\,\nu\rho_{\mathsf{p}}(w)w^{n}, \tag{19}\] \(n\geq 1\). \(\rho_{\mathsf{p}}\) is called the particle density.
To determine this density one first has to solve the thermodynamic Bethe equations (TBA). For this purpose we introduce the integral operator \[Tf(w)=2\int_{\mathbb{R}}\mathrm{d}w^{\prime}\log|w-w^{\prime}|f(w^{\prime}),\] \(w\in\mathbb{R}\), considered as an operator on \(L^{2}(\mathbb{R},\mathrm{d}w)\), and define the number density \[\rho_{\sf n}(w)=\mathrm{e}^{-\varepsilon(w)}, \tag{20}\] with quasi-energies \(\varepsilon\). The quasi-energies satisfy the TBA equation \[\varepsilon(w)=\tfrac{1}{2}\beta w^{2}-\mu-(T\mathrm{e}^{-\varepsilon})(w), \tag{21}\] where the chemical potential \(\mu\) has to be adjusted such that \[\int_{\mathbb{R}}\mathrm{d}w\rho_{\sf n}(w)=P. \tag{22}\] Thereby the number density depends on the parameters \(P\) and \(\beta\). The TBA equation is closely connected to the \(\beta\)-ensemble of random matrix theory. We rewrite (21) as \[-\log\rho_{\sf n}(w)=\tfrac{1}{2}\alpha w^{2}-\mu-\alpha P(T\rho_{\sf n})(w).\] As \(\alpha\to\infty\), the entropy term on the left-hand side can be neglected and one recognizes the defining equation for the Wigner semi-circle law on the interval \([-2\sqrt{P},2\sqrt{P}]\). The Lax DOS is the \(P\)-derivative of \(\rho_{\sf n}\), which diverges as \((w\pm 2\sqrt{P})^{-1/2}\) at the two borders. As \(\alpha\) is lowered the borders become smeared, to eventually cross over to a Gaussian. In practice, the TBA equation has to be solved numerically. But for thermal equilibrium an exact solution is available [1, 12, 35]. Denoting the solution of (21) for \(\beta=1\) and the constraint (22) by \(\rho_{\sf n}^{*}\), one has \[\rho_{\sf n}^{*}(w)=\frac{\mathrm{e}^{-w^{2}/2}}{\sqrt{2\pi}|\hat{f}_{P}(w)|^{2}},\quad\hat{f}_{P}(w)=\int_{0}^{\infty}\mathrm{d}tf_{P}(t)\mathrm{e}^{\mathrm{i}wt},\quad f_{P}(t)=\sqrt{2}\pi^{-1}\Gamma(P)^{-1/2}t^{P-1}\mathrm{e}^{-\frac{1}{2}t^{2}}. \tag{23}\] In our numerical simulations it is advantageous to use the exact solution. The TBA equation is a standard tool from GHD as one way to write the Euler-Lagrange equations for the variational principle associated with the generalized free energy. For the Toda lattice such a variational formula was obtained in [9, 42]. Proofs using methods from the theory of large deviations and the transfer operator method have also become available [16, 27, 29, 30]. Next we introduce the dressing transformation of some function \(f\) by \[f^{\mathrm{dr}}=\big{(}1-T\rho_{\sf n}\big{)}^{-1}f\] with \(\rho_{\sf n}\) regarded as a multiplication operator. Then number and particle density are related as \[\rho_{\sf n}(w)=\frac{\rho_{\sf p}(w)}{1+T\rho_{\sf p}(w)} \tag{24}\] with inverse \[\rho_{\sf p}=(1-\rho_{\sf n}T)^{-1}\rho_{\sf n}=\rho_{\sf n}\varsigma_{0}^{\mathrm{dr}}, \tag{25}\] using the convention \(\varsigma_{n}(w)=w^{n}\). For the average currents similar identities are available. The central novel quantity is the effective velocity \[v^{\rm eff}=\frac{\varsigma_{1}^{\rm dr}}{\varsigma_{0}^{\rm dr}}, \tag{26}\] see [45, 3, 6, 41]. Then \[\langle J_{0}(0)\rangle_{P,\beta}=-\kappa_{1},\] and, for \(n\geq 1\), \[\langle J_{n}(0)\rangle_{P,\beta}=\int_{\mathbb{R}}\mathrm{d}w\rho_{\mathsf{p}}(w)(v^{\rm eff}(w)-\kappa_{1})w^{n}.\] In thermal equilibrium we have \(\kappa_{1}=0\).
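As an aside on the numerics, the exact thermal solution (23) can be evaluated by straightforward quadrature. A minimal sketch (Python/SciPy; the half-line Fourier integral is split into real and imaginary parts, and the grid limits and tolerance are illustrative choices) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rho_n_star(w, P):
    """Thermal number density at beta = 1, Eq. (23).
    Note: for small P the t**(P-1) endpoint singularity of f_P
    may require extra care in the quadrature."""
    pref = np.sqrt(2.0) / (np.pi * np.sqrt(gamma(P)))
    f = lambda t: pref * t ** (P - 1.0) * np.exp(-0.5 * t ** 2)
    re, _ = quad(lambda t: f(t) * np.cos(w * t), 0.0, np.inf)
    im, _ = quad(lambda t: f(t) * np.sin(w * t), 0.0, np.inf)
    return np.exp(-0.5 * w ** 2) / (np.sqrt(2.0 * np.pi) * (re ** 2 + im ** 2))

# sanity check of the constraint (22): the total integral equals P
P = 1.5
total, _ = quad(lambda w: rho_n_star(w, P), -10.0, 10.0)
assert abs(total - P) < 1e-2
```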
Since in the following there will be many integrals over \(\mathbb{R}\), let us first introduce the abbreviation \[\langle f\rangle=\int_{\mathbb{R}}\mathrm{d}wf(w).\] With this notation the \(C\) matrix turns out to be of the form \[C_{0,0}=\nu^{3}\langle\rho_{\mathsf{p}}\varsigma_{0}^{\rm dr}\varsigma_{0}^{\rm dr}\rangle,\] \[C_{0,n}=C_{n,0}=-\nu^{2}\langle\rho_{\mathsf{p}}\varsigma_{0}^{\rm dr}(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}\rangle,\] \[C_{m,n}=\nu\langle\rho_{\mathsf{p}}(\varsigma_{m}-\kappa_{m}\varsigma_{0})^{\rm dr}(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}\rangle,\] \(m,n\geq 1\). Note that the matrix \(C\) has the block structure \[C=\begin{pmatrix}C_{0,0}&C_{0,n}\\ C_{m,0}&C_{m,n}\end{pmatrix}, \tag{28}\] in the sense that \(C_{m,n}\) for \(m,n\geq 1\) follows a simple pattern. This structure will reappear for \(B\) and \(\mathrm{e}^{At}C\). The field-current correlator \(B\) can be computed in a similar fashion with the result \[B_{0,0}=\nu^{2}\langle\rho_{\mathsf{p}}(v^{\rm eff}-\kappa_{1})\varsigma_{0}^{\rm dr}\varsigma_{0}^{\rm dr}\rangle,\] \[B_{0,n}=B_{n,0}=-\nu\langle\rho_{\mathsf{p}}(v^{\rm eff}-\kappa_{1})\varsigma_{0}^{\rm dr}(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}\rangle,\] \[B_{m,n}=\langle\rho_{\mathsf{p}}(v^{\rm eff}-\kappa_{1})(\varsigma_{m}-\kappa_{m}\varsigma_{0})^{\rm dr}(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}\rangle.\] As in (12), we want to determine the propagator of the Landau-Lifshitz theory, denoted by \(S^{\rm LL}_{m,n}(x,t)\). In principle, all pieces have been assembled. However, computing the exponential of \(A\) requires its diagonalization. Details can be found in [40] and we only mention that one constructs a linear similarity transformation, \(R\), such that \(R^{-1}AR\) is multiplication by \[\nu^{-1}(v^{\rm eff}(w)-\kappa_{1}) \tag{30}\] in \(L^{2}(\mathbb{R},\mathrm{d}w)\). Here \(v^{\rm eff}\) is the effective velocity defined in (26). Using the block convention as in (28), the spacetime correlator in the Landau-Lifshitz approximation is given by \[S^{\rm LL}(x,t)=\int_{\mathbb{R}}\mathrm{d}w\delta\big{(}x-t\nu^{-1}(v^{\rm eff}(w)-\kappa_{1})\big{)}\nu\rho_{\mathsf{p}}(w)\begin{pmatrix}\nu^{2}\varsigma_{0}^{\rm dr}(w)^{2}&\nu\varsigma_{0}^{\rm dr}(w)(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}(w)\\ \nu\varsigma_{0}^{\rm dr}(w)(\varsigma_{m}-\kappa_{m}\varsigma_{0})^{\rm dr}(w)&(\varsigma_{m}-\kappa_{m}\varsigma_{0})^{\rm dr}(w)(\varsigma_{n}-\kappa_{n}\varsigma_{0})^{\rm dr}(w)\end{pmatrix}. \tag{31}\] Note that \(S^{\rm LL}(x,0)=\delta(x)C\). As a property of the Euler equations, the expression (31) possesses exact ballistic scaling, \[S^{\rm LL}_{m,n}(x,t)=\frac{1}{t}S^{\rm LL}_{m,n}(x/t,1). \tag{32}\] The correlator \(S_{m,n}(j,t)\) is computed in our MD simulations, which will then be compared with \(S^{\rm LL}_{m,n}(x,t)\).

## 4 Numerical simulations

For a molecular dynamics simulation one has to first specify a finite ring \([1,\ldots,N]\) with suitable boundary conditions. For the dynamics of positions \(q_{j}\) and momenta \(p_{j}\) one imposes \[q_{N+1}=q_{1}+\nu N. \tag{33}\] The parameter \(\nu\) fixes the free volume per particle and can have either sign. In our simulation, we actually allow for a fluctuating free volume by choosing random initial conditions such that \(\{r_{1},p_{1},\ldots,r_{N},p_{N}\}\) are i.i.d. random variables with a single site distribution as specified in (3).
Then the deterministic time evolution is governed by (13) with boundary conditions \[r_{0}=r_{N},\qquad p_{N+1}=p_{1}.\] In fact, the boundary condition in (33) amounts to the micro-canonical constraint \[\sum_{j=1}^{N}r_{j}=\nu N.\] If one sets \(\nu=\langle Q_{0}(0)\rangle_{P,\beta}\), then for large \(N\), by the equivalence of ensembles, the two schemes for sampling the correlator \(S_{m,n}(j,t)\) should differ by the size of statistical fluctuations. For a few representative examples we checked that indeed the equivalence of ensembles holds for the particular observables under study.

Returning to the choice of system size, there is an important physical constraint. In all simulations one observes a sharp right and left front, which travel with constant speed and beyond which spatial correlations are exponentially small. On a ring the two fronts will necessarily collide after some time. Such an encounter has a noticeable effect on the molecular dynamics which is not captured by the linearized GHD analysis. Therefore the simulation time is limited by the time of first collision. Indeed, we note in Figures 1-3 that both linearized GHD and MD clearly display maximal speeds of at most \(\Delta j/\Delta t=2\) for the entire range of \((P,\beta,m,n)\) displayed in these figures. Taking into account that the initial correlations are proportional to \(\delta_{0j}\), the two fronts, moving in opposite directions with speeds of at most \(2\), meet only after jointly traversing the ring, i.e. at time \(N/4\). We conclude that for a ring of size \(N=3000\) there will be no collision of the two fronts up to time \(t=750\), which is larger than the time \(t=600\) used in our simulations. Before displaying and discussing our results, we provide more details on numerically solving the TBA equations and on the actual scheme used for MD.

### Details of the numerical implementation

#### 4.1.1 Solving linearized GHD

To numerically solve the linearized GHD equations, we use a numerical method similar to the one from [33]. First, Eq. (23) can be expressed in terms of the parabolic cylinder function \(D_{\nu}(z)\), which is readily available in Mathematica. This provides the solution to the TBA equations (21), (22). Then, we use a simple finite element discretization of the \(w\)-dependent functions by hat functions, resulting in piecewise linear functions on a uniform grid. After precomputing the integral operator \(T\) for such hat functions, the dressing transformation becomes a linear system of equations, which can be solved numerically. This procedure yields \(\varsigma_{n}^{\mathrm{dr}}\), and subsequently \(\rho_{\mathsf{p}}\) via (25) and \(v^{\mathrm{eff}}\) via (26). The moments can be computed from \(\kappa_{n}=\int_{\mathbb{R}}\mathrm{d}w\nu\rho_{\mathsf{n}}(w)\varsigma_{n}^{\mathrm{dr}}(w)\), or (equivalently) Eq. (19). To evaluate the correlator in (31), we note that the delta-function in the integrand results in a parametrized curve, with the first coordinate (corresponding to \(x/t\)) equal to \(\tilde{v}^{\mathrm{eff}}=\nu^{-1}(v^{\mathrm{eff}}-\kappa_{1})\) from (30), and the second coordinate equal to the remaining terms in the integrand divided by the Jacobi factor \(|\frac{\mathrm{d}}{\mathrm{d}w}\tilde{v}^{\mathrm{eff}}(w)|\) resulting from the delta-function.
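A crude stand-in for this discretization, with the trapezoidal rule in place of hat functions and the logarithmic singularity on the diagonal replaced by its cell average (a simplification made here for brevity; the production scheme is the finite element one described above), might look as follows:

```python
import numpy as np

def dress(f, rho_n, w):
    """Solve (1 - T rho_n) f_dr = f on a uniform grid w, with
    (T g)(w) = 2 * integral of log|w - w'| g(w') dw' and rho_n acting
    as a multiplication operator, cf. the dressing transformation."""
    dw = w[1] - w[0]
    D = np.abs(w[:, None] - w[None, :])
    np.fill_diagonal(D, dw / (2.0 * np.e))     # cell average of log|w - w'|
    T = 2.0 * np.log(D) * dw
    K = np.eye(len(w)) - T * rho_n[None, :]    # matrix for (1 - T rho_n)
    return np.linalg.solve(K, f)

# effective velocity (26), assuming rho_n is known on the grid:
# w = np.linspace(-8.0, 8.0, 801); rho_n = ...
# v_eff = dress(w, rho_n, w) / dress(np.ones_like(w), rho_n, w)
```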
#### 4.1.2 Molecular dynamics simulations

We approximate the expectation value that is contained in the MD definition (16) of the correlations \(S_{m,n}\) by the following numerical scheme, whose implementation, written in Python, can be found at [28]. First, we generate the random initial conditions distributed according to the Gibbs measure, as given by (3), for the i.i.d. random variables \((r_{j},p_{j})_{1\leq j\leq N}\). Specifically, the variables \(p_{j}\) are distributed as standard normal random variables, which we generate with NumPy v1.23's native function random.default_rng().normal [18], times \(1/\sqrt{\beta}\). A brief calculation shows that \(r_{j}\) can be chosen to be \(-\ln(X/(2\beta))\), where \(X\) is chi-square distributed with shape parameter \(2P\). We obtain the random variable \(X\) using NumPy v1.23's native function random.default_rng().chisquare. Having chosen the initial conditions in such a manner, we solve equation (2). For the evolution, we adapt the classical Störmer-Verlet algorithm [17] of order 2 to work with the variables \((\mathbf{p},\mathbf{r})\). Specifically, we used a time step equal to \(\delta=0.05\) and, given the solution \((\mathbf{r}(t),\mathbf{p}(t))\) at time \(t\), we approximate the solution at time \(t+\delta\) through the following scheme, \[p_{j}\left(t+\frac{\delta}{2}\right)=p_{j}(t)-\frac{\delta}{2}\left(e^{-r_{j}(t)}-e^{-r_{j-1}(t)}\right)\,,\] \[r_{j}(t+\delta)=r_{j}(t)+\delta\left(p_{j+1}\left(t+\frac{\delta}{2}\right)-p_{j}\left(t+\frac{\delta}{2}\right)\right)\,,\] \[p_{j}(t+\delta)=p_{j}\left(t+\frac{\delta}{2}\right)-\frac{\delta}{2}\left(e^{-r_{j}(t+\delta)}-e^{-r_{j-1}(t+\delta)}\right)\,,\] for all \(j=1,\ldots,N\). In this part of the implementation, we extensively used the library Numba [23] to speed up the computations. Our approximation of the expectation \(S_{m,n}\) is then extracted from \(3\times 10^{6}\) trials with independent initial conditions: we take the empirical mean over all trials, where for each trial we also average over the \(N=3000\) sets of data generated by choosing each site of the ring as \(j=0\). To evaluate the quality of our numerical simulations, we have repeated the numerical experiments up to five times, including variations of the length of the ring and evaluating the solutions at more intermediate time steps than displayed in the figures below. Furthermore, we have compared the results with the corresponding outcomes obtained by a MATLAB program that has been developed independently from the Python program, and that follows a different numerical scheme. It uses MATLAB's random number generators randn for initial momenta and rand combined with the rejection method to produce initial stretches. The dynamics is then evaluated by the solver ode45, which exploits the Runge-Kutta method to numerically solve the Hamiltonian system associated with (1) on the ring. We found that the deviations between different experiments are comparable to the size of the amplitudes of the high frequency oscillations that are present in Figures 4-5. These oscillations are due to the random fluctuations of the empirical means around their expectation values \(S_{m,n}\). Agreement of different experiments up to the order of these oscillations therefore shows the consistency of the corresponding numerical results. We also want to mention that all the pictures that appear in this paper are made using the library matplotlib [19].
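For orientation, one trial of the scheme just described can be condensed into a few lines of NumPy. This is a bare-bones sketch, not the production code of [28]; the parameter values at the bottom are just one of the settings from Table 1.

```python
import numpy as np

def sample_gibbs(N, P, beta, rng):
    """Draw i.i.d. (r_j, p_j) from the single-site Gibbs density (3):
    p ~ N(0, 1/beta) and r = -log(X/(2*beta)) with X ~ chi^2(2P)."""
    p = rng.normal(size=N) / np.sqrt(beta)
    r = -np.log(rng.chisquare(2.0 * P, size=N) / (2.0 * beta))
    return r, p

def verlet_step(r, p, dt):
    """One Stoermer-Verlet step for dr_j/dt = p_{j+1} - p_j and
    dp_j/dt = -exp(-r_j) + exp(-r_{j-1}); np.roll implements the
    periodic boundary conditions r_0 = r_N, p_{N+1} = p_1."""
    force = lambda r: -(np.exp(-r) - np.exp(-np.roll(r, 1)))
    p_half = p + 0.5 * dt * force(r)
    r_new = r + dt * (np.roll(p_half, -1) - p_half)
    p_new = p_half + 0.5 * dt * force(r_new)
    return r_new, p_new

# one trial at beta = 1, medium pressure (Table 1)
rng = np.random.default_rng(0)
r, p = sample_gibbs(3000, P=1.5, beta=1.0, rng=rng)
for _ in range(int(600 / 0.05)):   # evolve up to t = 600 with delta = 0.05
    r, p = verlet_step(r, p, 0.05)
```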
### Comparison of linearized GHD with MD at time \(t=600\)

We compare the GHD predictions with MD simulations for three different temperatures that correspond to \(\beta=0.5\) (Fig. 1), \(\beta=1\) (Fig. 2), and \(\beta=2\) (Fig. 3). For each \(\beta\) we choose three different values for the pressure parameter \(P\) in such a way that the corresponding mean stretches, given by (18), are positive (\(\approx 2.57\)) for low pressure, negative (\(\approx-0.42\)) for high pressure, and approximately zero for medium pressure. We summarize their values in Table 1.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline pressure & \(\beta=0.5\) & \(\beta=1\) & \(\beta=2\) \\ \hline low & \(P=0.32,\ \langle r\rangle\approx+2.58\) & \(P=0.4,\ \langle r\rangle\approx+2.56\) & \(P=0.52,\ \langle r\rangle\approx+2.56\) \\ \hline medium & \(P=0.95,\ \langle r\rangle\approx-0.03\) & \(P=1.5,\ \langle r\rangle\approx-0.04\) & \(P=2.55,\ \langle r\rangle\approx-0.03\) \\ \hline high & \(P=1.21,\ \langle r\rangle\approx-0.42\) & \(P=2.0,\ \langle r\rangle\approx-0.42\) & \(P=3.53,\ \langle r\rangle\approx-0.42\) \\ \hline \end{tabular} \end{table} Table 1: Values for \(\beta\) and \(P\) and the corresponding mean stretches used in experiments

In each of the nine cases we have evaluated the Landau-Lifshitz approximations \(S_{m,n}^{\rm LL}(\cdot,1)\), see (31), of the correlators for all \(0\leq n\leq m\leq 2\) using the numerical scheme described in Section 4.1.1. Their graphs are displayed in Figures 1-3 as dashed lines. Note that the speeds of the sound peaks depend significantly on both pressure and temperature. Moreover, the predicted fine-structure of both the heat and the sound peaks is quite different for low pressure when compared to medium and high pressure. The colored lines in Figures 1-3 show our numerical results for the corresponding molecular dynamics. According to the predicted ballistic scaling (32) we plot \(tS_{m,n}(j,t)\) as a function of \(j/t\) for \(t=600\). Here the values of \(S_{m,n}(j,t)\) are approximated using the numerics explained in Section 4.1.2. The agreement between linearized GHD and MD is striking, in particular since there are no adjustable parameters. In all of the 54 comparisons shown in Figures 1-3 the GHD predictions for the fine-structure of heat and sound peaks are in excellent agreement with the ones observed from molecular dynamics at time \(t=600\). As we show in more detail in the next subsection, the largest deviations occur mostly near the sound peaks and do not exceed \(3.5\%\) of the peaks' maximal values.

Figure 1: Toda correlation functions: GHD predictions \(y\mapsto S_{m,n}^{\rm LL}(y,1)\) vs. numerical simulations of the molecular dynamics \(y\mapsto tS_{m,n}(yt,t)\) at \(t=600\) for \(\beta=0.5\) with low pressure (top), medium pressure (middle) and high pressure (bottom). Numerical simulations are colored according to the legend, the corresponding GHD predictions are displayed by dashed lines. Number of trials: \(3\times 10^{6}\).

Figure 2: Toda correlation functions: GHD predictions \(y\mapsto S_{m,n}^{\rm LL}(y,1)\) vs. numerical simulations of the molecular dynamics \(y\mapsto tS_{m,n}(yt,t)\) at \(t=600\) for \(\beta=1.0\) with low pressure (top), medium pressure (middle) and high pressure (bottom). Numerical simulations are colored according to the legend, the corresponding GHD predictions are displayed by dashed lines. Number of trials: \(3\times 10^{6}\).

Figure 3: Toda correlation functions: GHD predictions \(y\mapsto S_{m,n}^{\rm LL}(y,1)\) vs. numerical simulations of the molecular dynamics \(y\mapsto tS_{m,n}(yt,t)\) at \(t=600\) for \(\beta=2.0\) with low pressure (top), medium pressure (middle) and high pressure (bottom). Numerical simulations are colored according to the legend, the corresponding GHD predictions are displayed by dashed lines. Number of trials: \(3\times 10^{6}\).

### Deviation of linearized GHD from MD at times \(t=150\) and \(t=600\)

The purpose of this subsection is twofold. On the one hand we have a look at the small differences between GHD predictions and molecular dynamics simulations that can hardly be detected in Figures 1-3. On the other hand we indicate how these differences evolve in time by including time \(t=150\) for the molecular dynamics. Recall that the GHD predictions are time-invariant in the scaling \(y\mapsto tS_{m,n}(yt,t)\) we have chosen, see (32). From the \(54\) comparisons that are displayed in Figures 1-3 we select \(12\) cases that are representative and show all the phenomena that we have observed. In Figure 4 we consider correlations \(S_{1,1}\) and \(S_{1,0}\) at medium pressure (cf. Table 1) for all three values of \(\beta\). The small scale fluctuations displayed in the bottom panels are due to the approximation of expectation values by empirical averages. Their amplitudes become smaller if one increases the number of trials. Note that the difference in amplitudes of these functions between \(t=150\) and \(t=600\) is mostly due to the scaling \(y\mapsto tS_{m,n}(yt,t)\) that we use. This implies that the values of the correlations are multiplied by a factor that is \(4\) times larger at the later time. The same holds for the plots in Figure 5, where the correlations \(S_{0,0}\) and \(S_{2,0}\) are shown for fixed \(\beta=1\) and our three different choices for pressure.

Figure 4: Toda correlation functions \(S_{1,1}\) (left) and \(S_{1,0}\) (right) for medium pressure and increasing temperatures (top to bottom). For each value of \(\beta\) and \(P\) the top panels show the GHD prediction vs. numerical simulations as in Figures 1-3, but with the molecular dynamics evaluated at the two times \(t=150\) and \(t=600\). The bottom panels display the differences between the GHD prediction and numerical simulations at time \(t=150\) (red) and at time \(t=600\) (green). Number of trials: \(3\times 10^{6}\).

Figure 5: Toda correlation functions \(S_{0,0}\) (left) and \(S_{2,0}\) (right) for \(\beta=1\) and increasing pressure (top to bottom). For each value of \(\beta\) and \(P\) the top panels show the GHD prediction vs. numerical simulations as in Figure 2, but with the molecular dynamics evaluated at the two times \(t=150\) and \(t=600\). The bottom panels display the differences between the GHD prediction and numerical simulations at time \(t=150\) (red) and at time \(t=600\) (green). Number of trials: \(3\times 10^{6}\).

We now summarize our main findings: 1. The deviations occur mostly near the sound peaks and amount to \(1.5\%\)-\(3.5\%\) of the peaks' maximal values at time \(t=600\). 2. There appear to be small but systematic deviations concerning the shape of the sound peak in all cases. One would need to conduct experiments with a higher resolution, i.e. more sites and consequently larger times and more trials, to determine whether there is indeed such a systematic deviation. With the resolution present in our experiments the question of a systematic deviation with respect to the shape of the peak cannot be decided. 3. In some of the experiments the maximal deviations would be significantly smaller if a constant only depending on the values of \(\beta\), \(P\), \(m\), \(n\) is added to all values of \(S_{m,n}(j,t)\), see e.g.
correlations \(S_{0,0}\) and \(S_{2,0}\) for \(\beta=1\), \(P=0.4\) in Figure 5. This seems to be related to the approximation errors for the means \(\langle r\rangle\), \(\langle p\rangle\), and \(\langle e\rangle\), which appear to be less pronounced in the case of momentum \(p\). We have observed that these deviations decrease as the number of trials is increased and we do not expect a systematic deviation between GHD and MD in this respect. 4. For \((\beta;P)\in\{(0.5;0.95),(0.5;1.21)\}\) we observe that the size of the deviations is essentially the same for times \(t=150\) and \(t=600\) whereas for \((\beta;P)\in\{(0.5;0.32),(1;0.4),(2;0.52),\)\((2;2.55),(2;3.53)\}\) these deviations are significantly larger at the smaller time. The remaining two cases \((\beta;P)\in\{(1;1.5),(1;2)\}\) are somewhat in between, also depending on the correlation function that is considered, see Figure 5. This is an indication that the speed of convergence of \(tS_{m,n}(yt,t)\) to the GHD prediction \(S_{m,n}^{\rm LL}(y,1)\) as \(t\to\infty\) depends on the values of \(\beta\) and \(P\). As a rule we have observed that both increasing temperature and increasing pressure lead to a faster speed of convergence. ## 5 Conclusions and outlook As can be seen from Table 1, we picked the intermediate pressure such that \(\nu\simeq 0\). In the particle picture \(\nu=0\) corresponds to the boundary condition \(q_{1}=q_{N}\). In thermal equilibrium the positions then perform an unbiased random walk with typical excursions of order \(\sqrt{N}\). Thus the free volume is of order \(1/\sqrt{N}\). The particles are extremely dense and the picture of successive pair collisions breaks down completely. So one might wonder whether GHD is still valid under such extreme conditions. \(\nu=0\) poses no particular difficulties for MD simulations. In GHD the factor \(1/\nu\) appears in the expression for \(v^{\rm eff}\), see Eq. (31). This makes the numerical scheme slow and only values close to \(\nu=0\) are accessible. However the correlator \(S\) changes smoothly through \(\nu=0\). GHD also covers this seemingly singular point. Simultaneously A. Kundu [21] posted a somewhat puzzling note. He considers the parameter values \(\beta=1\), \(P=1\). When cutting the matrices \(C_{m,n}\) and \(A_{m,n}\) at low orders, the resulting \(S_{m,n}\) consists of a few \(\delta\)-peaks which move at constant velocity. After ballistic scaling, with high precision they turn out to lie on the curve obtained from GHD. A theoretical explanation seems to be missing. In [22] the molecular dynamics of Toda lattice correlations are simulated for the potential \[V_{\rm kd}(x)=\frac{g}{\gamma}{\rm e}^{-\gamma x}\] with arbitrary \(\gamma,g>0\). To distinguish their parameters from ours, the variables in [22] are here denoted by \(\bar{t},\bar{r},\bar{P},\bar{\beta}\). \(\bar{P}\) is the physical pressure and, comparing the Gibbs weights, one obtains the relations \[\beta=\frac{g}{\gamma}\bar{\beta},\qquad P=\frac{1}{\gamma}\bar{P}\bar{\beta}.\] From the equations of motion one deduces \[\bar{t}=\frac{1}{\sqrt{\gamma g}}t,\quad r(t)=\gamma\bar{r}(\bar{t}),\quad p(t)=\frac{g}{\gamma}\bar{p}(\bar{t}).\] Thus, translating to our units, the MD simulations reported in [22] are (i) \(P=0.01\), \(\beta=0.01\), \(N=1024\), \(t=400\), (ii) \(P=1\), \(\beta=1\), \(N=1024\), \(t=200,300\), and (iii) \(P=400\), \(\beta=400\), \(N=256\), \(t=80\). In fact, in all three cases the time scales are identical, \(t=\bar{t}\).
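The translation between the two parameter conventions can be packaged in a few lines. The sketch below implements only the three relations displayed above; the values of \(\gamma\) and \(g\) passed in are illustrative placeholders, not the ones used in [22].

```python
import math

def to_our_units(beta_bar, P_bar, t_bar, gamma_, g_):
    """Map the parameters of [22] to the units of this paper."""
    beta = (g_ / gamma_) * beta_bar        # beta = (g / gamma) * beta_bar
    P = P_bar * beta_bar / gamma_          # P = (1 / gamma) * P_bar * beta_bar
    t = math.sqrt(gamma_ * g_) * t_bar     # from t_bar = t / sqrt(gamma * g)
    return beta, P, t

# with gamma = g = 1 the time scales coincide, t = t_bar, as noted above
print(to_our_units(1.0, 1.0, 200.0, gamma_=1.0, g_=1.0))   # -> (1.0, 1.0, 200.0)
```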
Since GHD was not available yet, no comparison could have been attempted. Case (i) is a very dilute chain. In this limit \(\nu\rho_{\sf p}\) is a unit Gaussian. The dressed functions become polynomials as \(\varsigma_{0}^{\rm dr}(w)=a_{0}\), \(\varsigma_{1}^{\rm dr}(w)=a_{1}w\), and \(\varsigma_{2}^{\rm dr}(w)=a_{2}w^{2}+a_{3}\) with coefficients \(a_{0},...,a_{3}\) depending on \((P,\beta)\). Note that for a noninteracting fluid \(a_{3}\) would vanish. As a result \(S_{0,0}\) is Gaussian, \(S_{1,1}\) has two peaks, and \(S_{2,2}\) has either two or three peaks. This is in good agreement with [22] and explains our motivation not to venture into the low density regime. Case (ii) interpolates between our \(\beta=1,P=0.40\) and \(\beta=1,P=1.5\). Note that now \(S_{0,0}\) has a local minimum at \(w=0\), which is very different from the structure in the dilute regime. On the other hand, \(S_{2,2}\) has a local maximum at \(w=0\), as is the case for low density/high temperature. The most interesting parameter value is (iii), which deserves more detailed studies. The issue is the behavior of the Toda chain at very low temperatures. Simply letting \(\beta\to\infty\) will freeze any motion. But the simultaneous limit \(\beta\to\infty\) with \(P=\bar{P}\beta\) at fixed physical pressure \(\bar{P}\) is meaningful, at least statistically. In this limit \(\nu>0\) always. Also the density of states converges to the arcsine distribution, \[\lim_{\beta\to\infty}\nu\rho_{\sf p}(w)=\frac{1}{\pi\sqrt{4\bar{P}-w^{2}}}, \quad|w|\leq 2\sqrt{\bar{P}}.\] To understand the dynamical behavior, the effective potential is expanded as \[{\rm e}^{-r}+\bar{P}r\simeq{{1\over 2}}\bar{P}(r-r_{0})^{2}+c_{0}\] at its minimum \(r_{0}\). Since \(\beta\) is large, the initial fluctuations are of order \(1/\sqrt{\beta}\). Therefore the dynamics can be approximated by a harmonic chain with \(\omega^{2}=\bar{P}\). The equilibrium time correlations of the harmonic chain have intricate oscillatory behavior [14], which in the large \(\beta\) limit should match with the Toda lattice, as partially evidenced through case (iii). Clearly, GHD cannot reproduce such fine details. Still, when averaged on suitable scales, the gross behavior of the harmonic chain oscillations might be visible. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 1440140, while five of the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the fall semester of 2021. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Dispersive hydrodynamics: mathematics, simulation and experiments, with applications in nonlinear waves" where some work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1. TG acknowledges the support of the European Union's H2020 Marie Sklodowska-Curie grant No. 778010 _IPaDEGAN_, of INdAM/GNFM and of the research project Mathematical Methods in NonLinear Physics (MMNLP), Gruppo 4-Fisica Teorica of INFN. GM is financed by the KAM grant number 2018.0344. KTRM was supported by a Visiting Wolfson research fellowship from the Royal Society.
2308.11620
Software-based signal compression algorithm for ROM-stored electrical cables
This project introduces a groundbreaking approach to address the challenge of periodic signal compression. By proposing a novel adaptive coding method, coupled with hardware-assisted data compression, we have developed a new architecture model tailored for efficient data compression. The selected compression scheme has demonstrated remarkable results, showcasing reduced memory communication volume and power consumption in the cache memory path of benchmark systems. With a reduction range of 4.2% to 35.2%, this innovation paves the way for affordable smart sensing, monitoring, diagnostics, and protection in emerging low-cost device types. Consequently, this cutting-edge technology enhances electrical signal compression and contributes to grid improvement. Additionally, we explore the novel application of harnessing wasted thermal energy in the Read-Only Memory (ROM) using thermoelectricity (TE). This approach captures the excess thermal energy, converting it into electrical energy through optimized supercapacitor charging, resulting in efficient energy utilization. This innovation intersects the fields of embedded systems, data compression, energy efficiency, and smart grid technology.
Tshimankinda Jerome Ngoy, Mike Nkongolo
2023-07-09T10:34:13Z
http://arxiv.org/abs/2308.11620v1
# Software-based signal compression algorithm for ROM-stored electrical cables ###### Abstract The purpose of this project is to determine a method for compressing the function codes located in the non-volatile memory of the on-board system after the linking/localisation phase, decompressing these functions, and executing them in the on-board volatile memory. These decompressed functions are referred to as the uncompressed functions of the in-vehicle system. This approach ensures that the software is stored in ROM memory and can be restored to maintain the runtime environment, protecting it from power surges or viruses occurring in the power cord. The decompression algorithm is stored in the ROM space through the main program, and a preventive method will be developed to compress the signal transmitted by the software over the cable. After the algorithm runs, the decompressed software is loaded into the ROM space to initialise the main program. By creating a backup copy, this method avoids the need to store software on an isolated server where viruses or power surges are less likely to occur. Since the main program initialisation software is stored in a compressed state in the ROM, and the decompression algorithm is compact, the project effectively utilises the ROM space. Additionally, excess energy present in the ROM can be harnessed by applying thermoelectricity (TE) to capture wasted thermal energy from the heated ROM, converting it into electrical energy to charge the battery. Corresponding authors: Tshimankinda Jerome Ngoy ([email protected]) and Mike Nkongolo ([email protected]), Department of Informatics, Faculty of Engineering, Built Environment and Information Technology, University of Pretoria, Hatfield 0028, South Africa. ## 1 Introduction Currently, millions of kilometers of cables are being used to provide electrical connections in machinery, equipment, buildings and other places. If energy storage devices are used, they are completely separate from these power cords. However, if conduction and energy storage can be integrated into the same cable, it will revolutionise energy storage applications [1]. Coaxial cable is one of the most common and basic cable types used to transmit power or signals. Its internal conductor is surrounded by an electrically insulating layer and covered by an external tubular conductive shielding layer. Supercapacitors, also known as electrochemical capacitors, have become one of the most popular energy storage devices in recent years. Compared with other energy storage devices (such as batteries), supercapacitors have a faster charge and discharge rate, higher power density and longer service life [2]. In addition, a new _coaxial supercapacitor_ (CSC) cable design has been demonstrated, which combines conduction and energy storage by modifying the copper core used for conduction. In order to obtain the large surface area required for high supercapacitor performance, we developed a _normalized wavelet_ (NW) on the outer surface of the copper wire. An interesting advantage of using the coaxial design is that electrical energy can be conducted through the internal conductive metal wire, and electrical energy can be stored in the concentric layer of nanostructures added to the internal metal wire, with an oxide layer between the two.
Therefore, the integration of cables and energy storage devices into a single unit provides a very promising opportunity to transmit power and store energy at the same time. In one embodiment, this project includes a system for efficient use of Read-Only Memory (ROM) space in an integrated system. The proposed system consists of an embedded platform with a processor and ROM. The decompression algorithm is stored in the ROM space through the main program. When the vehicle system is powered on, the decompression algorithm is executed by the processor. The decompression algorithm is applied to the compressed software stored in the ROM space [1]. The compressed software includes the data needed to initialise the main program. After the algorithm runs, it loads the decompressed software into the ROM space to initialise the main program. Since the software for initialising the main program is stored in the ROM in a compressed state, and since the decompression algorithm is compact, the present invention effectively utilises the ROM space. The space saved makes it possible to use a smaller and cheaper ROM. In addition, the space saved allows the use of a larger, more complex, and more feature-rich main program. In this case, the decompression must be separated from the execution of the decompressed function, so that the decompression task can be handed over to another kernel, or it can be started earlier in the execution [1]. Any lossy compression scheme results in some loss of information. This work should aim to quantify acceptable losses. This work must focus on the loss of frequency and amplitude information. Finally, it should be noted that compression only applies to relevant data. If the signal contains random noise superimposed on the fundamental wave, compression may be harmful. In addition, compression schemes used in the interference data domain of electrical systems should try to utilise the following knowledge: the input signal has very high energy at the fundamental frequency, and the waveform is highly periodic. This project studies the design issues related to the realisation, adaptation and customisation of compression algorithms (especially data compression technologies aimed at increasing the energy efficiency of sensor arrays). The goal of this method is to reduce the consumption of non-volatile memory while keeping the CPU load low in critical parts. By using less non-volatile memory, a cheaper uController [3] can be chosen. The whole process consists of two steps (a toy sketch of this flow is given below): * Assuming decompression is performed at runtime, the existing in-vehicle system software is updated so that it only has to support decompression, which must be as fast as possible. * Apply the compression tool to the binary/hexadecimal files obtained after the linking/localisation stage. Specific theoretical examples of Infineon Tricore uControllers used and the amount of FLASH memory available are [3]: * 1MB FLASH, * 1.5MB FLASH, * 2MB FLASH After all optimisations, if the flash memory capacity required by the software project is 1.1MB, the TC1738 uController would be chosen [3]. But by using the proposed method (compressing multiple functions), the required FLASH size can be reduced to less than 1MB, so that the cheaper TC1734 can be used. The gain is multiplied by the number of embedded systems produced using this uController.
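As an illustration of the two-step process, the following minimal host-side sketch uses Python's zlib as a stand-in compressor for the post-link binary image; the image contents and sizes are made up for the example and are not taken from the TC17xx projects discussed above.

```python
import zlib

def compress_image(image: bytes, level: int = 9) -> bytes:
    """Build step: compress the binary image obtained after linking/localisation."""
    return zlib.compress(image, level)

def decompress_image(stored: bytes) -> bytes:
    """Boot step: restore the function image before it is executed."""
    return zlib.decompress(stored)

image = bytes(1024) + b"\x12\x34" * 256   # toy relocatable image: zero-fill + code
stored = compress_image(image)
print(len(image), "->", len(stored), "bytes kept in FLASH/ROM")
assert decompress_image(stored) == image  # lossless round trip
```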
In fact, for software projects and microcontroller series with different flash memory sizes, it is worthwhile to apply compression and check whether a lower-level uController of the same series (with less flash memory) can be used after compression. Because of the decompression overhead, we do not want to increase the CPU load during the execution of critical code, so the approach is mainly suitable for functions that are executed only once (configuration, initialisation, and control functions). There is, however, no restriction that prevents applying it to repeatedly executed functions. The setup is displayed in Figure 1. In this experiment, a thermoelectric generator (TEG) was installed between the hybrid heat sink cooler and the CPU processor to produce micro-scale electricity. In our experiment, we employed an Intel Core i3-2100 processor, which, when forced to operate using _overclock checking tool_ (OCCT) software, may produce heat up to 80 °C. The function is triggered by a calling function or an external event, but only when the CPU load introduced by the decompression mechanism does not affect the system function. There is a restriction on the functions we want to compress: the binary code obtained for these functions must be relocatable because the execution address is different from the build address. Generally, the binary result used for relative addressing will take up less memory space [4]. The general steps for compressing and decompressing data are shown in Figure 2. Obviously, a compression algorithm with a certain degree of complexity is worth exploring [17, 18, 19]. Also note that these calculations only consider the local power consumption on the compression node. Downstream energy savings can further amortise the time and energy spent on compression [5]. The current and future memory size limitations of sensor processors require reconsideration of memory usage in compression calculations. Although each generation of sensor processor technology shows higher capacity, they usually still provide less than 50KB of code storage space and less data RAM. Therefore, compression algorithms originally used on desktops or servers must be redesigned to reduce complexity, code size and dynamic memory usage. This project evaluates a method and system to reduce the amount of ROM required by digital embedded systems, and explores lossless compression because it is suitable for a wider range of applications, datasets, and users. Figure 3 shows the principal block diagram of the software for electrical cables stored in ROM. First, at the substation, the signal of the step-down AC-DC or DC-AC voltage regulator is passed through a demodulator or modulator and separated from background noise. Then, the signal is compressed using an algorithm that processes the signal in the computer or CPU cooler, and is later stored in ROM. Figure 1: Schematic configuration of TEG on CPU processor [7]. Figure 2: Block diagram of data compression and decompression. Data compression covers technologies that can represent information in a compact form. These compact representations are obtained by identifying and using the structure that exists in the data. When digitising a constant envelope sine wave, we will spend a lot of bits to encode its samples. However, we can represent this signal in a compact form in terms of amplitude, frequency, and phase. A simplified diagram of the power supply system is shown in Figure 4.
Usually, the generator produces power at the 13.8 kV level, and the voltage is stepped up in the power station, so the energy is transmitted over transmission lines in the range of 138 kV up to 1000 kV, which are called _high-voltage_ (HV) and _ultra-high-voltage_ (UHV) lines, respectively [6]. When the energy reaches the distribution station, the voltage drops again to the _medium voltage_ (MV) level, which is characteristic of the distribution network in the system. Generally, the level of the primary distribution line starting from the substation is 13.8 kV (MV) and the length is less than 10 km, but it may be longer in rural areas because the power demand is relatively scarce and scattered. The distribution transformer is connected to the primary feeder at many points to reduce the voltage level from 13.8 kV to 127 V, 220 V or 380 V (approximately) to supply power to the end user. The secondary power distribution system then corresponds to the _low voltage_ (LV) feeder. ## 2 Research method The idea is not to focus on the compression method itself (other innovative methods could be used instead), but on how to implement all the stages described in the embedded project in a simple and fast way. Generally, data can be compressed by eliminating data redundancy and irrelevance. Modeling and coding are the two stages of data compression: in the first stage, the data are analysed to identify redundant information, which is extracted to build a model. In the second stage, the difference between the modeled data and the actual data (called the residual) is calculated and encoded by coding techniques. For compression, the ZLIB library was used [7], which is a free, lossless data-compression library that can be used on almost any computer hardware and operating system. After studying several embedded software systems, we found that most functions can be relocated, or the compiler may be forced to generate relocatable code. If the algorithm runs on another powerful code execution core of the same embedded system (such as the PCP or an auxiliary core on Infineon Tricore uControllers), the decompression of a function can be performed in parallel with the execution of other functions, or it can be started earlier, while the uController is otherwise idle. Taking these facts as input, the decompression must be separated from the execution of the decompressed function in order to pass the decompression task to another kernel or start it earlier in execution. Figure 4: A typical power system scheme (LV—low voltage, MV—medium voltage, and HV—high voltage) [15]. Figure 3: The search function framework. Both methods are designed to accelerate decompression. In order to evaluate the efficiency of the profile-driven and differential compression schemes presented in the previous sections, we compare their performance to that of some of the best-known compressors. In particular, we chose two variants of Lempel and Ziv text replacement encoders [8]: the LZSS algorithm of Storer and Szymanski [9] and the LZAR method that combines LZSS with arithmetic coding [10]. LZSS is a byte-oriented compressor that assumes a ring buffer which initially contains zero bytes. It reads bytes from the input, finds the longest string in the buffer matching the bytes just read, and encodes it in binary by the corresponding length and position in the buffer, together with the length of the unencoded string.
LZAR improves on LZSS by taking advantage of the fact that not all bytes have the same frequency of appearance in a cache line. Higher frequency bytes are therefore encoded with fewer bits and lower frequency bytes with more bits, so that the total length of the cache line being compressed is reduced. The arithmetic code is used to generate a variable length pattern that encodes the length and position of compressed bytes. We note that for the two LZ-type compressors, slots of size \(\mathrm{S}=8\) and \(\mathrm{S}=12\) can be used in the compressed storage area. The results for the case of \(\mathrm{S}=12\) are reported in Section 4, while the data for \(\mathrm{S}=8\) is omitted due to space constraints. The hardware-assisted compression scheme of this research is implemented in the SimpleScalar simulation framework [11]. In particular, we adopted the functional cache simulator sim-cache as the simulation engine. We consider a system with compressed main memory (that is, the information is stored in the cache in an uncompressed format), where the compression hardware is located between the cache and the main memory. Here, the fundamental difference between code compression and data compression is that for the latter, both compression and decompression are required while the program is running, while for the former, only decompression is required. This fact has a profound effect on applicable compression algorithms and architectures: for example, it excludes highly asymmetric schemes in which compression is more involved than decompression. The potential for converting heat into electrical energy in computers has not been extensively studied. In this research, we implemented a thermoelectric generator module combined with the hybrid cooling system on the CPU processor to convert heat into electrical energy. The CPU is the key part of the computer system: it generates a lot of heat, and its operation is closely related to temperature. In addition, thermoelectricity has also been applied to other fields, such as biomass heating [12] and CPU processors [13]. In our research, we explored the potential of using thermoelectric generators to harvest electrical energy from microprocessor-based computers. We observe a 0.5 W output generated by a single thermoelectric module. The _Lempel-Ziv-Welch_ (LZW) implementation of our S-LZW sensor node, its number of instructions and the default main memory usage of the algorithm are evaluated in [14]. _Lempel-Ziv-Oberhumer_ (LZO) is an exception. In [14] it is reported that, among all the algorithms they evaluated, LZO has the lowest power consumption for compressing and sending 1MB of data. The developers of LZO have implemented a version specifically for embedded systems (for example, the version we call miniLZO). It should be noted that in the configuration of the system according to this embodiment, a run-length encoding algorithm is used to compress the initvars and Zerovars sections, obtaining an average compression ratio of 10:1 and saving 90 KB of ROM space in the target system. The decompression algorithm consists of 24 ARM6 instructions, so it occupies less than 100 bytes of code in the ROM of the target system. In this embodiment, JumpStart 2.2™ is used as a development kit.
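For illustration, here is a toy byte-oriented run-length coder in the spirit of the sequence-length scheme just described; it is not the 24-instruction ARM decompressor itself, and the all-zero section below is a contrived best case.

```python
def rle_encode(data: bytes) -> bytes:
    """Toy run-length encoder: emit (count, value) pairs with count <= 255."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for k in range(0, len(data), 2):
        out += bytes([data[k + 1]]) * data[k]
    return bytes(out)

zerovars = bytes(2048)                    # a zero-initialised data section
packed = rle_encode(zerovars)
assert rle_decode(packed) == zerovars     # lossless round trip
print(len(zerovars), "->", len(packed))   # long zero runs shrink dramatically,
                                          # which is why initvars/Zerovars reach 10:1
```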
A better compression algorithm (such as LZW™) can achieve a compression ratio of 20:1, but the cost is that its code in the ROM space of the target system exceeds 5 KB. Therefore, there is a need for a solution to eliminate the wasted address space of the on-board system. The required solution should reduce the amount of ROM space needed to store runtime environment information, use ROM space more efficiently, reduce the cost of the on-board system, and provide greater functionality for the main program code. The present invention provides this solution. In order to solve the problem of periodic signal compression, we propose an effective lossless compression method for periodic signals based on an adaptive dictionary. When predicting a periodic signal, its historical information is important, so we built a dictionary to store this rich signal history. The data gathered thus far for the compression of electrical signals is summarised in Table 1, in which we kept the evaluation measures used by the various strategies. As can be seen, deciding which metric will be used to assess the various lossy compression algorithms is a crucial challenge for the development of approaches for compressing electric signals. ## 3 Results and Discussion ### Implementing Thermoelectric Generator on CPU The experiment observes the voltage generated when the TEG is installed on the CPU processor. To prevent overheating, the TEG has been connected to a hybrid radiator cooler, which consists of an aluminum radiator with water flowing in the radiator tube. We used an Epcos NTC thermistor (model B57891) to detect the temperature of the processor and TEG. All measurements are monitored and recorded using an Arduino Uno. Figure 5 displays the current and voltage produced by the TEG mounted on the CPU processor. The maximum temperature attained is 53 \({}^{\circ}\)C, and the maximum current and voltage are 190 mA and 2.4 V, respectively. The excess energy present in the ROM can be utilised by applying thermoelectricity (TE) to capture the lost electrons (wasted thermal energy) from the heated ROM. The recovered thermal energy is then optimised by being converted into electrical energy for charging the battery. ### Signal compression by LZW Figure 6 verifies the efficiency and adaptability of the method proposed in this project.
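To make the encoding stage concrete, the sketch below implements textbook LZW in Python. It illustrates the kind of dictionary coder applied to the prediction output in this section; it is not the exact implementation used in our experiments.

```python
def lzw_encode(data: bytes) -> list:
    """Textbook LZW: grow a dictionary of byte strings, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest known string
            table[wb] = len(table)      # learn the new string
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"abababab" * 32)    # highly repetitive, like a periodic residual
print(len(codes), "codes for", 8 * 32, "input bytes")
```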
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Group & Category & \begin{tabular}{c} Basic \\ Technique \\ \end{tabular} & \begin{tabular}{c} Compression \\ Ratio \\ \end{tabular} & \begin{tabular}{c} Distortion \\ Metric \\ \end{tabular} & \begin{tabular}{c} Distortion \\ Value \\ \end{tabular} \\ \hline \hline \multirow{3}{*}{Lossless} & \multirow{3}{*}{1D} & Lempel-Ziv & 5:1 & – & – \\ \cline{3-6} & & Delta-modulation & 2:1 & – & – \\ \cline{2-6} & 2D & JPEG2000 & 9:1 & – & – \\ \hline \hline \multirow{6}{*}{Lossy} & \multirow{3}{*}{Wavelet Transform} & Daubechies DWT & 6:1 to 3:1 & NMSE & \(10^{-5}\) to \(10^{-6}\) \\ \cline{3-6} & & Daubechies DWT & 3.43:1 & NMSE & \(10^{-4}\) \\ \cline{3-6} & & Slantlet DWT & 10:1 & MSE & -19 dB \\ \cline{3-6} & & B-spline DWT & 15:1 & MSE & -25 dB \\ \cline{3-6} & & WPT and LZW & 10:1 & PRD & 10\% \\ \cline{3-6} & \multirow{3}{*}{Wavelet Packet} & WPT and Arithmetic Coding & 6.9:1 & NMSE & \(10^{-5}\) \\ \cline{3-6} & & EZW & 10:1 to 16:1 & NMSE & \(10^{-5}\) \\ \cline{3-6} & \begin{tabular}{c} Mixed Transform \\ and Parametric \\ \end{tabular} & \begin{tabular}{c} Fundamental, Harmonic and \\ Transient Coding \\ \end{tabular} & 16:1 & MSE & -30 dB \\ \cline{3-6} & Parametric Coding & \begin{tabular}{c} Damped Sinusoids \\ Modeling \\ \end{tabular} & \(>16\):1 & SNR & \(>31\) dB \\ \hline \end{tabular} \end{table} Table 1: Comparison of some techniques used to compress electric signals [15]. Figure 5: (a) Current output and (b) voltage output of the thermoelectric generator mounted on the CPU processor. Then, noise is added to the original signal in Figure 6, with a signal-to-noise ratio of 34.2 dB. Figure 7 shows the prediction effect of the algorithm on the periodic signal with added noise; the prediction is worse than that of Figure 6. In fact, the method proposed in this article is a lossless compression algorithm [16]. We have added noise to the periodic signal, and the signal amplitude is random within a certain range. It is difficult for the prediction model to use context information to accurately predict the amplitude of the signal, so the output prediction residual will fluctuate in a small range. Handling such noisy signals is therefore also a direction for further in-depth development of the algorithm. We can see that the coding effect of this method on periodic signals in this study is far superior to that of the other three methods. When the encoding output of the method proposed in this paper is compressed by LZW, the compression rate is much higher than that of the other three methods. Moreover, the method provided in this article is lossless: when compressing the signal by this method, no signal information is lost. From the decomposition, we notice how the energy of memory access dominates the total energy budget. On the contrary, the cost of the compressor is almost negligible. This tells us that if the main goal is energy optimisation, it may be interesting to study hardware implementations of more complex (and more efficient) compression schemes (such as the LZ-type method we discuss in this article). In fact, the additional complexity and energy consumption of the compression unit will be offset by the savings generated by reducing memory traffic. Figure 6: (a) Original signal (b) Coding output (c) Decoding output (d) Error between the decoding output and the original signal.
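The effect of noise on the prediction residual can be reproduced with a toy predictor that simply uses the sample one period earlier; the paper's adaptive dictionary predictor is more elaborate, and the period and noise level below are illustrative only.

```python
import numpy as np

T = 64                                            # assumed known period, in samples
n = np.arange(8 * T)
signal = np.round(100 * np.sin(2 * np.pi * n / T)).astype(int)

residual = signal.copy()
residual[T:] = signal[T:] - signal[:-T]           # predict from one period earlier
print(np.count_nonzero(residual[T:]))             # 0: a clean periodic signal is
                                                  # predicted perfectly

noisy = signal + np.random.default_rng(1).integers(-1, 2, signal.shape)
res_noisy = noisy.copy()
res_noisy[T:] = noisy[T:] - noisy[:-T]            # residual now fluctuates slightly

rec = res_noisy.copy()                            # decoder: invert the prediction
for k in range(T, len(rec)):
    rec[k] += rec[k - T]
assert np.array_equal(rec, noisy)                 # lossless round trip
```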
Figure 7: (a) Signal containing additive noise (b) Coding output (c) Decoding output (d) Error between the decoding output and the original signal. ### Main compression techniques for electric signals A voltage dip signal that was decomposed by employing the _Discrete Wavelet Transform_ (DWT) with Daubechies four-coefficient filters is shown in Figure 8, together with the detail bands and the approximation band of the coarsest scale. Note that many events in the signal, particularly transients, can be captured by the wavelet transform. Sinusoidal signals are thus inappropriate for an effective representation in the wavelet transform domain due to their small bandwidth and variable frequency. On the other hand, as seen in Figure 8, it has the capacity to catch transient components. This served as the impetus for the study of hybrid coding methods. Figure 9, which depicts a voltage signal and the residue left over after subtracting the fundamental component, serves as an illustration of this. Improved performance can be achieved since the fundamental component (the sinusoidal one) can be fully specified using five parameters (beginning and ending samples, amplitude, frequency, and phase) [15]. Figure 8: Detail and approximation bands for an IEEE project group 1159.2 voltage dip signal with a sampling rate of 15 360 Hz. The original signal is represented by the top plot, and the detail bands are displayed from top to bottom in decreasing frequency (increasing scale) order. The bottom plot corresponds to the coarsest scale's approximation band. Figure 9: Voltage signal that has been tainted by recurring transient events and residue that is generated by removing the sinusoidal component. Figure 10 shows a flow chart of the steps of a process in accordance with one embodiment of the project. ## 4 Conclusion This project addresses the issue of periodic signal compression by proposing a novel adaptive coding method. The encoded output is compressed using the LZW algorithm. To minimise energy consumption in embedded kernel-based systems, we introduce hardware-assisted data compression. Our approach involves a new architecture model specifically designed for data compression, along with various compression methods that are well-suited for this model. By implementing the selected compression scheme, we observe a reduction in memory communication volume and power consumption in the cache memory path of the system running the standard benchmark. The achieved reduction ranges between 4.2% and 35.2%. This strategy aims to facilitate the adoption of smart sensing, monitoring, measuring, diagnostics, and protection in new low-cost device types, thereby enhancing electrical signal compression technology and improving the grid. Additionally, we explore the utilisation of excess energy present in the ROM. We propose harnessing thermoelectricity (TE) to capture wasted thermal energy in the heated ROM, converting it into electrical energy. This recovered thermal energy is then optimised for charging the supercapacitor, enabling efficient energy utilisation. ## Acknowledgements The authors express their gratitude to the University of Pretoria's Faculty of Engineering, Built Environment, and Information Technology for their support in funding this research project through the Doctorate UCDP Grant A1F637.
2303.13345
A new subspace minimization conjugate gradient method for unconstrained minimization
Subspace minimization conjugate gradient (SMCG) methods have become a class of quite efficient iterative methods for unconstrained optimization and have attracted extensive attention recently. Usually, the search directions of SMCG methods are generated by minimizing approximate models with the approximation matrix $ B_k $ of the objective function at the current iterate over the subspace spanned by the current gradient $ g_k $ and the latest search direction. The quantity $ g_k^TB_kg_k $ must be estimated properly in the calculation of the search directions, which is crucial to the theoretical properties and the numerical performance of SMCG methods. It is a great challenge to estimate it properly. The projection technique has been used successfully to generate conjugate gradient directions such as the Dai-Kou conjugate gradient direction. Motivated by the above two observations, in this paper we present a new subspace minimization conjugate gradient method by using a projection technique based on the memoryless quasi-Newton method. More specifically, we project the search direction of the memoryless quasi-Newton method into the subspace spanned by the current gradient and the latest search direction and derive a new search direction, which is proved to be descent. Remarkably, the proposed method without any line search enjoys the finite termination property for two dimensional convex quadratic functions, which is helpful for designing the algorithm. An adaptive scaling factor in the search direction is given based on the above finite termination property. The proposed method does not need to determine the parameter $ \rho_k $ and can be regarded as an extension of the Dai-Kou conjugate gradient method. The global convergence of the proposed method is analyzed. Numerical comparisons indicate the proposed method is very promising.
Zexian Liu, Yan Ni, Hongwei Liu, Wumei Sun
2023-03-23T15:22:39Z
http://arxiv.org/abs/2303.13345v1
# A new subspace minimization conjugate gradient method for unconstrained minimization ###### Abstract Subspace minimization conjugate gradient (SMCG) methods, as the generalization of traditional conjugate gradient methods, have become a class of quite efficient iterative methods for unconstrained optimization and have attracted extensive attention recently. Usually, the search directions of SMCG methods are generated by minimizing approximate models with the approximation matrix \(B_{k}\) of the objective function at the current iterate over the subspace spanned by the current gradient \(g_{k}\) and the latest search direction. The quantity \(g_{k}^{T}B_{k}g_{k}\) must be estimated properly in the calculation of the search directions, which is crucial to the theoretical properties and the numerical performance of SMCG methods. It is a great challenge to estimate it properly. An alternative solution for this problem might be to design a new subspace minimization conjugate gradient method independent of the parameter \(\rho_{k}\approx g_{k}^{T}B_{k}g_{k}\). The projection technique has been used successfully to generate conjugate gradient directions such as the Dai-Kou conjugate gradient direction (SIAM J Optim 23(1), 296-320, 2013). Motivated by the above two observations, in this paper we present a new subspace minimization conjugate gradient method by using a projection technique based on the memoryless quasi-Newton method. More specifically, we project the search direction of the memoryless quasi-Newton method into the subspace spanned by the current gradient and the latest search direction and derive a new search direction, which is proved to be descent. Remarkably, the proposed method without any line search enjoys the finite termination property for two dimensional convex quadratic functions, which is helpful for designing the algorithm. An adaptive scaling factor in the search direction is given based on the above finite termination property. The proposed method does not need to determine the parameter \(\rho_{k}\) and can be regarded as an extension of the Dai-Kou conjugate gradient method. The global convergence of the proposed method for general nonlinear functions is analyzed under the standard assumptions. Numerical comparisons on the 147 test functions from the CUTEst library indicate the proposed method is very promising. Keywords: Conjugate gradient method; Subspace minimization; Memoryless quasi-Newton method; Two dimensional quadratic termination; Global convergence ## 1 Introduction We consider the following unconstrained optimization problem: \[\min_{x\in\mathbb{R}^{n}}f(x),\] where \(f\) is continuously differentiable and its gradient is denoted by \(g\). Due to the low memory requirement, simple form and nice numerical performance, conjugate gradient methods are a class of efficient iterative methods for large scale unconstrained optimization. Conjugate gradient methods are of the following form \[x_{k+1}=x_{k}+\alpha_{k}d_{k}, \tag{1.1}\] where \(\alpha_{k}\) is the stepsize obtained by a line search and \(d_{k}\) is the search direction given by \[d_{k}=\left\{\begin{aligned} &-g_{k},&\text{if}\:k=0,\\ &-g_{k}+\beta_{k}d_{k-1},&\text{if}\:k>0.\end{aligned}\right. \tag{1.2}\] Here \(\beta_{k}\) is often called the conjugate parameter. In the case that \(f\) is a convex quadratic function and the exact line search is performed, all the classical choices of \(\beta_{k}\) coincide.
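As a running illustration, the iteration (1.1)-(1.2) can be sketched in a few lines; the PRP+ variant of \(\beta_{k}\) used below is a safeguarded version of one of the classical formulas recalled next, and the backtracking line search and test function are simplifications standing in for the Wolfe-type searches discussed later.

```python
import numpy as np

def cg(f, grad, x, iters=500, tol=1e-8):
    """Nonlinear CG iteration (1.1)-(1.2) with a PRP+ conjugate parameter."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g
        alpha = 1.0                       # plain backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+, restarts if negative
        d = -g_new + beta * d             # the recursion (1.2)
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
                           20.0 * (x[1] - x[0] ** 2)])
print(cg(f, grad, np.array([-1.0, 1.0])))   # converges to the minimizer (1, 1)
```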
For general nonlinear functions, however, different choices of \(\beta_{k}\) result in different conjugate gradient methods, and their properties can be significantly different. Some well-known formulae for \(\beta_{k}\) are called the Fletcher-Reeves (FR) [1], Hestenes-Stiefel (HS) [2], Polak-Ribiere-Polyak (PRP) [3; 4] and Dai-Yuan (DY) [5] formulae, and are given by \[\beta_{k}^{FR}=\frac{\left\|g_{k}\right\|^{2}}{\left\|g_{k-1}\right\|^{2}},\quad\beta_{k}^{HS}=\frac{g_{k}^{T}y_{k-1}}{d_{k-1}^{T}y_{k-1}},\quad\beta_{k}^{PRP}=\frac{g_{k}^{T}y_{k-1}}{\left\|g_{k-1}\right\|^{2}},\quad\beta_{k}^{DY}=\frac{\left\|g_{k}\right\|^{2}}{d_{k-1}^{T}y_{k-1}},\] where \(\left\|\cdot\right\|\) denotes the Euclidean norm. By deleting the third term of the memoryless quasi-Newton search direction, Hager and Zhang [6] presented a famous efficient conjugate gradient method (CG_DESCENT; we also call it the HZ CG algorithm for short) with \[\beta_{k}^{HZ}=\frac{g_{k+1}^{T}y_{k}}{d_{k}^{T}y_{k}}-2\frac{\left\|y_{k}\right\|^{2}}{d_{k}^{T}y_{k}}\frac{g_{k+1}^{T}d_{k}}{d_{k}^{T}y_{k}}, \tag{1.3}\] and established the global convergence under the standard Wolfe line search. The numerical results in [6; 7] indicated that CG_DESCENT with the approximate Wolfe line search (AWolfe line search): \[\sigma g_{k}^{T}d_{k}\leq g(x_{k}+\alpha_{k}d_{k})^{T}d_{k}\leq(2\delta-1)\,g_{k}^{T}d_{k},\] where \(0<\delta<0.5\) and \(\delta\leq\sigma<1\), is very efficient. In 2013, Dai and Kou [8] projected a multiple of the memoryless BFGS direction of Perry [9] and Shanno [10] into the manifold \(\{-g_{k+1}+sd_{k}:s\in\mathbb{R}\}\) and presented a family of conjugate gradient algorithms (CGOPT; we also call them Dai-Kou CG algorithms for short) with the improved Wolfe line search, and the numerical results in [8] suggested that CGOPT with the following parameter: \[\beta_{k}^{DK}=\frac{g_{k+1}^{T}y_{k}}{d_{k}^{T}y_{k}}-\frac{\left\|y_{k}\right\|^{2}}{d_{k}^{T}y_{k}}\frac{g_{k+1}^{T}d_{k}}{d_{k}^{T}y_{k}} \tag{1.4}\] is the most efficient. CG_DESCENT and CGOPT are both popular and quite efficient CG software packages. So far, conjugate gradient methods have attracted extremely extensive attention; for recent advances we refer to [11]. In conjugate gradient methods, the stepsize \(\alpha_{k}\) is often required to satisfy certain line search conditions. Among them, the strong Wolfe line search is often used in the early convergence analysis, which aims to find a stepsize satisfying the following conditions \[f\left(x_{k}+\alpha_{k}d_{k}\right)\leq f\left(x_{k}\right)+\delta\alpha_{k}g_{k}^{T}d_{k}, \tag{1.5}\] \[\left|g_{k+1}^{T}d_{k}\right|\leq-\sigma g_{k}^{T}d_{k}, \tag{1.6}\] where \(0<\delta<\sigma<1\). The standard Wolfe line search is also preferred due to the relatively easy numerical implementation, which aims to find a stepsize satisfying (1.5) and \[g_{k+1}^{T}d_{k}\geq\sigma g_{k}^{T}d_{k}. \tag{1.7}\] The sufficient descent property of the search direction plays an important role in the convergence analysis, which requires the search direction to satisfy \[g_{k}^{T}d_{k}\leq-c\|g_{k}\|^{2}, \tag{1.8}\] where \(c>0\). The subspace minimization conjugate gradient (SMCG) methods are the generalization of traditional conjugate gradient methods, and have also received much attention recently.
The subspace minimization conjugate gradient methods were first proposed by Yuan and Stoer [12] in 1995, where the search direction is computed by minimizing a quadratic model over the subspace \(V_{k}=Span\left\{g_{k},s_{k-1}\right\}\): \[\min_{d\in V_{k}}\,g_{k}^{T}d+\frac{1}{2}d^{T}B_{k}d, \tag{1.9}\] where \(B_{k}\in\mathbb{R}^{n\times n}\) is a symmetric and positive definite approximation to the Hessian matrix and satisfies the standard secant equation \(B_{k}s_{k-1}=y_{k-1}\). Since any \(d\in V_{k}\) can be expressed as \[d=ug_{k}+vs_{k-1}, \tag{1.10}\] where \(u,v\in\mathbb{R}\), by substituting (1.10) into (1.9) and using the standard secant equation, we can rewrite (1.9) in the following form: \[\min_{u,v\in\mathbb{R}}\,\,\left(\begin{array}{c}\|g_{k}\|^{2}\\ g_{k}^{T}s_{k-1}\end{array}\right)^{T}\,\begin{pmatrix}u\\ v\end{pmatrix}+\frac{1}{2}\begin{pmatrix}u\\ v\end{pmatrix}^{T}\,\begin{pmatrix}\rho_{k}&g_{k}^{T}y_{k-1}\\ g_{k}^{T}y_{k-1}&s_{k-1}^{T}y_{k-1}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}, \tag{1.11}\] where \(\rho_{k}\approx g_{k}^{T}B_{k}g_{k}\), namely, \(\rho_{k}\) is an estimate of \(g_{k}^{T}B_{k}g_{k}\). At first the SMCG methods received relatively little attention. Andrei [13] presented an efficient SMCG method, where the search direction is generated over \(-g_{k}+Span\left\{s_{k-1},y_{k-1}\right\}\); based on [13], Yang et al. [14] developed another SMCG method, in which the search direction is generated over \(-g_{k}+Span\left\{s_{k-1},s_{k-2}\right\}\). A significant work about the SMCG method was given by Dai and Kou [15] in 2016. More specifically, Dai and Kou established the finite termination for two dimensional convex quadratic functions of the SMCG method and presented a Barzilai-Borwein conjugate gradient (BBCG) method with an efficient estimate of the parameter \(\rho_{k}\): \[\rho_{k}^{BBCG3}=\frac{3}{2}\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\|g_{k}\|^{2} \tag{1.12}\] based on the BB method [15]. Motivated by the SMCG method and \(\rho_{k}^{BBCG3}\), Liu and Liu [16] extended the BBCG3 method to general unconstrained optimization and presented an efficient subspace minimization conjugate gradient method (SMCG_BB) with the generalized Wolfe line search. Since then, many SMCG methods have emerged for unconstrained optimization. Based on [16], Li et al. [17] presented a new SMCG method based on a conic model and a quadratic model; Wang et al. [18] proposed a new SMCG method based on a tensor model and a quadratic model; Zhao et al. [19] presented a new SMCG method based on a regularization model and a quadratic model, and the numerical results in [19] indicated these SMCG methods are very efficient. Recently, Sun et al. [20] proposed some accelerated SMCG methods based on [19]. More advances in subspace minimization conjugate gradient methods can be found in [21; 22]. Subspace minimization conjugate gradient methods are a class of efficient iterative methods for unconstrained optimization. On the one hand, the search direction of the SMCG method is often parallel to that of the HS conjugate gradient method [15]. On the other hand, the traditional conjugate gradient method with \(d_{k}=-g_{k}+\beta_{k}d_{k-1}\) is only a special case of the SMCG method with \(d_{k}=u_{k}g_{k}+v_{k}s_{k-1}\).
In other words, SMCG methods not only inherit some important properties of traditional conjugate gradient methods but also have more freedom in scaling the gradient \(g_{k}\) by \(u_{k}\); thanks to the term \(u_{k}\) in the search direction, an SMCG method without exact line search can enjoy additional nice theoretical properties, such as finite termination for two dimensional convex quadratic functions, that traditional conjugate gradient methods lack. In addition, SMCG methods have also demonstrated nice numerical performance [16; 19]. Based on these observations, SMCG methods have great potential and deserve more attention. However, the estimate \(\rho_{k}\) of \(g_{k}^{T}B_{k}g_{k}\) must be determined before calculating the search direction. The parameter \(\rho_{k}\) is crucial to the theoretical properties and the numerical performance of SMCG methods, and we still do not understand how the parameter \(\rho_{k}\) affects the numerical behavior of the SMCG method. It is thus a great challenge to determine the parameter properly. A simple analysis of the choice of \(\rho_{k}\) is given here. In (1.11), the terms \(g_{k}^{T}y_{k-1}\) and \(s_{k-1}^{T}y_{k-1}\) are obtained by using the standard secant equation to eliminate \(B_{k}\), namely, \(g_{k}^{T}y_{k-1}=g_{k}^{T}B_{k}s_{k-1}\) and \(s_{k-1}^{T}y_{k-1}=s_{k-1}^{T}B_{k}s_{k-1}\). For \(g_{k}^{T}B_{k}g_{k}\), we cannot use the standard secant equation to eliminate \(B_{k}\), which implies that \(B_{k}\) must be given or estimated before computing the search direction. Is the matrix \(B_{k}\) in \(g_{k}^{T}B_{k}g_{k}\) required to satisfy the standard secant equation? If yes, some memoryless quasi-Newton updating formulae can be applied to generate \(B_{k}\). It is however observed that the resulting \(\rho_{k}\) does not bring the desired numerical effect [15]. If not, the matrices \(B_{k}\) in \(g_{k}^{T}B_{k}g_{k}\) and in \(g_{k}^{T}y_{k-1}\) are inconsistent, and we do not know what will happen even if the resulting estimate \(\rho_{k}\) is efficient. For example, in the efficient choice \(\rho_{k}^{BBCG3}\), the \(B_{k}\) estimated by \(\frac{3}{2}\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}I\) does not satisfy the standard secant equation, while the \(B_{k}\) in \(g_{k}^{T}y_{k-1}\) satisfies the standard secant equation. And we do not know why \(\rho_{k}^{BBCG3}\) is so efficient in this way. This makes it challenging to determine the parameter \(\rho_{k}\) properly, and as a result there is still no consensus on a good choice of the parameter \(\rho_{k}\). There is thus no doubt that the choice of the parameter \(\rho_{k}\) is an obstacle to the development of SMCG methods. A natural question arises: can we develop an efficient SMCG method without determining the parameter \(\rho_{k}\)? In this paper we do not focus on exploiting new choices for \(\rho_{k}\), since it is difficult to determine it properly, as mentioned above. Instead, we focus on the above question and study a new subspace minimization conjugate gradient method without determining the important parameter \(\rho_{k}\). Motivated by the Dai-Kou conjugate gradient method [8], we project the search direction of the memoryless quasi-Newton method into the subspace spanned by the current gradient and the latest search direction and develop a new SMCG method for unconstrained optimization. The new search direction is proved to be descent.
It is remarkable that the SMCG method without any line search enjoys finite termination for two dimensional convex quadratic functions. With the improved Wolfe line search, the convergence of the proposed method for general nonlinear functions is established under the standard assumptions. Numerical experiments on the 147 test functions from the CUTEst library [23] indicate the proposed method is very promising. The remainder of this paper is organized as follows. We develop a new SMCG method for unconstrained optimization and exploit some important properties of the new search direction in Section 2. In Section 3 we establish the global convergence of the proposed method for general nonlinear functions under the standard assumptions. Some numerical experiments are conducted in Section 4. Conclusions are given in the last section. ## 2 New subspace minimization conjugate gradient method independent of the parameter \(\rho_{k}\) In this section, we first derive the new search directions, analyze their important properties, develop an adaptive scaling factor and present a new subspace minimization conjugate gradient method independent of the parameter \(\rho_{k}\) for unconstrained optimization. ### 2.1 The proposed search directions and their important properties We are interested in the self-scaling memoryless BFGS method by Perry [9] and Shanno [10], where the search direction \(\bar{d}_{k}^{PS}\) is given by \[\bar{d}_{k}^{PS}=-\frac{1}{\tau_{k}}g_{k}+\left[\frac{g_{k}^{T}y_{k-1}}{\tau_{k}s_{k-1}^{T}y_{k-1}}-\left(1+\frac{\left\|y_{k-1}\right\|^{2}}{\tau_{k}s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}\right]s_{k-1}+\frac{1}{\tau_{k}}\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}y_{k-1}.\] Here \(\tau_{k}>0\) is the scaling parameter. The scaled memoryless quasi-Newton method is indeed a three-term conjugate gradient method. Specifically, if the line search is exact, namely, \(g_{k}^{T}s_{k-1}=0\), then the search direction \(\bar{d}_{k}^{PS}\) is the HS conjugate gradient direction with scaling factor \(\frac{1}{\tau_{k}}\). It is not difficult to see that the search direction \(\bar{d}_{k}^{PS}\) only satisfies the following Dai-Liao conjugate condition [24], namely, \[\left(\bar{d}_{k}^{PS}\right)^{T}y_{k-1}=-t_{k}g_{k}^{T}s_{k-1},\ \ \text{where}\ \ t_{k}=1.\] As we know, the adaptive choice for \(t_{k}\) in Dai-Liao conjugate gradient methods [24] is usually more efficient than the prefixed choice. In addition, some famous and efficient conjugate gradient methods such as the HZ conjugate gradient method [6] and the Dai-Kou conjugate gradient method [8] are Dai-Liao conjugate gradient methods with adaptive parameters \(t_{k}\). Therefore, to impose the search direction \(\bar{d}_{k}^{PS}\) to satisfy the Dai-Liao conjugate condition with an adaptive parameter, we multiply \(\bar{d}_{k}^{PS}\) by \(\tau_{k}\) and obtain the following direction: \[d_{k}^{PS}=-g_{k}+\left[\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left(\tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}\right]s_{k-1}+\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}y_{k-1}. \tag{2.1}\] Obviously, the search direction \(d_{k}^{PS}\) satisfies the Dai-Liao conjugate condition \(\left(d_{k}^{PS}\right)^{T}y_{k-1}=-\tau_{k}g_{k}^{T}s_{k-1}\). Note that the scaling factor \(\tau_{k}\) plays the role of the adaptive parameter in the Dai-Liao conjugate gradient method.
The self-scaling memoryless BFGS method by Perry [9] and Shanno [10] has been applied successfully to generate the famous and efficient Dai-Kou conjugate gradient method [8]. The search direction in the subspace minimization conjugate gradient method is usually generated in the subspace \(Span\left\{g_{k},s_{k-1}\right\}\), which means that \(d_{k}=u_{k}g_{k}+v_{k}s_{k-1}\), where \(u_{k}\) and \(v_{k}\) are undetermined parameters. Different from (1.11), which requires an estimate of the parameter \(\rho_{k}\), we will give, based on the search direction \(d_{k}^{PS}\), a new way to derive \(u_{k}\) and \(v_{k}\) that does not require estimating the parameter \(\rho_{k}\). We consider the case that \(g_{k}\) is not parallel to \(s_{k-1}\), namely, \[\overline{\omega}_{k}=\frac{\left(g_{k}^{T}s_{k-1}\right)^{2}}{\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}}\leq\xi_{1}, \tag{2.2}\] where \(0<\xi_{1}<1\) is close to \(1\). Otherwise, the search direction is naturally set to be \(d_{k}=-g_{k}\). By projecting the search direction \(d_{k}^{PS}\) into the subspace \(Span\left\{g_{k},s_{k-1}\right\}\), we get the following subproblem: \[\min_{d_{k}=u_{k}g_{k}+v_{k}s_{k-1}}\,\left\|d_{k}^{PS}-d_{k}\right\|_{2}^{2}. \tag{2.3}\] Solving the subproblem (2.3) yields the search direction \[d_{k}=u_{k}g_{k}+v_{k}s_{k-1}, \tag{2.4}\] where \[u_{k}=-1+\frac{g_{k}^{T}y_{k-1}g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}\left\|g_{k}\right\|^{2}}-\frac{\left\|g_{k}\right\|^{2}\left(g_{k}^{T}s_{k-1}\right)^{2}-g_{k}^{T}y_{k-1}\left(g_{k}^{T}s_{k-1}\right)^{3}/s_{k-1}^{T}y_{k-1}}{\left\|g_{k}\right\|^{2}\left[\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}-\left(g_{k}^{T}s_{k-1}\right)^{2}\right]}, \tag{2.5}\] \[v_{k}=\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}+\frac{\left\|g_{k}\right\|^{2}g_{k}^{T}s_{k-1}-g_{k}^{T}y_{k-1}\left(g_{k}^{T}s_{k-1}\right)^{2}/s_{k-1}^{T}y_{k-1}}{\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}-\left(g_{k}^{T}s_{k-1}\right)^{2}}-\left(\tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}. \tag{2.6}\] It is not difficult to see that \(u_{k}\) and \(v_{k}\) can be rewritten in the following forms: \[u_{k}=\frac{1}{1-\overline{\omega}_{k}}\left(-1+\frac{g_{k}^{T}y_{k-1}g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}\|g_{k}\|^{2}}\right),\;v_{k}=\frac{1-2\bar{\omega}_{k}}{1-\bar{\omega}_{k}}\;\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left(\tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}-\frac{s_{k-1}^{T}y_{k-1}}{(1-\overline{\omega}_{k})\|s_{k-1}\|^{2}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}, \tag{2.7}\] which are similar to the forms of conjugate gradient methods. The new search direction (2.4) with the coefficients (2.7) can be regarded as an extension of the Dai-Kou conjugate gradient direction. It is noted that the parameter \(\tau_{k}\) in (2.6) is the scaling factor in the memoryless quasi-Newton method, which is crucial to the numerical performance of the corresponding methods. There are various choices for \(\tau_{k}\), and in the analysis of the descent property and global convergence, the following choices \[\tau_{k}^{R}=\frac{s_{k-1}^{T}y_{k-1}}{\left\|s_{k-1}\right\|^{2}},\;\;\;\;\;\tau_{k}^{H}=\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}},\;\;\;\;\;\tau_{k}^{(1)}=1 \tag{2.8}\] are considered. We also give an adaptive choice of \(\tau_{k}\) in Section 3.2 based on Theorem 2.1.
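As a sanity check on the formulas, the direction (2.4) in the compact form (2.7) can be coded in a few lines. The sketch below is only a numerical illustration on random data with \(s_{k-1}^{T}y_{k-1}>0\), using \(\tau_{k}^{R}\) from (2.8), and anticipates the descent property proved in Lemma 2.1 below.

```python
import numpy as np

def smcg_direction(g, s, y, tau, xi1=0.99):
    """Direction (2.4): u_k, v_k from (2.7); falls back to -g_k when (2.2) fails."""
    gs, gy, sy = g @ s, g @ y, s @ y
    ng2, ns2 = g @ g, s @ s
    omega = gs * gs / (ng2 * ns2)                 # \bar\omega_k in (2.2)
    if omega > xi1:
        return -g
    u = (-1.0 + gy * gs / (sy * ng2)) / (1.0 - omega)
    v = (1.0 - 2.0 * omega) / (1.0 - omega) * gy / sy \
        - (tau + (y @ y) / sy - sy / ((1.0 - omega) * ns2)) * gs / sy
    return u * g + v * s

rng = np.random.default_rng(0)
for _ in range(5):
    g, s = rng.standard_normal(10), rng.standard_normal(10)
    y = s + 0.5 * rng.standard_normal(10)         # makes s^T y > 0 almost surely
    if s @ y > 0:
        d = smcg_direction(g, s, y, tau=(s @ y) / (s @ s))   # tau_k^R in (2.8)
        print(g @ d < 0)                          # descent, as Lemma 2.1 asserts
```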
**Remark 1**: _If the line search is exact, namely, \(g_{k}^{T}s_{k-1}=0\), then it follows that \(u_{k}=-1\) and \(v_{k}=\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}\) or \(v_{k}=\frac{g_{k}^{T}y_{k-1}}{\alpha_{k-1}\left\|g_{k-1}\right\|^{2}}\), which means that the search direction (2.4) reduces to the HS or the PRP conjugate gradient direction, respectively._ **Remark 2**: _The search direction (2.4) satisfies the Dai-Liao conjugate condition_ \[d_{k}^{T}y_{k-1}=\left[\frac{\left(g_{k}^{T}y_{k-1}\right)^{2}\left\|s_{k-1} \right\|^{2}/s_{k-1}^{T}y_{k-1}-2g_{k}^{T}y_{k-1}g_{k}^{T}s_{k-1}+\left\|g_{k} \right\|^{2}s_{k-1}^{T}y_{k-1}}{\Delta_{k}}-\left(\tau_{k}+\frac{\left\|y_{k-1} \right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\right]g_{k}^{T}s_{k-1}\overset{\Delta} {=}-t_{k}g_{k}^{T}s_{k-1},\] _where \(\Delta_{k}=\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}-\left(g_{k}^{T}s_{k-1}\right)^{2}\)._ We will establish an interesting property--the finite termination of the SMCG method with (1.1) and (2.4)--in the following theorem. **Theorem 2.1**: _Consider the SMCG method (1.1) and (2.4) with \(\tau_{k}=1\) for the convex quadratic function \(q\left(x\right)=\frac{1}{2}x^{T}Ax+b^{T}x,x\in\mathbb{R}^{2}\), where \(A\in\mathbb{R}^{2\times 2}\) is a symmetric and positive definite matrix and \(b\in\mathbb{R}^{2}\). Assume that \(d_{0}=-\alpha_{0}g_{0}\), where \(\alpha_{0}\) is the exact stepsize. Then, we must have that \(g_{j}=0\) for some \(j\leq 3\)._ Proof Assume that \(g_{j}\neq 0\) for \(j=0,1,2\). Since the first step is a Cauchy steepest descent step, we know \[g_{1}^{T}s_{0}=0. \tag{2.9}\] By (2.4), \(s_{1}=d_{1}\) and Remark 2, we have that \[s_{1}^{T}y_{0}=d_{1}^{T}y_{0}=-t_{1}g_{1}^{T}s_{0}=0,\] where \(t_{1}\) is given by Remark 2. Thus, \[y_{1}^{T}s_{0}=s_{1}^{T}As_{0}=0. \tag{2.10}\] Since \(n=2\), \(s_{0}\neq 0\) and \(y_{1}=g_{2}-g_{1}\), we know from (2.9) and (2.10) that \(g_{1}\), \(g_{2}\) and \(y_{1}\) are collinear, and there must exist some real number \(a\neq 0\) such that \[y_{1}=ag_{2}. \tag{2.11}\] By (2.5) and (2.6), we have \[u_{2} =-1+\frac{g_{2}^{T}y_{1}g_{2}^{T}s_{1}}{\|g_{2}\|^{2}s_{1}^{T}y_{ 1}}-\frac{\|g_{2}\|^{2}g_{2}^{T}s_{1}-g_{2}^{T}y_{1}\left(g_{2}^{T}s_{1}\right) ^{2}/s_{1}^{T}y_{1}}{\|g_{2}\|^{2}\|s_{1}\|^{2}-\left(g_{2}^{T}s_{1}\right)^{2 }}\,\frac{g_{2}^{T}s_{1}}{\|g_{2}\|^{2}}\] \[=-1+\frac{a\|g_{2}\|^{2}g_{2}^{T}s_{1}}{a\|g_{2}\|^{2}g_{2}^{T}s_{ 1}}-\frac{\|g_{2}\|^{2}g_{2}^{T}s_{1}-a\|g_{2}\|^{2}\left(g_{2}^{T}s_{1}\right) ^{2}/(ag_{2}^{T}s_{1})}{\|g_{2}\|^{2}\|s_{1}\|^{2}-\left(g_{2}^{T}s_{1}\right) ^{2}}\,\frac{g_{2}^{T}s_{1}}{\|g_{2}\|^{2}}\] \[=-1+1+0\] \[=0,\] \[v_{2} =\frac{g_{2}^{T}y_{1}}{s_{1}^{T}y_{1}}-\left(\tau_{k}+\frac{\|y _{1}\|^{2}}{s_{1}^{T}y_{1}}\right)\frac{g_{2}^{T}s_{1}}{s_{1}^{T}y_{1}}+\frac{ \|g_{2}\|^{2}g_{2}^{T}s_{1}-g_{2}^{T}y_{1}\left(g_{2}^{T}s_{1}\right)^{2}/s_{1 }^{T}y_{1}}{\|g_{2}\|^{2}\|s_{1}\|^{2}-\left(g_{2}^{T}s_{1}\right)^{2}}\] \[=\frac{a\|g_{2}\|^{2}}{ag_{2}^{T}s_{1}}-\left(\tau_{k}+\frac{a^{2 }\|g_{2}\|^{2}}{ag_{2}^{T}s_{1}}\right)\frac{g_{2}^{T}s_{1}}{ag_{2}^{T}s_{1}}+0\] \[=-\frac{\tau_{k}}{a},\] which implies that \(s_{2}=d_{2}=-\frac{\tau_{k}}{a}s_{1}\). Therefore, \[g_{3}=g_{2}+y_{2}=g_{2}+As_{2}=g_{2}-\frac{\tau_{k}}{a}As_{1}=g_{2}-\frac{\tau _{k}}{a}y_{1}=(1-\tau_{k})g_{2}. \tag{2.12}\] Since \(\tau_{k}=\tau_{k}^{(1)}=1\), we have \(g_{3}=0\), which completes the proof. **Remark 3**: _It follows that the SMCG method (1.1) and (2.4) with \(\tau_{k}=1\), without any line search except for the first Cauchy steepest descent iteration, enjoys finite termination for two-dimensional convex quadratic functions.
It seems that it is not possible to obtain the same conclusion for traditional conjugate gradient methods without the exact line search._ Together with (2.2), let us consider the following search direction: \[d_{k}=\left\{\begin{aligned} &-g_{k},&&\text{if }k=0\text{ or }\overline{\omega}_{k}>\xi_{1},\\ & u_{k}g_{k}+v_{k}s_{k-1},&&\text{otherwise}, \end{aligned}\right. \tag{2.13}\] where \(\overline{\omega}_{k}\), \(u_{k}\) and \(v_{k}\) are given by (2.2), (2.5) and (2.6), respectively. We first make the following assumption: **Assumption 2.1**: _(i) The objective function \(f\) is continuously differentiable on \(\mathbb{R}^{n}\); (ii) The level set \(\mathcal{L}=\left\{x|f\left(x\right)\leq f\left(x_{0}\right)+\sum\limits_{k\geq 0 }\bar{\eta}_{k}\right\}\) is bounded, where \(\sum\limits_{k\geq 0}\bar{\eta}_{k}<+\infty\); (iii) The gradient \(g\) is Lipschitz continuous on \(\mathbb{R}^{n}\), namely, there exists a constant \(L>0\) such that_ \[\parallel g(x)-g(y)\parallel\leq L\parallel x-y\parallel,\ \ \forall x,y\in \mathbb{R}^{n}. \tag{2.14}\] Denote \[p_{k}=\frac{\left\|y_{k-1}\right\|^{2}\left\|s_{k-1}\right\|^{2}}{\left(s_{k- 1}^{T}y_{k-1}\right)^{2}},\ \ \gamma_{k}=\tau_{k}\frac{\left\|s_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}. \tag{2.15}\] The following lemma discusses the descent property of the search direction (2.13). **Lemma 2.1**: _Assume that \(f\) satisfies Assumption 2.1 (iii), and consider the subspace minimization conjugate gradient method (1.1) and (2.13) with any one of the choices of \(\tau_{k}\) in (2.8). If \(s_{k-1}^{T}y_{k-1}>0\), then_ \[g_{k}^{T}d_{k}<0. \tag{2.16}\] _Furthermore, if \(f\) is uniformly convex, namely, there exists \(\mu>0\) such that_ \[\left(g\left(x\right)-g\left(y\right)\right)^{T}\left(x-y\right)\geq\mu\|x-y \|^{2},\ \ \forall x,y\in\mathbb{R}^{n}, \tag{2.17}\] _then there must exist \(c>0\) such that_ \[g_{k}^{T}d_{k}<-c\left\|g_{k}\right\|^{2}. \tag{2.18}\] _Proof_ We prove the conclusion by dividing it into the following two cases. (i) \(d_{k}=-g_{k}\). We know easily that (2.16) and (2.18) both hold. (ii) \(d_{k}=u_{k}g_{k}+v_{k}s_{k-1}\), where \(u_{k}\) and \(v_{k}\) are given by (2.5) and (2.6), respectively. It is not difficult to get that \[g_{k}^{T}d_{k}=-\|g_{k}\|^{2}+\frac{2g_{k}^{T}s_{k-1}g_{k}^{T}y_{k-1}}{s_{k-1} ^{T}y_{k-1}}-\left(\tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k -1}}\right)\frac{\left(g_{k}^{T}s_{k-1}\right)^{2}}{s_{k-1}^{T}y_{k-1}}, \tag{2.19}\] which implies that, as far as \(g_{k}^{T}d_{k}\) is concerned, the above search direction can be treated as \[d_{k} =-g_{k}+\left[\frac{2g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left( \tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{ k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}\right]s_{k-1}\] \[=-\left(I-\frac{2s_{k-1}y_{k-1}^{T}-\left(\tau_{k}+\frac{\left\| y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)s_{k-1}s_{k-1}^{T}}{s_{k-1}^{T}y_{k-1}} \right)g_{k} \tag{2.20}\] \[\stackrel{{\Delta}}{{=}}-H_{k}g_{k}.\] For the symmetric part of \(H_{k}\), \[\bar{H}_{k}=\frac{H_{k}+H_{k}^{T}}{2}=I-\frac{s_{k-1}y_{k-1}^{T}+y_{k-1}s_{k-1 }^{T}}{s_{k-1}^{T}y_{k-1}}+\bar{t}_{k}\frac{s_{k-1}s_{k-1}^{T}}{s_{k-1}^{T}y_{k -1}}, \tag{2.21}\] where \(\bar{t}_{k}=\tau_{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\), it is not difficult to verify that \[g_{k}^{T}d_{k}=-g_{k}^{T}H_{k}g_{k}=-g_{k}^{T}\left(\frac{H_{k}+H_{k}^{T}}{2}+ \frac{H_{k}-H_{k}^{T}}{2}\right)g_{k}=-g_{k}^{T}\bar{H}_{k}g_{k}+0=-g_{k}^{T} \bar{H}_{k}g_{k}.
\tag{2.22}\] Now, we only need to analyze the smallest eigenvalue of \(\bar{H}_{k}\). Rewriting \(\bar{H}_{k}\) as \[\bar{H}_{k}=I-\frac{\left(y_{k-1}-\bar{t}_{k}s_{k-1}\right)s_{k-1}^{T}}{s_{k-1}^ {T}y_{k-1}}-\frac{s_{k-1}y_{k-1}^{T}}{s_{k-1}^{T}y_{k-1}}, \tag{2.23}\] we know that \[\det\left(\bar{H}_{k}\right)=-\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{\left(s_{k- 1}^{T}y_{k-1}\right)^{2}}+\frac{\bar{t}_{k}\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1} }=\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}, \tag{2.24}\] which implies that \[\lambda_{\min}\lambda_{\max}=\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}. \tag{2.25}\] It follows from \(trace\left(\bar{H}_{k}\right)=n-2+\bar{t}_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}=(n-2)+\lambda_{\min}+\lambda_{\max}\) that \[\lambda_{\min}+\lambda_{\max}=\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{\left(s_{k -1}^{T}y_{k-1}\right)^{2}}+\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}. \tag{2.26}\] Combining \(s_{k-1}^{T}y_{k-1}>0\), (2.25) and (2.26) yields \[\lambda_{\min} =\frac{\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{\left(s_{k-1}^{T}y_{ k-1}\right)^{2}}+\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}-\sqrt{ \left(\frac{\|y_{k-1}\|^{2}\|s_{k-1}\|^{2}}{\left(s_{k-1}^{T}y_{k-1}\right)^{2 }}+\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)^{2}-4\tau_{k} \frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}}}{2}\] \[=\frac{p_{k}+\gamma_{k}-\sqrt{\left(p_{k}+\gamma_{k}\right)^{2}-4 \gamma_{k}}}{2} \tag{2.27}\] \[>0,\] where \(p_{k}\) and \(\gamma_{k}\) are given by (2.15). As a result, for any one of the choices of \(\tau_{k}\) in (2.8), we have \(-g_{k}^{T}d_{k}=g_{k}^{T}\bar{H}_{k}g_{k}\geq\lambda_{\min}\|g_{k}\|^{2}>0\), which implies (2.16). We next analyze the sufficient descent property of the search direction with the different choices of \(\tau_{k}\) in (2.8) when \(f\) is uniformly convex. (a) \(\tau_{k}=\tau_{k}^{H}=\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\). We have that \(\gamma_{k}=\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}=p_{k}\) and \(\lambda_{\min}=p_{k}-\sqrt{p_{k}^{2}-p_{k}}\). Since \[\frac{d\lambda_{\min}}{dp_{k}}=1-\frac{2p_{k}-1}{2\sqrt{p_{k}^{2}-p_{k}}}<0,\ \ \forall p_{k}>1,\] \(\lambda_{\min}\) is monotonically decreasing in \([1,+\infty)\) and thus \[\lambda_{\min}>1/2.\] Note that when \(\tau_{k}=\tau_{k}^{H}=\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\), the sufficient descent property of the search direction is proved without the uniform convexity condition (2.17). (b) \(\tau_{k}=\tau_{k}^{B}=\frac{s_{k-1}^{T}y_{k-1}}{\|s_{k-1}\|^{2}}\). We have that \(\gamma_{k}=\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}=1\) and \(\lambda_{\min}=\frac{p_{k}+1-\sqrt{\left(p_{k}+1\right)^{2}-4}}{2}\). Since \[\frac{d\lambda_{\min}}{dp_{k}}=\frac{1}{2}-\frac{p_{k}+1}{2\sqrt{\left(p_{k}+1 \right)^{2}-4}}<0,\ \ \forall p_{k}>1, \tag{2.28}\] \(\lambda_{\min}\) is monotonically decreasing in \([1,+\infty)\). By Assumption 2.1 (iii) and (2.17), we know that \[p_{k}=\left(\frac{\|s_{k-1}\|\left\|y_{k-1}\right\|}{s_{k-1}^{T}y_{k-1}}\right) ^{2}\leq\left(\frac{L\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)^{2}\leq\frac{L ^{2}}{\mu^{2}}. \tag{2.29}\] Therefore, \[\lambda_{\min}\geq\frac{L^{2}/\mu^{2}+1-\sqrt{\left(L^{2}/\mu^{2}+1\right)^{2}-4 }}{2}.\] (c) \(\tau_{k}=\tau_{k}^{(1)}=1\). We have \(\gamma_{k}=\tau_{k}\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}=\frac{\|s_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\) and \(\lambda_{\min}=\frac{p_{k}+\gamma_{k}-\sqrt{\left(p_{k}+\gamma_{k}\right)^{2} -4\gamma_{k}}}{2}\).
Since \[\frac{\partial\lambda_{\min}}{\partial p_{k}}=\frac{1}{2}-\frac{p_{k}+\gamma_{ k}}{2\sqrt{\left(p_{k}+\gamma_{k}\right)^{2}-4\gamma_{k}}}<0,\] \(\lambda_{\min}\) is monotonically decreasing with respect to \(p_{k}\). It follows from (2.29) that \(p_{k}\leq L^{2}\gamma_{k}^{2}\). Let \(\bar{L}\geq L\) be any value such that \(\left(\gamma_{k}+\bar{L}^{2}\gamma_{k}^{2}\right)^{2}-4\gamma_{k}\geq 0\); then \(p_{k}\leq\bar{L}^{2}\gamma_{k}^{2}\). As a result, \[\lambda_{\min}\geq\frac{\gamma_{k}+\bar{L}^{2}\gamma_{k}^{2}-\sqrt{\left( \gamma_{k}+\bar{L}^{2}\gamma_{k}^{2}\right)^{2}-4\gamma_{k}}}{2}\stackrel{{ \Delta}}{{=}}\bar{\phi}\left(\gamma_{k}\right). \tag{2.30}\] It follows from (2.17) that \(\gamma_{k}\leq\frac{1}{\mu}\). It is not difficult to verify that \(\bar{\phi}^{\prime}\left(\gamma_{k}\right)<0\), which implies that \(\bar{\phi}\left(\gamma_{k}\right)\) is monotonically decreasing in \(\left(0,\frac{1}{\mu}\right]\) and \[\lambda_{\min}\geq\bar{\phi}\left(\frac{1}{\mu}\right)=\frac{1/\mu+\bar{L}^{2 }/\mu^{2}-\sqrt{\left(1/\mu+\bar{L}^{2}/\mu^{2}\right)^{2}-4/\mu}}{2}.\] In conclusion, for any one of the choices of \(\tau_{k}\) in (2.8), there must exist \(c>0\) such that \(\lambda_{\min}\geq c\), which implies that \[g_{k}^{T}d_{k}=-g_{k}^{T}\bar{H}_{k}g_{k}\leq-\lambda_{\min}\|g_{k}\|^{2}\leq -c\|g_{k}\|^{2}.\] This completes the proof. Powell [25] constructed a counterexample showing that the PRP method with exact line search may not converge for general nonlinear functions. It follows from Remark 1 that Powell's example can also be used to show that the method (1.1) and (2.13) with any one of the choices of \(\tau_{k}\) in (2.8) may not converge for general nonlinear functions. Therefore, motivated by the truncation form in [8], we similarly truncate \(v_{k}\) in (2.6) as \[\bar{v}_{k}=\max\left\{v_{k},\eta_{k}\right\}, \tag{2.31}\] where \[\eta_{k}=-l_{k}\frac{\left|g_{k}^{T}s_{k-1}\right|}{\left\|s_{k-1}\right\|^{2} },\ \ l_{k}=\left\{\begin{array}{ll}\xi_{2},&\mbox{if}\ g_{k}^{T}s_{k-1}\leq 0, \\ \min\left\{\max\left\{\bar{\xi}_{2},-1+\left(1+u_{k}\right)/\overline{\omega}_{k}\right\},\hat{\xi}_{2}\right\},&\mbox{otherwise}.\end{array}\right. \tag{2.32}\] Here \(0<\xi_{2}<1\), \(0<\bar{\xi}_{2}<1\) and \(\hat{\xi}_{2}>1\). When conjugate gradient methods with the exact line search are applied to quadratic minimization problems, the corresponding gradients are mutually orthogonal, namely, \(g_{k}^{T}g_{j}=0,\ 0\leq j\leq k-1\). For general nonlinear functions, one also hopes that \(\left|g_{k}^{T}g_{k-1}\right|\) is not far from \(0\). When \(\left|g_{k}^{T}g_{k-1}\right|>\xi\left\|g_{k}\right\|^{2}\), where \(0<\xi<1\), Powell [26] suggested that the search direction should be restarted with \(d_{k}=-g_{k}\). Powell's restart strategy is quite efficient and has been used widely in the numerical implementation of conjugate gradient methods. As a result, if the condition \[-\eta_{1}\|g_{k}\|^{2}\leq g_{k}^{T}g_{k-1}\leq\eta_{2}\|g_{k}\|^{2},\ \ \ 0<\eta_{2}<1,\ \eta_{1}>\eta_{2} \tag{2.33}\] does not hold, our method is also restarted with \(-g_{k}\). It follows from (2.33) that \[0<(1-\eta_{2})\leq\frac{g_{k}^{T}y_{k-1}}{\left\|g_{k}\right\|^{2}}\leq\left(1+ \eta_{1}\right). \tag{2.34}\] Therefore, the search direction is summarized below: \[d_{k}=\left\{\begin{aligned} &-g_{k},&\text{if }k=0\text{ or }\overline{\omega}_{k}>\xi_{1}\text{ or (2.33) fails,}\\ & u_{k}g_{k}+\bar{v}_{k}s_{k-1},&\text{ otherwise,}\end{aligned}\right.
\tag{2.35}\] where \(\bar{v}_{k}\) and \(\overline{\omega}_{k}\) are given by (2.31) and (2.2), respectively. Lemma 2.2: _Assume that \(f\) satisfies Assumption 2.1 (iii), and consider the subspace minimization conjugate gradient method (1.1) and (2.35) with any one of the choices of \(\tau_{k}\) in (2.8). If \(s_{k-1}^{T}y_{k-1}>0\), then there exists \(c>0\) such that_ \[g_{k}^{T}d_{k}\leq-c\|g_{k}\|^{2}. \tag{2.36}\] Proof: We prove it in the following cases. (i) \(d_{k}=-g_{k}\). We have \(g_{k}^{T}d_{k}=-\left\|g_{k}\right\|^{2}\). (ii) \(d_{k}=u_{k}g_{k}+\eta_{k}s_{k-1}\). If \(g_{k}^{T}s_{k-1}\leq 0\), then by (2.7) and (2.32) we have that \[g_{k}^{T}d_{k} =\left(-\|g_{k}\|^{2}+\frac{g_{k}^{T}y_{k-1}g_{k}^{T}s_{k-1}}{s_{ k-1}^{T}y_{k-1}}\right)/\left(1-\overline{\omega}_{k}\right)+l_{k}\overline{ \omega}_{k}\|g_{k}\|^{2}\] \[=-\left(\frac{1}{1-\overline{\omega}_{k}}-l_{k}\overline{\omega }_{k}-\frac{g_{k}^{T}y_{k-1}g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}\left\|g_{k} \right\|^{2}}\right)\left\|g_{k}\right\|^{2}.\] It follows from (2.2), \(g_{k}^{T}s_{k-1}\leq 0\) and (2.34) that \[\frac{1}{1-\overline{\omega}_{k}}-l_{k}\overline{\omega}_{k}-\frac{g_{k}^{T}y _{k-1}g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}\left\|g_{k}\right\|^{2}}\geq\frac{ 1}{1-\overline{\omega}_{k}}-l_{k}\overline{\omega}_{k}\geq 1-\xi_{1}.\] Therefore, when \(g_{k}^{T}s_{k-1}\leq 0\), we obtain that \[g_{k}^{T}d_{k}\leq-(1-\xi_{1})\left\|g_{k}\right\|^{2}.\] If \(g_{k}^{T}s_{k-1}>0\), then \[g_{k}^{T}d_{k} =u_{k}\|g_{k}\|^{2}-l_{k}\overline{\omega}_{k}\|g_{k}\|^{2}\] \[=-\left(-u_{k}+l_{k}\overline{\omega}_{k}\right)\|g_{k}\|^{2}.\] According to (2.32), we easily have that \(-u_{k}+l_{k}\overline{\omega}_{k}\geq 1-\overline{\omega}_{k}\geq 1-\xi_{1}\). Therefore, when \(g_{k}^{T}s_{k-1}>0\), we have that \[g_{k}^{T}d_{k}\leq-\left(1-\xi_{1}\right)\left\|g_{k}\right\|^{2}.\] (iii) \(d_{k}=u_{k}g_{k}+v_{k}s_{k-1}\). According to (2.7) and (2.31), we obtain that \[v_{k} =\frac{1-2\overline{\omega}_{k}}{1-\overline{\omega}_{k}}\frac{g_ {k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left(\tau_{k}+\frac{\left\|y_{k-1}\right\| ^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}+ \frac{1}{1-\overline{\omega}_{k}}\frac{g_{k}^{T}s_{k-1}}{\left\|s_{k-1}\right\| ^{2}} \tag{2.37}\] \[\geq-l_{k}\frac{\left|g_{k}^{T}s_{k-1}\right|}{\left\|s_{k-1} \right\|^{2}}.\] If \(g_{k}^{T}s_{k-1}\leq 0\), then from (2.19) and (2.34) we know that \(g_{k}^{T}d_{k}\leq-\left\|g_{k}\right\|^{2}\). So we only need to consider the case of \(g_{k}^{T}s_{k-1}>0\). Multiplying both sides of (2.37) by \(\frac{g_{k}^{T}s_{k-1}}{\left\|g_{k}\right\|^{2}}\) yields \[\frac{1-2\overline{\omega}_{k}}{1-\overline{\omega}_{k}}\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}\frac{g_{k}^{T}s_{k-1}}{\left\|g_{k}\right\|^{2}}-\left(\tau_{k}\frac{\left\|s_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}+\frac{\left\|y_{k-1}\right\|^{2}\left\|s_{k-1}\right\|^{2}}{\left(s_{k-1}^{T}y_{k-1}\right)^{2}}\right)\frac{\left(g_{k}^{T}s_{k-1}\right)^{2}}{\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}}+\frac{\overline{\omega}_{k}}{1-\overline{\omega}_{k}}\geq-l_{k}\frac{\left(g_{k}^{T}s_{k-1}\right)^{2}}{\left\|g_{k}\right\|^{2}\left\|s_{k-1}\right\|^{2}}.\] According to (2.15), we have that \[\frac{\frac{1}{\overline{\omega}_{k}}-2}{\frac{1}{\overline{\omega}_{k}}-1}\,\frac{g_{k}^{T}y_{k-1}}{\left\|g_{k}\right\|^{2}}\,\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}\geq\gamma_{k}+p_{k}-\frac{1}{1-\overline{\omega}_{k}}-l_{k}. \tag{2.38}\] It follows from (2.16) in Lemma 2.1 and \(g_{k}^{T}s_{k-1}>0\) that \(0<\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}<1\).
It follows from (2.2) and (2.38) that \[p_{k}\leq\gamma_{k}+p_{k}\leq 1+\eta_{1}+l_{k}+\frac{1}{1-\overline{\omega}_{ k}}\leq 1+\eta_{1}+\hat{\xi}_{2}+\frac{1}{1-\xi_{1}}\stackrel{{\Delta}}{{=}} \xi_{0}. \tag{2.39}\] We next derive the conclusion for any one of the choices of \(\tau_{k}\) in (2.8) based on (2.27) as follows. (a) \(\tau_{k}=\tau_{k}^{H}=\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\). We have that \(g_{k}^{T}d_{k}\leq-0.5\left\|g_{k}\right\|^{2}\) by Lemma 2.1. (b) \(\tau_{k}=\tau_{k}^{B}=\frac{s_{k-1}^{T}y_{k-1}}{\|s_{k-1}\|^{2}}\). According to Lemma 2.1, we know that \(\lambda_{\min}=\frac{1+p_{k}-\sqrt{\left(p_{k}-1\right)\left(p_{k}+3\right)} }{2}\) and \(\lambda_{\min}\) is monotonically decreasing in \([1,+\infty)\). Combining with (2.39), we obtain \[\lambda_{\min}\geq\frac{1+\xi_{0}-\sqrt{\left(\xi_{0}-1\right)\left(\xi_{0}+ 3\right)}}{2}>0.\] (c) \(\tau_{k}=\tau_{k}^{(1)}=1\). According to Lemma 2.1, we know that \[\lambda_{\min}=\frac{\gamma_{k}+p_{k}-\sqrt{\left(\gamma_{k}+p_{k}\right)^{2} -4\gamma_{k}}}{2}.\] Since \(\frac{\partial\lambda_{\min}}{\partial p_{k}}=\frac{1}{2}-\frac{\gamma_{k}+p_{k}}{2\sqrt {\left(\gamma_{k}+p_{k}\right)^{2}-4\gamma_{k}}}<0\), \(\lambda_{\min}\) is monotonically decreasing with respect to \(p_{k}\) in \([1,+\infty)\). Similarly to Lemma 2.1, we can obtain \[\lambda_{\min}\geq\frac{\gamma_{k}+\bar{L}^{2}\gamma_{k}^{2}-\sqrt{\left( \gamma_{k}+\bar{L}^{2}\gamma_{k}^{2}\right)^{2}-4\gamma_{k}}}{2}=\bar{\phi} \left(\gamma_{k}\right),\] where \(\bar{L}\) and \(\bar{\phi}\) are the same as those in (2.30). It is not difficult to verify that \(\bar{\phi}^{\prime}\left(\gamma_{k}\right)<0\), which, together with \(\gamma_{k}\leq\xi_{0}\) implied by (2.39), yields that \(\bar{\phi}\left(\gamma_{k}\right)\) is monotonically decreasing and \[\lambda_{\min}\geq\bar{\phi}\left(\xi_{0}\right)=\frac{\xi_{0}+\bar{L}^{2} \xi_{0}^{2}-\sqrt{\left(\xi_{0}+\bar{L}^{2}\xi_{0}^{2}\right)^{2}-4\xi_{0}}}{2}>0.\] In conclusion, for any one of the choices of \(\tau_{k}\) in (2.8), there must exist \(c>0\) such that \(\lambda_{\min}\geq c\), which together with (2.22) implies that \[g_{k}^{T}d_{k}=-g_{k}^{T}\bar{H}_{k}g_{k}\leq-\lambda_{\min}\|g_{k}\|^{2}\leq-c\|g_{k}\|^{2}.\] This completes the proof. ### Adaptive choice of \(\tau_{k}\) The choice of \(\tau_{k}\) is also crucial to the search direction (2.35); our choice is motivated by the following observation. From Theorem 2.1, we know that the SMCG method (1.1) and (2.4) with \(\tau_{k}=1\) and without any line search except the first Cauchy steepest descent iteration enjoys the finite termination property when the objective function \(f\) is a two-dimensional strictly convex quadratic function. Therefore, the choice \(\tau_{k}=1\) may be preferred in some cases. According to [27; 28], \[\mu_{k}=\left|\frac{2\left(f_{k-1}-f_{k}+g_{k}^{T}s_{k-1}\right)}{s_{k-1}^{T}y _{k-1}}-1\right| \tag{2.40}\] is a quantity measuring how close \(f\) is to a quadratic function on the line segment between \(x_{k-1}\) and \(x_{k}\). If the following condition [29; 28] holds, namely, \[\mu_{k}\leq\xi_{3}\quad\text{or}\quad\max\left\{\mu_{k},\mu_{k-1}\right\}\leq \xi_{4}, \tag{2.41}\] where \(\xi_{3}\) and \(\xi_{4}\) are small positive constants with \(\xi_{3}<\xi_{4}\), then \(f\) might be very close to a quadratic function on the line segment between \(x_{k-1}\) and \(x_{k}\).
Therefore, if (2.41) and the following condition \[\|g_{k}\|^{2}\leq\xi_{6}\text{ or }\left(\|g_{k}\|^{2}>\xi_{6}\text{ and }\|s_{k-1}\|^{2}\leq\xi_{5}\right) \tag{2.42}\] hold, then the search direction (2.1) should not be scaled, namely, \(\tau_{k}=1\). Here, \(\xi_{5},\xi_{6}>0\). It is noted that the condition (2.42) means that the current iterate \(x_{k}\) is close to a stationary point or close to the latest iterate \(x_{k-1}\). Therefore, \(\tau_{k}\) is given by \[\tau_{k}=\left\{\begin{aligned} & 1,\quad\text{if (2.41) and (2.42) hold},\\ &\tau_{k}^{B},\ \text{ otherwise},\end{aligned}\right. \tag{2.43}\] where \(\tau_{k}^{B}\) is given by (2.8). ### The initial stepsize and the improved Wolfe line search It is universally accepted that the choice of the initial stepsize is of great importance for a line search method. Unlike general quasi-Newton methods, it is challenging to determine a suitable initial stepsize for an SMCG method. The initial stepsize in our method is similar to that of Algorithm 3.1 in [8]; the main difference lies in that we replace \[\alpha_{k}^{(0)}=\max\left\{\varphi\alpha_{k-1},-2\left|f_{k}-f_{k-1}\right|/ g_{k}^{T}d_{k}\right\} \tag{2.44}\] in Step 1 of Algorithm 3.1 in [8] by \[\bar{\alpha}_{k}^{(0)}=\left\{\begin{aligned} &\alpha_{k}^{(0)},& \text{if }d_{k}=-g_{k},\\ &\min\left\{1,\alpha_{k}^{(0)}\right\},&\text{if }d_{k} \neq-g_{k},\end{aligned}\right.\] where \(\alpha_{k}^{(0)}\) is given by (2.44). The motivation behind this is that the search direction (2.4) is closest to the search direction of the memoryless quasi-Newton method, which usually prefers the unit stepsize \(1\). The improved Wolfe line search proposed by Dai and Kou [8] is a quite efficient Wolfe line search, which can avoid some numerical drawbacks of the original Wolfe line search. It aims to find a stepsize satisfying the following conditions: \[f\left(x_{k}+\alpha_{k}d_{k}\right)\leq f\left(x_{k}\right)+ \min\left\{\epsilon\left|f\left(x_{k}\right)\right|,\delta\alpha_{k}g_{k}^{T}d _{k}+\bar{\eta}_{k}\right\}, \tag{2.45}\] \[g(x_{k}+\alpha_{k}d_{k})^{T}d_{k}\geq\sigma g_{k}^{T}d_{k}, \tag{2.46}\] where \(\epsilon>0\), \(0<\delta<\sigma<1\), \(\bar{\eta}_{k}>0\) and \(\sum\limits_{k\geq 0}\bar{\eta}_{k}<+\infty\). The above improved Wolfe line search is used in our method. ### The proposed method Denote \[r_{k-1}=\frac{2\left(f_{k}-f_{k-1}\right)}{\left(g_{k}+g_{k-1}\right)^{T}s_{k-1}} -1,\ \ \ \overline{r}_{k-1}=f_{k}-f_{k-1}-0.5\left(g_{k}^{T}s_{k-1}+g_{k-1}^{T}s_{k-1} \right). \tag{2.47}\] Similarly to the restart condition in [16; 30], if there are many consecutive iterations in which \(r_{k-1}\) or \(\overline{r}_{k-1}\) is close to \(0\), our algorithm is also restarted with \(-g_{k}\). The subspace minimization conjugate gradient method is described in detail as follows. ``` Step 0. Initialization. Given \(x_{0}\in\mathbb{R}^{n}\), \(\varepsilon>0\), \(\delta\), \(\sigma\), \(\epsilon_{1}\), \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\), \(\xi_{4}\), \(\xi_{5}\), \(\xi_{6}\), MaxRestart, MinQuad. Set IterQuad = 0, \(\text{IterRestart}=0\), \(k=0\). Step 1. If \(\|g_{0}\|_{\infty}\leq\varepsilon\), then stop. Otherwise, \(d_{0}=-g_{0}\). Step 2. Calculate the stepsize satisfying the improved Wolfe line search (2.45) and (2.46). Step 3. Set \(x_{k+1}=x_{k}+\alpha_{k}d_{k}\). If \(\|g_{k+1}\|_{\infty}\leq\varepsilon\), then stop. Step 4. Update the restart condition. IterRestart = IterRestart + 1.
If \(|r_{k-1}|\leq\epsilon_{1}\) or \(|\bar{r}_{k-1}|\leq\epsilon_{1}\)[16; 30], then IterQuad = IterQuad + 1; otherwise IterQuad := 0. Step 5. Calculate the search direction. 5.1 Restart. If IterRestart = MaxRestart or (IterQuad = MinQuad and IterQuad = IterRestart), set \(d_{k}=-g_{k}\), \(\text{IterRestart}:=0\) and IterQuad := 0. Set \(k=k+1\) and go to Step 2. 5.2 Compute the search direction \(d_{k}\) by (2.35) with \(\tau_{k}\) in (2.43). Set \(k=k+1\) and go to Step 2. ``` **Algorithm 1** Subspace Minimization Conjugate Gradient Method (SMCG) In the SMCG method, IterRestart denotes the number of iterations since the last restart, and IterQuad denotes the number of consecutive iterations in which \(r_{k-1}\) or \(\overline{r}_{k-1}\) is close to \(0\). ## 3 Convergence Analysis We will establish the global convergence of Algorithm 1 for general functions under Assumption 2.1 in this section. Since Algorithm 1 is restarted with \(d_{k}=-g_{k}\) at least once every MaxRestart iterations, the global convergence can be obtained easily; so we consider the global convergence properties of Algorithm 1 without the restart in Step 5.1. In addition, since \(\tau_{k}\) in (2.43) chooses adaptively between \(1\) and \(\tau_{k}^{B}\), a convergence result that holds for any one of the choices of \(\tau_{k}\) in (2.8) suffices for that of Algorithm 1. So we establish the global convergence of the SMCG method (1.1) and (2.35) under Assumption 2.1. According to the improved Wolfe line search (2.45) and (2.46) and Assumption 2.1 (ii), we know easily that \[\sum_{k=0}^{+\infty}-\alpha_{k}g_{k}^{T}d_{k}<+\infty\ \text{ and }\ \alpha_{k}\geq\frac{-\left(1-\sigma\right)g_{k}^{T}d_{k}}{L\|d_{k}\|^{2}}, \tag{3.1}\] which implies that \[\sum_{k=0}^{+\infty}\frac{\left(g_{k}^{T}d_{k}\right)^{2}}{\|d_{k}\|^{2}}<+\infty. \tag{3.2}\] Together with Lemma 2.2, we obtain \[\sum_{k=0}^{+\infty}\frac{\|g_{k}\|^{4}}{\|d_{k}\|^{2}}<+\infty. \tag{3.3}\] The above inequality is important for analyzing the convergence of the proposed method. The next lemma will be used in the convergence analysis of the SMCG method (1.1) and (2.35). **Lemma 3.1**: _Assume \(f\) satisfies Assumption 2.1, consider the subspace minimization conjugate gradient method (1.1) and (2.35) with any one of the choices of \(\tau_{k}\) in (2.8), and let \(\alpha_{k}\) be calculated by the improved Wolfe line search satisfying (2.45) and (2.46). If \(\|g_{k}\|\geq\gamma_{1}>0\) holds for all \(k\geq 1\), then_ \[\sum_{k=0}^{\infty}\|\widetilde{u}_{k}-\widetilde{u}_{k-1}\|^{2}<+\infty, \tag{3.4}\] _where \(\widetilde{u}_{k}=\dfrac{d_{k}}{\|d_{k}\|}\)._ _Proof_ We first derive a bound for \(u_{k}\) in (2.5). By (2.46), Lemma 2.2 and \(\|g_{k}\|\geq\gamma_{1}\), we have that \[y_{k}^{T}d_{k}\geq-\left(1-\sigma\right)g_{k}^{T}d_{k}\geq c\left(1-\sigma \right)\|g_{k}\|^{2}\geq c\gamma_{1}^{2}\left(1-\sigma\right) \tag{3.5}\] and \[g_{k+1}^{T}d_{k}\geq\sigma g_{k}^{T}d_{k}=\sigma g_{k+1}^{T}d_{k}-\sigma y_{k }^{T}d_{k}. \tag{3.6}\] It follows from Lemma 2.2 that \[g_{k+1}^{T}d_{k}=y_{k}^{T}d_{k}+g_{k}^{T}d_{k}<y_{k}^{T}d_{k}. \tag{3.7}\] Combining \(\sigma<1\), (3.6) and (3.7) yields that \[\dfrac{\left|g_{k+1}^{T}d_{k}\right|}{y_{k}^{T}d_{k}}\leq\max\left\{1,\dfrac{ \sigma}{1-\sigma}\right\}. \tag{3.8}\] According to Assumption 2.1, we know that there are two positive constants \(D\) and \(\gamma_{2}\) such that \[D=\max\left\{\|y-z\|:y,z\in\mathcal{L}=\left\{x|f\left(x\right)\leq f\left(x_ {0}\right)+\sum_{k\geq 0}\bar{\eta}_{k}\right\}\right\},\ \ \|g_{k}\|\leq\gamma_{2}.
\tag{3.9}\] Note that \(d_{k}\neq 0\) for all \(k\geq 1\); otherwise Lemma 2.2 would imply \(g_{k}=0\). This indicates that \(\widetilde{u}_{k}\) is well defined. Therefore, by using (2.2), (2.7), (3.8), (3.9) and (2.14), we obtain \[|u_{k}|\leq\dfrac{1}{1-\overline{\omega}_{k}}\left(1+\left|\dfrac{g_{k}^{T}s_ {k-1}}{s_{k-1}^{T}y_{k-1}}\right|\dfrac{\left|g_{k}^{T}y_{k-1}\right|}{\left\| g_{k}\right\|^{2}}\right)\leq\dfrac{1}{1-\xi_{1}}\left(1+\max\left\{1,\dfrac{ \sigma}{1-\sigma}\right\}\left(1+\eta_{1}\right)\right)\overset{\Delta}{=} \bar{c}_{1}>1. \tag{3.10}\] We divide \(\bar{v}_{k}\) in (2.31) into the following two parts: \[v_{k}^{+}=\max\left\{\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left(\tau _{k}+\frac{\left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1} }{s_{k-1}^{T}y_{k-1}}+\frac{\left\|g_{k}\right\|^{2}g_{k}^{T}s_{k-1}-g_{k}^{T}y_{ k-1}\left(g_{k}^{T}s_{k-1}\right)^{2}/s_{k-1}^{T}y_{k-1}}{\left\|g_{k}\right\|^{2} \left\|s_{k-1}\right\|^{2}-\left(g_{k}^{T}s_{k-1}\right)^{2}}-\eta_{k},\,0\right\}, \tag{3.11}\] and \[v_{k}^{-}=\eta_{k}=-l_{k}\dfrac{\left|g_{k}^{T}s_{k-1}\right|}{\left\|s_{k-1 }\right\|^{2}}, \tag{3.12}\] which satisfy \(\bar{v}_{k}=v_{k}^{+}+v_{k}^{-}\). It follows that the search direction \(d_{k}=u_{k}g_{k}+\bar{v}_{k}s_{k-1}\) in (2.35) can be rewritten as \[d_{k}=u_{k}g_{k}+\left(v_{k}^{+}+v_{k}^{-}\right)s_{k-1}.\] Denote \[\omega_{k}=\dfrac{u_{k}g_{k}+v_{k}^{-}s_{k-1}}{\|d_{k}\|},\ \ \delta_{k}=\dfrac{v_{k}^{+}\left\|s_{k-1}\right\|}{\|d_{k}\|}. \tag{3.13}\] Thus, \(\widetilde{u}_{k}\) can be rewritten as \[\widetilde{u}_{k}=\dfrac{d_{k}}{\|d_{k}\|}=\omega_{k}+\delta_{k}\widetilde{u }_{k-1}. \tag{3.14}\] Using the identity \(\|\widetilde{u}_{k}\|=\|\widetilde{u}_{k-1}\|=1\), we get that \[\|\omega_{k}\|=\|\widetilde{u}_{k}-\delta_{k}\widetilde{u}_{k-1}\|=\|\delta_{k} \widetilde{u}_{k}-\widetilde{u}_{k-1}\|\,. \tag{3.15}\] It follows from \(\delta_{k}\geq 0\), the triangle inequality and (3.15) that \[\begin{split}\|\widetilde{u}_{k}-\widetilde{u}_{k-1}\|& \leq\|(1+\delta_{k})\,\widetilde{u}_{k}-(1+\delta_{k})\,\widetilde{u}_{k-1} \|\\ &\leq\|\widetilde{u}_{k}-\delta_{k}\widetilde{u}_{k-1}\|+\|\delta_ {k}\widetilde{u}_{k}-\widetilde{u}_{k-1}\|\\ &=2\,\|\omega_{k}\|\,.\end{split} \tag{3.16}\] By (2.35), (2.32), (3.10) and (3.12), we can obtain that \[\left\|u_{k}g_{k}+v_{k}^{-}s_{k-1}\right\|\leq|u_{k}|\,\|g_{k}\|+\left|v_{k}^{ -}\right|\|s_{k-1}\|\leq\left(\bar{c}_{1}+\hat{\xi}_{2}\right)\|g_{k}\|\,. \tag{3.17}\] Combining (3.16), (3.13) and (3.17) yields \[\|\widetilde{u}_{k}-\widetilde{u}_{k-1}\|\leq 2\,\|\omega_{k}\|\leq 2\left( \bar{c}_{1}+\hat{\xi}_{2}\right)\frac{\|g_{k}\|}{\|d_{k}\|}. \tag{3.18}\] Similarly, for the search direction \(d_{k}=-g_{k}\) in (2.35), we can easily obtain (3.18) by setting \(u_{k}=-1\), \(v_{k}^{+}=v_{k}^{-}=0\) in (3.13) due to \(\bar{c}_{1}>1\). Therefore, together with (3.3) and \(\|g_{k}\|\geq\gamma_{1}\), we have \[\sum_{k=0}^{\infty}\|\widetilde{u}_{k}-\widetilde{u}_{k-1}\|^{2}\leq\frac{4 \big{(}\bar{c}_{1}+\hat{\xi}_{2}\big{)}^{2}}{\gamma_{1}^{2}}\sum_{k=0}^{\infty }\frac{\|g_{k}\|^{4}}{\|d_{k}\|^{2}}<+\infty, \tag{3.19}\] which completes the proof. The global convergence is established under Assumption 2.1 in the following theorem. Theorem 3.1: _Assume \(f\) satisfies Assumption 2.1, consider the subspace minimization conjugate gradient method (1.1) and (2.35) with any one of the choices of \(\tau_{k}\) in (2.8), and let \(\alpha_{k}\) be calculated by the improved Wolfe line search satisfying (2.45) and (2.46). Then,_ \[\liminf_{k\to\infty}\|g_{k}\|=0.
\tag{3.20}\] Proof: We proceed by contradiction: suppose that \(\|g_{k}\|\geq\gamma_{1}\), where \(\gamma_{1}>0\), for all \(k\geq 0\). By Lemma 2.2 and the Cauchy-Schwarz inequality, we have that \[\|d_{k-1}\|\geq c\,\|g_{k-1}\|\geq c\gamma_{1},\] which together with Assumption 2.1 (ii) yields \[\|s_{k-1}\|=\alpha_{k-1}\,\|d_{k-1}\|\leq\frac{D}{c\gamma_{1}}\,\|d_{k-1}\|\,,\] where \(D\) is given by (3.9). As a result, for the choices of \(\tau_{k}\) in (2.8), it is not difficult to obtain from (3.5) that \[\tau_{k}^{(1)}=1=\frac{1}{c\gamma_{1}}c\gamma_{1}\leq\frac{1}{c\gamma_{1}}\, \|d_{k-1}\| \tag{3.21}\] and \[\tau_{k}^{B}=\frac{s_{k-1}^{T}y_{k-1}}{\|s_{k-1}\|^{2}}\leq\tau_{k}^{H}=\frac{ \left\|y_{k-1}\right\|^{2}}{s_{k-1}^{T}y_{k-1}}\leq\frac{L^{2}D}{c\gamma_{1}^{ 2}\left(1-\sigma\right)}\,\|d_{k-1}\|\,. \tag{3.22}\] The remainder of the proof is divided into three steps. (i) A bound for \(v_{k}\) and \(\eta_{k}\) in (2.35). By (2.35), (3.8), (3.21), (3.22), (2.14), (3.9) and \(\gamma_{1}\leq\|g_{k}\|\leq\gamma_{2}\), we have \[\begin{split}|v_{k}|&=\left|\frac{1-2\overline{\omega}_{k}}{1- \overline{\omega}_{k}}\frac{g_{k}^{T}y_{k-1}}{s_{k-1}^{T}y_{k-1}}-\left(\tau_{k}+\frac{\| y_{k-1}\|^{2}}{s_{k-1}^{T}y_{k-1}}\right)\frac{g_{k}^{T}s_{k-1}}{s_{k-1}^{T}y_{k-1}}+ \frac{1}{1-\overline{\omega}_{k}}\frac{g_{k}^{T}s_{k-1}}{\|s_{k-1}\|^{2}}\right|\\ &\leq\frac{1}{1-\xi_{1}}\frac{\gamma_{2}L}{c\gamma_{1}^{2}\left( 1-\sigma\right)}\left\|d_{k-1}\right\|+\max\left\{1,\frac{\sigma}{1-\sigma} \right\}\left(\max\left\{\frac{1}{c\gamma_{1}},\frac{L^{2}D}{c\gamma_{1}^{2} \left(1-\sigma\right)}\right\}+\frac{L^{2}D}{c\gamma_{1}^{2}\left(1-\sigma \right)}\right)\|d_{k-1}\|\\ &\qquad\qquad+\frac{1}{1-\xi_{1}}\frac{L\gamma_{2}}{\left(1- \sigma\right)c\gamma_{1}^{2}}\left\|d_{k-1}\right\|\\ &\leq\left[\frac{2}{1-\xi_{1}}\frac{\gamma_{2}L}{c\gamma_{1}^{2} \left(1-\sigma\right)}+\max\left\{1,\frac{\sigma}{1-\sigma}\right\}\left(\max \left\{\frac{1}{c\gamma_{1}},\frac{L^{2}D}{c\gamma_{1}^{2}\left(1-\sigma \right)}\right\}+\frac{L^{2}D}{c\gamma_{1}^{2}\left(1-\sigma\right)}\right) \right]\|d_{k-1}\|\\ &\stackrel{{\Delta}}{{=}}\bar{c}_{2}\left\|d_{k-1}\right\| \end{split} \tag{3.23}\] and \[\begin{split}|\eta_{k}|&\leq\frac{l_{k}\left\|g_{k} \right\|\left\|s_{k-1}\right\|}{\left\|s_{k-1}\right\|^{2}}\leq\hat{\xi}_{2} \left\|d_{k-1}\right\|\frac{\left\|g_{k}\right\|}{\left\|s_{k-1}\right\|\left\| d_{k-1}\right\|}\leq L\hat{\xi}_{2}\left\|d_{k-1}\right\|\frac{\left\|g_{k}\right\|}{ \left\|y_{k-1}\right\|\left\|d_{k-1}\right\|}\\ &\leq L\hat{\xi}_{2}\frac{\gamma_{2}}{c\gamma_{1}^{2}\left(1- \sigma\right)}\left\|d_{k-1}\right\|\stackrel{{\Delta}}{{=}}\hat{c }_{2}\left\|d_{k-1}\right\|.\end{split} \tag{3.24}\] (ii) A bound on the steps \(s_{k}\). For any \(l\geq k\), by the definition of \(\widetilde{u}_{k}\) in Lemma 3.1 we have \[x_{l}-x_{k}=\sum_{j=k}^{l-1}\left(x_{j+1}-x_{j}\right)= \sum_{j=k}^{l-1}\|s_{j}\|\widetilde{u}_{j}=\sum_{j=k}^{l-1}\|s_{j} \|\widetilde{u}_{k}+\sum_{j=k}^{l-1}\|s_{j}\|\left(\widetilde{u}_{j}- \widetilde{u}_{k}\right), \tag{3.25}\] which yields that \[\sum_{j=k}^{l-1}\|s_{j}\|\leq\|x_{l}-x_{k}\|+\sum_{j=k}^{l-1}\|s_{j}\|\left\| \widetilde{u}_{j}-\widetilde{u}_{k}\right\|\leq D+\sum_{j=k}^{l-1}\|s_{j}\| \left\|\widetilde{u}_{j}-\widetilde{u}_{k}\right\|. \tag{3.26}\] Let \(\Delta\) be any positive integer such that \[\Delta\geq 4\bar{c}_{3}D,\ \ \text{where}\ \bar{c}_{3}=\max\left\{\bar{c}_{2},\hat{c}_{2}\right\}. \tag{3.27}\] According to Lemma 3.1, we can choose \(k_{0}>0\) such that \[\sum_{i\geq k_{0}}\left\|\widetilde{u}_{i+1}-\widetilde{u}_{i}\right\|^{2}\leq \frac{1}{4\Delta}.
\tag{3.28}\] If \(j>k\geq k_{0}\) and \(j-k\leq\Delta\), then we know from (3.28) and the Cauchy-Schwarz inequality that \[\|\widetilde{u}_{j}-\widetilde{u}_{k}\|\leq\sum_{i=k}^{j-1}\|\widetilde{u}_{i+ 1}-\widetilde{u}_{i}\|\leq\sqrt{j-k}\!\left(\!\sum_{i=k}^{j-1}\!\left\| \widetilde{u}_{i+1}-\widetilde{u}_{i}\right\|^{2}\!\right)^{1/2}\leq\sqrt{ \Delta}\!\left(\frac{1}{4\Delta}\right)^{1/2}=\frac{1}{2}. \tag{3.29}\] Combining (3.29) with (3.26) implies \[\sum_{j=k}^{l-1}\|s_{j}\|\leq 2D, \tag{3.30}\] where \(l>k\geq k_{0}\) and \(l-k\leq\Delta\). (iii) A bound on the directions \(d_{l}\). For the search direction (2.35), by (3.10), (3.24) and (3.23), we have \[\left\|d_{l}\right\|^{2}\leq\left(|u_{l}|\left\|g_{l}\right\|+\max\left\{\left|\eta_{l }\right|,\left|v_{l}\right|\right\}\left\|s_{l-1}\right\|\right)^{2}\leq 2\bar{c}_{1}^{2}\gamma_{2}^{2}+2\bar{c}_{3}^{2}\|s_{l-1} \|^{2}\|d_{l-1}\|^{2}, \tag{3.31}\] where \(\bar{c}_{3}=\max\left\{\bar{c}_{2},\hat{c}_{2}\right\}\) as in (3.27), and \(\bar{c}_{1}\), \(\bar{c}_{2}\) and \(\hat{c}_{2}\) are given by (3.10), (3.23) and (3.24), respectively. Let \(S_{i}=2\bar{c}_{3}^{2}\|s_{i}\|^{2}\). For any \(l>k_{0}\), we have \[\|d_{l}\|^{2}\leq 2\bar{c}_{1}^{2}\gamma_{2}^{2}\left(\sum\limits_{i=k_{0}+1}^{ l}\prod\limits_{j=i}^{l-1}S_{j}\right)+\|d_{k_{0}}\|^{2}\prod\limits_{j=k_{0}}^{ l-1}S_{j}. \tag{3.32}\] Note that the product is defined to be \(1\) whenever the index range is vacuous. Now we derive a bound on the product of \(\Delta\) consecutive \(S_{j}\) by the arithmetic-geometric mean inequality, (3.30) and (3.27) for any \(k\geq k_{0}\): \[\prod\limits_{j=k}^{k+\Delta-1}S_{j} =\prod\limits_{j=k}^{k+\Delta-1}2\bar{c}_{3}^{2}\|s_{j}\|^{2}= \left(\prod\limits_{j=k}^{k+\Delta-1}\sqrt{2}\bar{c}_{3}\left\|s_{j}\right\| \right)^{2} \tag{3.33}\] \[\leq\left(\frac{\sum\limits_{j=k}^{k+\Delta-1}\sqrt{2}\bar{c}_{ 3}\left\|s_{j}\right\|}{\Delta}\right)^{2\Delta}\leq\left(\frac{2\sqrt{2}\bar{ c}_{3}D}{\Delta}\right)^{2\Delta}\leq\frac{1}{2^{\Delta}}.\] As a result, there must exist a positive constant \(c_{3}>0\) such that \(\|d_{l}\|\leq c_{3}\). Combining \(\|g_{k}\|\geq\gamma_{1}\) with \(\|d_{l}\|\leq c_{3}\) yields \[\sum\limits_{k=0}^{+\infty}\frac{\|g_{k}\|^{4}}{\|d_{k}\|^{2}}=+\infty,\] which contradicts (3.3). Therefore, we obtain (3.20), which completes the proof. Remark 4: _It is not difficult to see that if \(f\) is convex, then we can obtain the stronger convergence result:_ \[\lim\limits_{k\rightarrow\infty}\|g_{k}\|=0.\] It follows from (2.45) that \(f_{k+1}-f_{k}\leq\bar{\eta}_{k}\), which together with \(\sum\limits_{k\geq 0}\bar{\eta}_{k}<+\infty\) and Assumption 2.1 (ii) implies that the sequence \(\left\{f_{k}\right\}\) is convergent. We denote its limit by \(f^{*}\). It can also be deduced from Theorem 3.1 and Assumption 2.1 (ii) that there exists a convergent subsequence \(\left\{x_{k_{j}}\right\}\) of \(\left\{x_{k}\right\}\) such that \(g\left(\hat{x}\right)=0\), where \(x_{k_{j}}\rightarrow\hat{x}\) as \(j\rightarrow+\infty\). Since \(f\) is convex on \(\mathbb{R}^{n}\), we have \(f^{*}=f(\hat{x})\leq f(x),\ \forall x\in\mathbb{R}^{n}\). Suppose there exists another convergent subsequence \(\left\{x_{k_{l}}\right\}\) of \(\left\{x_{k}\right\}\) such that \(g\left(\bar{x}\right)\neq 0\), where \(x_{k_{l}}\rightarrow\bar{x}\) as \(l\rightarrow+\infty\). Since \(f(\bar{x})=f^{*}\leq f(x),\ \forall x\in\mathbb{R}^{n}\), we know that \(\bar{x}\) is a global minimizer, which contradicts \(g\left(\bar{x}\right)\neq 0\).
Therefore, all accumulation points of \(\left\{x_{k}\right\}\) are stationary points, which implies that \(\lim\limits_{k\rightarrow\infty}\|g_{k}\|=0\). ## 4 Numerical experiments We compare Algorithm 1 with CGOPT (1.0) [8], SMCG_BB [16] and CG_DESCENT (5.3) [6] in this section. It is widely accepted that CGOPT and CG_DESCENT are the two most famous conjugate gradient software packages. Algorithm 1 was implemented based on the C code of CGOPT (1.0), which is available from Dai's homepage: [http://lsec.cc.ac.cn/~dyh/software.html](http://lsec.cc.ac.cn/~dyh/software.html). The test collection includes 147 unconstrained optimization problems from the CUTEst library [23], which can be found in [31]; the dimensions of the 147 test problems and the initial points are all set to their default values. The codes of CG_DESCENT (5.3) and SMCG_BB can be found on Hager's homepage: [http://users.class.ufl.edu/hager/papers/Software](http://users.class.ufl.edu/hager/papers/Software) and at [https://www.scholat.com/liinexian](https://www.scholat.com/liinexian), respectively. In Algorithm 1, we take the following parameters: \[\xi_{1}=0.75,\ \xi_{2}=0.5,\ \bar{\xi}_{2}=0.2,\ \hat{\xi}_{2}=10,\ \eta_{1}=0.99,\] \[\eta_{2}=3,\;\xi_{3}=7.5\times 10^{-5},\ \xi_{4}=9\times 10^{-4},\;\xi_{5}=0.9,\;\xi_{6}=10,\] and the other parameters use the default values in CGOPT (1.0). CGOPT (1.0), CG_DESCENT (5.3) and SMCG_BB all used the default values of the parameters in their codes, except for the stopping condition. In particular, CG_DESCENT (5.3) used its default line search--the combination of the original Wolfe conditions and the approximate Wolfe conditions--which performed very well in the numerical tests. All test methods are terminated if \(\|g_{k}\|_{\infty}\leq 10^{-6}\) is satisfied. The performance profiles introduced by Dolan and Moré [32] are used to display the performance of these test algorithms. In the following figures, "\(N_{iter}\)", "\(N_{f}\)", "\(N_{g}\)" and "\(T_{cpu}\)" represent the number of iterations, the number of function evaluations, the number of gradient evaluations and the CPU time (s), respectively. The numerical experiments are divided into the following three groups. In the first group of the numerical experiments, we test the numerical performance of Algorithm 1 with the different choices of \(\tau_{k}\) in (2.43) and (2.8). The default \(\tau_{k}\) in Algorithm 1 is given by (2.43). Figures 1-3 present the numerical performance in terms of the number of iterations, the number of function evaluations and the number of gradient evaluations. We do not report the running time here since its profile is similar to the above figures. As observed in Figures 1-3, \(\tau_{k}\) in (2.43) is the most efficient for Algorithm 1, followed by \(\tau_{k}^{B}\), and \(\tau_{k}^{H}\) is the worst. In the second group of the numerical experiments, we compare the performance of Algorithm 1 with that of SMCG_BB and CGOPT (1.0). As shown in Figure 4, we observe that Algorithm 1 performs much better than SMCG_BB and CGOPT (1.0) in terms of the number of iterations, since it successfully solves about 56% of the test problems with the fewest iterations, while the corresponding numbers for SMCG_BB and CGOPT (1.0) are about 37% and 27%, respectively. Figure 5 shows that Algorithm 1 enjoys a large advantage over CGOPT (1.0) and performs slightly better than SMCG_BB in terms of the number of function evaluations. As shown in Figure 6, we can see that Algorithm 1 is much superior to CGOPT (1.0) and SMCG_BB in terms of the number of gradient evaluations.
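For readers unfamiliar with the methodology behind Figures 1-12, the following minimal sketch (Python with NumPy; the data are synthetic and the function name is ours) computes a Dolan-Moré performance profile: for each solver \(j\) and factor \(\tau\geq 1\), \(\rho_{j}(\tau)\) is the fraction of problems on which the cost of solver \(j\) is within a factor \(\tau\) of the best cost among all solvers.

```python
import numpy as np

def performance_profile(T, taus):
    """T[i, j] = cost of solver j on problem i (np.inf for a failure).
    Returns, for each solver j, the curve rho_j over the given taus."""
    ratios = T / T.min(axis=1, keepdims=True)   # performance ratios r_{i,j}
    return np.array([[np.mean(ratios[:, j] <= tau)
                      for tau in taus] for j in range(T.shape[1])])

# toy data: 5 problems x 2 solvers (e.g. iteration counts)
T = np.array([[10, 12], [30, 25], [7, np.inf], [50, 55], [18, 18]], float)
print(performance_profile(T, np.linspace(1, 3, 5)))
```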
Figure 7 (\(T_{cpu}\)) indicates that Algorithm 1 is also faster than SMCG_BB and CGOPT (1.0). In the third group of the numerical experiments, we compare the performance of Algorithm 1 with that of CG_DESCENT (5.3). As shown in Figure 8, we observe that Algorithm 1 performs better than CG_DESCENT (5.3) in terms of the number of iterations, which is a little beyond our expectations. Figure 9 indicates that Algorithm 1 is inferior to CG_DESCENT (5.3) in terms of the number of function evaluations. The reason is that in the numerical experiments Algorithm 1 used the improved Wolfe line search, while CG_DESCENT (5.3) used the combination of the quite efficient approximate Wolfe line search and the standard Wolfe line search. It follows from Section 3 that Algorithm 1 is globally convergent, whereas there is no guarantee for the global convergence of CG_DESCENT with the very efficient approximate Wolfe line search [7]. As shown in Figure 10, we see that Algorithm 1 enjoys a large advantage over CG_DESCENT (5.3) in terms of the number of gradient evaluations. To assess the combined performance with respect to \(N_{f}\) and \(N_{g}\), we compare the performance based on \(N_{f}+3N_{g}\) in Figure 11. Figure 11 indicates that Algorithm 1 is also superior to CG_DESCENT (5.3) on this combined measure, though it is at a disadvantage in terms of \(N_{f}\) alone. Figure 12 shows that Algorithm 1 is faster than CG_DESCENT (5.3). The above numerical experiments indicate that Algorithm 1 is superior to CGOPT (1.0), CG_DESCENT (5.3) and SMCG_BB. It seems that SMCG methods with \(d_{k}=u_{k}g_{k}+v_{k}s_{k-1}\) have greater potential for large scale unconstrained optimization than the traditional conjugate gradient methods with \(d_{k}=-g_{k}+\beta_{k}d_{k-1}\). ## 5 Conclusion and Discussion SMCG methods are quite efficient iterative methods for large scale unconstrained optimization. However, it is usually required to determine the important parameter \(\rho_{k}\approx g_{k}^{T}B_{k}g_{k}\), which is crucial to the theoretical properties and the numerical performance and is difficult to select properly. By taking advantage of the memoryless quasi-Newton method, we present a new subspace minimization conjugate gradient method based on a projection technique, which is independent of the important parameter \(\rho_{k}\). It is remarkable that the SMCG method without the exact line search enjoys finite termination for two-dimensional convex quadratic functions, which guided the design of the proposed method. The proposed method can be regarded as an extension of the Dai-Kou conjugate gradient method. The descent property of the search direction is analyzed. We also establish the global convergence of the proposed method for general nonlinear functions. Numerical experiments indicate that the proposed method is very promising. We believe that SMCG methods are able to become strong candidates for large scale unconstrained optimization. ###### Acknowledgements. We would like to thank Professor Yu-Hong Dai of the Chinese Academy of Sciences for his valuable and insightful comments on this manuscript. This research is supported by the National Natural Science Foundation of China (No. 12261019, 12161053) and Guizhou Provincial Science and Technology Projects (No. QHKJC-ZK[2022]YB084). ###### Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. ## Conflict of interest The authors declare no competing interests.
2303.04639
Arion: Arithmetization-Oriented Permutation and Hashing from Generalized Triangular Dynamical Systems
In this paper we propose the (keyed) permutation Arion and the hash function ArionHash over $\mathbb{F}_p$ for odd and particularly large primes. The design of Arion is based on the newly introduced Generalized Triangular Dynamical System (GTDS), which provides a new algebraic framework for constructing (keyed) permutation using polynomials over a finite field. At round level Arion is the first design which is instantiated using the new GTDS. We provide extensive security analysis of our construction including algebraic cryptanalysis (e.g. interpolation and Gr\"obner basis attacks) that are particularly decisive in assessing the security of permutations and hash functions over $\mathbb{F}_p$. From an application perspective, ArionHash aims for efficient implementation in zkSNARK protocols and Zero-Knowledge proof systems. For this purpose, we exploit that CCZ-equivalence of graphs can lead to a more efficient implementation of Arithmetization-Oriented primitives. We compare the efficiency of ArionHash in R1CS and Plonk settings with other hash functions such as Poseidon, Anemoi and Griffin. For demonstrating the practical efficiency of ArionHash we implemented it with the zkSNARK libraries libsnark and Dusk Network Plonk. Our result shows that ArionHash is significantly faster than Poseidon - a hash function designed for zero-knowledge proof systems. We also found that an aggressive version of ArionHash is considerably faster than Anemoi and Griffin in a practical zkSNARK setting.
Arnab Roy, Matthias Johann Steiner, Stefano Trevisani
2023-03-08T14:58:11Z
http://arxiv.org/abs/2303.04639v3
Arion: Arithmetization-Oriented Permutation and Hashing from Generalized Triangular Dynamical Systems ###### Abstract In this paper we propose the (keyed) permutation Arion and the hash function ArionHash over \(\mathbb{F}_{p}\) for odd and particularly large primes. The design of Arion is based on the newly introduced Generalized Triangular Dynamical System (GTDS), which provides a new algebraic framework for constructing (keyed) permutations using polynomials over a finite field. At round level Arion is the first design which is instantiated using the new GTDS. We provide extensive security analysis of our construction including algebraic cryptanalysis (e.g. interpolation and Gröbner basis attacks) that are particularly decisive in assessing the security of permutations and hash functions over \(\mathbb{F}_{p}\). From an application perspective, ArionHash aims for efficient implementation in zkSNARK protocols and Zero-Knowledge proof systems. For this purpose, we exploit that CCZ-equivalence of graphs can lead to a more efficient implementation of Arithmetization-Oriented primitives. We compare the efficiency of ArionHash in R1CS and Plonk settings with other hash functions such as Poseidon, Anemoi and Griffin. For demonstrating the practical efficiency of ArionHash we implemented it with the zkSNARK libraries libsnark and Dusk Network Plonk. Our results show that ArionHash is significantly faster than Poseidon - a hash function designed for zero-knowledge proof systems. We also found that an aggressive version of ArionHash is considerably faster than Anemoi and Griffin in a practical zkSNARK setting. ## 1 Introduction With the advancement of Zero-Knowledge (ZK), Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) in recent years, new efficiency measures for symmetric-key primitives allowing efficient implementation in these schemes, namely low multiplicative complexity and low multiplicative depth, have been introduced. Block ciphers, permutations and hash functions with low multiplicative complexity are also referred to as Arithmetization-Oriented (AO) primitives. A significant number of these new types of AO primitives are defined over large finite fields of prime order \(p\gg 2\) for target applications. Our focus in this paper will be on such low multiplicative complexity constructions over \(\mathbb{F}_{p}\) for large primes. Some generic definitions and results in this paper are applicable to any odd prime, thus we describe these results and definitions accordingly. However, the security of the construction(s) will be analyzed only for large primes. To put this paper into context with previous AO constructions we give a short overview of their development. The AO primitives proposed in the literature until now can be categorized into three generations. Gen I: LowMC [3], MiMC [2]. Gen II: Hades [34], Poseidon [33], GMiMC [1], Rescue-Prime [4]. Gen III: Reinforced Concrete [32], Griffin [31], Anemoi [16], Arion (this paper). The first generation consists of constructions which demonstrated that one can construct secure and efficient ciphers and hash functions with low degree primitives at round level. In particular, LowMC introduced the partial Substitution Permutation Network (SPN) strategy in AO. In the second generation, researchers built on Feistel networks and (partial) SPNs to obtain further efficiency improvements and new efficient primitives.
Moreover, more focus was given to constructions native in large prime fields \(\mathbb{F}_{p}\) rather than \(\mathbb{F}_{2^{n}}\). This resulted in Hades, which combines full and partial SPNs over \(\mathbb{F}_{p}\), and its derived sponge function Poseidon, which is now a widely deployed hash function for ZK applications. The current third generation adopts new design principles which reduce to neither the Feistel network nor the SPN, and which culminated in the Generalized Triangular Dynamical System (GTDS) [48]. Moreover, this generation departed from the consensus that one needs low degree polynomials to instantiate a secure and efficient AO primitive. In this paper we propose new AO primitives - Arion (block cipher) and the hash function derived from it, ArionHash. At round level Arion (and ArionHash), like Griffin, utilizes a polynomial of very high degree in one branch and low degree polynomials in the remaining branches to significantly cut the number of necessary rounds compared to the previous generations. Anemoi also utilizes a high degree permutation, the so-called open Flystel, at round level, but to limit the number of constraints in a prover circuit the authors proved that the open Flystel is CCZ-equivalent (cf. [19] and [16, §4.2]) to a low degree function, the so-called closed Flystel. Lastly, Reinforced Concrete is the first AO hash function that utilizes look-up tables, which significantly reduces the number of necessary rounds of Reinforced Concrete and consequently also the number of constraints in a prover circuit. ### Our Results In this paper we propose the block cipher Arion and the hash function ArionHash (Section 2), using the Generalized Triangular Dynamical System [48]. The block cipher and hash function are constructed over \(\mathbb{F}_{p}\) with the target of achieving low multiplicative complexity in a prover circuit. Utilizing the structure of the GTDS enables us to provide a systematic security analysis of the newly proposed block cipher and hash function. The GTDS structure also allows us to choose the best suited parameters for efficiency. We provide extensive security analysis of the proposed block cipher and hash function against state-of-the-art cryptanalysis techniques to justify their security (Section 3). Our construction validates the soundness of the generic GTDS structure that uses polynomial dynamical systems for constructing cryptographic permutations over finite fields. Although Arion and ArionHash are defined over arbitrary finite fields \(\mathbb{F}_{p}\), the parameters of the block cipher and hash function are chosen in such a way as to be compatible with the primes chosen for the target ZK applications, namely for the BLS12 and BN254 curves. We propose aggressive versions of Arion and ArionHash, namely \(\alpha\)-Arion and \(\alpha\)-ArionHash. The difference between Arion (and ArionHash) and its aggressive version is that the former avoids a recently proposed probabilistic Gröbner basis attack [27] (Section 3.2 and Appendix C in the full version of the paper [49]). To demonstrate and compare the efficiency of our constructions (Section 4) we implemented them using the zkSNARK libraries libsnark [50], a C++ library used in the privacy-protecting digital currency Zcash [35], and Dusk Network Plonk [23], a Rust library used in the privacy-oriented blockchain protocol Dusk. Our results show that ArionHash is significantly (2x) faster than Poseidon - an efficient hash function designed for zkSNARK applications.
The efficiency of ArionHash is comparable to that of the recently proposed (but not yet published at a peer-reviewed venue) hash functions Anemoi and Griffin. We find that \(\alpha\)-ArionHash, for practical choices of parameters in a Merkle tree mode of hashing, is faster than Griffin. We also reveal that CCZ-equivalence between the graphs of the ArionHash GTDS and another closely related GTDS leads to a more efficient implementation of ArionHash in ZK applications compared to the naive circuit for ArionHash (Section 4.1). Our public GitHub repository [https://github.com/sca-research/Arion](https://github.com/sca-research/Arion) contains reference implementations in SageMath, C++ and Rust, our OSCAR implementation to perform Gröbner basis experiments, and our SageMath code to estimate the security levels of Arion & ArionHash. ## 2 The (Keyed) Permutation and Hash Function ### Overview of the Design Rationale Before we define Arion and ArionHash, we quickly summarize the design rationale behind our construction. * By utilizing the GTDS to instantiate the permutation we aim to achieve fast degree growth in each component, as in SPNs, and non-linear mixing between the components, as in a Feistel network. Our GTDS, see Definition 1, incorporates the strengths of both SPNs and Feistel networks in a single primitive at round level. * It follows from the generic security analysis in [48, §5] that the univariate permutations, the SPN part, of the GTDS determine worst case security bounds against differential and linear cryptanalysis. Hence, we chose parameters that minimize these bounds. * To thwart interpolation attacks we opted for a design that can achieve a degree overflow in the input variables in the first round, see Lemma 2 and Table 2. This is achieved in the SPN part of the GTDS by applying a low degree univariate permutation polynomial \(p_{1}\) in all branches except the last one and by applying a high-degree inverse permutation \(p_{2}^{-1}\) in the last branch. * We opted for a linear layer that mixes all branches in every round. This is achieved with a circulant matrix which has only non-zero entries. * For the high degree inverse permutation, the naive circuit for \(p_{2}^{-1}(x)=y\) introduces many multiplicative constraints, though one can always transform such a circuit into a circuit for \(x=p_{2}(y)\) in constant time, see Section 4.1. This trick plays a fundamental role in the efficiency of \(\mathsf{ArionHash}\) circuits. ### Keyed Permutation We start with the definition of the generalized triangular dynamical system of \(\mathsf{Arion}\). **Definition 1** (GTDS of \(\mathsf{Arion}\)).: _Let \(p\in\mathbb{Z}_{>4}\) be a prime, and let \(\mathbb{F}_{p}\) be the field with \(p\) elements. Let \(n,d_{1},d_{2},e\in\mathbb{Z}_{>1}\) be integers such that_ 1. \(d_{1}\) _is the smallest positive integer such that_ \(\gcd\left(d_{1},p-1\right)=1\)_,_ 2. \(d_{2}\) _is an arbitrary integer such that_ \(\gcd\left(d_{2},p-1\right)=1\)_, and_ 3. \(e\cdot d_{2}\equiv 1\mod p-1\)_._ _For \(1\leq i\leq n-1\) let \(\alpha_{i,1},\alpha_{i,2},\beta_{i}\in\mathbb{F}_{p}\) be such that \(\alpha_{i,1}^{2}-4\cdot\alpha_{i,2}\) is a quadratic non-residue modulo \(p\).
_The generalized triangular dynamical system \(\mathcal{F}_{\mathsf{Arion}}=\{f_{1},\ldots,f_{n}\}\) of \(\mathsf{Arion}\) is defined as_

\[f_{i}(x_{1},\ldots,x_{n}) =x_{i}^{d_{1}}\cdot g_{i}(\sigma_{i+1,n})+h_{i}(\sigma_{i+1,n}),\qquad 1\leq i\leq n-1,\]
\[f_{n}(x_{1},\ldots,x_{n}) =x_{n}^{e},\]

_where_

\[g_{i}(x) =x^{2}+\alpha_{i,1}\cdot x+\alpha_{i,2},\]
\[h_{i}(x) =x^{2}+\beta_{i}\cdot x,\]
\[\sigma_{i+1,n} =\sum_{j=i+1}^{n}\big{(}x_{j}+f_{j}(x_{1},\ldots,x_{n})\big{)}.\]

Note that the GTDS \(\mathcal{F}=\{f_{1},\ldots,f_{n}\}\) must be considered as an ordered tuple of polynomials since in general the order of the \(f_{i}\)'s cannot be interchanged. Since \(\alpha_{i,1}^{2}-4\cdot\alpha_{i,2}\) is a non-residue modulo \(p\) for all \(1\leq i\leq n-1\), the polynomials \(g_{i}\) do not have any zeros over \(\mathbb{F}_{p}\); therefore we can invert the GTDS with the procedure described in [48, Proposition 8, Corollary 9]. In Table 1 we propose suitable exponents for \(d_{2}\) which can be evaluated with at most 9 multiplications. All exponents are chosen so that \(\mathsf{Arion}\) and \(\mathsf{ArionHash}\) provide at least 128 bit security against Grobner basis attacks while minimizing the number of multiplicative constraints in a prover circuit, see Sections 3.2 and 4.1.

Let us compute the degrees of the polynomials in the GTDS.

Lemma 2: _Let \(n,d_{1},e\geq 1\) be integers, and let \(\mathcal{F}_{\mathsf{Arion}}=\{f_{1},\ldots,f_{n}\}\) be an \(\mathsf{Arion}\) GTDS. Then_

\[\deg\left(f_{i}\right)=2^{n-i}\cdot\left(d_{1}+e\right)-d_{1}.\]

Proof: We perform an induction from the bottom branch upwards; for \(i=n\) and \(i=n-1\) the claim is clear. Suppose the claim is true for all indices greater than or equal to \(i\), i.e. \(\deg\left(f_{i}\right)=2^{n-i}\cdot\left(d_{1}+e\right)-d_{1}\). By construction, the leading monomial of \(f_{i-1}\) is the leading term of the polynomial \(x_{i-1}^{d_{1}}\cdot f_{i}^{2}\). Thus,

\[\deg\left(f_{i-1}\right) =\deg\left(x_{i-1}^{d_{1}}\cdot f_{i}^{2}\right)=d_{1}+2\cdot\left(2^{n-i}\cdot\left(d_{1}+e\right)-d_{1}\right)=2^{n-(i-1)}\cdot\left(d_{1}+e\right)-d_{1},\]

which proves the claim. \(\sqcap\)\(\sqcup\)

\begin{table} \begin{tabular}{c|c|c} \hline \(d_{2}\) & Evaluation chain & Number of \\ & & Multiplications \\ \hline 121 & \(y=\left(x^{2}\right)^{2}\), \(z=\left(y^{2}\cdot y\right)^{2}\), \(x^{121}=\left(z^{2}\right)^{2}\cdot z\cdot x\) & 9 \\ 123 & \(y=x^{2}\cdot x\), \(z=\left(\left(y^{2}\right)^{2}\right)^{2}\), \(x^{123}=\left(z^{2}\right)^{2}\cdot z\cdot y\) & 9 \\ 125 & \(y=\left(x^{2}\right)^{2}\cdot x\), \(z=\left(\left(y^{2}\right)^{2}\right)^{2}\), \(x^{125}=z^{2}\cdot z\cdot y\) & 9 \\ 129 & \(y=\left(\left(\left(x^{2}\right)^{2}\right)^{2}\right)^{2}\), \(z=\left(\left(y^{2}\right)^{2}\right)^{2}\), \(x^{129}=z\cdot x\) & 8 \\ 161 & \(y=\left(x^{2}\right)^{2}\cdot x\), \(z=\left(\left(y^{2}\right)^{2}\right)^{2}\), \(x^{161}=\left(z^{2}\right)^{2}\cdot x\) & 9 \\ 193 & \(y=\left(x^{2}\right)\cdot x\), \(z=\left(\left(\left(y^{2}\right)^{2}\right)^{2}\right)^{2}\), \(x^{193}=\left(z^{2}\right)^{2}\cdot x\) & 9 \\ 195 & \(y=\left(x^{2}\right)\cdot x\), \(z=\left(\left(\left(y^{2}\right)^{2}\right)^{2}\right)^{2}\), \(x^{195}=\left(z^{2}\right)^{2}\cdot y\) & 9 \\ 257 & \(y=\left(\left(\left(x^{2}\right)^{2}\right)^{2}\right)^{2}\), \(z=\left(\left(\left(y^{2}\right)^{2}\right)^{2}\right)^{2}\), \(x^{257}=z\cdot x\) & 9 \\ \hline \end{tabular} \end{table} Table 1: Efficient evaluation of exponents \(d_{2}\in\{121,123,125,129,161,193,195,257\}\).
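To make Definition 1 and the bottom-up evaluation order concrete, the following minimal Python sketch (not part of our reference implementation; all names are ours) evaluates the GTDS over a toy prime. The constants must satisfy the non-residue condition of Definition 1; for \(p=11\) the pair \((\alpha_{1},\alpha_{2})=(0,1)\) does, since \(-4\equiv 7\) is a non-residue modulo 11.

```
# Minimal sketch of evaluating the Arion GTDS of Definition 1 over a toy
# prime p; alphas[i] = (alpha_{i,1}, alpha_{i,2}) and betas[i] = beta_i
# must satisfy the non-residue condition in an actual instantiation.

def arion_gtds(x, p, d1, d2, alphas, betas):
    """Evaluate the GTDS on the input vector x over F_p (bottom-up)."""
    n = len(x)
    e = pow(d2, -1, p - 1)               # inverse exponent: e * d2 = 1 mod p-1
    y = [0] * n
    y[n - 1] = pow(x[n - 1], e, p)       # f_n(x) = x_n^e
    sigma = (x[n - 1] + y[n - 1]) % p    # running sum sigma_{i+1,n}
    for i in range(n - 2, -1, -1):
        a1, a2 = alphas[i]
        g = (sigma * sigma + a1 * sigma + a2) % p
        h = (sigma * sigma + betas[i] * sigma) % p
        y[i] = (pow(x[i], d1, p) * g + h) % p
        sigma = (sigma + x[i] + y[i]) % p    # extend sum for the next branch up
    return y

# Toy usage: p = 11, d1 = 3 (smallest with gcd(3, 10) = 1), d2 = 7, so e = 3.
out = arion_gtds([4, 9], 11, 3, 7, alphas=[(0, 1)], betas=[5])
```

The running variable `sigma` accumulates \(\sigma_{i+1,n}\) incrementally, so each branch is evaluated with a constant number of multiplications on top of the two power maps, mirroring the degree-doubling per branch seen in Lemma 2.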
To introduce mixing between the branches we chose a circulant matrix whose product with a vector can be evaluated efficiently.

Definition 3 (Affine layer of Arion): Let \(p\in\mathbb{Z}\) be a prime and let \(\mathbb{F}_{p}\) be the field with \(p\) elements. The affine layer of Arion is defined as

\[\mathcal{L}_{\mathbf{c}}:\mathbb{F}_{p}^{n}\rightarrow\mathbb{F}_{p}^{n},\quad\mathbf{x}\mapsto\operatorname{circ}\left(1,\ldots,n\right)\mathbf{x}+\mathbf{c},\]

where \(\operatorname{circ}\left(1,\ldots,n\right)\in\mathbb{F}_{p}^{n\times n}\) is the circulant matrix3 with entries \(1,\ldots,n\) in the first row and \(\mathbf{c}\in\mathbb{F}_{p}^{n}\) is a constant vector.

Footnote 3: We briefly recall the definition of (right) circulant matrices: Let \(k\) be a field and let \(\mathbf{v}=(v_{1},\ldots,v_{n})\in k^{n}\), then the circulant matrix of \(\mathbf{v}\) is defined as
\[\operatorname{circ}(\mathbf{v})=\begin{pmatrix}v_{1}&v_{2}&\ldots&v_{n-1}&v_{n}\\ v_{n}&v_{1}&\ldots&v_{n-2}&v_{n-1}\\ &\ddots&&\\ v_{2}&v_{3}&\ldots&v_{n}&v_{1}\end{pmatrix}.\]

Remark 4: For any prime number \(p\in\mathbb{Z}\) with \(p>130\) and \(n=2,3,4\) the matrix \(\operatorname{circ}\left(1,\ldots,n\right)\) is an MDS matrix over \(\mathbb{F}_{p}\).

The following algorithm provides an efficient way to evaluate the matrix-vector product for \(\operatorname{circ}\left(1,\ldots,n\right)\).

```
Input: \(\mathbf{v}=(v_{1},\ldots,v_{n})^{\intercal}\in\mathbb{F}_{p}^{n}\)
Output: \(\operatorname{circ}\left(1,\ldots,n\right)\mathbf{v}\in\mathbb{F}_{p}^{n}\)
1: Initialize \(\mathbf{w}=(0,\ldots,0)\in\mathbb{F}_{p}^{n}\).
2: Compute \(\sigma=\sum_{i=1}^{n}v_{i}\) and set \(w_{1}=\sigma+\sum_{i=1}^{n}(i-1)\cdot v_{i}\).
3: Set \(i=2\).
4: while \(i\leq n\) do
5:   Set \(w_{i}=w_{i-1}-\sigma+n\cdot v_{i-1}\).
6:   \(i=i+1\).
7: return \(\mathbf{w}\)
```

**Algorithm 1** Efficient evaluation of the matrix-vector product
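As a sanity check of Algorithm 1, here is a small Python sketch (function names ours, toy modulus assumed) that evaluates the product with \(O(n)\) additions and can be compared against the naive row-by-row product with \(\operatorname{circ}(1,\ldots,n)\).

```
# Sketch of Algorithm 1 versus the naive product with circ(1,...,n) over F_p.

def circ_mul(v, p):
    """Evaluate circ(1,...,n) * v over F_p with O(n) additions."""
    n = len(v)
    sigma = sum(v) % p
    w = [0] * n
    w[0] = (sigma + sum(i * v[i] for i in range(n))) % p   # sum (i+1) * v_i
    for i in range(1, n):
        # each row is the previous one rotated right by one position
        w[i] = (w[i - 1] - sigma + n * v[i - 1]) % p
    return w

def circ_mul_naive(v, p):
    n = len(v)
    rows = [[(j - i) % n + 1 for j in range(n)] for i in range(n)]
    return [sum(row[j] * v[j] for j in range(n)) % p for row in rows]

assert circ_mul([3, 1, 4, 1], 101) == circ_mul_naive([3, 1, 4, 1], 101)
```

The update \(w_{i}=w_{i-1}-\sigma+n\cdot v_{i-1}\) works because each row of \(\operatorname{circ}(1,\ldots,n)\) is the previous row with every coefficient decreased by one (modulo the wrap-around entry, which jumps from \(1\) back to \(n\)).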
To define a keyed permutation we need a key addition, which we denote as

\[\mathcal{K}_{\mathbf{k}}:\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n},\qquad(\mathbf{x},\mathbf{k})\mapsto\mathbf{x}+\mathbf{k}.\]

The keyed permutation Arion is now defined as follows.

Definition 5 (Arion): Let \(p\in\mathbb{Z}\) be a prime and let \(\mathbb{F}_{p}\) be the field with \(p\) elements, and let \(n>1\) and \(r\geq 1\) be integers. For \(1\leq i\leq r\) let \(\mathcal{F}_{\mathsf{Arion}}^{(i)}:\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n}\) be an \(\mathsf{Arion}\) GTDS and for \(1\leq i\leq r\) let \(\mathcal{L}_{\mathbf{c}_{i}}:\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n}\) be affine layers from Definition 3. The \(i^{\text{th}}\) round function of \(\mathsf{Arion}\) is defined as

\[\mathcal{R}_{\mathbf{k}}^{(i)}:\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}^{n},\qquad(\mathbf{x},\mathbf{k})\mapsto\mathcal{K}_{\mathbf{k}}\circ\mathcal{L}_{\mathbf{c}_{i}}\circ\mathcal{F}_{\mathsf{Arion}}^{(i)}\left(\mathbf{x}\right).\]

Then \(\mathsf{Arion}\) is defined as the following composition

\[\mathsf{Arion}:\mathbb{F}_{p}^{n}\times\mathbb{F}_{p}^{n\times(r+1)} \to\mathbb{F}_{p}^{n},\]
\[\left(\mathbf{x},\mathbf{k}_{0},\mathbf{k}_{1},\ldots,\mathbf{k}_{r}\right) \mapsto\mathcal{R}_{\mathbf{k}_{r}}^{(r)}\circ\cdots\circ\mathcal{R}_{\mathbf{k}_{1}}^{(1)}\circ\mathcal{L}_{\mathbf{0}}\circ\mathcal{K}_{\mathbf{k}_{0}}\left(\mathbf{x}\right).\]

Further, we denote with \(\mathsf{Arion}\)-\(\pi\) the unkeyed permutation, i.e. \(\mathsf{Arion}\) instantiated with the key \(\mathbf{k}_{0}=\ldots=\mathbf{k}_{r}=\mathbf{0}\). Since our final aim is to construct a hash function using \(\mathsf{Arion}\)-\(\pi\), we analyze \(\mathsf{Arion}\) only for keys \(\mathbf{k}_{j}=\mathbf{k}\), where \(\mathbf{k}\in\mathbb{F}_{p}^{n}\), in every round. We do not give a key scheduling algorithm for keys whose sizes are larger than the block size; instantiating \(\mathsf{Arion}\) with such a key is not the main topic of this paper.

### Hash Function

For the hash function \(\mathsf{ArionHash}\) over \(\mathbb{F}_{p}^{n}\) we instantiate \(\mathsf{Arion}\)-\(\pi\) in sponge mode [9, 10]. The state size \(n=r+c\) is split into the rate part \(r\) and the capacity part \(c\). In [10, Theorem 2] it has been proven that for a random permutation the sponge construction is indistinguishable from a random oracle up to \(\min\left\{p^{r},p^{c/2}\right\}\) queries. Therefore, to provide \(\kappa\) bits of security we require \(p^{r}\geq 2^{\kappa}\) and \(p^{c/2}\geq 2^{\kappa}\), i.e.

\[r\geq\frac{\kappa}{\log_{2}\left(p\right)},\qquad\text{and}\qquad c\geq\frac{2\cdot\kappa}{\log_{2}\left(p\right)}. \tag{1}\]

Given an input message \(m\) we choose a padding rule similar to that of \(\mathsf{Poseidon}\) [33, §4.2]: we append the smallest number of zeros \(<r\) such that the size of \(m\mid\mid 0^{*}\) is a multiple of \(r\). If we have to pad the message, then we replace the initial value \(\mathtt{IV}\in\mathbb{F}_{p}^{c}\) with \(|m|\mid\mid\mathtt{IV}^{\prime}\in\mathbb{F}_{p}^{c}\), where \(|m|\in\mathbb{F}_{p}\) is the size of the input message \(m\) and \(\mathtt{IV}^{\prime}\in\mathbb{F}_{p}^{c-1}\) is an initial value.

### Instantiations

Target primes for \(\mathsf{Arion}\) & \(\mathsf{ArionHash}\) are the roughly 250 bit primes associated with the BLS12 and BN254 curves. Since the degree growth of the \(\mathsf{Arion}\) GTDS is dominated by the power permutation in the bottom component, we list in Table 2 the smallest integer \(m\in\mathbb{Z}\) such that \(m\cdot e\geq p\). Therefore, by Lemma 2, for \(n=3\) and all exponents except \(d_{2}=193\) the Arion GTDS surpasses degree \(p\) in the first component. In Table 3 we provide the parameters for Arion and ArionHash and their aggressive versions \(\alpha\)-Arion and \(\alpha\)-ArionHash with \(d_{1},d_{2}\in\mathbb{Z}\) such that \(\gcd\left(d_{i},p-1\right)=1\), \(d_{1}=3,5\) and \(121\leq d_{2}\leq 257\). The number of rounds for Arion and ArionHash is chosen to provide 128 bit security against the most efficient probabilistic algorithm available to date for polynomial system solving in a Grobner basis attack on ArionHash. Since \(2^{\frac{250}{2}}=2^{125}\) we consider all possible rate-capacity pairs \(n=c+r\) suitable for ArionHash over BLS12 and BN254. As hash output of ArionHash over BLS12 and BN254 we recommend using a single \(\mathbb{F}_{p}\) element.

\begin{table} \begin{tabular}{c|c|c} \hline & \multicolumn{2}{c}{\(\lceil p/e\rceil\)} \\ \hline \(d_{2}\) & BLS12 & BN254 \\ \hline 121 & n.a. & 3 \\ 123 & n.a. & n.a. \\ 125 & 3 & 2 \\ 129 & n.a. & n.a. \\ 161 & 3 & 4 \\ 193 & 13 & 3 \\ 195 & n.a. & n.a. \\ 257 & 3 & 2 \\ \hline \end{tabular} \end{table} Table 2: Smallest positive integer \(m\in\mathbb{Z}\) such that \(m\cdot e\geq p\) for BLS12 and BN254.
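The entries of Table 2 can be reproduced with a few lines of code, assuming the caller supplies the scalar field prime \(p\) of the target curve (we deliberately do not hard-code the BLS12 and BN254 primes here); function names are ours.

```
# Sketch of how a Table 2 entry is derived from p and d2.
from math import gcd

def table2_entry(p, d2):
    """Smallest m with m * e >= p, where e = d2^{-1} mod (p - 1)."""
    assert gcd(d2, p - 1) == 1, "d2 must be invertible modulo p - 1"
    e = pow(d2, -1, p - 1)
    return -(-p // e)            # ceil(p / e) in integer arithmetic
```

By Lemma 2, for \(n=3\) the first component has degree \(3\cdot d_{1}+4\cdot e\), so it surpasses \(p\) essentially when the returned entry is at most 4, which is consistent with the exceptional entry 13 for \(d_{2}=193\) over BLS12.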
The number of rounds for \(\alpha\)-Arion and \(\alpha\)-ArionHash is chosen to provide 128 bit security against the most efficient deterministic algorithm for polynomial system solving in a Grobner basis attack on ArionHash. For more details on the security with respect to Grobner basis attacks we refer to Appendix C in the full version of the paper [49].

## 3 Security Analysis of Arion

### Statistical Cryptanalysis

**Differential Cryptanalysis.** In differential cryptanalysis [13] and its variants, the propagation of input differences through the rounds of a block cipher or hash function is exploited to recover the secret key or to construct a collision. For the Arion GTDS the probability that an input difference \(\boldsymbol{\Delta x}\in\mathbb{F}_{q}^{n}\setminus\{\mathbf{0}\}\) propagates to an output difference \(\boldsymbol{\Delta y}\in\mathbb{F}_{q}^{n}\) is bounded by (see [48, Theorem 18, Corollary 19])

\[\mathbb{P}\left[\mathcal{F}_{\mathsf{Arion}}\colon\boldsymbol{\Delta x}\rightarrow\boldsymbol{\Delta y}\right]\leq\left(\frac{d_{2}}{p}\right)^{\operatorname{wt}(\boldsymbol{\Delta x})}\leq\frac{d_{2}}{p}, \tag{2}\]

where \(\operatorname{wt}:\mathbb{F}_{q}^{n}\rightarrow\mathbb{Z}\) denotes the Hamming weight. For \(p\geq 2^{250}\) and \(d_{2}\leq 2^{9}\) this probability is bounded by \(2^{-241\cdot\operatorname{wt}(\boldsymbol{\Delta x})}\leq 2^{-241}\). Under the assumption that the rounds of Arion are statistically independent we can bound the probability of any non-trivial differential trail by \(2^{-241\cdot r}\). Moreover, even if an adversary can search a restricted differential hull of size \(2^{120}\) between the \(2^{\text{nd}}\) and the \(r^{\text{th}}\) round, two rounds are already sufficient to provide 128 bit security against differential cryptanalysis. For more details we refer to Appendix A.1 in the full version of the paper [49]. Note that a small differential probability also rules out the boomerang attack [52, 37], which exploits two complementary differential patterns that span the cipher, of which at least one must cover two rounds.

**Truncated Differential & Rebound Cryptanalysis.** In a truncated differential attack [38] an attacker can only predict parts of the difference between pairs of texts. We expect that the Arion GTDS admits truncated differentials of Hamming weight 1 with probability 1 for the first round. On the other hand, if \(\operatorname{wt}(\mathbf{v})=1\), then \(\operatorname{wt}\big{(}\operatorname{circ}(1,\ldots,n)\mathbf{v}\big{)}=n\). Therefore, such a truncated differential activates all inputs in the second round of Arion. Hence, for \(p\geq 2^{250}\) and \(d_{2}\leq 2^{9}\) the differential probability for the second round is bounded by \(2^{-241\cdot n}\). Even if an adversary can search restricted differential hulls of size \(2^{120}\) after the first round, this probability and Equation (2) nullify truncated differential attacks within the 128 bit security target. In a rebound attack [41, 45] an adversary connects two (truncated) differentials in the middle of a cipher or hash function. Probability 1 truncated differentials can cover at most one round of Arion, so \(r-2\) rounds can be covered with an inside-out approach. By our previous analysis we do not expect that a successful rebound attack can be mounted on 4 or more rounds of Arion & ArionHash within the 128 bit security target. For more details we refer to Appendix A.2 in the full version of the paper [49].
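The arithmetic behind these exponents is elementary; a quick sketch (assuming, as in the text, \(p\geq 2^{250}\) and \(d_{2}\leq 2^{9}\)) verifying the per-branch bound from (2) and the second-round bound once all branches are active:

```
# Quick check of the differential bounds discussed above.
from math import log2

p_bits, d2 = 250, 2**9
per_branch = log2(d2) - p_bits    # log2(d2 / p) = -241 per active branch
print(per_branch)                 # -241.0
for n in (3, 4, 8):               # round 2 activates all n branches
    print(n, n * per_branch)      # e.g. n = 3 gives 2^-723
```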
**Linear Cryptanalysis.** Linear cryptanalysis [5, 44] utilizes affine approximations of the round functions for a sample of known plaintexts. For any additive character \(\chi:\mathbb{F}_{q}\to\mathbb{C}\) and any pair of masks \(\mathbf{a},\mathbf{b}\in\mathbb{F}_{q}^{n}\setminus\{\mathbf{0}\}\), the linear probability of the Arion GTDS is bounded by (see [48, Theorem 24, Corollary 25])

\[\mathrm{LP}_{\mathcal{F}_{\mathsf{Arion}}}(\chi,\mathbf{a},\mathbf{b})\leq\frac{\left(d_{2}-1\right)^{2}}{q}. \tag{3}\]

Therefore, for \(p\geq 2^{250}\) and \(d_{2}\leq 2^{9}\) this probability is bounded by \(2^{-232}\), and hence under the assumption of statistically independent rounds of Arion the linear probability of any non-trivial linear trail is bounded by \(2^{-232\cdot r}\). Moreover, even if an adversary can search a restricted linear hull of size \(2^{120}\) between the \(2^{\mathrm{nd}}\) and the \(r^{\mathrm{th}}\) round, two rounds are already sufficient to provide 128 bit security against linear cryptanalysis. For more details we refer to Appendix A.3 in the full version of the paper [49].

### Algebraic Cryptanalysis

**Interpolation & Integral Cryptanalysis.** Interpolation attacks [36] construct the polynomial vector representing a cipher without knowledge of the secret key. If such an attack is successful against a cipher, then an adversary can encrypt any plaintext without knowledge of the secret key. Recall that any function \(F:\mathbb{F}_{q}^{n}\to\mathbb{F}_{q}\) can be represented by a polynomial \(f\in\mathbb{F}_{q}[\mathbf{X}_{n}]=\mathbb{F}_{q}[x_{1},\ldots,x_{n}]/\left(x_{1}^{q}-x_{1},\ldots,x_{n}^{q}-x_{n}\right)\); thus at most \(q^{n}\) monomials can be present in \(f\). After the first round of Arion-\(\pi\) we expect that the terms

\[\left(\sum_{i=1}^{n}i\cdot x_{i}\right)^{e}+\sum_{i=1}^{n}i\cdot x_{i} \tag{4}\]

are present in every branch. After another application of the round function we expect to produce the terms

\[\left(\left(\sum_{i=1}^{n}i\cdot x_{i}\right)^{e}+\sum_{i=1}^{n}i\cdot x_{i}\right)^{e}\mod\left(x_{1}^{p}-x_{1},\ldots,x_{n}^{p}-x_{n}\right) \tag{5}\]

in every branch. By our specification \(e\) is the inverse exponent of a relatively low degree permutation; therefore we expect that after two rounds almost all monomials from \(\mathbb{F}_{q}[\mathbf{X}_{n}]\) are present in every component of Arion. For more details we refer to Appendix B.1 in the full version of the paper [49].

For a polynomial \(f\in\mathbb{F}_{q}[x_{1},\ldots,x_{n}]\), an integral distinguisher [11, 39] exploits that for any affine subspace \(V\subset\mathbb{F}_{q}^{n}\) with \(\deg\left(f\right)<\dim\left(V\right)\cdot\left(q-1\right)\) one has

\[\sum_{\mathbf{x}\in V}f(\mathbf{x})=0. \tag{6}\]

If almost all monomials are present in \(\mathsf{Arion}\)-\(\pi\), then \(\deg\left(\mathsf{Arion}\mbox{-}\pi\right)\approx n\cdot\left(q-1\right)\) in every component, so only \(V=\mathbb{F}_{q}^{n}\) is a suitable subspace for an integral distinguisher. Therefore, we do not expect that non-trivial integral distinguishers exist for \(\mathsf{Arion}\mbox{-}\pi\). For more details we refer to Appendix B.2 in the full version of the paper [49].

**Grobner Basis Analysis.** In a Grobner basis attack [17, 21] one models a cipher or hash function as a fully determined polynomial system and then solves for the key or preimage.
For the Grobner basis analysis of \(\mathsf{Arion}\) & \(\mathsf{ArionHash}\) we assume that a degree reverse lexicographic (DRL) Grobner basis can be found in \(\mathcal{O}(1)\). We base the security of \(\mathsf{Arion}\) & \(\mathsf{ArionHash}\) solely on the complexity of solving their polynomial systems via state-of-the-art deterministic and probabilistic Grobner basis conversion algorithms [27, 28, 29] combined with the univariate polynomial solving algorithm of Bariant et al. [8, §3.1]. With these methods, solving a fully determined polynomial system over a finite field \(\mathbb{F}_{q}\) with known DRL Grobner basis requires

\[\mathcal{O}\left(n\cdot d^{\omega}+d\cdot\log\left(q\right)\cdot\log\left(d\right)\cdot\log\left(\log\left(d\right)\right)+d\cdot\log\left(d\right)^{2}\cdot\log\left(\log\left(d\right)\right)\right) \tag{7}\]

field operations with deterministic methods, and

\[\mathcal{O}\left(\sqrt{n}\cdot d^{2+\frac{n-1}{n}}+d\cdot\log\left(q\right)\cdot\log\left(d\right)\cdot\log\left(\log\left(d\right)\right)+d\cdot\log\left(d\right)^{2}\cdot\log\left(\log\left(d\right)\right)\right) \tag{8}\]

field operations with probabilistic methods, where \(q\) is the size of the finite field, \(n\) is the number of variables, \(d\) is the \(\mathbb{F}_{q}\)-vector space dimension of the polynomial ring modulo the polynomial system and \(2\leq\omega<2.3727\) is a linear algebra constant. We conjecture that the quotient space dimension of \(\mathsf{Arion}\) grows as, or is bounded by,

\[\dim_{\mathbb{F}_{p}}\left(\mathcal{F}_{\mathsf{Arion}}\right)\left(n,r,d_{1},d_{2}\right)=\left(d_{2}\cdot\left(d_{1}+2\right)^{n-1}\right)^{r}, \tag{9}\]

and for \(\mathsf{ArionHash}\) we conjecture that the dimension grows as, or is bounded by,

\[\dim_{\mathbb{F}_{p}}\left(\mathcal{F}_{\mathsf{ArionHash}}\right)\left(n,r,d_{1},d_{2}\right)=\left(2^{n-1}\cdot d_{2}\cdot\left(d_{1}+1\right)-d_{1}\cdot d_{2}\right)^{r}. \tag{10}\]

Round numbers for \(\mathsf{Arion}\) & \(\mathsf{ArionHash}\) in Table 3 are chosen to resist deterministic as well as probabilistic Grobner basis attacks by an ideal adversary with \(\omega=2\) within the 128 bit security target. Round numbers for \(\alpha\)-\(\mathsf{Arion}\) & \(\alpha\)-\(\mathsf{ArionHash}\) in Table 3 are chosen to resist only deterministic Grobner basis attacks within the 128 bit security target. For \(\mathsf{ArionHash}\) one can set up a collision polynomial system by connecting two preimage polynomial systems. Note that this polynomial system is in general not fully determined; therefore an adversary has to randomly guess some variables before solving the system. If an adversary guesses output variables of the sponge until the collision polynomial system is fully determined, then we conjecture that the quotient space dimension of the collision polynomial system grows as, or is bounded by,

\[\dim_{\mathbb{F}_{p}}\left(\mathcal{F}_{\mathsf{ArionHash},coll}\right)\left(n,r,d_{1},d_{2}\right)=\left(\dim_{\mathbb{F}_{p}}\left(\mathcal{F}_{\mathsf{ArionHash}}\right)\left(n,r,d_{1},d_{2}\right)\right)^{2}. \tag{11}\]

Thus, we do not expect a collision Grobner basis attack to be more efficient than a preimage attack. For more details we refer to Appendix C in the full version of the paper [49].
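As an illustration of how round numbers can be sanity-checked against this model, the following sketch evaluates the conjectured ArionHash quotient dimension (10) and the base-2 logarithm of the dominant \(n\cdot d^{\omega}\) term of (7) for an ideal adversary with \(\omega=2\). The instance parameters in the loop are hypothetical examples and are not taken from Table 3.

```
# Sketch: log2 of the leading d^omega cost term of (7) for ArionHash,
# using the conjectured quotient dimension (10); names are ours.
from math import log2

def security_bits(n, r, d1, d2, omega=2.0):
    base = 2**(n - 1) * d2 * (d1 + 1) - d1 * d2   # per-round factor of (10)
    return omega * r * log2(base)                 # log2(d^omega), d = base^r

# hypothetical instance: n = 3, d1 = 5, d2 = 257
for r in (4, 5, 6):
    print(r, round(security_bits(3, r, 5, 257)))
```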
## 4 Performance Evaluation

In this section we compare various instances of ArionHash, Anemoi, Griffin and Poseidon with respect to R1CS (Section 4.2) and Plonk (Section 4.3). We first discuss the theoretical foundation of an efficient implementation of an ArionHash circuit. In the Anemoi proposal it was observed that CCZ-equivalence is a route to construct high degree permutations that can be verified with CCZ-equivalent low degree functions [16, §4.1]. In Section 4.1 we follow this approach to prove that an ArionHash circuit can be transformed, via an affine transformation, into an efficient circuit that avoids the computation of \(x^{e}\).

### Reducing the Number of Constraints

By definition of the Arion GTDS a prover circuit will have to verify that

\[y=x^{e}, \tag{12}\]

though since \(e\) induces the inverse power permutation to \(d_{2}\in\{121,123,125,129,161,193,195,257\}\), the naive circuit for Equation (12) will introduce many constraints. On the other hand, from a prover's perspective Equation (12) is equivalent to

\[y^{d_{2}}=\left(x^{e}\right)^{d_{2}}=x, \tag{13}\]

for all \(x\in\mathbb{F}_{p}\). Thus, to reduce the number of multiplicative constraints in an implementation we are well advised to implement the equivalent circuit instead of the naive circuit. We note that the same trick was applied in Griffin [31] to reduce the number of constraints. In the design of Anemoi [16, §4] a new tool was introduced to reduce the number of constraints for an Anemoi circuit: CCZ-equivalence [19]. The authors found a high degree permutation, the open Flystel, which is CCZ-equivalent to a low degree function, the closed Flystel. Consequently, this can be exploited to significantly reduce the number of constraints in a prover circuit (cf. [16, Corollary 2]). Let us now formalize the trick in Equation (13) in terms of CCZ-equivalence.

**Definition 6**.: _Let \(\mathbb{F}_{q}\) be a finite field, and let \(F,G:\mathbb{F}_{q}^{n}\rightarrow\mathbb{F}_{q}^{m}\) be functions._

_(1) The graph of_ \(F\) _is defined as_

\[\Gamma_{F}=\Big{\{}\big{(}\mathbf{x},F(\mathbf{x})\big{)}\mid\mathbf{x}\in\mathbb{F}_{q}^{n}\Big{\}}.\]

_(2) \(F\) and \(G\) are said to be CCZ-equivalent if there exists an affine permutation \(\mathcal{A}\) of \(\mathbb{F}_{q}^{n}\times\mathbb{F}_{q}^{m}\) such that_

\[\Gamma_{F}=\mathcal{A}(\Gamma_{G}).\]

Now let us describe a GTDS that is CCZ-equivalent to the \(\mathsf{Arion}\) GTDS.

**Proposition 7**.: _Let \(\mathbb{F}_{p}\) be a prime field, and let \(n,d_{1},d_{2},e\in\mathbb{Z}_{>1}\) be integers such that_

1. \(d_{1}\) _is the smallest positive integer such that_ \(\gcd\left(d_{1},p-1\right)=1\)_,_
2. \(d_{2}\) _is an arbitrary integer such that_ \(\gcd\left(d_{2},p-1\right)=1\)_, and_
3.
\(e\cdot d_{2}\equiv 1\mod p-1\)_._

_Let \(\mathcal{F}_{\mathsf{Arion}}=\{f_{1},\ldots,f_{n}\}\) be the \(\mathsf{Arion}\) GTDS, let \(g_{i},h_{i}\in\mathbb{F}_{p}[x]\) be the polynomials that define \(\mathcal{F}_{\mathsf{Arion}}\), and let the GTDS \(\mathcal{G}=\{\hat{f}_{1},\ldots,\hat{f}_{n}\}\) be defined as_

\[\hat{f}_{i}(x_{1},\ldots,x_{n}) =x_{i}^{d_{1}}\cdot g_{i}(\tau_{i+1,n})+h_{i}(\tau_{i+1,n}),\qquad 1\leq i\leq n-1,\]
\[\hat{f}_{n}(x_{1},\ldots,x_{n}) =x_{n}^{d_{2}},\]

_where_

\[\tau_{i+1,n}=\sum_{j=i+1}^{n}\big{(}x_{j}+\hat{f}_{j}(x_{1},\ldots,x_{n})\big{)}.\]

_Then \(\mathcal{F}_{\mathsf{Arion}}\) is CCZ-equivalent to \(\mathcal{G}\)._

Proof.: We consider the affine permutation \(\mathcal{A}:\mathbb{F}_{p}^{2n}\rightarrow\mathbb{F}_{p}^{2n}\) that swaps the \(n^{\text{th}}\) element with the \((2n)^{\text{th}}\) element; moreover we consider the substitution \(\mathbf{x}=\big{(}\hat{x}_{1},\ldots,\hat{x}_{n-1},\hat{x}_{n}^{d_{2}}\big{)}\). Now we apply the affine permutation to \(\Gamma_{\mathcal{F}_{\mathsf{Arion}}}\), which yields

\[\mathcal{A}\big{(}\mathbf{x},\mathcal{F}_{\mathsf{Arion}}(\mathbf{x})\big{)}=\begin{pmatrix}\{\hat{x}_{i}\}_{1\leq i\leq n-1}\\ \hat{x}_{n}^{e\cdot d_{2}}\\ \Big{\{}f_{i}\left(\hat{x}_{i},\ldots,\hat{x}_{n}^{d_{2}}\right)\Big{\}}_{1\leq i\leq n-1}\\ \hat{x}_{n}^{d_{2}}\end{pmatrix}.\]

By construction of \(d_{2}\) and \(e\) we have that \(x^{e\cdot d_{2}}=x\) for every \(x\in\mathbb{F}_{p}\). Let us now investigate what happens to the \(f_{i}\)'s. Starting with \(f_{n-1}\), we have that

\[\sigma_{n,n}\left(\hat{x}_{n}^{d_{2}}\right)=\hat{x}_{n}^{d_{2}}+\hat{x}_{n}^{e\cdot d_{2}}=\hat{x}_{n}+\hat{x}_{n}^{d_{2}}=\tau_{n,n}(\hat{x}_{n}),\]

for all \(\hat{x}_{n}\in\mathbb{F}_{p}\), and therefore

\[f_{n-1}\left(\hat{x}_{n-1},\hat{x}_{n}^{d_{2}}\right)=\hat{f}_{n-1}(\hat{x}_{n-1},\hat{x}_{n}).\]

Inductively, we now go through all the branches to conclude that \(f_{i}\left(\hat{x}_{i},\ldots,\hat{x}_{n}^{d_{2}}\right)=\hat{f}_{i}(\hat{x}_{i},\ldots,\hat{x}_{n})\) for all \(1\leq i\leq n-1\), which proves that \(\mathcal{A}\big{(}\mathbf{x},\mathcal{F}(\mathbf{x})\big{)}=\big{(}\hat{\mathbf{x}},\mathcal{G}(\hat{\mathbf{x}})\big{)}\).

**Corollary 8**.: _Verifying that \((y_{1},\ldots,y_{n})=\mathcal{F}(x_{1},\ldots,x_{n})\) is equivalent to verifying that \((y_{1},\ldots,y_{n-1},x_{n})=\mathcal{G}(x_{1},\ldots,x_{n-1},y_{n})\)._

Note that it follows from [48, Theorem 18, 24] that the Arion GTDS \(\mathcal{F}\) and its CCZ-equivalent GTDS \(\mathcal{G}\) from Proposition 7 are in the same security class with respect to differential and linear cryptanalysis. Unlike for Anemoi, the CCZ-equivalent GTDS \(\mathcal{G}\) is not a low degree function, though when implementing it as a prover circuit we never have to use multiplications to compute the \(\tau_{i+1,n}\).
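A toy illustration of the trick behind Equation (13) and Corollary 8, with the prime and names ours: the prover supplies \(y\), and the verifier checks the cheap relation \(y^{d_{2}}=x\) instead of recomputing the huge exponent \(e\).

```
# Toy check of the verification trick (13) on a small Mersenne prime.
p, d2 = 2**31 - 1, 5                 # gcd(5, p - 1) = 1 for this p
e = pow(d2, -1, p - 1)               # huge inverse exponent

x = 123456789
y = pow(x, e, p)                     # the prover's claimed value of x^e
assert pow(y, d2, p) == x            # cheap check: d2 is small, e is not
```

For \(d_{2}=5\) the check costs three field multiplications (two squarings and one product), whereas computing \(x^{e}\) directly would require on the order of \(\log_{2}(p)\) squarings.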
### R1CS Performance of ArionHash

Estimating the number of multiplicative constraints in an R1CS circuit for ArionHash is straightforward.

Lemma 9: _Let \(\mathbb{F}_{p}\) be a finite field, let \(r,n\geq 2\) be integers, and let ArionHash with \(r\) rounds be defined over \(\mathbb{F}_{p}^{n}\). For \(i=1,2\) denote with \(d_{i,\text{inc}}\) the minimal number of multiplications to compute the univariate power permutation \(x^{d_{i}}\). Then a prover R1CS circuit for ArionHash needs_

\[N_{\textsf{ArionHash}}=r\cdot\big{(}(n-1)\cdot(d_{1,\text{inc}}+2)+d_{2,\text{inc}}\big{)}\]

_multiplicative constraints._

Proof: By Corollary 8 one needs \(d_{2,\text{inc}}\) constraints in the \(n^{\text{th}}\) branch. In each of the remaining \(n-1\) branches one needs \(d_{1,\text{inc}}\) constraints for the power permutation, \(1\) constraint for the computation of \(g_{i}\) and \(h_{i}\), and \(1\) multiplication for the product of the power permutation and \(g_{i}\).

Analogously, the numbers of R1CS constraints for Anemoi, Griffin and Poseidon (cf. [16, §7], [31, §7.2] and [33]) are given by

\[N_{\textsc{Griffin}} =2\cdot r\cdot(d_{inc}+n-2)\,, \tag{14}\]
\[N_{\texttt{Anemoi}} =\frac{r\cdot n}{2}\cdot(d_{inc}+2)\,, \tag{15}\]
\[N_{\textsc{Poseidon}} =d_{inc}\cdot(n\cdot r_{f}+r_{p})\,. \tag{16}\]

In Table 4 we compiled the round numbers of the hash functions. In Table 5 we compare the theoretical numbers of R1CS constraints of the various hash functions. Moreover, in Appendix D.1 of the full version of the paper [49] we compare the performance of Arion, Griffin and Poseidon using the C++ library libsnark [50] that is used in the privacy-protecting digital currency Zcash [35].

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multicolumn{6}{c}{R1CS Constraints} \\ \hline & ArionHash & \(\alpha\)-ArionHash & Griffin & Anemoi & Poseidon \\ \hline \(n\) & \multicolumn{5}{c}{\(d_{1}=3\)} \\ \hline 3 & 102 & 85 & 96 & & 216 \\ 4 & 126 & 84 & 112 & 96 & 232 \\ 5 & 120 & 100 & & & 248 \\ 6 & 145 & 116 & & 120 & 264 \\ 8 & 148 & 148 & 176 & 160 & 296 \\ \hline \(n\) & \multicolumn{5}{c}{\(d_{1}=5\)} \\ \hline 3 & 114 & 76 & 96 & & 240 \\ 4 & 120 & 96 & 110 & 120 & 264 \\ 5 & 125 & 116 & & & 288 \\ 6 & 170 & 136 & & 150 & 312 \\ 8 & 176 & 176 & 162 & 200 & 360 \\ \hline \end{tabular} \end{table} Table 5: R1CS constraint comparison for 256 bit prime fields and 128 bit security with \(d_{2}\in\{121,123,125,161,193,195,257\}\). Round numbers for Anemoi, Griffin and Poseidon are taken from [16, §A.4], [31, Table 1] and [33, Table 1].

### Plonk Performance of ArionHash

Plonk [30] is a zkSNARK proof system which does not utilize R1CS constraints. In Plonk a 2-(input)-wire constraint is of the form, see [30, §6],

\[(a\cdot b)\cdot q_{M}+a\cdot q_{L}+b\cdot q_{R}+c\cdot q_{O}+q_{C}=0, \tag{17}\]

where \(a\) and \(b\) denote the left and right input variables, \(c\) denotes the output variable, and \(q_{M}\), \(q_{L}\), \(q_{R}\), \(q_{O}\) and \(q_{C}\) denote the "selector coefficients" of the multiplication, the variables and the constant term. The 3-(input)-wire Plonk constraint has an additional addition gate

\[(a\cdot b)\cdot q_{M}+a\cdot q_{L}+b\cdot q_{R}+c\cdot q_{O}+d\cdot q_{F}+q_{C}=0, \tag{18}\]

where \(d\) is the "fourth" variable and \(q_{F}\) its selector coefficient. Counting the number of Plonk constraints is more subtle, since we now have to account for additions too.

**Lemma 10**.: _Let \(\mathbb{F}_{p}\) be a finite field, let \(r,n\geq 2\) be integers, and let ArionHash with \(r\) rounds be defined over \(\mathbb{F}_{p}^{n}\). For \(i=1,2\) denote with \(d_{i,\text{inc}}\) the minimal number of multiplications to compute the univariate power permutation \(x^{d_{i}}\)._

1. _A prover circuit needs_ \[(n-1)\cdot(d_{1,inc}+6)+d_{2,inc}-1\] _2-wire and_ \[(n-1)\cdot(d_{1,inc}+4)+d_{2,inc}\] _3-wire Plonk constraints for the_ _ArionHash GTDS._
2.
_A prover circuit needs_ \[4\cdot(n-1)\] _2-wire and_ \[\begin{cases}n,&n=2,3,\\ n+2+\left\lceil\frac{n-3}{2}\right\rceil+\left\lceil\frac{n-4}{2}\right\rceil,&n\geq 4,\end{cases}\] _3-wire Plonk constraints for the affine layer of_ _ArionHash__._

_Then a prover circuit needs_

\[N_{\textsf{ArionHash},2}=r\cdot\left((n-1)\cdot(d_{1,inc}+6)+d_{2,inc}-1\right)+(r+1)\cdot\begin{cases}n\cdot(n-1),&n=2,3,\\ 4\cdot(n-1),&n\geq 4,\end{cases}\]

_2-wire and_

\[N_{\textsf{ArionHash},3}=r\cdot\left((n-1)\cdot(d_{1,inc}+4)+d_{2,inc}\right)+(r+1)\cdot\begin{cases}n,&n=2,3,\\ n+2+\left\lceil\frac{n-3}{2}\right\rceil+\left\lceil\frac{n-4}{2}\right\rceil,&n\geq 4,\end{cases}\]

_3-wire Plonk constraints._

Proof: For (1), again we can use the CCZ-equivalent GTDS \(\mathcal{G}\), see Proposition 7, to build the circuit for the ArionHash GTDS. For \(x^{d_{i}}\) one needs \(d_{i,inc}\) constraints, so we need \((n-1)\cdot d_{1,inc}+d_{2,inc}\) constraints for the univariate permutation polynomials. For \(\tau_{n,n}\) one needs one constraint, and for \(\tau_{i+1,n}\), where \(i<n-1\), one needs two 2-wire constraints, so in total one needs \(1+2\cdot(n-2)\) 2-wire constraints to compute the \(\tau_{i+1,n}\)'s. On the other hand, with 3-wire constraints one can compute each \(\tau_{i+1,n}\) with one constraint, so \(n-1\) in total. To compute

\[g_{i} =\tau_{i+1,n}^{2}+\alpha_{i,1}\cdot\tau_{i+1,n}+\alpha_{i,2},\]
\[h_{i} =\tau_{i+1,n}^{2}+\beta_{i}\cdot\tau_{i+1,n}\]

one needs two constraints, since one can build any quadratic polynomial with the 2-wire Plonk constraint, see Equation (17). To compute \(x_{i}^{d_{1}}\cdot g_{i}+h_{i}\) one needs two 2-wire constraints and one 3-wire constraint. We have to do this \((n-1)\) times, hence in total we need \((n-1)\cdot d_{1,inc}+d_{2,inc}+1+2\cdot(n-2)+4\cdot(n-1)\) 2-wire and \((n-1)\cdot d_{1,inc}+d_{2,inc}+(n-1)+(n-1)\cdot(2+1)\) 3-wire constraints.

For (2), we build the circuit with Algorithm 1. To compute a sum of \(m\) elements one needs \(m-1\) 2-wire constraints and \(1+\left\lceil\frac{m-3}{2}\right\rceil\) 3-wire constraints. We have to do this for \(\sigma\) and for \(\sum_{i=2}^{n}(i-1)\cdot v_{i}\), so we need \((n-1)+(n-2)+1\) 2-wire and \(1+\left\lceil\frac{n-3}{2}\right\rceil+1+\left\lceil\frac{n-4}{2}\right\rceil+1\) 3-wire constraints to compute \(w_{1}=\sigma+\sum_{i=2}^{n}(i-1)\cdot v_{i}+c_{1}\), where we fold the constant addition into the addition of the two sums. For the \(i^{\text{th}}\) component, we have that \(w_{i}=w_{i-1}-\sigma+n\cdot v_{i-1}-c_{i-1}+c_{i}\), so we need two 2-wire and one 3-wire constraints. We have to do this \(n-1\) times, hence in total we need \((n-1)+(n-2)+1+2\cdot(n-1)\) 2-wire and \(3+\left\lceil\frac{n-3}{2}\right\rceil+\left\lceil\frac{n-4}{2}\right\rceil+n-1\) 3-wire constraints. Note that for \(n\geq 4\) Algorithm 1 yields more efficient 2-wire and 3-wire circuits than generic matrix multiplication, which always needs \(n\cdot(n-1)\) 2-wire and \(n\cdot\left(1+\left\lceil\frac{n-3}{2}\right\rceil\right)\) 3-wire constraints.
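The counting formulas of Lemma 9 and Lemma 10 are easy to script. The following sketch (function names ours) takes the multiplication counts of the addition chains, e.g. \(d_{1}=3\to 2\), \(d_{1}=5\to 3\) and \(d_{2}\to 9\) for most choices in Table 1; with \(r=6\) rounds it reproduces the \(n=3\) R1CS entries of Table 5.

```
# Sketch of the constraint-count formulas for ArionHash circuits.
from math import ceil

def r1cs(r, n, d1_inc, d2_inc):                  # Lemma 9
    return r * ((n - 1) * (d1_inc + 2) + d2_inc)

def plonk_2wire(r, n, d1_inc, d2_inc):           # Lemma 10, 2-wire total
    affine = n * (n - 1) if n in (2, 3) else 4 * (n - 1)
    return r * ((n - 1) * (d1_inc + 6) + d2_inc - 1) + (r + 1) * affine

def plonk_3wire(r, n, d1_inc, d2_inc):           # Lemma 10, 3-wire total
    affine = n if n in (2, 3) else n + 2 + ceil((n - 3) / 2) + ceil((n - 4) / 2)
    return r * ((n - 1) * (d1_inc + 4) + d2_inc) + (r + 1) * affine

assert r1cs(6, 3, 2, 9) == 102    # Table 5, d1 = 3, n = 3
assert r1cs(6, 3, 3, 9) == 114    # Table 5, d1 = 5, n = 3
```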
For ArionHash's main competitors Anemoi, Griffin and Poseidon we list the formulae to compute their Plonk constraints in Table 6. In Table 7 we compare the theoretical numbers of Plonk constraints of the various hash functions.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Hash & 2-wire constraints & 3-wire constraints \\ \hline Anemoi & \(\frac{r\cdot n}{2}\cdot(d_{inc}+5)+(r+1)\cdot\begin{cases}2,&n=2,\\ n\cdot\left(\frac{n}{2}-1\right),&n=4,\\ 10,&n=6,\\ 16,&n=8\end{cases}\) & \(\frac{r\cdot n}{2}\cdot(d_{inc}+3)+(r+1)\cdot\begin{cases}n,&n=2,4,\\ 6,&n=6,\\ 12,&n=8\end{cases}\) \\ \hline Griffin & \(r\cdot(2\cdot d_{inc}+3\cdot n-8)+(r+1)\cdot\begin{cases}5,&n=3,\\ 8,&n=4,\\ 24,&n=8,\\ \frac{n}{2}+2\cdot n-4,&n\geq 12\end{cases}\) & \(r\cdot(2\cdot d_{inc}+3\cdot n-8)+(r+1)\cdot\begin{cases}3,&n=3,\\ 6,&n=4,\\ 20,&n=8,\\ \frac{n}{2}+4\cdot\left\lceil\frac{n-1}{2}\right\rceil+n,&n\geq 12\end{cases}\) \\ \hline Poseidon & \(d_{inc}\cdot(n\cdot r_{f}+r_{p})+(r+1)\cdot n\cdot(n-1)\) & \(d_{inc}\cdot(n\cdot r_{f}+r_{p})+(r+1)\cdot n\cdot\begin{cases}n,&n=2,3,\\ \left\lceil\frac{n-4}{2}\right\rceil,&n\geq 4\end{cases}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Plonk constraints for Anemoi [16, §7.2], Griffin [31, §7.3] and Poseidon [33].

Moreover, in Appendix D.2 of the full version of the paper [49] we compare the performance of Arion and Poseidon using the Rust library Dusk Network Plonk [23].

**Acknowledgments.** Matthias Steiner and Stefano Trevisani were supported by the KWF under project number KWF-3520\(|\)31870\(|\)45842.
2304.10833
Outsourced Analysis of Encrypted Graphs in the Cloud with Privacy Protection
Huge graphs have unique properties for organizations and research, such as client linkages in social networks and customer evaluation matrices in social channels. They necessitate a lot of financial assets to maintain because they are large and frequently continue to expand. Owners of large graphs may need to use cloud resources, given the extensive pool of public cloud resources, to increase capacity and computation flexibility. However, the accountability and protection of graphs in the cloud have become a significant issue. In this study, we consider privacy-preserving calculations for an essential graph analysis practice: graph spectral analysis for graphs outsourced to the cloud server. We create privacy-protecting variants of the two proposed Eigen decomposition computations, using two cryptographic algorithms: added substance homomorphic encryption (ASHE) strategies and some degree homomorphic encryption (SDHE) methods. Sparse matrices also feature a distinctively confidential data submission protocol to allow a trade-off between secrecy and data sparseness. Both dense and sparse structures are investigated. According to test results, calculations with sparse encoding can drastically reduce information costs. SDHE-based strategies have reduced computing time, while ASHE-based methods have reduced storage expenses.
D. Selvaraj, S. M. Udhaya Sankar, D. Dhinakaran, T. P. Anish
2023-04-21T09:19:33Z
http://arxiv.org/abs/2304.10833v1
# Outsourced Analysis of Encrypted Graphs in the Cloud with Privacy Protection

###### Abstract

_- Huge graphs have unique properties for organizations and research, such as client linkages in social networks and customer evaluation matrices in social channels. They necessitate a lot of financial assets to maintain because they are large and frequently continue to expand. Owners of large graphs may need to use cloud resources, given the extensive pool of public cloud resources, to increase capacity and computation flexibility. However, the accountability and protection of graphs in the cloud have become a significant issue. In this study, we consider privacy-preserving calculations for an essential graph analysis practice: graph spectral analysis for graphs outsourced to the cloud server. We create privacy-protecting variants of the two proposed Eigen decomposition computations, using two cryptographic algorithms: added substance homomorphic encryption (ASHE) strategies and some degree homomorphic encryption (SDHE) methods. Sparse matrices also feature a distinctively confidential data submission protocol to allow a trade-off between secrecy and data sparseness. Both dense and sparse structures are investigated. According to test results, calculations with sparse encoding can drastically reduce information costs. SDHE-based strategies have reduced computing time, while ASHE-based methods have reduced storage expenses._

- Cloud, Protection, Outsourcing data, Homomorphic encryption, Eigen decomposition.

This is the same as adding fictitious edges to achieve differential privacy [6]. The new entries are encrypted 0s; therefore, they have no impact on the calculation of the matrix, so the reliability is unaffected by this edge insertion.
## 2 Related work

Unfortunately, gathering and processing graph data via the cloud raises privacy issues. Individuals are reluctant to provide these datasets, which are typically sensitive, unless they can trust the data proprietors to keep them safe in the cloud. On the other hand, as data are now crucial to doing business or conducting scientific studies, data owners also have a tremendous stake in maintaining ownership of these valuable data. Furthermore, according to recent research and events, sensitive data stored in the cloud is vulnerable to data loss, spying, and malicious insiders. Researchers are therefore seeking ways to accommodate the concerns of consumers and data proprietors in cloud-based data mining. To assure security, several matrix processing methodologies have been presented. These secure outsourcing alternatives are tailored for large-scale linear regression solutions and applications involving multiplication and additive noise filtering. Their methods are limited in that they reveal sensitive data, rely on multiple non-colluding servers, or incur significant overhead. Some use client-cloud cooperation and matrix perturbation to solve systems of equations iteratively.

R. Bost [7] builds three main classification protocols (decision trees, hyperplane decisions, and Naive Bayes) that satisfy this privacy restriction. They also make it possible for these methods to work with AdaBoost. They show that such libraries can also be utilized to design other components, such as multiplexers and feature extraction. These constructions are based on new libraries of essential components for reliably evaluating classifiers. They put the classifiers and libraries into practice and evaluated them. When used with actual clinical data, the efficient methods accomplish a diagnosis in a few milliseconds to a few seconds.

By fusing a customer's query information with permission data credentials and indices, D. Leilei [52] presents a Dynamic Multi-client SSE (DMSSE) method with support for boolean queries. The scheme restricts a client's search capability to authorized terms and enables a data owner to authorize numerous clients to run boolean queries over encrypted data. The advantages of the DMSSE scheme over current MSSE solutions include the following: 1) Non-interactivity: after receiving search authorization, clients are free to perform their searches without the assistance of the data owner. 2) Dynamism: the data owner can efficiently change the search authorization of a client without impacting other clients. The DMSSE scheme is practical for large encrypted databases, as shown by empirical assessments performed on real data.

Li et al. [9] presented a dynamic additive homomorphic encryption scheme and discussed a couple of crucial problems using attribute-based encryption and k-nearest neighbor algorithms. However, none of the searchable encryption alternatives can be employed to accomplish optimized route discovery with support for information retrieval over encrypted graph data. F. Berger [48] aims to discover an implicit representation of a molecule's ring system. They offer effective techniques for cyclic graph structures that could speed up lookups by acting as molecular descriptors. The precise construction of a well-defined collection of rings of a molecular graph is yet another task. They provide a brand-new approach for calculating a graph's relevant cycle set.
D. Catalano [11] demonstrates a method for converting linearly homomorphic encryption into a scheme that can evaluate degree-2 computations on encrypted messages. The transformation is remarkably easy to implement and only requires one very minor property of the underlying linearly homomorphic scheme: the message space should be a public ring in which it is possible to sample elements uniformly at random. The transformation can therefore be instantiated with practically all existing number-theoretic linearly homomorphic schemes, including Goldwasser-Micali, Paillier, and ElGamal. For a subclass of degree-2 polynomials in which the number of additions of degree-2 terms is bounded by a constant, the resulting schemes ensure circuit privacy and are compact.

Z. Cui [12] concentrates on a fundamental problem with geo-tagged data: identifying the top-k frequently occurring terms in a particular region of spatial data in the cloud. They first create a Region Tree Index (RTI) for geo-tagged data. Then, Sorted Terms and Weights (SSTW) structures are stored in the RTI using an array collection architecture. The top-k frequently occurring terms in a specific area are computed using an efficient k-Terms Search method. Finally, thorough experiments confirm the viability of the suggested scheme. In a cloud computing context, Xianyi [13] provides a method to carry out privacy-preserving optimal route finding with support for semantic search on encrypted graphs (PORF). Based on the concept of searchable encryption and the stemming process, they develop the method by creating a secure query index to execute optimal path finding with keyword search support. They provide a rigorous security analysis of their scheme and evaluate its effectiveness through experiments.

GOOSE, a secure framework for graph outsourcing and SPARQL evaluation, is presented by R. Ciucanu [14]. To obtain the following desirable security properties, GOOSE uses encryption and secure multi-party computation: (i) no cloud node can learn the graph; (ii) no cloud node can simultaneously learn the query and the query answers; and (iii) an external network observer cannot learn the graph, the query, or the query answers. GOOSE supports, as query language, Unions of Conjunctions of Regular Path Queries (UCRPQ), the core of the W3C's SPARQL 1.1 specification, including recursive queries. They demonstrate that the latency associated with the cryptographic techniques scales linearly with input and output sizes. FHE-based techniques for numerical systems need many levels of re-encryption to preserve the usefulness of the encrypted data [10, 15]; this requires larger ciphertexts and substantial processing expense. On the other side, the data owner wishes to control and analyze the growing customer data using public cloud services [50]. This study considers privacy-preserving calculations for one essential graph analysis task: spectral analysis of graphs outsourced to the cloud. The main observation is that multiple data mining procedures depend on the Eigen decomposition of large matrices. We consider a cloud-centric design with the data owner, data contributors, and the cloud provider as three collaborating parties. Graphs are represented as matrices; their parts are stored and assembled from distributed clients [17].
The data proprietor then collaborates with the cloud to drive the spectral analysis while safeguarding data privacy against the honest-but-curious cloud service, with the computations divided between the data contributors and the proprietor.

## 3 Proposed Methodology

Using the SDHE and ASHE methodologies within the cloud-centric architecture presents several obstacles, which our research tackles. (1) Since SDHE permits homomorphic multiplication only at a single level, implementing the cloud-side operations is simple; however, the full extent of their costs has to be evaluated. (2) ASHE techniques have smaller ciphertext sizes, making storage and transmission efficient. However, in the cloud setting, data owners must retrieve, decrypt, and analyze some information locally to ensure computational privacy, as shown in Fig. 1. We determine the privacy risk associated with submitting sparse graph matrices and create an efficient local differential privacy technique for adding fictitious edges with identically encrypted values. Both approaches can be adapted to a cloud infrastructure to achieve a practical division of labor. This careful separation of the client and cloud parts protects the confidentiality of the data and the analytic output, as shown in Fig. 2.

Figure 1: Technical Architecture of Outsourced Analysis of Encrypted Data

Protected search enables authorized data users to search through the encrypted data of the data owner while keeping the search terms private [6, 18, 20, 21]. It is a compelling adaptation of conventional cryptography to the cloud computing environment, motivated by efficient content retrieval from outsourced encrypted cloud data [16, 22-26]. A significant amount of research has been done on secure keyword search in cloud technology, with the goals of consistently enhancing search effectiveness, lowering computation and communication costs, and enlarging the range of search features with greater privacy and security protections [51]. All of these strategies share the fundamental presumption that the cloud is an "honest-but-curious" party that consistently maintains resilient and reliable software and hardware environments. As a consequence, whenever a search is finished, the cloud provider consistently returns accurate and comprehensive search results without exception. For secure keyword search over encrypted cloud data, we formally present the system and threat models for provably secure search and construct a verifiable search results mechanism. We suggest a fast signature method based on certificateless public-key cryptography to validate the veracity of the returned objects.

### 3.1 Added Substance Homomorphic Encryption

Fig. 2: Process Flow of Proposed Approach

Added substance homomorphic encryption (ASHE) has the following characteristic: for two integers \(x\) and \(y\),

\[E_{n}(x+y)=E_{n}(x)+E_{n}(y) \tag{1}\]

We will utilize Paillier cryptography, one of the most effective ASHE strategies, to illustrate our ASHE-based procedures. A series of pseudo-homomorphic operations that form the basis of our procedures are made possible by additive homomorphic encryption.
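A toy, self-contained (textbook) Paillier instance illustrating property (1) and the pseudo-homomorphic multiplication discussed next; the tiny primes below are for demonstration only and offer no security.

```
# Toy textbook Paillier with g = n + 1; demonstration primes only.
from math import gcd
import random

p, q = 1009, 1013
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lambda = lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid since g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

x, y = 37, 55
assert dec(enc(x) * enc(y) % n2) == x + y        # additive property (1)
assert dec(pow(enc(x), y, n2)) == x * y          # pseudo-homomorphic mult
```

Note that the "addition" of ciphertexts in (1) is realized in Paillier as a modular multiplication of ciphertexts; the abstract additive notation above hides this implementation detail.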
With one operand unencrypted, we obtain

\[E_{n}(xy)=\sum_{i=1}^{y}E_{n}(x)=\sum_{i=1}^{x}E_{n}(y) \tag{2}\]

For Paillier cryptography, \(E_{n}(xy)=E_{n}(x)^{y}\ \mathrm{mod}\ P_{x}^{2}\), where \(P_{x}\) is the public key, provides a more efficient way to multiply. Since one operand is not encrypted, we refer to this as pseudo-homomorphic multiplication. From these two essential operations we can derive the pseudo-homomorphic dot product, matrix-matrix multiplication (MMM), and matrix-vector multiplication (MVM), each employing one unencrypted operand. Protecting the unencrypted operand is the main difficulty faced by ASHE-based solutions.

### 3.2 Some Degree Homomorphic Encryption

In recent years, some degree homomorphic encryption (SDHE) systems have been created to accomplish one or more levels of homomorphic multiplication in addition to homomorphic addition. For instance, it is possible to compute on encrypted numbers \(E(n_{i})\), without decrypting them, sums such as \((n_{1}+n_{2})(n_{3}+n_{4})+(n_{5}+n_{6})(n_{7}+n_{8})\). Keep in mind that each value passes through only one multiplication; by contrast, a product such as \(n_{1}n_{2}n_{3}\) requires two levels of multiplication. Degree-2 functions are frequently computed homomorphically using SDHE methods. Several well-known SDHE schemes exist: the BGN scheme, utilizing group pairings on elliptic curves; the RLWE scheme, relying on the ring learning-with-errors problem; and the Catalano et al. [11] scheme, based on an adaptation of the ASHE approach. We will employ the RLWE method in the analysis instead of the other two because of cost concerns [28-33]. The decryption of the BGN technique relies on computing a discrete log, which has an \(O(\sqrt{q})\) cost for plaintext values in the \([0,q]\) range. We find that it takes more than one second to decode 20-bit data using a brute-force discrete-log technique, which may undoubtedly be reduced with some adjustment. The ciphertext expansion of the Catalano et al. [11] scheme led to its exclusion: when an \(N\)-dimensional vector and an \(N\times N\) encrypted matrix are multiplied, the result contains \(O(N^{2})\) encrypted components, which are too costly to send to the client. We omit the details of these schemes owing to space constraints.

**Algorithm - Privacy-preserving (PP) sparse submission (Hs, \(D_{p}\), \(An_{a,b}\)).**

**Input:** Hs - histogram, \(D_{p}\) - differential privacy parameter, \(An_{a,b}\) - exact node degree.

Determine the bin containing \(An_{a,b}\), where \(Up_{a}\) and \(Lo_{a}\) are its upper and lower bounds.
\(x\leftarrow(Up_{a}-Lo_{a})/D_{p}\);
\(y\gets 3.9\cdot x\); // \(y\approx 3.9\) for \(x=1\) and \(y\) scales linearly with \(x\): \(y\approx 3.9x\)
Generate a variable \(\phi_{a,b}\) from the Laplace\((0,x)\) distribution;
\(K_{a,b}\leftarrow|y|+\phi_{a,b}\);
encrypt the \(An_{a,b}\) real edges with sparse encoding and add them, with their actual indices, to the list;
randomly choose \(K_{a,b}\) edges from the remaining \(N-An_{a,b}\) edges and encrypt them as encrypted zero entries;
submit the items with index \((a,b)\) for \(b\geq a\) if the graph is undirected; otherwise, submit all \(An_{a,b}+K_{a,b}\) items.
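A minimal sketch of the sparse submission listing above, assuming an `enc()` primitive such as the toy Paillier encryptor from Section 3.1; the bin bounds, the 3.9 slope and the Laplace noise follow the listing, while the function and variable names are ours.

```
# Sketch of privacy-preserving sparse submission for one row of the graph.
import random

def sparse_submit(edges, non_edges, up, lo, dp, enc):
    """edges: real (index, weight) items; non_edges: candidate fake indices."""
    x = (up - lo) / dp                      # Laplace scale from the bin width
    y = 3.9 * x                             # y scales linearly with x
    # Laplace(0, x) as the difference of two exponentials with mean x
    noise = random.expovariate(1 / x) - random.expovariate(1 / x)
    k = max(0, int(abs(y) + noise))         # number of fake edges K_{a,b}
    items = [(idx, enc(w)) for idx, w in edges]          # real, encrypted
    for idx in random.sample(non_edges, min(k, len(non_edges))):
        items.append((idx, enc(0)))         # fake edges: encrypted zeros
    random.shuffle(items)
    return items
```

Because the fake entries are encryptions of zero, they are indistinguishable from real entries under semantic security yet contribute nothing to the matrix computation, which is exactly the trade-off between secrecy and sparsity described above.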
We create a brand-new Paillier encryption-based verification object request method in which the cloud provider learns neither what information the user is seeking nor which verification items will be presented to the user. To assess the precision and effectiveness of the suggested system, we provide a comprehensive security specification and verification, and carry out thorough performance experiments.

### Query Process

The data user can validate the findings using the query result verification mechanism. In this article, we construct a secure verification object that accompanies each specific query result set. If the returned set fails to contain the correct number of qualifying files, or omits some of them, the search client can perform further checks and verify the accuracy of each file in the set [34-38]. This is known as a fine-grained query results validation mechanism. The cloud computing paradigm enables speedy deployment and distribution of a shared pool of reconfigurable computational resources, such as networking, processors, memory, programs, and applications, with minimal administrative labor or service provider participation. When the data owner transmits the encrypted data to a remote server, three separate keys are produced automatically: keys for trapdoors, verification objects, and decryption. These secure the anonymity of the validation objects while minimizing space and communication costs. The trapdoor key distinguishes legitimate data users from attackers [5,27,39-42,46]. After a query completes, the query results and the related validation objects are returned to the querying user, who uses the validation objects to check the accuracy and completeness of the query results. Our suggested query results validation approach allows the query client to rapidly execute completeness verification before decrypting the search results and to verify each encrypted data file in the query results set. Whenever a cloud server or other unauthorized party tries to access information or data that the user has stored, the data user receives a warning. We may thus stop unauthorized users from obtaining user data by validating the verification object [43, 44, 45, 49]. Data recovery is the act of retrieving unavailable, stolen, distorted, corrupted, or reformatted information from secondary storage, portable media, or files when the data held within them cannot be accessed normally. We can still retrieve the entire document even if a hacker has accessed the data or tampered with it.

## 4 Experimental Evaluations

We have demonstrated that, under the framework's assumptions, all constructed strategies ensure privacy. The tests assess the different costs related to these strategies to determine which algorithms are more effective. Our analysis has three main components: (i) the storage complexity, (ii) the search complexity of the ASHE- and SDHE-based privacy-preserving schemes, and (iii) the query time for the cloud and data providers under various cryptographic techniques.

### Setup

After the data owners' logins have been verified and access granted, the datasets are uploaded to the cloud so that authorized users can access them. After choosing a file from the system, the date of the cloud upload must be entered. A search function shows the encrypted search term, the upload time, the owner of the data, and the available action, so that one can find out who submitted a file, who owns it, and when it was posted. Since actions are access-controlled, access must be requested from the individual who submitted the file.
When you request files from a user, they respond with whether or not you may retrieve them. If the owner does not grant access, the requester cannot access the item from the repository; if access is granted, the requester can take whatever action they require. The five essential components are the file name, user name, timestamp, state, and activity. The state is assigned, and the activity becomes an allowed document action, only after the owner has granted authorization.

### Storage Complexity

Compared with existing methods such as Dynamic Searchable Symmetric Encryption [9], Linearly-Homomorphic Encryption [11], and SPARQL [14], the storage complexities of our two schemes are \(O(N^{2})\) and \(O(N+\lambda)\), respectively. In practice, the required level of protection is achieved with a sufficiently high security parameter \(\lambda\). Even with \(2^{30}\) documents, as shown in Fig. 3, the storage overhead remains low once a sufficiently large \(\lambda\) is selected.

### Search Complexity

The search overheads of our two proposed schemes are \(O(N^{2})\) and \(O(\lambda\cdot\log^{2}N)\), respectively. Even with \(2^{30}\) documents, as depicted in Fig. 4, our schemes incur lower search complexity than the SPARQL schemes [14].

### Query Time

The length of the dictionary and the number of documents significantly impact the computation cost of the query phase, as illustrated in Fig. 5, whereas the number of query terms has essentially no effect. The schemes are therefore also efficient during the query stage. Our methods reduce the storage complexity, the update complexity, and the cost of creating an index, a trapdoor, and a search. Exceptionally, compared with other systems, the update complexity of our approaches is nearly nonexistent.

## 5 Conclusion

We develop a platform for the privacy-preserving spectral analysis of huge matrices, which offers a solid confidentiality guarantee against honest-but-curious cloud providers. Data contributors can upload secured graph data to the cloud, and the analysis is conducted between the data owner and the cloud using secure protocols. The system restricts the in-house analysis required of the resource-constrained data owner and safely outsources the expensive analyses and storage to the cloud. We create two privacy-preserving strategies for spectral analysis and investigate how to build them from additively homomorphic encryption (ASHE) and some-degree homomorphic encryption (SDHE). Because the plaintext operands of the ASHE methods must be protected from attackers, we designed masking approaches that satisfy the required privacy guarantees while letting the data owner control the added complexity. The privacy-preserving approach benefits from the sparsity of large matrices; we also created a privacy-preserving dense-data submission methodology for resource providers, to strike a balance between exploiting data sparsity and preserving anonymity. Exploiting data sparsity dramatically lowers costs for the data owner. Ciphertext packing in the RLWE-based approaches reduces computation overhead, while the Paillier-based methods significantly reduce online storage and the data proprietors' transmission losses. At present, however, the cloud still needs to search across the complete database.
This is highly wasteful and renders the outsourced data-search technique impractical. Future research in this field will focus on improvements for the efficient verification of vast amounts of already-outsourced data. The technique currently operates only in partially trusted clouds, but it will eventually be extended to all cloud settings and can then offer stronger security. Additionally, we can extend our search approach in the future to employ external devices while protecting confidentiality.
2303.05527
The dark side of FIRE: predicting the population of dark matter subhaloes around Milky Way-mass galaxies
A variety of observational campaigns seek to test dark-matter models by measuring dark-matter subhaloes at low masses. Despite their predicted lack of stars, these subhaloes may be detectable through gravitational lensing or via their gravitational perturbations on stellar streams. To set measurable expectations for subhalo populations within LambdaCDM, we examine 11 Milky Way (MW)-mass haloes from the FIRE-2 baryonic simulations, quantifying the counts and orbital fluxes for subhaloes with properties relevant to stellar stream interactions: masses down to 10^6 Msun, distances < 50 kpc of the galactic center, across z = 0 - 1 (lookback time 0 - 8 Gyr). We provide fits to our results and their dependence on subhalo mass, distance, and lookback time, for use in (semi)analytic models. A typical MW-mass halo contains ~16 subhaloes >10^7 Msun (~1 subhalo >10^8 Msun) within 50 kpc at z = 0. We compare our results with dark-matter-only versions of the same simulations: because they lack a central galaxy potential, they overpredict subhalo counts by 2-10x, more so at smaller distances. Subhalo counts around a given MW-mass galaxy declined over time, being ~10x higher at z = 1 than at z = 0. Subhaloes have nearly isotropic orbital velocity distributions at z = 0. Across our simulations, we also identified 4 analogs of Large Magellanic Cloud satellite passages; these analogs enhance subhalo counts by 1.4-2.7 times, significantly increasing the expected subhalo population around the MW today. Our results imply an interaction rate of ~5 per Gyr for a stream like GD-1, sufficient to make subhalo-stream interactions a promising method of measuring dark subhaloes.
Megan Barry, Andrew Wetzel, Sierra Chapman, Jenna Samuel, Robyn Sanderson, Arpit Arora
2023-03-09T19:00:05Z
http://arxiv.org/abs/2303.05527v2
The dark side of FIRE: predicting the population of dark matter subhaloes around Milky Way-mass galaxies ###### Abstract A variety of observational campaigns seek to test dark-matter models by measuring dark-matter subhaloes at low masses. Despite their predicted lack of stars, these subhaloes may be detectable through gravitational lensing or via their gravitational perturbations on stellar streams. To set measurable expectations for subhalo populations within \(\Lambda\)CDM, we examine 11 Milky Way (MW)-mass haloes from the FIRE-2 baryonic simulations, quantifying the counts and orbital fluxes for subhaloes with properties relevant to stellar stream interactions: masses down to \(10^{6}\,\mathrm{M}_{\odot}\), distances \(\lesssim 50\) kpc from the galactic center, across \(z=0-1\) (\(t_{\mathrm{lookback}}=0-8\) Gyr). We provide fits to our results and their dependence on subhalo mass, distance, and lookback time, for use in (semi)analytic models. A typical MW-mass halo contains \(\approx 16\) subhaloes \(>10^{7}\,\mathrm{M}_{\odot}\) (\(\approx 1\) subhalo \(>10^{8}\,\mathrm{M}_{\odot}\)) within 50 kpc at \(z\approx 0\). We compare our results with dark-matter-only versions of the same simulations: because they lack a central galaxy potential, they overpredict subhalo counts by \(2-10\times\), more so at smaller distances. Subhalo counts around a given MW-mass galaxy declined over time, being \(\approx 10\times\) higher at \(z=1\) than at \(z\approx 0\). Subhaloes have nearly isotropic orbital velocity distributions at \(z\approx 0\). Across our simulations, we also identified 4 analogs of Large Magellanic Cloud satellite passages; these analogs enhance subhalo counts by \(1.4-2.7\) times, significantly increasing the expected subhalo population around the MW today. Our results imply an interaction rate of \(\sim 5\) per Gyr for a stream like GD-1, sufficient to make subhalo-stream interactions a promising method of measuring dark subhaloes. keywords: galaxies: haloes -- Local Group -- dark matter -- methods: numerical ## 1 Introduction A key prediction of the cold dark matter (CDM) model is the existence of effectively arbitrarily low-mass self-gravitating dark-matter (DM) structures, known as haloes, including subhaloes that reside within a more massive halo (Bullock & Boylan-Kolchin, 2017). Alternative models, such as warm dark matter (WDM) and fuzzy dark matter, predict a lower cutoff in the (sub)halo mass function (for example Hu et al., 2000; Ostdiek et al., 2022). Current constraints on low-mass (sub)haloes come from luminous galaxies, such as the faint satellite galaxies around the Milky Way (MW). Measurements of ultra-faint galaxies imply that (sub)haloes exist down to \(\sim 10^{8}\,\mathrm{M}_{\odot}\) (for example Jethwa et al., 2018; Nadler et al., 2021). Theoretical works show that (sub)haloes below this mass are below the atomic cooling limit and therefore unable to retain enough gas before cosmic reionization to support star formation, leaving them starless and thus invisible to direct detection (for example Bullock et al., 2000; Somerville, 2002; Benson et al., 2002). The discovery of completely dark (sub)haloes would represent another key success of the CDM model, and such a measurement (or lack thereof) would provide key constraints on the properties of dark matter. To date, researchers have devised two potential avenues for indirectly detecting these dark (sub)haloes.
One method uses gravitational lensing: the lensed light from a background galaxy allows us to determine a foreground galaxy's mass distribution (Mao & Schneider, 1998), including low-mass (sub)haloes that reside along the line of sight. Most work using this method focuses on population statistics (Şengül & Dvorkin, 2022; Wagner-Carena et al., 2022; Ostdiek et al., 2022), although Vegetti et al. (2012, 2014) identified individual satellites with total mass \(10^{8}-10^{9}\,\mathrm{M}_{\odot}\) at \(z\approx 0.2-0.5\). Existing works predominantly examine galaxies with DM halo masses \(M_{\mathrm{halo}}\gtrsim 10^{13}\,\mathrm{M}_{\odot}\) at these redshifts, notably higher than MW-mass galaxies with \(M_{\mathrm{halo}}\approx 10^{12}\,\mathrm{M}_{\odot}\) at \(z=0\) (for example Bland-Hawthorn & Gerhard, 2016). The MW itself provides a separate means to measure dark subhaloes, via perturbations to thin streams of stars that originate from the tidal disruption of a globular cluster (GC) or satellite galaxy (Ibata et al., 2002; Johnston, 2016). If a subhalo passes near such a stellar stream, its gravitational field can impart an identifiable gap, spur, or other perturbation, whose properties depend on the subhalo's mass, size, velocity, and other orbital parameters. Recent works explored how subhaloes with masses \(\gtrsim 10^{5}\,\mathrm{M}_{\odot}\) can induce observable features in stellar streams (for example Yoon et al., 2011; Erkal et al., 2016; Banik et al., 2018; Bonaca et al., 2019; Carlberg, 2020); less massive subhaloes lack the energy necessary to leave observable evidence of interaction. To confirm that a dark subhalo induced a particular perturbation, one must rule out the effects of luminous objects, including the MW's \(>50\) known satellite galaxies (McConnachie, 2012; Simon, 2019) and \(>150\) known GCs (Harris, 2010), as well as giant molecular clouds (Amorisco et al., 2016) and other stellar streams (Dillamore et al., 2022). Of the dozens of currently well-known streams around the MW (for example Grillmair and Carlin, 2016; Mateu, 2023), most studies focus on two, GD-1 and Pal 5, given their relative proximity to the MW and the high-quality 6D phase-space data available for them. GD-1 is \(\approx 15\) kpc long, with a pericenter of 13 kpc and an apocenter of 27 kpc, and it formed \(\approx 3\) Gyr ago (Doke and Hattori, 2022), likely from a progenitor GC (Bonaca and Hogg, 2018). Pal 5 is \(\approx 10\) kpc long (Starkman et al., 2020), with a pericenter of 8 kpc and an apocenter of 19 kpc (Yoon et al., 2011), and it formed \(\approx 8\) Gyr ago from the Pal 5 GC (Odenkirchen et al., 2001). GD-1 and Pal 5 represent perhaps the ideal streams on which to study the potential gravitational impacts of dark subhaloes, though _Gaia_ data release 3 (DR3) now provides even more detailed 6D phase-space measurements of stars in more streams (Gaia Collaboration et al., 2021). In addition to the subhaloes orbiting the MW itself, the CDM model also predicts that large satellites such as the Large Magellanic Cloud (LMC) host their own orbiting subhaloes (for example Deason et al., 2015; Sales et al., 2016; Jahn et al., 2019; Santos-Santos et al., 2021). Given that the LMC just passed its pericenter of \(d\approx 50\) kpc from the MW center (Kallivayalil et al., 2013), the inner halo currently may be in a temporary period of enhanced subhalo enrichment (Dooley et al., 2017, 2018).
Theoretical predictions for the counts, orbits, and sizes of dark subhaloes can help support or rule out a dark subhalo origin for observed gaps or other features in stellar streams. Previous works have predicted subhalo populations in the mass regime relevant to subhalo-stream interactions (\(\approx 10^{5}-10^{8}\,\mathrm{M}_{\odot}\)). Most used dark-matter-only (DMO) simulations that do not account for the effects of baryonic matter (for example Yoon et al., 2011; Mao et al., 2015; Griffen et al., 2016). However, incorporating baryonic physics can significantly affect subhalo populations. Primarily, the presence of the central galaxy induces additional tidal stripping on subhaloes that orbit near it, which, as previous works showed (for example D'Onghia et al., 2010; Garrison-Kimmel et al., 2017; Webb and Bovy, 2020), causes DMO simulations to overpredict subhalo counts significantly, by \(5\times\) or more, near the central galaxy. Additionally, gas heating from the cosmic UV background reduces the initial masses and subsequent accretion history of low-mass (sub)haloes, making them lower mass today (for example Bullock et al., 2000; Somerville, 2002; Benson et al., 2002). The FIRE-2 cosmological zoom-in simulations are well suited for predicting the population of low-mass dark subhaloes, given their high resolution and inclusion of relevant baryonic physics, most importantly the formation of realistic MW-mass galaxies. As a critical benchmark, previous works have shown that the luminous subhaloes (satellite galaxies) around MW-mass galaxies in FIRE-2 broadly match the distributions of stellar masses and internal velocities (Wetzel et al., 2016; Garrison-Kimmel et al., 2019), as well as radial distance distributions (Samuel et al., 2020), of satellite galaxies observed around the MW and M31, as well as MW analogs in the SAGA survey (Geha et al., 2017). Furthermore, as Samuel et al. (2021) showed, FIRE-2 MW-mass galaxies with an LMC-like satellite show much better agreement with various metrics of satellite planarity, as observed around the MW and M31, which motivates further exploration of potential effects of the LMC on the population of low-mass, dark subhaloes. In this work, we extend these FIRE-2 predictions of satellite populations down to lower-mass subhaloes, with DM masses as low as \(10^{6}\,\mathrm{M}_{\odot}\), which, as we described above, are likely to be completely dark (devoid of stars). We examine subhaloes within 50 kpc of MW-mass galaxies, the regime most relevant for observable interactions of dark subhaloes with stellar streams. We expand in particular on the work of Garrison-Kimmel et al. (2017): we examine subhaloes across a much larger set of 11 MW-mass haloes (instead of 2), and we time-average across multiple simulation snapshots (instead of just the one at \(z=0\)) for improved statistics for the small number of subhaloes that survive near the MW-mass galaxy. ## 2 Methods ### FIRE-2 simulations of MW-mass haloes We analyze simulated host galaxy haloes from the FIRE-2 cosmological zoom-in simulations (Hopkins et al., 2018). We generated each simulation using Gizmo (Hopkins, 2015), which models \(N\)-body gravitational dynamics with an updated version of the GADGET-3 TreePM solver (Springel, 2005), and hydrodynamics via the meshless finite-mass method.
FIRE-2 incorporates a variety of gas heating and cooling processes, including free-free radiation, photoionization and recombination, Compton, photo-electric and dust collisional, cosmic ray, molecular, metal-line, and fine-structure processes, including 11 elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe). We use the model from Faucher-Giguère et al. (2009) for the cosmic UV background, in which HI reionization occurs at \(z\approx 10\). Each simulation consists of dark-matter, star, and gas particles. Star formation occurs in gas that is self-gravitating, Jeans-unstable, cold (\(T<10^{4}\) K), dense (\(n>1000\) cm\({}^{-3}\)), and molecular, following Krumholz and Gnedin (2011). Once formed, star particles undergo several feedback processes, including core-collapse and Type Ia supernovae, continuous stellar mass loss, photoionization and photoelectric heating, and radiation pressure. We generated cosmological initial conditions for each simulation at \(z\approx 99\), within periodic boxes of length \(70.4-172\) Mpc using the MUSIC code (Hahn and Abel, 2011). We assume flat \(\Lambda\)CDM cosmologies, with slightly different parameters across our host selection: \(h=0.68-0.71\), \(\Omega_{\Lambda}=0.69-0.734\), \(\Omega_{\rm m}=0.266-0.31\), \(\Omega_{\rm b}=0.0455-0.048\), \(\sigma_{8}=0.801-0.82\), and \(n_{\rm s}=0.961-0.97\), broadly consistent with Planck Collaboration et al. (2020). We saved 600 snapshots from \(z=99\) to 0, with typical spacing \(\lesssim 25\) Myr. We examine host haloes from two suites of simulations. The first is the _Latte_ suite of individual MW-mass haloes (introduced in Wetzel et al., 2016), which have dark-matter halo masses of \(M_{\rm 200m}=1-2\times 10^{12}\,\mathrm{M}_{\odot}\) and no neighboring haloes of similar or greater mass within at least \(\approx 5R_{\rm 200m}\), where \(R_{\rm 200m}\) is defined as the radius within which the density is 200 times the mean matter density of the Universe. Gas cells and star particles have initial masses of \(7070\,\mathrm{M}_{\odot}\), while dark-matter particles have a mass of \(3.5\times 10^{4}\,\mathrm{M}_{\odot}\). _Latte_ uses gravitational force softening lengths of 40 pc for dark matter and 4 pc for star particles (comoving at \(z>9\) and physical thereafter), and the gravitational softening for gas is adaptive, matching the hydrodynamic smoothing, down to 1 pc. We also examine host haloes from the ELVIS on FIRE suite of Local Group-like MW+M31 halo pairs (introduced in Garrison-Kimmel et al., 2019). These simulations have \(\approx 2\times\) better mass resolution than _Latte_: Romeo and Juliet have initial masses of \(3500\,\mathrm{M}_{\odot}\) for baryons and \(1.9\times 10^{4}\,\mathrm{M}_{\odot}\) for dark matter, while Romulus & Remus and Thelma & Louise have initial masses of \(4000\,\mathrm{M}_{\odot}\) for baryons and \(2.0\times 10^{4}\,\mathrm{M}_{\odot}\) for dark matter. ELVIS uses gravitational force softening lengths of \(\approx 32\) pc for dark matter, \(2.7-4.4\) pc for stars, and \(0.4-0.7\) pc (minimum) for gas. To ensure similarity to the MW, we selected host galaxies from these suites that have a stellar mass within a factor of \(\approx 2\) of \(M_{\mathrm{MW}}\approx 5\times 10^{10}\,\mathrm{M}_{\odot}\) (Bland-Hawthorn & Gerhard, 2016), which leaves 11 total hosts: 6 from _Latte_ and 5 from ELVIS. Table 1 lists their properties at \(z\approx 0\).
For each simulation, we also generated a DMO version at the same resolution, and we compare these against our baryonic simulations to understand the effects of baryons on subhalo populations. The primary effect of baryons for our analysis of low-mass subhaloes is simply the additional gravitational potential of the MW-mass galaxy (Garrison-Kimmel et al., 2017). ### Finding and measuring subhaloes We examine subhaloes, which we define as lower-mass haloes that reside within the \(R_{200\mathrm{m}}\) of a MW-mass host halo. We identify dark-matter subhaloes using the Rockstar 6D halo finder (Behroozi et al., 2012), defining (sub)haloes as regions of space with a dark-matter density \(>200\times\) the mean matter density. We include subhaloes that have a bound mass fraction of \(>0.4\) and at least 30 dark-matter particles, then construct merger trees using CONSISTENT-TREES (Behroozi et al., 2012). For numerical stability, we first generate (sub)halo catalogs using only dark-matter particles, then we assign star particles to haloes in post-processing (see Samuel et al., 2020). From the merger trees, we select subhaloes with masses, distances, and redshifts that are most relevant for observable gravitational interactions with stellar streams (as in Koposov et al., 2010; Thomas et al., 2016; Li et al., 2022). Throughout, we examine subhaloes according to their _instantaneous_ mass, given that this mass, rather than the pre-infall 'peak' mass, is more relevant for the strength of stream interactions (or the strength of gravitational lensing perturbations). We examine three thresholds in instantaneous mass: \(M_{\mathrm{sub}}>10^{6}\), \(>10^{7}\), and \(>10^{8}\,\mathrm{M}_{\odot}\), corresponding to a minimum of \(\approx 30-60\), \(300-600\), and \(3000-6000\) dark-matter particles, being lower in the _Latte_ simulations and higher in the ELVIS Local Group-like simulations. These subhaloes typically had \(\gtrsim 4\times\) higher mass (more dark-matter particles) prior to MW infall and tidal mass stripping, independent of subhalo mass. When examining subhaloes in DMO simulations, we reduce their masses by the cosmic baryon mass fraction (\(\approx 15\) per cent), assuming that these subhaloes would have lost essentially all of their baryonic mass, consistent with the properties of subhaloes at these masses in our baryonic simulations (see also Bullock & Boylan-Kolchin, 2017). We include all subhaloes above these mass thresholds regardless of whether they are luminous or dark. For subhaloes within 50 kpc, the fraction that contain at least 6 star particles (the limit of our galaxy catalog) is: 30 per cent at \(M_{\mathrm{sub}}>10^{8}\,\mathrm{M}_{\odot}\), 5 per cent at \(M_{\mathrm{sub}}>10^{7}\,\mathrm{M}_{\odot}\), and \(\lesssim 1\) per cent at \(M_{\mathrm{sub}}>10^{6}\,\mathrm{M}_{\odot}\). In Appendix A, we explore the resolution convergence of our results. In summary, our tests show that the counts of subhaloes with \(M_{\mathrm{sub}}>10^{7}\,\mathrm{M}_{\odot}\) are well converged, but our simulations likely underestimate subhalo counts at \(M_{\mathrm{sub}}>10^{6}\,\mathrm{M}_{\odot}\) by up to a factor of \(\approx 1.5-2\) (depending on distance) at \(z\approx 0\), so one should consider those results as lower limits to the true counts. We show results for \(M_{\mathrm{sub}}>10^{6}\,\mathrm{M}_{\odot}\) in a lighter shade to reinforce this caution. We use three metrics to quantify subhalo counts: number enclosed, number density, and orbital radial flux.
The number enclosed, \(N(<d)\), includes all subhaloes within a given distance \(d\) from the host galaxy center. We calculate the subhalo number density, \(n(d)\), by counting all subhaloes within a spherical shell 5 kpc thick, with the shell midpoint centered at \(d\), and dividing by the volume of the shell. For the orbital radial flux, \(f(d)\), we count a subhalo as passing through a host-centered spherical surface of radius \(d\) if, between two adjacent snapshots, it orbited from outside to inside the surface or vice versa. We do not distinguish between inward and outward flux. While our snapshot spacing of \(20-25\) Myr provides good time resolution, we also interpolate the distances of subhaloes between snapshots. For each subhalo within 5 kpc of a given distance bin, we apply a cubic spline fit to its distance from the host for several snapshots surrounding the current snapshot to determine its distance between snapshots. Using these interpolated distances is important at small \(d\), where the surface-crossing times are shortest: it increases the measured flux by \(\approx 20\) per cent at \(d<10\) kpc compared to using the snapshots alone. We examine trends back to \(z=1\) (lookback time \(t^{\rm lb}\approx 8\) Gyr), because observable dynamical perturbations to stellar streams could have occurred several Gyr ago (see Yoon et al., 2011). Furthermore, because subhalo counts are subject to time variability and Poisson noise, especially at small distances, given that an orbit spends the least time near pericenter, we follow the approach in Samuel et al. (2020): for each host halo, we time-average its subhalo population across 92 snapshots at \(z=0-0.15\) (\(t^{\rm lb}=0-1.9\) Gyr). We then compute the median and 68 per cent distribution across the 11 host haloes. \begin{table} \begin{tabular}{l c c c c c c} \hline Name & \(M_{\mathrm{star}}\) [\(10^{10}\,\mathrm{M}_{\odot}\)] & \(M_{200\mathrm{m}}\) [\(10^{12}\,\mathrm{M}_{\odot}\)] & \(N_{\mathrm{subhalo}}\) (\(>10^{6}\,\mathrm{M}_{\odot}\)) & \(N_{\mathrm{subhalo}}\) (\(>10^{7}\,\mathrm{M}_{\odot}\)) & \(N_{\mathrm{subhalo}}\) (\(>10^{8}\,\mathrm{M}_{\odot}\)) & Introduced in \\ \hline m12m & 10.0 & 1.6 & 123 & 21.3 & 1.4 & [1] \\ Romulus & 8.0 & 2.1 & 143 & 16.4 & 1.8 & [2] \\ m12b & 7.3 & 1.4 & 90 & 14.0 & 0.9 & [2] \\ m12f & 6.9 & 1.7 & 106 & 18.2 & 1.4 & [3] \\ Thelma & 6.3 & 1.4 & 179 & 30.9 & 2.7 & [2] \\ Romeo & 5.9 & 1.3 & 168 & 18.9 & 1.6 & [2] \\ m12i & 5.5 & 1.2 & 131 & 20.4 & 1.9 & [4] \\ m12c & 5.1 & 1.4 & 246 & 47.4 & 4.2 & [2] \\ m12w & 4.8 & 1.1 & 162 & 18.8 & 1.9 & [5] \\ Remus & 4.0 & 1.2 & 132 & 20.4 & 2.3 & [2] \\ Juliet & 3.4 & 1.1 & 207 & 29.2 & 2.5 & [2] \\ \hline average & 6.1 & 1.4 & 153 & 23.3 & 2.1 & \\ \end{tabular} \end{table} Table 1: Properties of the MW/M31-mass galaxies/haloes at \(z\approx 0\) in the FIRE-2 simulations. We include galaxies with \(M_{\mathrm{star}}=2.5-10\times 10^{10}\,\mathrm{M}_{\odot}\), within a factor of \(\approx 2\) of the MW. The last 3 columns list the number of subhaloes above the given threshold in instantaneous dark-matter mass that are within 50 kpc of the host, time-averaged across \(z=0-0.15\) (1.9 Gyr). [1]: Hopkins et al. (2018), [2]: Garrison-Kimmel et al. (2019), [3]: Garrison-Kimmel et al. (2017), [4]: Wetzel et al. (2016), [5]: Samuel et al. (2020)
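For readers who want to reproduce these three count metrics on their own catalogs, here is a minimal sketch (ours, not the paper's published pipeline) of \(N(<d)\), \(n(d)\), and the spline-interpolated crossing count behind \(f(d)\), following the definitions above.

```python
# Sketch (ours) of the three subhalo-count metrics for one host,
# given host-centric subhalo distances in kpc and times in Gyr.
import numpy as np
from scipy.interpolate import CubicSpline

def number_enclosed(dists, d):
    """N(<d): number of subhaloes within distance d of the host center."""
    return np.sum(dists < d)

def number_density(dists, d, half_width=2.5):
    """n(d): subhaloes per kpc^3 in a spherical shell d +/- half_width kpc."""
    lo, hi = d - half_width, d + half_width
    shell_volume = 4.0 / 3.0 * np.pi * (hi**3 - lo**3)
    return np.sum((dists >= lo) & (dists < hi)) / shell_volume

def radial_flux(times, dist_history, d, t0, t1, n_fine=200):
    """f(d) contribution from one subhalo: crossings (inward or outward) of a
    sphere of radius d per kpc^2 per Gyr, using a cubic spline through the
    snapshot distances to catch crossings between snapshots; sum this over
    all subhaloes for the total flux."""
    spline = CubicSpline(times, dist_history)
    t_fine = np.linspace(t0, t1, n_fine)
    crossings = np.sum(np.diff(np.sign(spline(t_fine) - d)) != 0)
    area, dt = 4.0 * np.pi * d**2, t1 - t0
    return crossings / (area * dt)
```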
When examining redshift evolution, we average over 3 redshift ranges: \(z=0.0-0.1\) (\(t^{\rm lb}=0-1.3\) Gyr, 66 snapshots), \(z=0.5-0.6\) (\(t^{\rm lb}=5.1-5.7\) Gyr, 25 snapshots), and \(z=1.0-1.1\) (\(t^{\rm lb}=7.8-8.2\) Gyr, 14 snapshots). We use the publicly available Python packages, GizmoAnalysis (Wetzel & Garrison-Kimmel, 2020) and HaloAnalysis (Wetzel & Garrison-Kimmel, 2020), to analyze these data. ### LMC satellite analogs Numerous works have demonstrated the likely contribution of the LMC to the population of luminous satellite galaxies around the MW (for example Hargis et al., 2014; Deason et al., 2015; Sales et al., 2016; Jethwa et al., 2016; Dooley et al., 2017a,b). This motivates the possibility that the LMC also may have contributed a significant fraction of non-luminous lower-mass subhaloes as well. To assess if the presence of the LMC today affects predictions for subhaloes close to the MW, we select host haloes that contain a satellite that is an analog to the LMC, following Samuel et al. (2022), with the following constraints (written out as a selection filter in the sketch below): 1. Pericentric passage at \(t^{\rm lb}<6.4\) Gyr (\(z<0.7\)): we choose this broad time window to capture a larger number of (rare) LMC-like passages. 2. \(M_{\rm sub,peak}>4\times 10^{10}\,{\rm M}_{\odot}\) or \(M_{\rm star}>5\times 10^{8}\,{\rm M}_{\odot}\): consistent with observations and inferences of the LMC's mass (see Erkal et al., 2019; Vasiliev et al., 2021). 3. \(d_{\rm peri}<50\) kpc: consistent with the current measured pericentric distance of the LMC (see Kallivayalil et al., 2013). 4. The satellite is at its first pericentric passage, consistent with several lines of evidence that suggest that the LMC is on its first infall into the MW (see Kallivayalil et al., 2013; Sales et al., 2016). From our 11 MW-mass haloes, this leaves 4 LMC analogs that meet all four criteria. Table 2 lists their properties, including masses and pericenters. Because we are interested in how the LMC affects recent MW subhalo populations, we show properties of each LMC satellite analog when it first reached a distance of 50 kpc from the galaxy center, corresponding to the LMC's current distance. ## 3 Results ### Counts and orbital radial fluxes In Figure 1, we characterize subhalo counts via three metrics: (1) the cumulative number of subhaloes, \(N(<d)\), within a sphere of radius \(d\) centered on the MW-mass galaxy; (2) the number density of subhaloes, \(n(d)\), within \(\pm 2.5\) kpc of \(d\); and (3) the orbital radial flux of subhaloes through a spherical surface at \(d\), including both incoming and outgoing subhaloes. We show each metric for three thresholds in subhalo instantaneous dark-matter mass: \(M_{\rm sub}>10^{6}\), \(10^{7}\), and \(10^{8}\,{\rm M}_{\odot}\). Here and throughout, we show results for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) with a lighter shade, to emphasize that those counts are likely lower limits, given the resolution considerations in Appendix A. All three metrics of subhalo counts decrease roughly linearly with increasing subhalo mass at a given \(d\). Within the approximate orbital distances of GD-1 and Pal 5, \(d\approx 10-30\) kpc (Price-Whelan & Bonaca, 2018), we predict \(\approx 4\) subhaloes of \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) and at least 20 subhaloes of \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\). We find no significant differences in subhalo counts between _Latte_ hosts and ELVIS hosts. Interestingly, both number density and flux vary only weakly with \(d\), to within a factor of a few.
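Referring back to the LMC-analog selection of Section 2.3, the four criteria reduce to a simple predicate over a satellite orbit catalog. A minimal sketch (ours), with illustrative, assumed field names; the example row uses the m12w analog values from Table 2:

```python
# Sketch (ours) of the Section 2.3 LMC-analog selection criteria.
def is_lmc_analog(sat):
    return (
        sat["t_lb_pericenter_gyr"] < 6.4                        # passage at z < 0.7
        and (sat["m_peak_msun"] > 4e10 or sat["m_star_msun"] > 5e8)
        and sat["d_pericenter_kpc"] < 50.0                      # LMC-like pericenter
        and sat["n_pericentric_passages"] == 1                  # first infall
    )

satellites = [
    {"name": "m12w analog", "t_lb_pericenter_gyr": 6.0, "m_peak_msun": 1.3e11,
     "m_star_msun": 1.25e9, "d_pericenter_kpc": 8.0, "n_pericentric_passages": 1},
]
lmc_analogs = [s for s in satellites if is_lmc_analog(s)]   # -> the m12w analog
```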
This weak dependence on distance contrasts with the DMO simulations, which show a strong rise in these quantities towards smaller \(d\). Unlike Garrison-Kimmel et al. (2017), who analyzed only the snapshot at \(z=0\), our averaging across multiple snapshots reveals significant populations of subhaloes at small \(d\). Figure 1: Counts and orbital radial fluxes of dark-matter subhaloes as a function of distance, \(d\), from the central MW-mass galaxy at \(z\approx 0\). For each halo, we time-average its subhalo population over 92 snapshots across \(z=0-0.15\) (1.9 Gyr). Solid lines and shaded regions show the median and 68 per cent distribution across the 11 host haloes. We show results for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) in a lighter shade to indicate potential resolution effects (see Section 2.2). Dashed lines show dark-matter-only (DMO) simulations of the same haloes. Dotted lines show the fits in Table 3; a lighter shade indicates points outside of the distance range used for fitting. **Top**: Cumulative number of subhaloes, \(N(<d)\), within a sphere of radius \(d\). **Middle**: Number density, \(n(d)\), of subhaloes within a spherical shell \(\pm 2.5\) kpc of \(d\). **Bottom**: Orbital radial flux of subhaloes, that is, the number of subhaloes per kpc\({}^{2}\) per Gyr passing either inwards or outwards through a spherical surface of radius \(d\). Because of the additional gravitational tidal stripping from the MW-mass galaxy in the baryonic simulations (unlike in the DMO simulations), both \(n(d)\) and flux vary only weakly with \(d\), to within a factor of a few. Figure 2 quantifies the differences between the baryonic and DMO simulations, showing the ratio of the number density, \(n(d)\), at a given \(d\), given that many previous works used DMO simulations to explore subhalo populations. Subhalo counts in baryonic simulations are systematically smaller than those in DMO, especially at small \(d\), primarily because of the additional gravitational tidal stripping from the MW-mass galaxy (for example Garrison-Kimmel et al., 2017; Kelley et al., 2019). DMO simulations overpredict subhalo counts by \(\approx 2-3\times\) at \(d\approx 50\) kpc and up to an order of magnitude at \(d\lesssim 10\) kpc, across all mass thresholds. At our fiducial stellar stream distances of \(d\approx 10-30\) kpc, the ratio is \(\approx 0.05-0.25\) (that is, \(4-20\times\) more subhaloes in DMO simulations). These results are similar to Samuel et al. (2020), who analyzed more-massive, luminous subhaloes with \(M_{\rm peak}>8\times 10^{8}\,{\rm M}_{\odot}\) in the same simulations, and found good agreement in the radial distance distribution with observations of satellites around the MW and M31. Figure 2 shows their fit for the ratio of luminous satellites to DMO subhaloes via the dotted line, being \(\approx 0.3\) at \(d\approx 50\) kpc. Figure 3 shows the same metrics as Figure 1, for only subhaloes with \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\), at 3 redshifts: \(z\approx 0\), \(\approx 0.5\), and \(\approx 1\) (still averaged across multiple snapshots, see Section 2.2). All subhalo counts decreased over cosmic time at a given \(d\); from \(z=1\) to \(z=0\), subhalo number density declined by a factor of \(\approx 15\). We expect such a decline because, as explored in previous work (for example Wetzel et al., 2009), subhalo merging and destruction rates at earlier times are faster than the infall rate of new subhaloes, given the decline in overall accretion rate as the Universe expands.
This decline occurred in both baryonic and DMO simulations, but at small \(d\), the decrease over time is more significant in the baryonic simulations, given the higher rates of tidal stripping from the MW-mass galaxy. We find decreases of \(\approx 20\times\) at \(d=10\) kpc and \(\approx 3\times\) at \(d=50\) kpc from \(z=1\) to \(z=0\). Figure 4 quantifies this decrease over time, for the three mass thresholds, via the ratio of \(N(<50\) kpc) at a given lookback time to the same value today. In baryonic simulations, this decrease has a slight mass dependence, while declines are similar across all masses in the DMO versions. Thus, using subhalo counts only at \(z=0\) would underestimate the subhalo population averaged across the last several Gyr, especially if the observable impacts on stellar streams persist for several Gyr. Our results at higher redshift also inform typical gravitational lensing studies, given that most observed lenses for measuring (sub)halo populations are at \(z>0\). For use in (semi)analytic models, we fit the three metrics--cumulative number, \(N(<d)\), number density, \(n(d)\), and orbital flux, \(f(d)\)--to the functional form \[p(>M,d,a)=c_{0}e^{-c_{1}a}\left(\frac{d}{d_{0}}\right)^{c_{2}a+c_{3}}\left(\frac{M}{M_{0}}\right)^{c_{4}}, \tag{1}\] where \(a\) is the expansion scale factor, \(d\) is the distance from the MW-mass halo center, \(M\) is the threshold in instantaneous subhalo mass, \(d_{0}\) is a unit normalization of 1 kpc, and \(M_{0}=10^{7}\,{\rm M}_{\odot}\) is our fiducial mass threshold. Table 3 lists the best-fit parameters, \(c_{0}\) through \(c_{4}\), for each metric. We generated each of these constants from a particular curve using the Levenberg-Marquardt algorithm, as detailed below. \begin{table} \begin{tabular}{l c c c c c c} \hline Host & \(t^{\rm lb}_{50\,\rm kpc}\) [Gyr] & \(z_{50\,\rm kpc}\) & \(M_{\rm star}\,[10^{9}\,{\rm M}_{\odot}]\) & \(M_{\rm sub,int}\,[10^{11}\,{\rm M}_{\odot}]\) & \(M_{\rm sub,peak}\,[10^{11}\,{\rm M}_{\odot}]\) & \(d_{\rm peri}\) [kpc] \\ \hline m12w & 6.0 & 0.60 & 1.25 & 0.9 & 1.3 & 8 \\ m12b & 5.1 & 0.50 & 7.13 & 1.7 & 2.1 & 38 \\ m12f & 3.1 & 0.27 & 2.62 & 1.1 & 1.6 & 36 \\ m12c & 1.0 & 0.08 & 1.17 & 1.2 & 1.7 & 18 \\ \hline \end{tabular} \end{table} Table 2: Properties of the four LMC satellite analogs, at the lookback time that each one first orbits within a distance of 50 kpc from their MW-mass host galaxy. See Section 2.3 for details on their selection criteria. While we list the actual (initial) pericentric distance of their orbit, we present all results when these satellites first were at \(d\approx 50\) kpc, to provide context for the LMC at its current distance. \(M_{\rm sub,int}\) indicates the instantaneous subhalo dark-matter mass of the LMC analog at this time, while \(M_{\rm sub,peak}\) is its peak (sub)halo mass throughout its history.
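Equation 1 is straightforward to evaluate directly; the sketch below (ours) uses the best-fit constants from Table 3 below and recovers, for example, the \(\approx 16\) subhaloes with \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) within 50 kpc at \(z=0\) quoted in the abstract.

```python
# Sketch (ours) evaluating the fitting function of Equation 1 with the
# best-fit constants from Table 3.
import math

FIT = {  # metric: (c0, c1, c2, c3, c4)
    "N_enclosed":  (1.24, 12.10, 2.21,  1.54, -0.93),
    "n_density":   (0.12, 10.53, 1.97, -1.36, -1.01),   # [kpc^-3]
    "radial_flux": (8.94,  9.08, 1.48, -1.10, -0.94),   # [Gyr^-1 kpc^-2]
}

def subhalo_metric(metric, d_kpc, mass_msun, a=1.0, d0=1.0, m0=1e7):
    """Equation 1: p(>M, d, a) = c0 exp(-c1 a) (d/d0)^(c2 a + c3) (M/M0)^c4."""
    c0, c1, c2, c3, c4 = FIT[metric]
    return (c0 * math.exp(-c1 * a)
            * (d_kpc / d0) ** (c2 * a + c3)
            * (mass_msun / m0) ** c4)

# Expected number of subhaloes > 1e7 Msun within 50 kpc at z = 0 (a = 1):
print(subhalo_metric("N_enclosed", 50.0, 1e7))   # ~ 16, as quoted in the abstract
```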
\begin{table} \begin{tabular}{l c c c c c} \hline \(p(>M,d,a)\) & \(c_{0}\) & \(c_{1}\) & \(c_{2}\) & \(c_{3}\) & \(c_{4}\) \\ \hline \(N(<d)\) & 1.24 & 12.10 & 2.21 & 1.54 & -0.93 \\ \(n(d)\) [kpc\({}^{-3}\)] & 0.12 & 10.53 & 1.97 & -1.36 & -1.01 \\ \(f(d)\) [Gyr\({}^{-1}\) kpc\({}^{-2}\)] & 8.94 & 9.08 & 1.48 & -1.10 & -0.94 \\ \hline \end{tabular} \end{table} Table 3: Fit parameters to Equation 1 for subhalo counts and orbital radial fluxes, where \(a\) is the expansion scale factor, \(d\) is the distance from the MW-mass galaxy in kpc, with \(d_{0}=1\) kpc as unit normalization, and \(M\) is the lower limit on subhalo instantaneous dark-matter mass in \({\rm M}_{\odot}\), with \(M_{0}=10^{7}\,{\rm M}_{\odot}\) as unit normalization. Cumulative number, \(N(<d)\), represents the total number of subhaloes enclosed in a sphere of radius \(d\) centered on the MW-mass galaxy; number density, \(n(d)\), represents the subhalo density in a spherical shell within \(\pm 2.5\) kpc of \(d\); and radial flux, \(f(d)\), represents the number of subhaloes per area that cross into or out of \(d\) per Gyr. Accounting for the presence of the LMC (see Section 3.3) boosts these counts by \(\approx 1.4-2.7\times\). Figure 2: Ratio of the number density of subhaloes at \(z\approx 0\) (as in Figure 1, middle) in baryonic simulations relative to dark-matter-only (DMO) versions of the same host haloes. We show the median and 68 per cent distribution across the 11 haloes. This ratio illustrates the significant depletion of subhaloes in baryonic simulations, especially at small \(d\), primarily from the increased gravitational tidal stripping from the presence of the MW-mass galaxy. The typical ratio at \(d=10-30\) kpc, the approximate distances of the GD-1 and Pal 5 streams, is \(0.05-0.25\), so DMO simulations overpredict subhalo counts by \(4-20\times\). We show the fit from Samuel et al. (2020) for more massive (luminous) subhaloes, with \(M_{\rm sub,peak}>8\times 10^{8}\,{\rm M}_{\odot}\), as a dotted line, which matches well our results at \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\). Specifically, we obtained \(c_{0}\) and \(c_{3}\) from the fit to \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at \(z=0\) (orange curve in Figure 1), \(c_{1}\) and \(c_{2}\) from \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at \(z=1\) (brown curve in Figure 3), and \(c_{4}\) from \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\) at \(z=0\) (green curve in Figure 1). We indicate the distance region used for each fit in Figure 1 and Figure 3 in color; curves are shown in grayscale outside of this region. We did not use results for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) to fit any parameters, given possible limitations from numerical resolution (see Appendix A). However, the dotted lines in Figures 1 and 3 show that our fits for this mass threshold are generally within the 68 per cent host-to-host scatter at \(d<50\) kpc, reinforcing that any numerical underestimate is likely less than a factor of \(\approx 2\). ### Orbital velocity distributions An essential component for modeling subhalo-stream interactions is the direction of the subhalo orbit relative to the stream (Yoon et al., 2011; Banik et al., 2018). Figure 5 shows the subhalo orbital velocity components across varying masses and redshifts. The first row shows a metric of the orbital isotropy of the subhalo population, via the ratio of the average absolute radial velocity, \(|v_{\rm rad}|\), to the average tangential velocity, \(v_{\rm tan}\), normalized so that unity represents statistically isotropic orbits.
The left three columns show results at \(z\approx 0\) for our three thresholds in instantaneous mass. The right two columns show subhaloes of \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at \(z\approx 0.5\) and \(z\approx 1\), as in Figure 3. At \(z\approx 0\), subhaloes in baryonic simulations are consistent with isotropic orbits, in contrast to DMO simulations, in which subhalo orbits are radially biased. Our results suggest that one can approximate a statistically isotropic velocity distribution when modeling and interpreting possible orbits for subhaloes at a given \(d\) at \(z\approx 0\). However, this was not always true: at earlier cosmic times, subhaloes in baryonic simulations were somewhat more radially biased, by up to \(1.3\times\) at \(z=0.5\) and up to \(1.4\times\) at \(z=1\), with larger radial bias at larger \(d\). The DMO simulations also had higher radial bias at earlier times. Most likely, the higher radial bias at earlier cosmic times in both baryonic and DMO simulations arises because subhaloes necessarily fell in more recently, reflecting their initial infall orbits more directly (for example Wetzel, 2011). Subhaloes that are on highly radial orbits also pass closer to the host center and thus strip/merge more quickly. That said, the reason why the additional gravitational effects of the MW-mass galaxy in the baryonic simulations should lead to a surviving subhalo population with nearly isotropic orbits at \(z\approx 0\) is not obvious; we defer a more in-depth investigation to future work. Figure 4: Ratio of the cumulative number of subhaloes enclosed within 50 kpc at varying redshifts to the number at \(z=0\), as a measure of the relative depletion of subhaloes over cosmic time. Solid lines show the mean across the 11 host haloes, while dashed lines show dark-matter-only (DMO) simulations of the same haloes. Dashed lines for \(t^{\rm lb}>5\) Gyr show only _Latte_ hosts. We show results for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) in a lighter shade to indicate potential resolution effects (see Section 2.2). Since \(z=1\) (\(t^{\rm lb}\approx 8\) Gyr), the subhalo population in baryonic simulations has decreased by \(5-10\times\), especially at higher subhalo masses. Subhalo counts at \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\) were as high as \(10\times\) their values today, at \(t^{\rm lb}\gtrsim 8\) Gyr (extending above the axis). Figure 3: Counts and orbital radial flux versus distance, \(d\), from the MW-mass galaxy, for dark-matter subhaloes with \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at different redshifts. We show the median and 68 per cent distribution across the 11 host haloes. We time-average each one over the range \(z=0-0.1\), \(0.5-0.6\), and \(1-1.1\), corresponding to lookback times \(0-1.3\), \(5.1-5.7\), and \(7.8-8.2\) Gyr. Dashed lines show median values for dark-matter-only (DMO) simulations of the _Latte_ hosts. Dotted lines show the fits from Table 3; a lighter shade indicates points outside of the distance range used for fitting. **Top**: Cumulative number, \(N(<d)\), within a sphere of radius \(d\). **Middle**: Number density, \(n(d)\), within a spherical shell \(\pm 2.5\) kpc of \(d\). **Bottom**: Orbital radial flux, that is, the number of subhaloes per Gyr passing in and out of a spherical surface of radius \(d\). All subhalo counts decrease over cosmic time, by up to \(\approx 20\times\) at \(d=10\) kpc, and less dramatically (\(\sim 3\times\)) at \(d=50\) kpc. The counts of subhaloes in baryonic simulations decreased more dramatically than in DMO simulations (\(\approx 4\times\) at most), especially at small \(d\), because of additional tidal stripping from the MW-mass galaxy.
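For concreteness, the isotropy metric in the top row of Figure 5 can be sketched as follows (our illustration, not the paper's pipeline; the normalization constant assumes an isotropic Maxwellian velocity distribution, for which median\(|v_{\rm rad}|\)/median\((v_{\rm tan})=0.6745/\sqrt{2\ln 2}\approx 0.573\), and the paper's exact normalization may differ).

```python
# Sketch (ours) of the orbital velocity decomposition behind Figure 5.
import numpy as np

def velocity_components(pos, vel):
    """pos, vel: (N, 3) host-centric arrays. Returns (|v_rad|, v_tan)."""
    r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    v_rad = np.sum(vel * r_hat, axis=1)                 # signed radial component
    v_tan = np.linalg.norm(vel - v_rad[:, None] * r_hat, axis=1)
    return np.abs(v_rad), v_tan

def isotropy_ratio(pos, vel, iso_norm=0.573):
    """median|v_rad| / median(v_tan), normalized so isotropic orbits give ~1
    (iso_norm assumes an isotropic Maxwellian distribution)."""
    v_rad_abs, v_tan = velocity_components(pos, vel)
    return (np.median(v_rad_abs) / np.median(v_tan)) / iso_norm

# Isotropic mock orbits should give a ratio near 1:
rng = np.random.default_rng(0)
pos = rng.normal(size=(5000, 3))
vel = rng.normal(size=(5000, 3))
print(isotropy_ratio(pos, vel))   # ~ 1
```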
To provide deeper insight into the orbital velocity isotropy, the bottom two rows of Figure 5 show the individual velocity components, \(|v_{\rm rad}|\) and \(v_{\rm tan}\). Beyond \(\simeq 40\) kpc, \(|v_{\rm rad}|\) is similar in both baryonic and DMO simulations. In both, \(|v_{\rm rad}|\) increases at smaller \(d\), where the gravitational potential is deeper. However, \(|v_{\rm rad}|\) is larger at small \(d\) in the baryonic simulations, because the formation of a MW-mass galaxy deepens the potential. In the bottom row, \(v_{\rm tan}\) is higher in baryonic simulations at all \(d\), though again the enhancement is most significant at small \(d\). In addition to the host galaxy deepening the potential, it also provides additional gravitational tidal stripping for subhaloes that orbit close to it, which have small orbital angular momentum, as Garrison-Kimmel et al. (2017) showed. This in turn biases the resultant subhalo population at a given \(d\) to have higher \(v_{\rm tan}\). Thus, the stronger enhancement of \(v_{\rm tan}\) leads to the change from radially biased orbits in DMO to statistically isotropic orbits in baryonic simulations for surviving subhaloes above a given mass threshold. At earlier times, both \(|v_{\rm rad}|\) and \(v_{\rm tan}\) in baryonic simulations were more similar to those in DMO simulations than they are at \(z\approx 0\), demonstrating how the tidal effects of the host galaxy affected the subhalo population over time. The host galaxy stellar mass increased significantly over this time interval: relative to its stellar mass at \(z=0\), it typically was only half as large at \(z=0.5\) and only about a quarter as large at \(z=1\) (Santistevan et al., 2020; Bellardini et al., 2022). ### Subhalo enhancement during LMC passage To predict the current subhalo population around the MW, we examine the potential impact of the LMC, a massive satellite galaxy (\(M_{\rm DM}\sim 10^{11}\,{\rm M}_{\odot}\)) that recently passed its pericenter of \(\approx 50\) kpc (Kallivayalil et al., 2013). We focus on the 4 simulations with LMC satellite analogs in Section 2.3: m12w, m12b, m12f, and m12c. Figure 6 shows the subhalo counts over time in each simulation, quantified as the cumulative number of subhaloes within 50 kpc of the MW-mass host, for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) and \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\). Counts for both mass thresholds visibly increased during the \(\sim 50\) Myr after the LMC analog first reached \(d=50\) kpc, which we indicate with a dotted line. We do not show subhaloes \(>10^{8}\,{\rm M}_{\odot}\) because of their low counts (\(\lesssim 10\) subhaloes at any given time) and therefore significant Poisson scatter, but they show similar increases in all 4 simulations. The grey shaded region indicates the number of subhaloes that ROCKSTAR identifies as having been a satellite of the LMC analog halo any time before becoming a satellite of the MW-mass halo, demonstrating that this enhancement is primarily (though not entirely) from subhaloes that were satellites of the LMC analog.
This period of subhalo enrichment lasts for only \(\lesssim 0.5\) Gyr after the LMC analog's first pericentric passage, consistent with previous works showing that satellites of LMC-mass satellite galaxies phase mix on this timescale (for example Deason et al., 2015). While some of these additional subhaloes persist indefinitely, the subsequent phase mixing of their orbits leads to no strong temporal enhancement. While these LMC analogs have smaller pericenters about their MW-mass host (\(8-38\) kpc) than the \(d_{\rm peri}\approx 50\) kpc of the LMC, all 4 show significant enhancement already when the LMC analog first crosses within \(d=50\) kpc (vertical dotted line). The latest LMC analog to reach \(d=50\) kpc is in m12c at \(z=0.07\) (\(t^{\rm lb}=0.95\) Gyr), making it temporally the most similar to the LMC; subhalo counts in this host are \(2-4\times\) higher than in the other MW-mass hosts at the same redshift. Table 4 quantifies the enhancement in the cumulative number within 50 kpc and the orbital radial fluxes at \(d=50\) kpc of subhaloes in our hosts with an LMC satellite analog via two ratios. The first row compares subhalo counts in each of the four hosts with an LMC analog at \(t_{50\,\rm kpc}\), the time at which the LMC analog first reached \(d=50\) kpc, to the value in the same host \(100-500\) Myr earlier. The second row compares subhalo counts in each host with an LMC analog within \(\pm 50\) Myr of \(t_{50\,\rm kpc}\) to the average counts in all 11 MW-mass hosts at the same time. Figure 5: Orbital velocities of subhaloes versus distance, \(d\), from the MW-mass galaxy. Solid lines show the mean, and shaded regions show the 68 per cent distribution across the 11 host haloes, while dashed lines show dark-matter-only (DMO) simulations of the same haloes. Dashed lines for \(z=0.5\) and \(z=1\) show only _Latte_ hosts. Lighter shade shows bins where more than 1 halo had an average of 0 subhaloes. **Left**: subhaloes at \(z\approx 0\), for \(M_{\rm sub}>10^{6}\), \(10^{7}\), and \(10^{8}\,{\rm M}_{\odot}\). **Right**: subhaloes with \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at \(z\approx 0\), \(0.5\), and \(1\). **Top**: Orbital velocity isotropy, via the dimensionless ratio of the median absolute radial velocity to the median tangential velocity, normalized such that isotropic orbits have a value of 1. While subhaloes in DMO simulations have radially biased orbits at all redshifts, subhaloes in baryonic simulations orbit in a nearly statistically isotropic distribution at \(z\approx 0\). At higher redshifts, subhalo orbits in baryonic simulations were increasingly radially biased. **Middle**: Median absolute radial velocity, \(|v_{\rm rad}|\). The deepening of the gravitational potential from the MW-mass galaxy in the baryonic simulations increases \(v_{\rm rad}\) at small \(d\) relative to DMO, but the two are nearly identical at \(d\gtrsim 40\) kpc. **Bottom**: Tangential velocity, \(v_{\rm tan}\), is higher in baryonic simulations than in DMO, and this enhancement persists at all \(d\). In addition to the deepening of the gravitational potential, as above, subhaloes with small \(v_{\rm tan}\) are more likely to get tidally stripped by the host galaxy and fall below the mass threshold, which further enhances \(v_{\rm tan}\) of the surviving population. Thus, subhalo orbits became statistically isotropic over the last \(\approx 6\) Gyr (\(z\lesssim 0.5\)).
Both the subhalo counts and fluxes increase \(\approx 1.4-2.7\times\) for both metrics, with greater enhancement for subhaloes at higher masses. For context, we also examined the fractional enhancement in cumulative number inside \(R_{200\rm m}\) (instead of \(d<50\) kpc). In this case, the enhancement in absolute number is much higher (\(\approx 100\) for \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\)) than for \(d<50\) kpc (\(\approx 30\)), which means that only a fraction of the subhaloes that accreted with the LMC analog contribute to our results at \(d<50\) kpc. However, the _fractional_ enhancement inside \(R_{200\rm m}\) (relative to other hosts at the same time) is weaker than inside \(d<50\) kpc, being \(\approx 1.1\times\) at all masses, because of the much larger number of preexisting subhaloes within \(R_{200\rm m}\) than within \(d<50\) kpc. Figure 7 shows the relative enhancement in subhalo number density, \(n(d)\), as a function of \(d\), within our hosts with an LMC analog within \(\pm 50\) Myr of \(t_{50\,\rm kpc}\), compared to all other hosts at the same redshift (as in Table 4, row 2). We find a typical enhancement of \(\sim 1.5-2\times\) at all distances \(>10\) kpc, with relatively weak dependence on both distance and subhalo mass. Given that the LMC is just past its first pericentric passage, our results imply that the MW currently is experiencing a significant boost, typically \(1.4-2.7\times\), in its population of subhaloes at distances \(\lesssim 50\) kpc, both relative to itself a few hundred Myr earlier and relative to other MW-mass host haloes without an LMC analog at \(z=0\). Thus, in making predictions of subhalo counts around the MW today, one should multiply our host-averaged fits in Equation 1 by \(\approx 2\times\) (Table 3). ### Predictions for interaction rates with stellar streams We conclude by synthesizing our results to make approximate estimates for the interaction rates of subhaloes with stellar streams around the MW. As case studies, we use the fiducial streams, GD-1 (\(d_{\rm peri}=13\) kpc, \(d_{\rm apo}=27\) kpc, length \(l=15\) kpc) and Pal 5 (\(d_{\rm peri}=8\) kpc, \(d_{\rm apo}=19\) kpc, \(l=10\) kpc), approximating each as a thin cylinder. We use relevant impact parameters (\(b\)) for potentially observable subhalo interactions with streams, for each of our three mass thresholds, from Yoon et al. (2011): \(b<0.58\) kpc for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\), \(b<1.6\) kpc for \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\), and \(b<4.5\) kpc for \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\). We then compute our average subhalo flux at a galactocentric distance of \(\approx 20\) kpc for GD-1 and \(\approx 14\) kpc for Pal 5. We use interaction rates at \(z<0.15\) (\(t^{\rm lb}<1.9\) Gyr), and we apply an enhancement of \(2\times\) from the LMC. Under these conditions, _for GD-1 we estimate \(\approx 4-5\) interactions per Gyr with subhaloes \(>10^{6}\,{\rm M}_{\odot}\), \(\approx 1-2\) per Gyr with \(>10^{7}\,{\rm M}_{\odot}\), and \(\approx 0-1\) per Gyr with \(>10^{8}\,{\rm M}_{\odot}\); for Pal 5, we estimate \(\approx 2-3\) interactions per Gyr with subhaloes \(>10^{6}\,{\rm M}_{\odot}\), \(\approx 0-1\) per Gyr with \(>10^{7}\,{\rm M}_{\odot}\), and \(\approx 0-1\) per Gyr with \(>10^{8}\,{\rm M}_{\odot}\)._ If observable features in streams, such as gaps, persist for many Gyr, then the evolution across cosmic time is important to incorporate.
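As a rough cross-check of these numbers, one can combine the fitted flux from Equation 1 and Table 3 with a ribbon-shaped effective cross-section \(2bl\) for a stream of length \(l\) and maximum impact parameter \(b\). The geometric prefactor and the use of the fit (rather than the time-averaged simulation fluxes the paper uses) are our assumptions, so this sketch recovers only the order of magnitude of the quoted rates.

```python
# Order-of-magnitude sketch (ours) of a stream-subhalo encounter rate.
import math

def fitted_flux(d_kpc, mass_msun, a=1.0):
    """Equation 1 with the radial-flux constants from Table 3 [Gyr^-1 kpc^-2]."""
    c0, c1, c2, c3, c4 = 8.94, 9.08, 1.48, -1.10, -0.94
    return c0 * math.exp(-c1 * a) * d_kpc ** (c2 * a + c3) * (mass_msun / 1e7) ** c4

def encounter_rate(d_kpc, mass_msun, b_kpc, length_kpc, lmc_boost=2.0):
    """Encounters per Gyr, assuming an effective stream cross-section 2*b*l."""
    return lmc_boost * fitted_flux(d_kpc, mass_msun) * 2.0 * b_kpc * length_kpc

# GD-1-like stream (d ~ 20 kpc, l = 15 kpc), subhaloes > 1e7 Msun (b = 1.6 kpc):
print(encounter_rate(20.0, 1e7, 1.6, 15.0))   # ~ 0.3 per Gyr under these assumptions
```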
In that case, we can estimate the 'effective' rate by averaging our fluxes across \(z=0-0.5\) (\(t^{\rm lb}=0-5.1\) Gyr), but now omitting the boost factor from the (recently accreted) LMC. For each stream, this increases the interaction rate per Gyr by \(\approx 2\times\) for \(M_{\rm sub}>10^{6}\) and \(10^{7}\,{\rm M}_{\odot}\), and up to \(4\times\) for \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\). While only estimates, these encounter rates offer context for our results. Even with the additional tidal effects of the MW-mass galaxy in baryonic simulations, which significantly reduce the population of subhaloes at these distances relative to DMO simulations, _our results imply that the interaction rates between stellar streams and dark subhaloes are still sufficiently high that searching for tidally induced features, like gaps, in streams is a promising venture._ ## 4 Summary and discussion ### Summary of Results Using 11 MW-mass host galaxies from the FIRE-2 suite of cosmological simulations, we presented predictions for the counts and orbital distributions of low-mass subhaloes at \(d\lesssim 50\) kpc around the MW and MW-mass galaxies. Our primary goal is to inform studies that model potentially observable interactions between such subhaloes and stellar streams. \begin{table} \begin{tabular}{l l l l} \hline & subhalo mass threshold [\({\rm M}_{\odot}\)] & number enhancement & flux enhancement \\ \hline relative to same MW-mass halo, & \(>10^{6}\) & \(1.12\pm 0.05\) & \(1.26\pm 0.06\) \\ \(100-500\) Myr prior & \(>10^{7}\) & \(1.40\pm 0.08\) & \(1.61\pm 0.11\) \\ & \(>10^{8}\) & \(1.42\pm 0.32\) & \(2.32\pm 1.06\) \\ \hline relative to all MW-mass haloes & \(>10^{6}\) & \(1.41\pm 0.49\) & \(1.56\pm 0.51\) \\ at same redshift & \(>10^{7}\) & \(1.83\pm 0.72\) & \(2.02\pm 0.84\) \\ & \(>10^{8}\) & \(2.15\pm 1.15\) & \(2.73\pm 1.49\) \\ \hline \end{tabular} \end{table} Table 4: Enhancement of the number and orbital radial flux of subhaloes within 50 kpc of the MW-mass host during 4 LMC satellite analog events (see Table 2). We measure the subhalo population within \(\pm 50\) Myr of when each LMC analog first reached \(d=50\) kpc, and we show the mean and standard deviation of the ratio (as defined below) across these 4 LMC analog events. **Top rows**: ratio of each MW-mass halo at the time the LMC analog reached \(d=50\) kpc to the average for the same MW-mass halo \(100-500\) Myr prior, before LMC infall. Subhalo counts and fluxes show a consistent enhancement (\(1.1-2.3\times\)), so the infall of the LMC analog significantly boosted the host halo's subhalo population. **Bottom rows**: ratio of each MW-mass halo with an LMC analog to the average of all 11 MW-mass haloes at the same redshift. Subhalo counts and fluxes show similar enhancements (\(1.4-2.7\times\)). _Thus, the MW today likely has a significantly enhanced (\(\approx 2\times\)) population of subhaloes relative to similar-mass host haloes today and relative to its own population several 100 Myr ago._ We explored the dependence on subhalo mass, distance, redshift, and the presence of an LMC satellite analog, and we provided analytic fits to these dependencies. Our primary results are: 1. The incorporation of baryonic physics significantly reduces subhalo counts compared with DMO simulations, primarily because of the additional tidal force from the MW-mass galaxy potential. At \(z\approx 0\), DMO simulations overpredict subhalo counts by \(\approx 4-5\) times at \(d\approx 20\) kpc.
These differences were less pronounced at earlier cosmic times, with DMO simulations overpredicting counts by \(\approx 1.5\) times at \(z=0.5\). 2. _We predict that \(>20\) (\(>4\)) subhaloes with instantaneous mass \(>10^{6}\,M_{\odot}\) (\(>10^{7}\,M_{\odot}\)) exist within the distances of streams like GD-1 and Pal 5 (\(d\lesssim 30\) kpc), and at least 1 subhalo \(>10^{8}\,M_{\odot}\) resides within \(d<50\) kpc._ Thus, despite the strong depletion of subhaloes in baryonic simulations relative to DMO, significant numbers of subhaloes survive at the distances of observed stellar streams. This is unlike Garrison-Kimmel et al. (2017), who found no surviving subhaloes within \(\approx 15\) kpc, but they only examined two of these FIRE-2 simulations (m12i, m12f) at a single snapshot at \(z=0\). 3. At \(z\approx 0\), subhalo number density and orbital flux are nearly constant with distance, out to at least \(d\approx 60\) kpc. 4. Subhalo counts decreased significantly over cosmic time, from both the declining rate of infall of new subhaloes and the increasingly strong tidal field of the host galaxy. At \(z=1\), the MW-mass hosts had \(\approx 10\) times more subhaloes at a given distance. This decline over time is stronger at smaller distances and at higher subhalo masses. 5. _Subhaloes orbit with statistically isotropic velocities at \(z\approx 0\)_, but they were increasingly radially biased at earlier times. This is unlike DMO simulations, in which subhalo orbits are always radially biased. 6. _The initial infall of an LMC satellite analog boosts the number of subhaloes within 50 kpc of the MW-mass host by \(1.4-2.7\) times, relative to the same host a few hundred Myr earlier or relative to similar-mass host haloes at the same time._ Thus, predictions and models for subhalo-stream interaction rates over the last few 100 Myr should take into account this enhancement from the LMC. ### Discussion As expected, we find similar overall results to Garrison-Kimmel et al. (2017), who examined two of the same FIRE-2 haloes as we do (m12i, m12f). However, we emphasize key differences and extensions of our work compared with theirs. First, we include more host haloes (11) for better statistics. Second, and equally importantly, we time-averaged our results across multiple snapshots. Given our snapshot time spacing of \(20-25\) Myr, this provides a much more statistically representative picture of subhaloes at small distances, where their velocities are highest and where they spend the least time in their orbit. Furthermore, we interpolate subhalo distances between snapshots (see Section 2.2) to avoid missing orbits at particularly small distances. Unlike Garrison-Kimmel et al. (2017), who found no surviving subhaloes within \(\approx 15\) kpc of m12i and m12f at \(z=0\), we find that subhaloes survive, if briefly, at all \(d\gtrsim 5-10\) kpc at all mass thresholds. Figure 6: Number of subhaloes within \(d<50\) kpc of the MW-mass galaxy versus cosmic time, for the 4 simulations that have an LMC satellite analog. We show subhaloes with instantaneous mass \(>10^{6}\) and \(>10^{7}\,\mathrm{M}_{\odot}\), with the latter multiplied by 3 for clarity. Grey shaded regions show subhaloes that were satellites of the LMC analog any time prior to infall. Vertical dotted lines show when the LMC analog first orbited within 50 kpc of the MW-mass galaxy, which is the current distance of the LMC from the MW.
All 4 cases show significant enhancement in subhaloes for \(\approx 1-2\) Gyr after first infall, after which orbital phase mixing leaves no coherent enhancement during subsequent pericentric passages. Figure 7 and Table 4 quantify the enhancement of subhaloes during LMC passage. Figure 7: The enhancement of subhaloes around MW-mass galaxies with an LMC satellite analog. We compute the ratio of the average number density, \(n(d)\), of subhaloes around each of the 4 hosts with an LMC satellite analog (m12w, m12c, m12f, and m12b), time-averaged over \(\pm 50\) Myr when the LMC analog first crossed within \(d=50\) kpc, relative to the average across all 11 MW-mass haloes at the same redshift. Shaded regions show the standard deviation across the 4 hosts. _MW-mass haloes with an LMC satellite analog show a strong enhancement, typically \(1.4-2.7\times\), with only a weak decline with distance._ We next discuss caveats to our results. While we selected these host galaxies/haloes for their similarity to the MW, they are not exact analogs. Each one has a different formation and merger history, resulting in significant host-to-host variation (Santistevan et al., 2020). In general, we averaged our results across these 11 hosts and included the host-to-host scatter, to present cosmologically representative results for subhaloes around MW-mass galaxies. This is statistically likely to encompass the population around the MW, but there is no guarantee of it. We compared our results for our isolated haloes to our Local Group analogs and found negligible differences, consistent with the comparisons among the ELVIS DMO simulations in Wetzel et al. (2015), indicating that such environmental selection is not a significant factor affecting these low-mass subhaloes at distances \(\lesssim 50\) kpc. More critically, our results demonstrate that the presence of an LMC satellite analog boosts low-mass subhalo counts by \(\approx 2\times\) at distances \(\lesssim 50\) kpc, indicating that this is one of the most important factors, and likely the most important one, in predicting subhalo populations around the MW today and over the last few 100 Myr. All of our LMC analogs have smaller pericenters than the LMC, ranging from \(8-38\) kpc. We mitigated this by measuring subhaloes when the LMC analog first crossed within the LMC's current pericentric distance of \(\approx 50\) kpc. Arora et al. (in preparation) examine subhalo populations in selected hosts of the FIRE-2 simulations at different angular locations around the galactic center, including the spatial relation to LMC analogs, as well as specific subhalo-stream encounter rates in the presence of a massive satellite. We examined results only for one dark-matter model, CDM, but there are many other possible candidates. Extensions of our work would include examining FIRE simulations with alternative dark-matter models (such as Robles et al., 2017; Shen et al., 2022). As with any simulation, our results are susceptible to resolution effects. We reiterate that, motivated by quantifying the strength of gravitational interactions with stellar streams, we selected subhaloes above a given _instantaneous_ threshold in mass, so numerical convergence requires that our simulations accurately model the amount of mass stripping in subhaloes down to our given instantaneous mass threshold(s).
Thus, our results are not sensitive to modeling any mass stripping (physical or numerical) that occurs below this threshold, or to the more challenging question of modeling/defining subhalo 'disruption', which occurs below these mass thresholds. In Appendix A, we quantify resolution convergence by comparing our fiducial subhalo counts to those in both lower-resolution and higher-resolution versions of the same haloes. To summarize, our tests indicate that subhalo counts at instantaneous \(M_{\rm sub}>10^{7}\,\mathrm{M}_{\odot}\) are reasonably well converged, but at \(M_{\rm sub}>10^{6}\,\mathrm{M}_{\odot}\) our simulations underpredict the counts by up to \(\approx 1.5-2\times\) (which we have indicated throughout), so our results there are lower limits, which means that the actual interaction rates with streams would be even higher. And again, in fitting our results, we did not include any values for the \(M_{\rm sub}>10^{6}\,\mathrm{M}_{\odot}\) threshold, so our fit values there are extrapolations from our higher mass thresholds. We also discuss the numerical convergence of our subhaloes in the context of the criteria that van den Bosch & Ogiya (2018) provided from idealized simulations of individual subhaloes orbiting in a fixed host halo potential without a central disk. They consider a subhalo to be sufficiently resolved based on its bound mass fraction, \(f_{\rm bound}\), the ratio of its instantaneous mass to its peak mass (typically just before accretion), with a minimum \(f_{\rm bound}\) determined by the subhalo's scale radius, \(r_{\rm s,0}\), and the number of DM particles it had at its peak mass, \(N_{\rm peak}\). They define \(f_{\rm bound}^{\rm min,1}=C\left(\epsilon/r_{\rm s,0}\right)^{2}\) and \(f_{\rm bound}^{\rm min,2}=0.32\left(N_{\rm peak}/1000\right)^{-0.8}\), where \(C\) is a constant that depends on the subhalo's concentration parameter (we use \(C\approx 10\), based on their Section 6.4), \(\epsilon\) is the Plummer force softening of the simulation, which is 40 pc for all of our simulations, and \(N_{\rm peak}\) is the peak number of constituent DM particles a subhalo experienced, typically prior to accretion. van den Bosch & Ogiya (2018) consider a subhalo converged if it satisfies both criteria, that is, \(f_{\rm bound}^{\rm min}={\rm MAX}(f_{\rm bound}^{\rm min,1},f_{\rm bound}^{\rm min,2})\). For reference, we note the median values for these relevant quantities for each of our subhalo samples at \(z=0\). For \(M_{\rm sub}>10^{6}\,\mathrm{M}_{\odot}\) (1,436 subhaloes), the median \(M=2.4\times 10^{6}\,\mathrm{M}_{\odot}\), \(M_{\rm peak}=1.3\times 10^{7}\,\mathrm{M}_{\odot}\), \(r_{\rm s,0}=0.15\) kpc, and \(N_{\rm peak}=517\). For \(M_{\rm sub}>10^{7}\,\mathrm{M}_{\odot}\) (209 subhaloes), the median \(M=2.9\times 10^{7}\,\mathrm{M}_{\odot}\), \(M_{\rm peak}=8.5\times 10^{7}\,\mathrm{M}_{\odot}\), \(r_{\rm s,0}=0.18\) kpc, and \(N_{\rm peak}=3419\). For \(M_{\rm sub}>10^{8}\,\mathrm{M}_{\odot}\) (13 subhaloes), the median \(M=2.3\times 10^{8}\,\mathrm{M}_{\odot}\), \(M_{\rm peak}=8.2\times 10^{8}\,\mathrm{M}_{\odot}\), \(r_{\rm s,0}=1.5\) kpc, and \(N_{\rm peak}=39,461\). For all mass thresholds, the median \(f_{\rm bound}\approx 0.26\), that is, all samples have experienced the same typical fraction of mass stripping since \(M_{\rm peak}\).
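These two criteria are easy to evaluate for any subhalo sample. The sketch below is our own illustration in Python (function names are ours), using the constants quoted above; applied to the median \(M_{\rm sub}>10^{7}\,\mathrm{M}_{\odot}\) subhalo, it shows that the softening-based criterion is the binding one, consistent with the fractions we report next.

```python
import numpy as np

EPSILON_PC = 40.0  # Plummer force softening of these simulations [pc]

def f_bound_min(r_s0_kpc, n_peak, concentration_const=10.0):
    """Minimum resolved bound-mass fraction from van den Bosch & Ogiya (2018).

    f_min,1 = C * (epsilon / r_s0)^2         (spatial/softening criterion)
    f_min,2 = 0.32 * (N_peak / 1000)^-0.8    (particle-number criterion)
    A subhalo is considered converged if f_bound >= max(f_min,1, f_min,2).
    """
    eps_kpc = EPSILON_PC / 1000.0
    f1 = concentration_const * (eps_kpc / r_s0_kpc) ** 2
    f2 = 0.32 * (n_peak / 1000.0) ** -0.8
    return np.maximum(f1, f2)

# Median M_sub > 10^7 Msun subhalo quoted above: r_s0 = 0.18 kpc, N_peak = 3419.
f_min = f_bound_min(r_s0_kpc=0.18, n_peak=3419)
print(f_min, 0.26 >= f_min)  # compare against the median f_bound ~ 0.26
```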
Applying the convergence criterion from van den Bosch & Ogiya (2018) to our samples, for \(M_{\rm sub}>10^{6}\), \(>10^{7}\), and \(>10^{8}\,\mathrm{M}_{\odot}\), the fraction of subhaloes that meet the criterion for mass resolution, \(f_{\rm bound}^{\rm min,2}\), is 17 per cent, 92 per cent, and 100 per cent, respectively. The criterion for spatial resolution, \(f_{\rm bound}^{\rm min,1}\), is more stringent. Enforcing both \(f_{\rm bound}^{\rm min,1}\) and \(f_{\rm bound}^{\rm min,2}\) brings these fractions down to 6 per cent, 39 per cent, and 69 per cent. Nearly all subhaloes at \(M_{\rm sub}>10^{7}\) and \(10^{8}\,\mathrm{M}_{\odot}\) had \(f_{\rm bound}>f_{\rm bound}^{\rm min,2}\) (92 per cent and 100 per cent, respectively), so \(f_{\rm bound}^{\rm min,1}\) dominates this population's convergence fraction. In agreement with our resolution tests, the convergence fraction for \(M_{\rm sub}>10^{6}\,\mathrm{M}_{\odot}\) is significantly lower. However, the idealized simulations in van den Bosch & Ogiya (2018) did not include a central disk potential, which significantly increases the physical tidal force and therefore mass stripping at \(d\lesssim 50\) kpc. This may relax the criteria on \(f_{\rm bound}^{\rm min}\); for example, in an extreme limit of a strong tidal field that induces (nearly) complete physical mass stripping at first pericenter, numerical considerations of resolving subhaloes above a given instantaneous mass threshold across many orbits become less significant. Webb & Bovy (2020) also explored the effects of resolution on simulated subhaloes, using re-simulations taken from the Via Lactea II simulation, and they also included a MW-mass disk potential. They found that subhaloes with \(M_{\rm sub}\sim 10^{7}\,\mathrm{M}_{\odot}\) at the resolution and force softening lengths of our FIRE-2 simulations can lose up to 60 per cent of their mass over the course of their lifetimes (across up to \(\sim 5\) Gyr) relative to their counterparts at higher resolution, while subhaloes at \(M_{\rm sub}\sim 10^{6}\,\mathrm{M}_{\odot}\) dissipate entirely. However, their static host galaxy potential had higher mass at earlier times than our cosmological simulations: over the last 5 Gyr, the central galaxy in our simulations increased by typically \(\approx 30\) per cent. Additionally, individual subhaloes exhibit a wide range of infall times; because most mass loss occurs during infall, subhaloes with later infall times are subject to artificial mass loss effects for a shorter period of time. Santistevan et al. (2023) examined the infall times of luminous satellites in the same simulations, finding a 68th percentile range of \(4-10\) Gyr at the low subhalo masses that we analyze here. We reiterate that our convergence tests in Appendix A provide the most direct numerical test of our cosmological setup. Comparing with previous works, our results generally agree with those that modeled a MW-mass galaxy potential. Compared to D'Onghia et al. (2010), who examined subhaloes in the Aquarius DMO simulation with an added galaxy disk potential, we find the same order-of-magnitude results for \(M_{\rm sub}>10^{6}\,{\rm M}_{\odot}\) (their counts being \(\approx 2.5\times\) higher than ours within \(d=50\) kpc, and approximately the same as ours within \(d=20\) kpc), but lower counts for \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\) (\(\approx 4\times\) within \(d=50\) kpc, \(\approx 20\times\) within \(d=20\) kpc).
Other works that compared subhalo populations in DMO simulations to those that also model a central galaxy potential found that DMO simulations overpredict subhaloes at \(d\lesssim 50\) kpc by \(\approx 1.5\times\) (D'Onghia et al., 2010), \(\approx 1.8\times\) (Garrison-Kimmel et al., 2017), \(\approx 3.3\times\) (Kelley et al., 2019), and \(\approx 3\times\) (Nadler et al., 2021). By comparison, we find \(\approx 2-3\times\), on average, with some dependence on subhalo mass. Additionally, Webb & Bovy (2020) found that, broadly speaking, DMO simulations overpredict the _entire_ subhalo population within a MW-mass halo by a factor of \(\approx 1.6\), in broad agreement with our simulations, which have a mean DMO excess of 1.56 for subhaloes with \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) at \(z=0\). This reinforces that the most important effect of baryons for low-mass subhaloes is simply the addition of the tidal field from the central galaxy, as Garrison-Kimmel et al. (2017) demonstrated by showing similar results for the FIRE-2 baryonic simulations compared with simply adding a central galaxy potential to DMO simulations of the same haloes. This agreement supports the use of an embedded central galaxy potential in DMO simulations as a computationally inexpensive alternative to simulations with full baryonic physics. Furthermore, if using existing DMO simulations (for example, as in Hargis et al., 2014; Griffen et al., 2016), one can increase the accuracy of subhalo counts by reducing them using the distance-dependent correction fits from Samuel et al. (2020), which agree with our results, or using the machine-learning approach to subhalo orbital histories, as in Nadler et al. (2018). Sawala et al. (2017) examined subhaloes of instantaneous mass \(10^{6.5}-10^{8.5}\,{\rm M}_{\odot}\) in the APOSTLE simulations of Local Group analogs (DM particle mass \(\approx 10^{4}\,{\rm M}_{\odot}\), force softening \(\approx 134\) pc); we find broadly similar subhalo counts for both \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) (within \(\approx 1.4\times\) of our counts) and \(>10^{8}\,{\rm M}_{\odot}\) (within \(\approx 1.5\times\) of our counts) at \(d=50\) kpc. Sawala et al. (2017) also compared their results to DMO versions of the same simulations and found similar DMO overpredictions of \(\approx 2\times\) at \(d=50\) kpc for \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\), with more dramatic DMO overprediction than our results at smaller distances (\(\approx 4\times\) at \(d=20\) kpc). However, the typical central galaxy in APOSTLE has significantly lower stellar mass, with \(M_{\rm star}\approx 1.8\times 10^{10}\,{\rm M}_{\odot}\), compared to our typical \(M_{\rm star}\approx 6\times 10^{10}\,{\rm M}_{\odot}\), which is similar to the MW. We also note similar trends in subhalo tangential and radial velocities, although subhalo orbits are generally less isotropic at small distances. Zhu et al. (2016) compared a baryonic versus DMO version of the Aquarius simulation, finding that DMO overpredicts subhaloes by \(\approx 3\times\) at \(M_{\rm sub}>10^{7}\,{\rm M}_{\odot}\) and \(\approx 4-5\times\) at \(M_{\rm sub}>10^{8}\,{\rm M}_{\odot}\) within the host halo's radius. The larger-volume, lower-resolution Illustris and EAGLE simulations also demonstrate similar general trends of subhalo depletion in baryonic relative to DMO versions at small distances (for example Chua et al., 2017; Despali & Vegetti, 2017).
We also compare to previous work that used simulations to predict subhalo populations and subhalo-stream interaction rates. Our estimates for interaction rates (Section 3.4) are similar to those of Yoon et al. (2011), who used a lower-resolution DMO halo designed to be similar to the MW's, with an added stream, and predicted the Pal 5 stream to have \(\approx 20\) detectable subhalo-induced gaps. Banik et al. (2018) simulated the evolution of GD-1 near a MW potential and estimated that the MW hosts \(\approx 0.4\times\) the number of subhaloes in a comparable DMO simulation, generally consistent with our results (Figure 2). To conclude, we presented cosmological predictions for subhalo counts and orbital fluxes (Figures 1 and 3) as well as velocity distributions (Figure 5), and we provided fits to these results, to inform studies that seek to predict and interpret observable effects of subhalo gravitational interactions on stellar streams. ## Acknowledgments We thank Adrian Price-Whelan for suggesting that we explore the effect of the LMC on predictions for subhalo populations. We thank Isaiah Santistevan, Pratik Gandhi, and Nicolas Garavito for helpful comments and discussion. MB and AW received support from: NSF via CAREER award AST-2045928 and grant AST-2107772; NASA ATP grant 80NSSC20K0513; HST grants AR-15809, GO-15902, GO-16273 from STScI. We completed this work in part at the Aspen Center for Physics, supported by NSF grant PHY-1607611. We ran simulations using: XSEDE, supported by NSF grant ACI-1548562; Blue Waters, supported by the NSF; Frontera allocations AST21010 and AST20016, supported by the NSF and TACC; Pleiades, via the NASA HEC program through the NAS Division at Ames Research Center. We used NumPy (Harris et al., 2020), SciPy (Jones et al., 2001), AstroPy (Astropy Collaboration et al., 2018), and MatPlotLib (Hunter, 2007), as well as the publicly available package HaloAnalysis (Wetzel & Garrison-Kimmel (2020c), available at [https://bitbucket.org/awetzel/halo_analysis/](https://bitbucket.org/awetzel/halo_analysis/)). ## Data Availability The data in these figures and all of the Python code that we used to generate these figures are available at the following Bitbucket repository: [https://bitbucket.org/meganbarry/sh_functions/](https://bitbucket.org/meganbarry/sh_functions/). FIRE-2 simulations are publicly available (Wetzel et al., 2022) at [http://flathub.flatironinstitute.org/fire](http://flathub.flatironinstitute.org/fire). Additional FIRE simulation data is available at [https://fire.northwestern.edu/data](https://fire.northwestern.edu/data). A public version of the Gizmo code is available at [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html).
2310.02370
Biological Aggregations from Spatial Memory and Nonlocal Advection
We investigate a nonlocal single-species reaction-diffusion-advection model that integrates the spatial memory of previously visited locations and nonlocal detection in space, resulting in a coupled PDE-ODE system reflective of several existing models found in spatial ecology. We prove the existence and uniqueness of a H\"older continuous weak solution in one spatial dimension under some general conditions, allowing for discontinuous kernels such as the top-hat detection kernel. A robust spectral and bifurcation analysis is also performed, providing the rigorous analytical study not yet found in the existing literature. In particular, the essential spectrum is shown to be entirely negative, and we classify the nature of the bifurcation near the critical values obtained via a linear stability analysis. A pseudo-spectral method is used to solve and plot the steady states near and far away from these critical values, complementing the analytical insights.
Di Liu, Yurij Salmaniw, Jonathan R. Potts, Junping Shi, Hao Wang
2023-10-03T18:51:27Z
http://arxiv.org/abs/2310.02370v1
# Biological aggregations from spatial memory and nonlocal advection ###### Abstract We investigate a nonlocal single-species reaction-diffusion-advection model that integrates the spatial memory of previously visited locations and nonlocal detection in space, resulting in a coupled PDE-ODE system reflective of several existing models found in spatial ecology. We prove the existence and uniqueness of a Holder continuous weak solution in one spatial dimension under some general conditions, allowing for discontinuous kernels such as the top-hat detection kernel. A robust spectral and bifurcation analysis is also performed, providing the rigorous analytical study not yet found in the existing literature. In particular, the essential spectrum is shown to be entirely negative, and we classify the nature of the bifurcation near the critical values obtained via a linear stability analysis. A pseudo-spectral method is used to solve and plot the steady states near and far away from these critical values, complementing the analytical insights. _Keywords:_ nonlocal advection; bifurcation analysis; spatial memory; pattern formation ## 1 Introduction ### Background and model formulation Spatial memory is a key feature driving the movement of mobile organisms Fagan et al. (2013). As organisms move, they gather information about where they have been, building a map that informs future movement decisions. This process generates a feedback mechanism, whereby previous visitations of favourable locations can cause repeated visits, resulting in the organism confining itself to certain specific areas of the landscape. In animal ecology, such memory processes have been hypothesised to be foundational in the construction of home ranges [Briscoe et al., 2002, Borger et al., 2008, Van Moorter et al., 2009], small areas where an animal decides to perform its daily activities instead of roaming more widely. Conversely, memory of unfavourable locations can cause animals to relocate. For example, memory has been shown to play a key role in migratory movements [Bracis and Mueller, 2017, Abrahms et al., 2019] and avoiding conspecifics to form territories or home ranges [Potts and Lewis, 2016a, Ellison et al., 2020]. Understanding how memory processes help to shape the space use of animals is thus becoming a question of increasing interest in both empirical ecology [Fagan et al., 2013, Merkle et al., 2014, Davis et al., 2021] and mathematical modelling [Potts and Lewis, 2016a, Shi et al., 2021, Wang and Salmaniw, 2023a]. From a modelling perspective, a key tool for modelling movement in response to remembered space use is via an advection term in a partial differential equation (PDE). This advection term is typically nonlocal in space, for both biological and mathematical reasons. From a biological perspective, nonlocality is important because organisms will generally sense their surrounding environment - for example through sight, smell, or touch - and make movement decisions accordingly [Benhamou, 2014, Martinez-Garcia et al., 2020]. Moreover, this nonlocality occurs not only in animals but also in cells [Armstrong et al., 2009, Painter et al., 2023]. Mathematically, nonlocal advection is often crucial for well-posedness and avoiding blow-up of PDEs [Bertozzi and Laurent, 2007, Giunta et al., 2022].
Alongside advection, mathematical models of organism movement typically have a diffusive term, accounting for the aspects of movement that we are not explicitly modelling (such as foraging), and may also have a reaction term representing the births and deaths of organisms. This leads to the formalism of reaction-diffusion-advection equations (RDAs). In a one-dimensional spatial domain \(\Omega\), such an RDA might have the following general form \[u_{t}=du_{xx}+\alpha(ua_{x})_{x}+f(u),\quad x\in\Omega,\ t>0. \tag{1.1}\] Here, \(d>0\) denotes the rate of diffusion (exploratory movement), \(\alpha\in\mathbb{R}\) denotes the rate of advection towards (\(\alpha<0\)) or away from (\(\alpha>0\)) the environmental covariates described by the function \(a(x,t)\), and \(f(u)\) describes population changes through birth/death processes. When paired with an appropriate initial/boundary condition, we seek to analyze dynamical behaviours of the solution \(u(x,t)\) as it depends on parameters appearing in the equation. The aspect of memory then appears in the advection term \(a(x,t)\). A recent review paper by [Wang and Salmaniw, 2023b] covers in detail the development of equations to model memory, as well as the related concept of learning, along with a large collection of open problems and directions in this area. The central idea is to model spatial memory as a map, \(k(x,t)\), which evolves over time as the organism learns about its environment [Fagan et al., 2013, Potts and Lewis, 2016a, 2019]. This map may represent something in the mind of a specific animal, sometimes called a 'cognitive map' [Harten et al., 2020, Peer et al., 2021], or it could represent memory of past animal locations embedded in the environment, e.g. due to animals depositing scent marks or forging trails. Here, we seek to explore the influence of such a map on the space-use patterns of a single population, \(u\). To this end, we describe the evolution of \(k(x,t)\) through the ordinary differential equation for each \(x\in\Omega\), following Potts and Lewis [2016a] \[k_{t}=g(u)-(\mu+\beta u)k,\quad t>0. \tag{1.2}\] Here, the function \(g(\cdot)\) describes the uptake rate of the map \(k\) as it depends on the population \(u(x,t)\); \(\mu\geq 0\) describes the rate at which memories fade over time; and \(\beta\geq 0\) describes a rate at which organisms remove a location from their memory map on revisitation (e.g. if animals want to avoid overuse of a location [Potts et al., 2022]). Note that, for simplicity, we have assumed that all organisms in a population share a common memory map. This makes it perhaps more amenable to modelling the distribution of cues left on the environment, e.g. scent marks or visual cues [Lewis and Murray, 1993, Moorcroft et al., 2006], rather than memory contained in the minds of animals. Alternatively, if the population modelled by \(u(x,t)\) has some process of relatively-rapid information sharing, then we can view \(k(x,t)\) as a shared memory amongst the population (e.g. for social insects this may be valid). As another example, if \(u(x,t)\) is the probability distribution of a single animal (in which case \(f(u)=0\) perforce) then \(k(x,t)\) can be used to model a map in the mind of an individual [Potts and Lewis, 2016a]. Prototypical examples of the function \(g(u)\) might be \(g(u)=\rho u\), denoting memory accruing in proportion to animal visitations, or \(g(u)=\rho u^{2}\), denoting uptake of memory when members of \(u\) encounter one another (here, \(\rho\) is a constant).
However, these can, in principle, both lead to unbounded memory. Therefore we can either take another functional form, such as \(g(u)=\rho u^{2}/(1+cu)\) (cf. the Holling type II functional response [Holling, 1965]), or modify Equation (1.2) to the following as in Potts and Lewis [2016a]: \[k_{t}=g(u)(\kappa-k)-(\mu+\beta u)k,\quad t>0, \tag{1.3}\] where \(\kappa>0\) denotes a theoretical maximal memory capacity. To combine this mechanism of spatial memory with nonlocal perception, we model nonlocal effects through a spatial convolution: \[\overline{k}(x,t)=(G*k)(x,t)=\frac{1}{|\Omega|}\int_{\Omega}G(x-y)k(y,t)dy. \tag{1.4}\] Here, the function \(G(\cdot)\) is referred to as a _perceptual kernel_ or _detection function_, which describes how an animals' ability to perceive landscape information varies with distance Fagan et al. (2017); Wang and Salmaniw (2023). Common forms of the detection function \(G(\cdot)\) include the _Gaussian_ detection function, the _exponential_ detection function, or the _top-hat_ detection function, each taking the respective forms in \(\Omega=\mathbb{R}\): \[G(x) :=\frac{1}{\sqrt{2\pi}R}e^{-x^{2}/2R^{2}}, \tag{1.5}\] \[G(x) :=\frac{1}{2R}e^{-|x|/R},\] (1.6) \[G(x) :=\begin{cases}\frac{1}{2R},&-R\leq x\leq R,\\ 0,&\text{otherwise}.\end{cases} \tag{1.7}\] Here, \(R\geq 0\) is the _perceptual radius_, describing the maximum distance at which landscape features can be distinguished Fagan et al. (2017). Roughly, the Gaussian detection function provides the most information far away from the location of observation, whereas the top-hat detection function gives a strict limit on how far the organism can detect information. In general, it is reasonable to assume the detection function satisfies 1. \(G(x)\) is symmetric about the origin; 2. \(\int_{\mathbb{R}}G(x)dx=1\); 3. \(\lim_{R\to 0^{+}}G(x)=\delta(x)\); 4. \(G(x)\) is non-increasing from the origin. Here, \(\delta(x)\) denotes the Dirac-delta distribution. Each of the Gaussian, exponential, and top-hat kernels satisfy these properties over \(\mathbb{R}\); appropriate modification is sometimes required in a bounded domain. Readers are encouraged to review Fagan et al. (2017); Wang and Salmaniw (2023) for further discussion on detection kernels and some of the challenges in defining nonlocal kernels near a boundary region. Taking the advective potential \(a(x,t)=\overline{k}(x,t)\), where \(k\) solves either (1.2) or (1.3), we combine the equation describing movement (1.1) with a dynamic spatial map in \(\Omega=(-L,L)\), \(L>0\), to arrive at the following two systems of equations subject to periodic boundary conditions: \[\begin{cases}u_{t}=du_{xx}+\alpha(u\overline{k}_{x})_{x}+f(u),&x\in(-L,L),\ t>0,\\ k_{t}=g(u)-(\mu+\beta u)k,&x\in(-L,L),\ t>0,\end{cases} \tag{1.8.a}\] and \[\begin{cases}u_{t}=du_{xx}+\alpha(u\overline{k}_{x})_{x}+f(u),&x\in(-L,L),\ t>0,\\ k_{t}=g(u)(\kappa-k)-(\mu+\beta u)k,&x\in(-L,L),\ t>0.\end{cases} \tag{1.8.b}\] In either case, we denote by \(u(x,0)=u_{0}(x)\), \(k(x,0)=k_{0}(x)\) the initial data, chosen to be \(2L\)-periodic in \(\Omega\). As discussed in Wang and Salmaniw (2023), boundary conditions in a nonlocal setting in a bounded domain are highly non-trivial in general. More precisely, it is not clear how to appropriately define the spatial convolution (1.4) near the boundary of the domain while remaining analytically tractable. For this reason, we appeal to a periodic boundary condition, which requires no further modification of (1.4) near the boundary points \(\{-L,L\}\). 
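In practice, the nonlocal average (1.4) is straightforward to evaluate numerically: periodicity of the domain lets one compute the convolution with the FFT. The sketch below is our own illustration in Python/NumPy (not code from this paper); it samples the top-hat kernel (1.7), rescales it so that \((2L)^{-1}\int_{-L}^{L}G=1\) (the normalisation (H0) assumed in the next subsection), and evaluates \(\overline{k}\) on a periodic grid.

```python
import numpy as np

def tophat(x, R):
    """Top-hat detection function (1.7), before domain rescaling."""
    return np.where(np.abs(x) <= R, 1.0 / (2.0 * R), 0.0)

def nonlocal_average(k, G, L):
    """Evaluate (1.4): kbar(x) = (2L)^{-1} * int_Omega G(x - y) k(y) dy.

    G must be sampled on the same periodic grid as k, centred at x = 0;
    ifftshift moves the centre to index 0 so the FFT convolution theorem
    applies, and dx converts the Riemann sum into an integral.
    """
    N = k.size
    dx = 2.0 * L / N
    conv = np.fft.irfft(np.fft.rfft(np.fft.ifftshift(G)) * np.fft.rfft(k), n=N)
    return conv * dx / (2.0 * L)

L, R, N = 5.0, 1.0, 256
x = np.linspace(-L, L, N, endpoint=False)
G = tophat(x, R)
G *= 2.0 * L / (G.sum() * (2.0 * L / N))  # enforce (2L)^{-1} int G = 1
kbar = nonlocal_average(1.0 + np.cos(np.pi * x / L), G, L)
```

The Gaussian and exponential kernels (1.5)-(1.6) drop in the same way: one swaps the sampling function and renormalises identically.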
While problems (1.8.a) and (1.8.b) appear similar in form, it is of interest to understand exactly when and how these two formulations differ in their solution behaviours: should they be identical, it seems reasonable to choose the more tractable model depending on the goals; should they differ significantly, it is reasonable to determine _when_ and _how_ they differ, which gives insights into the validity of either case in a given context. There exist a number of works that consider a multi-species model of the form taken in either (1.8.a) or (1.8.b), see, e.g., Potts and Lewis (2016, 2019). In these works, a linear stability analysis is performed to determine conditions sufficient for pattern formation to occur. These models are comparable in that they include a cognitive map through an additional, dynamic equation, and they also incorporate nonlocal perception. Other models with nonlocal advective operators have been studied by Ducrot et al. (2018); Giunta et al. (2021); Jungel et al. (2022), where some global existence results are obtained. In Ducrot et al. (2018), fractional Sobolev spaces are utilized in a one-dimensional torus to establish a global existence result which includes the possibility of a top-hat kernel; however, the model does not incorporate a dynamic cognitive map. In Giunta et al. (2021), a global existence result is established using a contraction mapping argument, but the regularity requirements of the nonlocal kernel do not include the top-hat detection function. In Jungel et al. (2022), a global existence result is obtained for a special case of the \(n\)-species cross-diffusion system considered in Giunta et al. (2021) which includes the top-hat detection function, but the kernels are assumed to be in _detailed balance_, which greatly reduces its applicability in biological settings. Other memory-based movement models have been investigated in Shi et al. (2020), Song et al. (2022), where the cognitive map is now given by a nonlocal integral operator in _time_. In such cases, the problem is a delay partial differential equation. The stability of coupled PDE-ODE models has also been studied in works such as Marciniak-Czochra et al. (2017), Li et al. (2017). With our models at hand, the major goals of this paper are as follows. First, we seek to prove the well-posedness of models (1.8.a) and (1.8.b). In particular, in Section 2 we prove the existence of a unique, global weak solution when the detection function \(G(\cdot)\) satisfies an \(L^{p}\)-embedding type condition (see Hypotheses (H3)), which includes the discontinuous top-hat detection function. This provides an answer to Open Problems 10 and 12 found in Wang and Salmaniw (2023a), at least for the single species case. We then shift our attention to the solution behaviour and the potential for pattern formation at a steady state. In Sections 3-4, we perform a robust stability and bifurcation analysis to understand the long term behaviour of the solution as it depends on parameters \(d\), \(\alpha\), the uptake rate \(g(\cdot)\) and the kernel \(G(\cdot)\). While an intuitive understanding of the relevant factors influencing pattern formation can be gleaned from a less formal linear stability analysis (see Section 3.2), further care is needed for nonlocal advective operators as the essential spectrum may be non-empty. This is different from standard reaction-diffusion systems where the essential spectrum is empty.
In our case, we find that the essential spectrum is entirely negative, and so has no impact on changes of stability. Section 5 is then dedicated to a detailed case study with the top-hat detection function. To explore (perhaps subtle) differences in our formulations, we focus on three particular forms of uptake \(g(\cdot)\) to better understand differences in fundamental assumptions for the function \(g\). Numerical simulations using a pseudo-spectral method [Giunta et al., 2021, Wang and Salmaniw, 2023a] are presented to highlight these differences. ### Preliminaries & Hypotheses Denote \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). Recall that the eigenvalue problem \[\begin{cases}-\phi^{\prime\prime}(x)=l\phi(x),&x\in(-L,L),\\ \phi(-L)=\phi(L),\ \phi^{\prime}(-L)=\phi^{\prime}(L),\end{cases} \tag{1.8}\] has eigenvalues and eigenfunctions \[l_{\pm n}=\frac{n^{2}\pi^{2}}{L^{2}},\ \ \phi_{\pm n}(x)=e^{\pm\frac{in\pi}{L}x}=\cos\left(\frac{n\pi}{L}x\right)\pm i\sin\left(\frac{n\pi}{L}x\right),\ n\in\mathbb{N}_{0}. \tag{1.9}\] We define the linear spaces \[L^{2}_{per}(-L,L)=\left\{h\in L^{2}(-L,L):h=\sum_{n=-\infty}^{\infty}c_{n}\phi_{n}\text{ with }\sum_{n=-\infty}^{\infty}|c_{n}|^{2}<\infty\right\},\] and \[H^{2}_{per}(-L,L)=\{h\in L^{2}_{per}(-L,L):h^{\prime\prime}\in L^{2}_{per}(-L,L)\},\] where \[c_{n}=\langle h,\phi_{n}\rangle=\frac{1}{2L}\int_{-L}^{L}h(x)\phi_{-n}(x)dx.\] We then denote by \(X\) and \(Y\) the spaces \(H^{2}_{per}(-L,L)\times L^{2}_{per}(-L,L)\) and \(L^{2}_{per}(-L,L)\times L^{2}_{per}(-L,L)\), respectively. We always assume the following for the spatial kernel \(G\): * **(H0)** \(\begin{cases}0\leq G(x)\in L^{1}_{per}(-L,L),\ G(-x)=G(x)\text{ for all }x\in(-L,L),\\ (2L)^{-1}\int_{-L}^{L}G(y)dy=1.\end{cases}\) The Gaussian, exponential and top-hat detection functions all satisfy (H0) in \((-L,L)\) with the following respective scaling prefactors: \(\mathrm{erf}(L/\sqrt{2}R)\), \(1-e^{-L/R}\), and \(2L\). For the stability and bifurcation analysis performed in Sections 3-4, we assume that the growth rates \(f\) and \(g\) satisfy * **(H1)** \(f(u)\in C^{3}([0,\infty))\), \(f(0)=f(1)=0\), \(f^{\prime}(0)>0\), \(f^{\prime}(1)<0\), \(f(u)>0\) for \(u\in(0,1)\) and \(f(u)<0\) for \(u>1\). * **(H2)** \(g(u)\in C^{3}([0,\infty))\), \(g(u)>0\) on \((0,\infty)\), \(g(0)=0\), and \(g(1)=\rho>0\). To establish the well-posedness of the problem, we also assume in addition to (H0) that the kernel \(G(\cdot)\) satisfies the following \(L^{p}\)-type estimate for any \(R>0\) fixed: * **(H3)** \(\|\overline{z}_{x}\|_{L^{p}(\Omega)}\leq C\left\|z\right\|_{L^{p}(\Omega)}\quad\text{ for all }z\in L^{p}(\Omega),\;1\leq p\leq\infty.\) A prototypical example of \(f(u)\) is the logistic function \(f(u)=u(1-u)\), foundational in models of population growth. Biologically-motivated examples of \(g(u)\) include \(g(u)=\rho u\), \(g(u)=\rho u^{2}\), and \(g(u)=\rho u^{2}/(1+cu)\), which were discussed in the paragraph prior to Equation (1.3). Note that the hypotheses required in the bifurcation analysis are generally stronger than those required for well-posedness; for this reason, we state the sufficient hypotheses for the existence of a solution directly in the statement of Theorem 1.1. Throughout this paper, we denote the null space of a linear operator \(L\) by \(\mathcal{N}(L)\), the domain of \(L\) by \(\mathcal{D}(L)\), the range of \(L\) by \(\mathcal{R}(L)\), the resolvent set of \(L\) by \(\rho(L)\), and the spectrum of \(L\) by \(\sigma(L)\). We always denote by \(Q_{T}:=\Omega\times(0,T)=(-L,L)\times(0,T)\).
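To make the pseudo-spectral approach mentioned above concrete, the following sketch (our own illustrative Python/NumPy code; the function names, the IMEX Euler splitting, and all default parameter values are our choices, not necessarily the scheme of Section 5) advances system (1.8.a) by one time step on the periodic grid: the nonlocal advection and reaction terms are treated explicitly, and diffusion is treated implicitly via the Fourier multipliers \(l_{n}\) from (1.9). The defaults \(f(u)=u(1-u)\) and \(g(u)=\rho u\) with \(\rho=1\) follow the prototypical examples above.

```python
import numpy as np

def step_system_a(u, k, G, L, dt, d=1.0, alpha=-5.0, mu=1.0, beta=0.0,
                  g=lambda u: u, f=lambda u: u * (1.0 - u)):
    """One IMEX Euler step for (1.8.a) with periodic boundary conditions.

    u_t = d u_xx + alpha (u kbar_x)_x + f(u),  k_t = g(u) - (mu + beta u) k,
    where G holds samples of the (H0)-normalised kernel, centred at x = 0.
    """
    N = u.size
    dx = 2.0 * L / N
    q = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)  # wavenumbers n*pi/L
    Ghat = np.fft.rfft(np.fft.ifftshift(G))

    # kbar_x: differentiate the nonlocal average (1.4) in Fourier space.
    kbar_x = np.fft.irfft(1j * q * Ghat * np.fft.rfft(k), n=N) * dx / (2.0 * L)

    # Explicit nonlinear terms; implicit diffusion (divide by 1 + dt*d*q^2).
    flux_x = np.fft.irfft(1j * q * np.fft.rfft(u * kbar_x), n=N)
    u_hat = np.fft.rfft(u + dt * (alpha * flux_x + f(u)))
    u_new = np.fft.irfft(u_hat / (1.0 + dt * d * q**2), n=N)

    # The memory map solves the pointwise ODE (1.2): explicit Euler suffices.
    k_new = k + dt * (g(u) - (mu + beta * u) * k)
    return u_new, k_new
```

Iterating this map from a small random perturbation of the constant state \((1,\rho/(\mu+\beta))\) gives a quick way to probe the patterns predicted by the analysis below.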
### Statement of Main Results Our first result establishes the existence of a unique, nontrivial solution \((u,k)\). Due to the weak regularity assumption (H0) on the kernel \(G(\cdot)\), we do not expect solutions to be classical necessarily. Denote by \(h(u,k)\) the right hand side of the equation for the map \(k\) in either (1.8.a) or (1.8.b). We call \((u,k)\) a _weak solution_ to either (1.8.a) or (1.8.b) if, given any test function \(\phi_{i}\in L^{2}(0,T;H^{1}(\Omega))\), \(i=1,2,\) there holds \[\iint_{Q_{T}}u_{t}\phi_{1}\mathrm{d}x\mathrm{d}t+\iint_{Q_{T}}( du_{x}+\alpha u\overline{k}_{x})(\phi_{1})_{x}\mathrm{d}x\mathrm{d}t=\iint_{Q_{T}}f (u)\phi_{1}\mathrm{d}x\mathrm{d}t, \tag{1.10}\] \[\iint_{Q_{T}}k_{t}\phi_{2}\mathrm{d}x\mathrm{d}t=\iint_{Q_{T}}h( u,k)\phi_{2}\mathrm{d}x\mathrm{d}t, \tag{1.11}\] and the initial data is satisfied in the sense of \(H^{1}(\Omega)\) (in fact, the initial data will be satisfied in the sense of \(C(\overline{\Omega})\) by the Sobolev embedding). We call a weak solution a _global weak solution_ if (1.10)-(1.11) holds for any \(T>0\). We have the following well-posedness result for problems (1.8.a) and (1.8.b). **Theorem 1.1**: _Fix \(T>0\), \(\alpha\in\mathbb{R}\setminus\{0\}\), \(d,R>0\), \(\mu,\beta,\kappa\geq 0\), and assume that the kernel \(G(\cdot)\) satisfies (H0) and (H3). Suppose that for some \(\sigma\in(0,1)\), \(f,g\in C^{2+\sigma}(\mathbb{R}^{+})\) with \(f(0)=g(0)=0\). Assume that \(f\) satisfies the bound_ \[f(z)\leq f^{\prime}(0)z\quad\text{ for all }\quad z\geq 0,\] _while \(g\) satisfies the bounds_ \[g(z)\leq N(1+z^{q})\quad\text{ for all }\quad z\geq 0,\] \[|g^{\prime}(z)|\leq\tilde{N}(1+z^{\tilde{q}})\quad\text{ for all }\quad z\geq 0,\] _for some constants \(N,\tilde{N}>0\), \(q\geq 1\) and \(\tilde{q}\geq 0\). Finally, assume that the initial data \(u_{0},k_{0}\) satisfy_ \[0<u_{0}(x),k_{0}(x)\in W^{1,2}(\Omega)\text{ are periodic in }\Omega.\] _Then, there exists a unique, global weak solution \((u,k)\) solving problem (1.8.a) in the sense of (1.10)-(1.11) satisfying \(u\geq 0,k\geq 0\) so long as there exists \(M>0\) so that_ \[g(z)\leq M(\mu+\beta z)\quad\text{ for all }z\geq 0.\] _For problem (1.8.b), there exists a unique, global weak solution in the sense of (1.10)-(1.11) satisfying \(u\geq 0,k\geq 0\) with no further restriction on \(g(\cdot)\) other than (1.2). Moreover, in either case there holds_ \[u\in L^{\infty}(0,T;L^{p}(\Omega))\cap C^{\sigma,\sigma/2}( \overline{Q}_{T}),\quad u_{x},\;u_{t}\in L^{2}(0,T;L^{2}(\Omega)),\] \[k\in L^{\infty}(0,T;L^{p}(\Omega))\cap C^{\sigma,\sigma/2}( \overline{Q}_{T}),\quad k_{x},\;k_{t}\in L^{\infty}(0,T;L^{2}(\Omega)),\] _for any \(1<p\leq\infty\), for some \(\sigma\in(0,1/2)\), and the initial data is satisfied in the sense of \(C(\overline{\Omega})\)._ **Remark 1.2**: _In the theorem above, we generally require some global polynomial growth control over the memory uptake function \(g(\cdot)\). From this result, we see that problem (1.8.a) is significantly more restrictive than problem (1.8.b) in terms of further growth conditions on \(g(\cdot)\). Indeed, the first case requires that the memory uptake behaves roughly linearly, particularly for large arguments, while the second case requires no further growth condition. From our previous discussion of biologically-motivated forms of \(g(\cdot)\), we see that the forms \(g(u)=\rho u\) and \(g(u)=\rho u^{2}/(1+cu)\) satisfy the necessary conditions for either system. 
On the other hand, quadratic growth \(g(u)=\rho u^{2}\) as described in Potts and Lewis (2016a) satisfies the conditions for system (1.8.b) but not (1.8.a). This highlights an essential key difference between these two problems in terms of their well-posedness._ For the detection function \(G\) satisfying (H0), the Fourier coefficient \(C_{n}(G)\) is defined for any \(n\in\mathbb{N}\) as follows: \[C_{n}(G)=\frac{1}{2L}\int_{-L}^{L}e^{-\frac{i\pi n}{L}y}G(y)dy=\frac{1}{2L}\int_ {-L}^{L}\cos\left(\frac{n\pi}{L}y\right)G(y)dy, \tag{1.12}\] For \(n\in\mathbb{N}\), if \(C_{n}(G)\neq 0\), define \[\alpha_{n}=\frac{-(\mu+\beta)^{2}}{[g^{\prime}(1)(\mu+\beta)-\beta\rho]C_{n}( G)}\left(d-\frac{f^{\prime}(1)}{l_{n}}\right). \tag{1.13}\] Note that \(C_{n}(G)\) could be positive or negative. Define \[\begin{split}\Sigma^{+}:&=\{n\in\mathbb{N}:\alpha_ {n}>0\},\ \Sigma^{-}:=\{n\in\mathbb{N}:\alpha_{n}<0\},\\ \alpha_{r}:&=\min_{n\in\Sigma^{+}}\alpha_{n},\ \ \alpha_{l}:=\max_{n\in\Sigma^{-}}\alpha_{n}.\end{split} \tag{1.14}\] Note that \(\alpha_{r}>0>\alpha_{l}\) as \(\sum_{-\infty}^{\infty}|C_{n}(G)|^{2}<\infty\). Then we have the following theorem regarding the stability of the unique constant positive steady state \(U_{*}=(1,\rho/(\mu+\beta))\) with respect to (1.8.a). **Theorem 1.3**: _Assume that assumptions (H0)-(H2) are satisfied, and let \(\Sigma^{+},\Sigma^{-},\alpha_{l},\alpha_{r}\) be defined as in (1.14). Then_ * _The constant steady state solution_ \(U_{*}\) _is locally asymptotically stable with respect to (_1.8.a_) if_ \(\alpha_{l}<\alpha<\alpha_{r}\)_._ * _The constant steady state solution_ \(U_{*}\) _is unstable with respect to (_1.8.a_) if_ \(\alpha<\alpha_{l}\) _or_ \(\alpha>\alpha_{r}\)_._ Similarly we define \[\widehat{\alpha}_{n}=\frac{-(\rho+\mu+\beta)^{2}}{\kappa[g^{\prime}(1)(\mu+ \beta)-\beta\rho]C_{n}(G)}\left(d-\frac{f^{\prime}(1)}{l_{n}}\right), \tag{1.15}\] and \[\begin{split}\widehat{\Sigma}^{+}:&=\{n\in\mathbb{ N}:\widehat{\alpha}_{n}>0\},\ \widehat{\Sigma}^{-}:=\{n\in\mathbb{N}:\widehat{\alpha}_{n}<0\},\\ \widehat{\alpha}_{r}:&=\min_{n\in\Sigma^{+}}\widehat {\alpha}_{n},\ \ \widehat{\alpha}_{l}:=\max_{n\in\widehat{\Sigma}^{-}}\widehat{\alpha}_{n}.\end{split} \tag{1.16}\] Then we have the stability results for the unique constant positive steady state \(\widehat{U_{*}}=(1,\rho\kappa/(\rho+\mu+\beta))\) with respect to (1.8.b). **Theorem 1.4**: _Assume that assumptions (H0)-(H2) are satisfied, and let \(\widehat{\Sigma}^{+},\widehat{\Sigma}^{-},\widehat{\alpha}_{l},\widehat{ \alpha}_{r}\) be defined as in (1.16). Then_ * _The constant steady state solution_ \(\widehat{U}_{*}\) _is locally asymptotically stable with respect to (_1.8.b_) if_ \(\widehat{\alpha}_{l}<\alpha<\widehat{\alpha}_{r}\)_._ * _The constant steady state solution_ \(\widehat{U}_{*}\) _is unstable with respect to (_1.8.b_) if_ \(\alpha<\widehat{\alpha}_{l}\) _or_ \(\alpha>\widehat{\alpha}_{r}\)_._ The quantities \(\alpha_{n}\) (\(\widehat{\alpha}_{n}\)) defined in (1.13) ((1.15)) are the critical parameter values such that the stability of the spatially-constant steady state changes, and they are also bifurcation points for (1.8.a) ((1.8.b)) where spatially non-homogeneous steady state solutions bifurcate from the constant ones as found in the following Theorems. **Theorem 1.5**: _Assume that assumptions (H0)-(H2) are satisfied, and \(n\in\mathbb{N}\) such that \(C_{n}(G)\neq 0\). 
Then near \((\alpha,U)=(\alpha_{n},U_{*})\), problem (1.8.a) has a line of trivial solutions \(\Gamma_{0}:=\{(\alpha,U_{*}):\alpha\in\mathbb{R}\}\) and a family of non-constant steady state solutions bifurcating from \(\Gamma_{0}\) at \(\alpha=\alpha_{n}\) in the form_ \[\Gamma_{n}:=\{(\alpha_{n}(s),u_{n}(s,\cdot),k_{n}(s,\cdot)):-\delta<s<\delta\} \tag{1.17}\] _with_ \[\begin{cases}\alpha_{n}(s)=\alpha_{n}+\alpha_{n}^{\prime}(0)s+o(s),\\ u_{n}(s,x)=1+s\cos\left(\frac{n\pi}{L}x\right)+s^{2}z_{1n}(s,x),\\ k_{n}(s,x)=\frac{\rho}{\mu+\beta}+s\frac{g^{\prime}(1)(\mu+\beta)-\beta\rho}{(\mu+\beta)^{2}}\cos\left(\frac{n\pi}{L}x\right)+s^{2}z_{2n}(s,x),\end{cases} \tag{1.18}\] _where \(z_{n}(s)=(z_{1n}(s,\cdot),z_{2n}(s,\cdot))\) satisfies \(\underset{s\to 0}{\lim}\|z_{n}(s)\|=0\). Moreover the set of steady state solutions of (1.8.a) near \((\alpha_{n},U_{*})\) consists precisely of the curves \(\Gamma_{0}\) and \(\Gamma_{n}\)._ **Theorem 1.6**: _Assume that assumptions (H0)-(H2) are satisfied, and \(n\in\mathbb{N}\) such that \(C_{n}(G)\neq 0\). Then near \((\widehat{\alpha},\widehat{U})=(\widehat{\alpha}_{n},\widehat{U}_{*})\), equation (1.8.b) has a line of trivial solutions \(\widehat{\Gamma}_{0}:=\{(\alpha,\widehat{U}_{*}):\alpha\in\mathbb{R}\}\) and a family of non-constant steady state solutions bifurcating from \(\widehat{\Gamma}_{0}\) at \(\alpha=\widehat{\alpha}_{n}\) in the form_ \[\widehat{\Gamma}_{n}:=\{(\widehat{\alpha}_{n}(s),\widehat{u}_{n}(s,\cdot),\widehat{k}_{n}(s,\cdot)):-\delta<s<\delta\} \tag{1.19}\] _with_ \[\begin{cases}\widehat{\alpha}_{n}(s)=\widehat{\alpha}_{n}+\widehat{\alpha}_{n}^{\prime}(0)s+o(s),\\ \widehat{u}_{n}(s,x)=1+s\cos\Big{(}\frac{n\pi}{L}x\Big{)}+s^{2}\widehat{z}_{1n}(s,x),\\ \widehat{k}_{n}(s,x)=\frac{\rho\kappa}{\rho+\mu+\beta}+\kappa s\frac{g^{\prime}(1)(\mu+\beta)-\beta\rho}{(\mu+\beta+\rho)^{2}}\cos\Big{(}\frac{n\pi}{L}x\Big{)}+s^{2}\widehat{z}_{2n}(s,x),\end{cases} \tag{1.20}\] _where \(\widehat{z}_{n}(s)=(\widehat{z}_{1n}(s,\cdot),\widehat{z}_{2n}(s,\cdot))\) satisfies \(\lim\limits_{s\to 0}\lVert\widehat{z}_{n}(s)\rVert=0\). Moreover the set of steady state solutions of (1.8.b) near \((\widehat{\alpha}_{n},\widehat{U}_{*})\) consists precisely of the curves \(\widehat{\Gamma}_{0}\) and \(\widehat{\Gamma}_{n}\)._ We also classify the nature of the bifurcation at these critical values, see Theorems 4.4 and 4.5. Together, these results show that the central quantity governing spontaneous pattern formation is the advective strength towards or away from memorised areas, encapsulated in \(\alpha\). Since \(\alpha_{l}<0<\alpha_{r}\) holds necessarily from the assumptions made, the key driver of pattern formation is that \(\alpha\) is of sufficient magnitude. If \(\alpha\) is negative, then we have attraction towards remembered areas, similar to many nonlocal models of biological aggregation (e.g. Carrillo et al. (2019)). Some examples are found in Figures 2, 5 and 8. On the flip-side, positive \(\alpha\) indicates repulsion from remembered areas and leads to patterns such as in Figures 3, 6 and 9. Interestingly, there is a lack of symmetry in the sense that \(-\alpha_{l}\neq\alpha_{r}\) in general. In fact, \(|\alpha_{l}|\) and \(\alpha_{r}\) do not even remain ordered! This can be seen in Figures 1, 4 and 7. Moreover, Theorems 1.5 and 1.6 show that, close to the bifurcation point, the steady state consists of a single cosine wave, and Theorems 4.4 and 4.5 give conditions on its stability. A short numerical computation of the thresholds \(\alpha_{l}\) and \(\alpha_{r}\) for the top-hat kernel is sketched below.
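The sketch below is our own Python code with illustrative parameter choices (logistic \(f\) and \(g(u)=\rho u\), so \(f^{\prime}(1)=-1\) and \(g^{\prime}(1)=\rho\)); it evaluates \(\alpha_{n}\) from (1.13) using \(C_{n}(G)=\sin(n\pi R/L)/(n\pi R/L)\), which follows from a direct computation of (1.12) for the top-hat kernel rescaled per (H0), and returns \(\alpha_{l}\) and \(\alpha_{r}\) from (1.14).

```python
import numpy as np

def alpha_thresholds(L=5.0, R=1.0, d=1.0, rho=1.0, mu=1.0, beta=0.0,
                     fp1=-1.0, gp1=1.0, n_max=200):
    """Critical advection strengths (alpha_l, alpha_r) from (1.13)-(1.14).

    C_n(G) = sin(n pi R / L)/(n pi R / L) for the (H0)-scaled top-hat kernel;
    fp1 = f'(1) and gp1 = g'(1) (defaults: logistic f and g(u) = rho*u).
    """
    n = np.arange(1, n_max + 1)
    Cn = np.sinc(n * R / L)          # numpy sinc(x) = sin(pi x)/(pi x)
    ln = (n * np.pi / L) ** 2        # eigenvalues l_n from (1.9)
    alpha_n = (-(mu + beta) ** 2 * (d - fp1 / ln)
               / ((gp1 * (mu + beta) - beta * rho) * Cn))
    alpha_n = alpha_n[Cn != 0.0]     # discard modes with C_n(G) = 0
    return alpha_n[alpha_n < 0].max(), alpha_n[alpha_n > 0].min()

# Patterns emerge once alpha lies outside the window (alpha_l, alpha_r).
print(alpha_thresholds())
```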
An example of the stable case is shown in Figures 5-6, where a cosine wave emerges just beyond the bifurcation point, but diverges from this description as the bifurcation parameter is increased further. For the unstable case we do not see the small-amplitude cosine wave at all as the bifurcation threshold is crossed. Rather, the solution jumps to a higher amplitude pattern as found in Figure 3. ## 2 Well-posedness In this section we prove the existence of a unique global solution to a problem more general than system (1.8.a) or (1.8.b) for detection functions \(G(\cdot)\) satisfying (H0) and (H3), which includes the top-hat kernel. The restrictions on the growth terms \(f(\cdot)\) and \(g(\cdot)\) are compatible with the choices made in Section 5. The challenge is in the treatment of the (potentially) discontinuous kernel appearing inside the nonlocal advection term. To overcome these difficulties, we abuse a useful 'embedding' property of the top-hat kernel in one spatial dimension. This allows one to obtain _a priori_ estimates on the solution \(k(x,t)\) from which we obtain appropriate uniform bounds on \(u(x,t)\) and higher derivatives. We begin with some preliminary estimates. ### Preliminary Estimates In general, we assume \(G(\cdot)\) satisfies (H3); we first show this holds for the top-hat kernel. **Lemma 2.1**: _Let \(1\leq p\leq\infty\) and fix \(T>0\). Suppose \(z(\cdot,t)\in L^{p}(\Omega)\) is periodic in \(\Omega\) for almost every \(t\in[0,T]\). Denote by \(\overline{z}_{x}(\cdot,t)\) the spatial convolution (1.4) of \(z_{x}\) with the top-hat detection function (1.7). Then, for almost every \(t\in[0,T]\) there holds_ \[\left\lVert\overline{z}_{x}(\cdot,t)\right\rVert_{L^{p}(\Omega)}\leq\frac{1}{ R}\left\lVert z(\cdot,t)\right\rVert_{L^{p}(\Omega)}. \tag{2.1}\] _In particular, we have that_ \[\operatorname*{ess\,sup}_{t\in(0,T)}\left\lVert\overline{z}_{x}( \cdot,t)\right\rVert_{L^{p}(\Omega)}\leq R^{-1}\operatorname*{ess\,sup}_{t\in (0,T)}\left\lVert z(\cdot,t)\right\rVert_{L^{p}(\Omega)}, \tag{2.2}\] _for any \(T>0\) fixed._ **Remark 2.2**: _If \(z(\cdot,t)\) is continuous, we may replace "almost every" with "every" and \(\operatorname*{ess\,sup}\) with \(\sup\). This will be the case in the forthcoming results._ **Proof 1**: _First, we drop the dependence on \(t\) for notational brevity. The result essentially follows from an elementary inequality and the fact that_ \[\overline{z}_{x}=\frac{z(x+R)-z(x-R)}{2R}\] _when \(G(\cdot)\) is the top hat detection function. Consequently,_ \[\|\overline{z}_{x}\|_{L^{p}(\Omega)}^{p} =\frac{1}{(2R)^{p}}\int_{\Omega}|z(x+R)-z(x-R)|^{p}\,\mathrm{d}x\] \[\leq\frac{2^{p-1}}{(2R)^{p}}\int_{\Omega}\left(|z(x+R)|^{p}+|z(x- R)|^{p}\right)\mathrm{d}x\leq\frac{1}{R^{p}}\,\|z\|_{L^{p}(\Omega)}^{p}\,, \tag{2.3}\] _where we have used the periodicity of \(z\) in \(\Omega\). This proves (2.1). Taking the \(p^{th}\) roots of both sides followed by the supremum over \(t\in(0,T)\) yields (2.2)._ **Remark 2.3**: _The same \(L^{p}\)-type estimate holds for the Gaussian and exponential detection functions (1.5)-(1.6) as well. These cases are easier since the kernels themselves are appropriately differentiable and bounded. We omit the details._ Next, we obtain \(L^{p}(\Omega)\) bounds on a function \(k(x,t)\) when \(k\) solves a linear, first order differential equation for each \(x\in\Omega\). 
**Lemma 2.4**: _Let \(0\lessneq w(x,t)\in C^{1,1}(\overline{Q}_{T})\cap L^{1,1}(Q_{T})\) be periodic in \(\Omega\) for all \(t\in(0,T)\) and assume \(1<p\leq\infty\). For each \(x\in\Omega\), let \(k(x,\cdot)\) solve the ordinary differential equation_ \[\frac{dk}{dt}=g_{1}(w)-g_{2}(w)k \tag{2.4}\] _where \(g_{1},g_{2}\in C^{1}(\mathbb{R}^{+})\) are nonnegative and \(k(x,0)=k_{0}(x)\in W^{1,2}(\Omega)\). Then, if there exists \(M>0\) such that_ \[g_{1}(z)\leq Mg_{2}(z)\quad\text{ for all }z\geq 0, \tag{2.5}\] _there holds_ \[\sup_{t\in[0,T]}\|k(\cdot,t)\|_{L^{\infty}(\Omega)}\leq M+\|k_{0}\|_{L^{\infty}(\Omega)} \tag{2.6}\] **Proof 2**: _First, note that by solving the differential equation directly, \(k(x,\cdot)\in C^{1}([0,T])\). By the smoothness of \(g_{i}(\cdot)\), \(i=1,2\) and the boundedness of \(w(x,t)\) in \(Q_{T}\), \(k(\cdot,t)\in L^{p}(\Omega)\) for any \(p>1\), for all \(t\in(0,T)\). Taking the time derivative of \(\frac{1}{p}\,\|k(\cdot,t)\|_{L^{p}(\Omega)}^{p}\) gives_ \[\frac{1}{p}\frac{d}{dt}\int_{\Omega}k^{p}\mathrm{d}x=\int_{\Omega}k^{p-1}g_{1}(w)\mathrm{d}x-\int_{\Omega}k^{p}g_{2}(w)\mathrm{d}x. \tag{2.7}\] _We now apply Young's inequality. To this end, we carefully rewrite as_ \[k^{p-1}g_{1}(w) =k^{p-1}g_{2}(w)^{(p-1)/p}\cdot\frac{g_{1}(w)}{g_{2}(w)^{(p-1)/p}}\] \[\leq\frac{1}{p_{1}}\left(k^{p-1}g_{2}(w)^{(p-1)/p}\right)^{p_{1}}+\frac{1}{q_{1}}\left(\frac{g_{1}(w)}{g_{2}(w)^{(p-1)/p}}\right)^{q_{1}}, \tag{2.8}\] _where \(p_{1},q_{1}>1\) satisfy \(p_{1}^{-1}+q_{1}^{-1}=1\). Choosing \(p_{1}=p/(p-1)\) and \(q_{1}=p\), equation (2.7) then becomes_ \[\frac{d}{dt}\int_{\Omega}k^{p}\mathrm{d}x\leq\int_{\Omega}\left(\frac{g_{1}(w)}{g_{2}(w)}\right)^{p}g_{2}(w)\mathrm{d}x \tag{2.9}\] _Using bound (2.5) we then have_ \[\frac{d}{dt}\int_{\Omega}k^{p}\mathrm{d}x\leq M^{p}\left\|g_{2}(w(\cdot,t))\right\|_{L^{1}(\Omega)}. \tag{2.10}\] _Integrating both sides from \(0\) to \(t\) yields_ \[\|k(\cdot,t)\|_{L^{p}(\Omega)}^{p}\leq M^{p}\left\|g_{2}(w)\right\|_{L^{1,1}(Q_{T})}+\|k_{0}\|_{L^{p}(\Omega)}^{p}\,. \tag{2.11}\] _Taking \(p^{th}\) roots of both sides and sending \(p\to\infty\) leaves_ \[\|k(\cdot,t)\|_{L^{\infty}(\Omega)}\leq M+\|k_{0}\|_{L^{\infty}(\Omega)}\,. \tag{2.12}\] _Taking the supremum over \(t\in(0,T)\) yields (2.6), completing the proof._ We also highlight the following properties of \(k(x,t)\) inherited by the function \(w(x,t)\), a simple consequence of solving the ordinary differential equation. **Proposition 2.5**: _Suppose \(k(x,\cdot)\) solves (2.4) with \(w\in C^{1,1}(\overline{Q}_{T})\) periodic in \(\Omega\). Then, \(k(x,t)\in C^{1,2}(\overline{Q}_{T})\). Moreover, \(k(\cdot,t)\) is periodic in \(\Omega\) for all \(t>0\)._ Finally, we obtain \(L^{p}\) estimates on the time/space derivatives \(k_{t}\) and \(k_{x}\). **Theorem 2.6**: _Assume the same conditions as in Lemma 2.4 hold. Assume also that there exists \(N>0\) and \(q\geq 1\) fixed so that_ \[g_{2}(z)\leq N(1+z^{q})\quad\text{for all }z\geq 0. \tag{2.13}\] _Assume in addition that \(w_{x}\in L^{2,2}(Q_{T})\) and \(\sup_{t\in(0,T)}\|w(\cdot,t)\|_{L^{p}(\Omega)}<\infty\) for all \(p\geq 1\). Then there holds_ \[\sup_{t\in(0,T)}\|k_{t}(\cdot,t)\|_{L^{p}(\Omega)}\leq 4N\left(M+\sup_{t\in(0,T)}\|k(\cdot,t)\|_{L^{\infty}(\Omega)}\right)\left(|\Omega|^{1/p}+\sup_{t\in(0,T)}\|w(\cdot,t)\|_{L^{pq}(\Omega)}^{q}\right) \tag{2.14}\] _for any \(p\in(1,\infty)\). 
Moreover, if for some \(\tilde{N}>0\), \(\tilde{q}\geq 0\) fixed we have that_ \[|g_{1}^{\prime}(z)|,|g_{2}^{\prime}(z)|\leq\tilde{N}(1+z^{\tilde{q}})\quad \text{for all }z\geq 0, \tag{2.15}\] _then there exists a constant \(C>0\) depending only on \(T\), \(\tilde{N}\), and \(\tilde{q}\) so that_ \[\sup_{t\in(0,T)}\|k_{x}(\cdot,t)\|_{L^{p}(\Omega)}\leq C\sup_{t\in(0,T)}\left\| (1+w^{\tilde{q}})(\cdot,t)\right\|_{L^{2p/(2-p)}(\Omega)}\|w_{x}\|_{L^{2,2}(Q_ {T})}+\|(k_{0})_{x}\|_{L^{p}(\Omega)}\,, \tag{2.16}\] _for any \(p\in(1,2)\) whenever \(\tilde{q}>0\), and any \(p\in(1,2]\) whenever \(\tilde{q}=0\)._ **Proof 3**: _Integrating \(|k_{t}|^{p}\) over \(\Omega\), applying an elementary inequality, and using (2.5) yields_ \[\int_{\Omega}|k_{t}|^{p}\,\mathrm{d}x =\int_{\Omega}|g_{1}(w)-g_{2}(w)k|^{p}\,\mathrm{d}x\] \[\leq 2^{p-1}\int_{\Omega}\left(|g_{1}(w)|^{p}+|g_{2}(w)|^{p}\,|k| ^{p}\right)\mathrm{d}x.\] \[\leq 2^{p-1}\int_{\Omega}\left(M^{p}+|k|^{p}\right)|g_{2}(w)|^{p} \,\mathrm{d}x \tag{2.17}\] _Estimating further, we use the bound on \(k\) along with bound (2.13) and the same elementary inequality to see that_ \[\int_{\Omega}|k_{t}|^{p}\,\mathrm{d}x \leq 2^{p-1}\left(M^{p}+\|k(\cdot,t)\|_{L^{\infty}(\Omega)}^{p} \right)\int_{\Omega}|g_{2}(w)|^{p}\,\mathrm{d}x\] \[\leq 4^{p-1}N^{p}\left(M^{p}+\|k(\cdot,t)\|_{L^{\infty}(\Omega)}^{ p}\right)\int_{\Omega}\left(1+w^{pq}\right)\mathrm{d}x\] \[\leq 4^{p}N^{p}\left(M^{p}+\|k(\cdot,t)\|_{L^{\infty}(\Omega)}^{ p}\right)\left(|\Omega|+\|w(\cdot,t)\|_{L^{pq}(\Omega)}^{pq}\right).\] _Thus, we take the \(p^{th}\) roots of both sides followed by the supremum over \(t\in(0,T)\) to obtain (2.14). This completes the first part of the proof._ _Next, we obtain \(L^{p}\) bounds on \(k_{x}\) for any \(p\in(1,2)\). Solving the ordinary differential equation, we may compute \(k_{x}(x,t)\) as follows:_ \[k_{x}(x,t) =\frac{\partial}{\partial x}\left(\int_{0}^{t}e^{-\int_{s}^{t}g_{ 2}(w)\mathrm{d}\xi}g_{1}(w)\mathrm{d}s+k_{0}(x)\right)\] \[=\int_{0}^{t}e^{-\int_{s}^{t}g_{2}(w)\mathrm{d}\xi}\left(g_{1}^{ \prime}(w)w_{x}-\int_{s}^{t}g_{2}^{\prime}(w)w_{x}d\xi\right)\mathrm{d}s+(k_ {0})_{x}. \tag{2.18}\] _Therefore, estimating crudely and using bound (2.15) there holds_ \[|k_{x}| \leq\int_{0}^{T}\left(|g_{1}^{\prime}(w)|\,|w_{x}|+\int_{0}^{T}|g_ {2}^{\prime}(w)|\,|w_{x}|\,\mathrm{d}\xi\right)\mathrm{d}s+|(k_{0})_{x}|\] \[\leq\int_{0}^{T}\left(|g_{1}^{\prime}(w)|+T\,|g_{2}^{\prime}(w)| \right)|w_{x}|\,\mathrm{d}s+|(k_{0})_{x}|\] \[\leq\tilde{N}(1+T)\int_{0}^{T}\left(1+w^{\tilde{q}}\right)|w_{x}| \,\mathrm{d}s+|(k_{0})_{x}|\,. \tag{2.19}\] Raising both sides to the power \(p\), integrating over \(\Omega\) and estimating via an elementary application of Holder's inequality in the temporal domain yields \[\int_{\Omega}\left|k_{x}\right|^{p}\mathrm{d}x \leq\tilde{N}^{p}(1+T)^{p}\int_{\Omega}\left(\int_{0}^{T}(1+w^{ \tilde{q}})\left|w_{x}\right|\mathrm{d}s\right)^{p}\mathrm{d}x+\left\|(k_{0})_{ x}\right\|_{L^{p}(\Omega)}^{p}\] \[\leq\tilde{N}^{p}(1+T)^{p}T^{p-1}\iint_{Q_{T}}(1+w^{\tilde{q}})^{ p}\left|w_{x}\right|^{p}\mathrm{d}x\mathrm{d}s+\left\|(k_{0})_{x}\right\|_{L^{p}( \Omega)}^{p}. \tag{2.20}\] We now apply Holder's inequality in the spatial domain as follows: \[\int_{\Omega}(1+w^{\tilde{q}})^{p}\left|w_{x}\right|^{p}\mathrm{d}x\leq\left( \int_{\Omega}\left|w_{x}\right|^{pp_{1}}\mathrm{d}x\right)^{1/p_{1}}\left( \int_{\Omega}(1+w^{\tilde{q}})^{pq_{1}}\mathrm{d}x\right)^{1/q_{1}},\] where we again choose \(p_{1}=2/p>1\) so that \(q_{1}=2/(2-p)>1\). 
Simplifying and taking the supremum over \(t\in(0,T)\) for the lower order term, we find \[\int_{\Omega}(1+w^{\tilde{q}})^{p}\left|w_{x}\right|^{p}\mathrm{d}x\leq\left\| w_{x}(\cdot,t)\right\|_{L^{2}(\Omega)}^{p}\sup_{t\in(0,T)}\left\|(1+w^{\tilde{q}})( \cdot,t)\right\|_{L^{2p/(2-p)}(\Omega)}^{p},\] so that (2.20) becomes \[\int_{\Omega}\left|k_{x}\right|^{p}\mathrm{d}x\leq\tilde{N}^{p}(1+T)^{p}T^{p- 1}\sup_{t\in(0,T)}\left\|(1+w^{\tilde{q}})(\cdot,t)\right\|_{L^{2p/(2-p)}( \Omega)}^{p}\int_{0}^{T}\left\|w_{x}(\cdot,s)\right\|_{L^{2}(\Omega)}^{p} \mathrm{d}s+\left\|(k_{0})_{x}\right\|_{L^{p}(\Omega)}^{p}. \tag{2.21}\] Finally, we apply Hölder's inequality in the temporal domain once more as follows: \[\int_{0}^{T}\left\|w_{x}(\cdot,s)\right\|_{L^{2}(\Omega)}^{p}\mathrm{d}s\leq T ^{1-p/2}\left\|w_{x}\right\|_{L^{2,2}(Q_{T})}^{p},\] whence (2.21) becomes \[\int_{\Omega}\left|k_{x}\right|^{p}\mathrm{d}x\leq\tilde{N}^{p}(1+T)^{p}T^{p/ 2}\sup_{t\in(0,T)}\left\|(1+w^{\tilde{q}})(\cdot,t)\right\|_{L^{2p/(2-p)}( \Omega)}^{p}\left\|w_{x}\right\|_{L^{2,2}(Q_{T})}^{p}+\left\|(k_{0})_{x}\right\| _{L^{p}(\Omega)}^{p}. \tag{2.22}\] Taking the \(p^{th}\) roots of both sides followed by the supremum over \(t\in(0,T)\) yields (2.16), valid for any \(p\in(1,2)\). Finally, if \(\tilde{q}=0\), the dependence on \(w^{\tilde{q}}\) vanishes and the bound holds for \(p=2\), completing the proof. ### Existence & Uniqueness We are now prepared to use these preliminary estimates to prove the existence of a weak solution. Much of the heavy lifting is now complete. What remains is to construct an appropriate sequence of approximate solutions and use our previously obtained estimates to extract a convergent subsequence. **Proof 4** (Proof of Theorem 1.1): _We prove the existence of a global weak solution to the following general system subject to periodic boundary conditions:_ \[\begin{cases}u_{t}=du_{xx}+\alpha(u\overline{k}_{x})_{x}+f(u),&\text{ in }Q_{T},\\ k_{t}=g_{1}(u)-g_{2}(u)k&\text{ in }Q_{T},\end{cases} \tag{2.23}\] _where systems (1.8.a) and (1.8.b) are obtained by choosing \(g_{1}(u):=g(u)\), \(g_{2}(u):=\mu+\beta u\text{ or }g_{1}(u):=\kappa g(u)\), \(g_{2}(u):=\mu+\beta u+g(u)\), respectively. First we construct a sequence of approximate solutions via the following iteration scheme:_ \[\begin{cases}(u_{n})_{t}=d(u_{n})_{xx}+\alpha(u_{n}(\overline{k_{n}})_{x})_{x} +f(u_{n}),&\text{ in }Q_{T},\\ (k_{n})_{t}=g_{1}(u_{n-1})-g_{2}(u_{n-1})k_{n}&\text{ in }Q_{T},\end{cases} \tag{2.24}\] _for \(n\geq 2\), where we choose \((u_{n}(x,0),k_{n}(x,0))=(u_{0}(x),k_{0}(x))\) for each \(n\). Note carefully that \(u_{n}=u_{n}(x,t)\) is a function defined in \(\overline{Q}_{T}\) for all \(n\geq 1\), whereas \(u_{0}=u_{0}(x)\) denotes the fixed initial data of the original problem. The same holds for \(\{k_{n}\}_{n\geq 2}\), each of which is defined over \(\overline{Q}_{T}\); notice also that we do not refer to \(k_{1}(x,t)\), as we require only \(u_{1}(x,t)\) to initiate the scheme._ _Through this construction, we generate a sequence of solutions \(\{(u_{n},k_{n})\}_{n\geq 2}\). More precisely, we choose the initial iterate \(0<u_{1}(x,t)\in C^{2+\sigma,1+\sigma/2}(\overline{Q}_{T})\) for some \(\sigma\in(0,1)\). By solving the differential equation for \(k_{2}(x,t)\), the dependence of \(k_{2}(x,t)\) on the sufficiently regular functions \(u_{1}(x,t)\) and \(g_{i}(\cdot)\), \(i=1,2\), ensures that \(k_{2}\in C^{2+\sigma,1+\sigma/2}(\overline{Q}_{T})\) for some (possibly smaller) \(\sigma\in(0,1)\) as well._
_Furthermore, the positivity of \(u_{1}\) and \(u_{1}(x,0)=u_{0}(x)\) ensures that \(k_{2}>0\) in \(Q_{T}\). Then, the existence of a nonnegative, nontrivial solution \(u_{2}(x,t)\in C^{2+\sigma,1+\sigma/2}(\overline{Q}_{T})\) follows from the classical theory of parabolic equations, since the equation is a second order, semi-linear parabolic equation with Hölder continuous coefficients Ladyzhenskaya [1968]. Therefore, there exists a nonnegative, nontrivial classical solution pair \((u_{2},k_{2})\), each component belonging to \(C^{2+\sigma,1+\sigma/2}(\overline{Q}_{T})\) for some \(\sigma\in(0,1)\). One may then proceed inductively, proving the existence of a nonnegative, nontrivial classical solution pair \((u_{n},k_{n})\) for any \(n\geq 3\) using the regularity of the previous iterate \(u_{n-1}(x,t)\). We now seek uniform bounds in a weaker setting._ _To this end, fix \(n\geq 2\). It is easy to obtain \(L^{1}\)-bounds on \(u_{n}\) as follows:_ \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}u_{n}\mathrm{d}x=\int_{\Omega}f(u_ {n})\mathrm{d}x\leq f^{\prime}(0)\int_{\Omega}u_{n}\mathrm{d}x, \tag{2.25}\] _where we have integrated by parts, applied the boundary conditions, and used the assumed bound \(f(z)\leq f^{\prime}(0)z\) for all \(z\geq 0\). Gronwall's inequality implies that_ \[\left\|u_{n}(\cdot,t)\right\|_{L^{1}(\Omega)}\leq e^{f^{\prime}(0)T}\left\|u_ {0}\right\|_{L^{1}(\Omega)}.\] _Thus, integrating over \((0,T)\) yields_ \[\left\|u_{n}\right\|_{L^{1,1}(Q_{T})}\leq Te^{f^{\prime}(0)T}\left\|u_{0} \right\|_{L^{1}(\Omega)}, \tag{2.26}\] _and so \(\{u_{n}\}_{n\geq 1}\) is uniformly bounded in \(L^{1,1}(Q_{T})\) for any \(T>0\) fixed._ _Return now to the equation for \(k_{n}\). The smoothness of the iterates \(u_{n}\) allows us to apply Lemma 2.4, giving us_ \[\sup_{t\in(0,T)}\left\|k_{n}(\cdot,t)\right\|_{L^{\infty}(\Omega)}\leq M+ \left\|k_{0}\right\|_{L^{\infty}(\Omega)}. \tag{2.27}\] _Note that \(k_{0}\in W^{1,2}(\Omega)\Rightarrow k_{0}\in L^{\infty}(\Omega)\) by the Sobolev embedding. Then, Lemma 2.1 paired with estimate (2.27) implies that_ \[\sup_{t\in(0,T)}\left\|(\overline{k_{n}})_{x}(\cdot,t)\right\|_{L ^{\infty}(\Omega)} \leq R^{-1}\sup_{t\in(0,T)}\left\|k_{n}(\cdot,t)\right\|_{L^{ \infty}(\Omega)}\] \[\leq R^{-1}\left(M+\left\|k_{0}\right\|_{L^{\infty}(\Omega)} \right)=:C_{1}. \tag{2.28}\] _Now we seek \(L^{p}\)-bounds on the iterates \(u_{n}\). Fix \(p\geq 2\). Taking the time derivative of \(\frac{1}{p}\left\|u_{n}(\cdot,t)\right\|_{L^{p}(\Omega)}^{p}\), integrating by parts and using the bound for \(f(\cdot)\) yields_ \[\frac{1}{p}\frac{d}{dt}\int_{\Omega}u_{n}^{p}\mathrm{d}x =\int_{\Omega}u_{n}^{p-1}(d(u_{n})_{x}+\alpha u_{n}(\overline{k_{ n}})_{x})_{x}\mathrm{d}x+\int_{\Omega}u_{n}^{p-1}f(u_{n})\mathrm{d}x\] \[\leq f^{\prime}(0)\int_{\Omega}u_{n}^{p}\mathrm{d}x-d(p-1)\int_{ \Omega}u_{n}^{p-2}\left|(u_{n})_{x}\right|^{2}\mathrm{d}x\] \[+\left|\alpha\right|(p-1)\int_{\Omega}u_{n}^{p-1}\left|(u_{n})_{x }\right|\left|(\overline{k_{n}})_{x}\right|\mathrm{d}x. \tag{2.29}\] _We now use (2.28) and Cauchy's inequality with epsilon to control the third term by the second term on the right hand side of (2.29)._
To this end, we estimate_ \[\left|\alpha\right|u_{n}^{p-1}\left|(u_{n})_{x}\right|\left|( \overline{k}_{n})_{x}\right| \leq\left|\alpha\right|C_{1}u_{n}^{(p-2)/2}\left|(u_{n})_{x} \right|\left|u_{n}\right|^{p/2}\] \[\leq\left|\alpha\right|C_{1}\left(\frac{\varepsilon}{2}u_{n}^{p} +\frac{1}{2\varepsilon}u_{n}^{p-2}\left|(u_{n})_{x}\right|^{2}\right), \tag{2.30}\] _where we choose \(\varepsilon=d^{-1}\left|\alpha\right|C_{1}\). Paired with (2.29), this leaves_ \[\frac{1}{p}\frac{d}{dt}\int_{\Omega}u_{n}^{p}\mathrm{d}x \leq-\frac{d}{2}\int_{\Omega}u_{n}^{p-2}\left|(u_{n})_{x}\right|^{2} \mathrm{d}x+(f^{\prime}(0)+C_{1}^{2}\alpha^{2}d^{-1}(p-1))\int_{\Omega}u_{n}^ {p}\mathrm{d}x. \tag{2.31}\] _Therefore, dropping the negative term and applying Gronwall's inequality yields_ \[\left\|u_{n}(\cdot,t)\right\|_{L^{p}(\Omega)}^{p}\leq e^{p(f^{\prime}(0)+C_{1} ^{2}\alpha^{2}d^{-1}(p-1))T}\left\|u_{0}\right\|_{L^{p}(\Omega)}^{p}.\] _Taking \(p^{\text{th}}\) roots followed by the supremum over \(t\in(0,T)\) yields the estimate_ \[\sup_{t\in(0,T)}\left\|u_{n}(\cdot,t)\right\|_{L^{p}(\Omega)} \leq e^{(f^{\prime}(0)+C_{1}^{2}\alpha^{2}d^{-1}(p-1))T}\left\|u_{0} \right\|_{L^{p}(\Omega)}\] \[=:C_{2}, \tag{2.32}\] _noting that the exponent depends critically on \(p\). Next, we return to (2.31) for the case \(p=2\). Upon rearrangement, we apply estimate (2.32) to obtain_ \[\frac{1}{2}\frac{d}{dt}\int_{\Omega}u_{n}^{2}\mathrm{d}x+\frac{d }{2}\int_{\Omega}\left|(u_{n})_{x}\right|^{2}\mathrm{d}x \leq\left(f^{\prime}(0)+C_{1}^{2}\alpha^{2}d^{-1}\right)\int_{ \Omega}u_{n}^{2}\mathrm{d}x\] \[\leq C_{2}^{2}\left(f^{\prime}(0)+C_{1}^{2}\alpha^{2}d^{-1}\right)\] \[=:C_{3}. \tag{2.33}\] _Integrating both sides from \(0\) to \(T\) yields_ \[\frac{1}{2}\left(\left\|u_{n}(\cdot,T)\right\|_{L^{2}(\Omega)}^{2}-\left\|u_{ 0}\right\|_{L^{2}(\Omega)}^{2}+d\left\|(u_{n})_{x}\right\|_{L^{2,2}(Q_{T})}^{2 }\right)\leq C_{3}T. \tag{2.34}\] _Discarding the positive term on the left hand side, we extract the desired estimate for \((u_{n})_{x}\):_ \[\left\|(u_{n})_{x}\right\|_{L^{2,2}(Q_{T})}^{2}\leq d^{-1}\left(2C_{3}T+\left\| u_{0}\right\|_{L^{2}(\Omega)}^{2}\right)=:C_{4}^{2}. \tag{2.35}\] _We now immediately have the boundedness of \((k_{n})_{x}\) and \((k_{n})_{t}\) for any \(n\geq 2\) in some \(L^{p}\) spaces. Indeed, by Theorem 2.6, estimates (2.27), (2.32) and (2.35) imply the existence of a constant \(C_{5}\), independent of \(n\), such that_ \[\sup_{t\in(0,T)}\left\|(k_{n})_{t}(\cdot,t)\right\|_{L^{p}(\Omega)},\sup_{t\in (0,T)}\left\|(k_{n})_{x}(\cdot,t)\right\|_{L^{p}(\Omega)}\leq C_{5}, \tag{2.36}\] _for any \(p\in(1,2)\)._ _We now appeal to standard \(L^{p}\)-estimates for parabolic equations and the Sobolev embedding to improve our estimates on \(u_{n}\). If we expand the equation for \(u_{n}\) it reads_ \[(u_{n})_{t}-d(u_{n})_{xx}=\alpha\left(\overline{(k_{n})}_{x}(u_{n})_{x}+u_{n} \overline{(k_{n})}_{xx}\right)+f(u_{n}).\] _Since \(f(z)\leq f^{\prime}(0)z\) and \(u_{n}\in L^{p,p}(Q_{T})\) for any \(p\geq 1\), we have \(f(u_{n})\in L^{p,p}(Q_{T})\) as well. Then, \(L^{p}\)-estimates for strong solutions (see, e.g., Wang [2003]) ensure that there holds_ \[\left\|u_{n}\right\|_{W^{2,1}_{r}(Q_{T})}\leq C\left(\left\|\overline{(k_{n})} _{x}(u_{n})_{x}\right\|_{L^{r}(Q_{T})}+\left\|u_{n}\overline{(k_{n})}_{xx} \right\|_{L^{r}(Q_{T})}+\left\|u_{n}\right\|_{L^{r}(Q_{T})}\right), \tag{2.37}\] _for some \(C>0\), for any \(r>1\)._
Choosing \(r\in(1,p)\), Hölder's inequality gives_ \[\left\|u_{n}\overline{(k_{n})}_{xx}\right\|_{L^{r}(Q_{T})}\leq\left\|u_{n} \right\|_{L^{pr/(p-r)}(Q_{T})}\left\|\overline{(k_{n})}_{xx}\right\|_{L^{p}(Q_{T})}. \tag{2.38}\] _By Lemma 2.1, the bound (2.36) and (2.32), we may further estimate as_ \[\left\|u_{n}\right\|_{L^{pr/(p-r)}(Q_{T})}\left\|\overline{(k_{n })}_{xx}\right\|_{L^{p}(Q_{T})} \leq R^{-1}C_{2}\sup_{t\in(0,T)}\left\|(k_{n})_{x}(\cdot,t)\right\| _{L^{p}(\Omega)}\] \[\leq R^{-1}C_{2}C_{5}=:C_{6}, \tag{2.39}\] _for any \(r\in(1,p)\), where \(C_{6}\) does not depend on \(n\). Similarly, there holds_ \[\left\|\overline{(k_{n})}_{x}(u_{n})_{x}\right\|_{L^{r}(Q_{T})}\leq C_{7},\] _where \(C_{7}\) does not depend on \(n\). Hence,_ \[\left\|u_{n}\right\|_{W^{2,1}_{r}(Q_{T})}\leq C(C_{2}+C_{6}+C_{7}),\] _and so \(\{u_{n}\}_{n\geq 2}\) is bounded in \(W^{2,1}_{r}(Q_{T})\) for any \(r\in(1,p)\). Since \(p\) can be chosen as close to \(2\) as we like, we choose \(r\in(\frac{3}{2},2)\) and apply the Sobolev embedding to conclude that in fact_ \[\left\|u_{n}\right\|_{C^{\sigma,\sigma/2}(\overline{Q}_{T})}\leq\tilde{C} \left\|u_{n}\right\|_{W^{2,1}_{r}(Q_{T})}\leq\tilde{C}C(C_{2}+C_{6}+C_{7}), \tag{2.40}\] _for any \(\sigma\in(0,\frac{1}{2})\), for some \(\tilde{C}>0\). In particular, \(u_{n}\) is uniformly bounded in \(\overline{Q}_{T}\), independent of \(n\)._ _Now we are ready to obtain bounds on the time derivative \((u_{n})_{t}\). While the previous step gives \(L^{p}\)-bounds on the time derivative for \(p\in(1,2)\) only, with a bit of extra work we can show that it also holds for \(p=2\). These estimates follow from standard arguments used in the development of the \(L^{2}\)-theory of parabolic equations (see, e.g., [Wu et al., 2006, Ch. 3.3]), using all previous bounds. We show the key details only._ _First, note that bound (2.16) in Theorem 2.6 paired with the uniform boundedness of the iterates \(\{u_{n}\}_{n\geq 2}\) over \(\overline{Q}_{T}\) obtained in (2.40) implies that in fact \(\{(k_{n})_{x}\}_{n\geq 2}\) is uniformly bounded in \(L^{2,2}(Q_{T})\). Multiplying the equation for \(u_{n}\) by \((u_{n})_{t}\) and integrating over \(\Omega\) gives_ \[\int_{\Omega}\left|(u_{n})_{t}\right|^{2}\mathrm{d}x=\int_{\Omega}(u_{n})_{t} \left((d(u_{n})_{x}+\alpha u_{n}\overline{(k_{n})}_{x})_{x}+f(u_{n})\right) \mathrm{d}x. \tag{2.41}\] _By the regularity of the iterates \((u_{n},k_{n})\) for fixed \(n\), we may exchange the order of differentiation and integrate by parts to obtain_ \[\int_{\Omega}\left|(u_{n})_{t}\right|^{2}\mathrm{d}x =-\frac{d}{2}\int_{\Omega}\left(\left|(u_{n})_{x}\right|^{2} \right)_{t}\mathrm{d}x+\int_{\Omega}(u_{n})_{t}f(u_{n})\mathrm{d}x\] \[+\alpha\int_{\Omega}(u_{n})_{t}\left(\left(u_{n}\right)_{x} \overline{(k_{n})}_{x}+u_{n}\overline{(k_{n})}_{xx}\right)\mathrm{d}x. \tag{2.42}\] _Integrating from \(0\) to \(T\) and dropping the negative term, we are left with_ \[\iint_{Q_{T}}\left|(u_{n})_{t}\right|^{2}\mathrm{d}x\mathrm{d}t \leq\frac{d}{2}\left\|(u_{0})_{x}\right\|_{L^{2}(\Omega)}^{2}+ \iint_{Q_{T}}(u_{n})_{t}f(u_{n})\mathrm{d}x\mathrm{d}t\] \[+\alpha\iint_{Q_{T}}(u_{n})_{t}\left((u_{n})_{x}\overline{(k_{n} )_{x}}+u_{n}\overline{(k_{n})_{xx}}\right)\mathrm{d}x\mathrm{d}t.
\tag{2.43}\] _We then estimate crudely as follows: since \(u_{n}\) and \(\overline{(k_{n})}_{x}\) are uniformly bounded in \(\overline{Q}_{T}\) by (2.40) and (2.28), and since \((u_{n})_{x}\) and \(\overline{(k_{n})}_{xx}\) are uniformly bounded in \(L^{2,2}(Q_{T})\) by (2.35) and the preceding arguments, \(f(u_{n}),\,(u_{n})_{x}\overline{(k_{n})}_{x}\) and \(u_{n}\overline{(k_{n})}_{xx}\) are all uniformly bounded in \(L^{2,2}(Q_{T})\). Hence, a simple application of Cauchy's inequality with epsilon yields the existence of a constant \(\tilde{C}^{\prime}>0\), independent of \(n\), such that_ \[\frac{1}{2}\iint_{Q_{T}}\left|(u_{n})_{t}\right|^{2}\mathrm{d}x\mathrm{d}t\leq \frac{d}{2}\left\|(u_{0})_{x}\right\|_{L^{2}(\Omega)}^{2}+\tilde{C}^{\prime}. \tag{2.44}\] _We now summarize the uniform estimates we have obtained and complete the limiting process._ \[u_{n} \in L^{\infty}(0,T;L^{p}(\Omega))\cap C^{\sigma,\sigma/2}( \overline{Q}_{T}),\] \[(u_{n})_{x} \in L^{2}(0,T;L^{2}(\Omega)),\quad(u_{n})_{t}\in L^{2}(0,T;L^{2}( \Omega)),\] \[k_{n} \in L^{\infty}(0,T;L^{\infty}(\Omega))\cap C^{\sigma,\sigma/2}( \overline{Q}_{T}),\] \[(k_{n})_{x} \in L^{2}(0,T;L^{2}(\Omega)),\quad(k_{n})_{t}\in L^{2}(0,T;L^{2}( \Omega)). \tag{2.45}\] _Hence, there exist a subsequence (which we do not relabel) and a limit function \((u_{\infty},k_{\infty})\) so that for any \(1\leq p\leq\infty\) and any \(0<\sigma^{\prime}<\sigma<1/2\), there holds_ \[u_{n}\to u_{\infty},\;k_{n}\to k_{\infty} \text{ strongly in }L^{p,p}(Q_{T})\cap C^{\sigma^{\prime},\sigma^{\prime}/2}( \overline{Q}_{T}),\] \[(u_{n})_{x}\to(u_{\infty})_{x} \text{ strongly in }L^{2,2}(Q_{T}),\] \[(u_{n})_{t}\to(u_{\infty})_{t} \text{ weakly in }L^{2,2}(Q_{T}),\] \[(k_{n})_{t}\to(k_{\infty})_{t},\;(k_{n})_{x}\to(k_{\infty})_{x} \text{ weakly in }L^{2,2}(Q_{T}). \tag{2.46}\] _It is not difficult to verify that \((u_{\infty},k_{\infty})\) is indeed a weak solution to the original problem (2.23) in the sense of (1.10)-(1.11) and satisfies the initial data in the classical sense. Since \(u_{n},k_{n}\) are nonnegative for all \(n\geq 2\), we find that \(0\leq u_{\infty},k_{\infty}\) in \(\overline{Q}_{T}\). Furthermore, the solution \(u_{\infty}\) is nontrivial since \(f^{\prime}(0)>0\), whence \(k_{\infty}\) is also nontrivial. We now write \((u,k)\) for the solution obtained._ _Uniqueness given initial data \((u_{0},k_{0})\) follows from standard arguments, using the fact that \(u\) and \(\overline{k}_{x}\) are uniformly bounded over \(Q_{T}\). Indeed, if there were two solution pairs \((u,k)\) and \((\tilde{u},\tilde{k})\) satisfying the same initial data, an application of Cauchy's inequality with epsilon paired with the uniform boundedness of the solutions over \(Q_{T}\), the smoothness of the functions \(g_{i}(\cdot)\), \(i=1,2\), \(f(\cdot)\) (Lipschitz continuity is sufficient), and the linearity of the spatial convolution operation yields_ \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\left((u-\tilde{u})^{2}+( k-\tilde{k})^{2}\right)\mathrm{d}x\leq C\int_{\Omega}\left((u-\tilde{u})^{2}+(k- \tilde{k})^{2}\right)\mathrm{d}x,\] _and so Gronwall's inequality implies that \(\left\|(u-\tilde{u})(\cdot,t)\right\|_{L^{2}(\Omega)}=\left\|(k-\tilde{k})( \cdot,t)\right\|_{L^{2}(\Omega)}=0\) for any \(t\in(0,T)\), and uniqueness is proved._ _Hence, for problem (1.8.a), there exists a unique, global weak solution in the sense of (1.10)-(1.11) so long as \(g(u)\) satisfies the bound_ \[g(z)\leq M(\mu+\beta z)\quad\forall z\geq 0,\] _for \(\mu,\ \beta\geq 0\) fixed, for some \(M>0\)._
For problem (1.8.b), \(g(z)\leq\mu+\beta z+g(z)\) holds trivially, and no further condition on \(g\) is required, concluding the proof. ## 3 Stability of spatially-constant steady states ### Spatially-constant steady states Under assumptions (H0)-(H2), system (1.8.a) has two constant steady-states \((0,0)\) and \((1,\dfrac{\rho}{\mu+\beta})\). For simplicity, denote \[h(u,k):=g(u)-(\mu+\beta u)k,\ U_{0}:=(0,0),\ U_{*}:= (1,\dfrac{\rho}{\mu+\beta}).\] The ODE kinetic system corresponding to (1.8.a) is given by \[\begin{cases}u^{\prime}=f(u),&t>0,\\ k^{\prime}=g(u)-(\mu+\beta u)k,&t>0.\end{cases} \tag{3.1}\] Then the Jacobian matrices \(J_{0}\) at \(U_{0}\) and \(J_{*}\) at \(U_{*}\) of (3.1) are given by \[J_{0}=\left(\begin{array}{cc}f_{u0}&0\\ 0&-\mu\end{array}\right),\ J_{*}=\left(\begin{array}{cc}f_{u*}&0\\ h_{u*}&h_{k*}\end{array}\right),\] where \[f_{u0}:= f^{\prime}(0)>0,\ \ h_{u*}:= h_{u}(U_{*})=\dfrac{g^{\prime}(1)(\mu+\beta)-\beta\rho}{\mu+ \beta}, \tag{3.2}\] \[f_{u*}:= f^{\prime}(1)<0,\ \ h_{k*}:= h_{k}(U_{*})=-(\mu+\beta)<0.\] Let \(Tr(J_{*})\) denote the trace of \(J_{*}\) and \(Det(J_{*})\) the determinant of \(J_{*}\). Then we have \[Tr(J_{*})=f_{u*}+h_{k*}<0,\ \ Det(J_{*})=f_{u*}h_{k*}>0. \tag{3.3}\] Hence \(U_{*}\) is a locally asymptotically stable steady state with respect to (3.1), and \(U_{0}\) is linearly unstable with respect to both (3.1) and (1.8.a). ### Linear stability analysis In Section 3.3 we will use spectral analysis to determine rigorously the regions of stability for the constant steady state. However, the formalism required for spectral analysis can obscure the central message. Therefore it is valuable first to perform a linear stability analysis using an ansatz. This gives a quick route to an answer that relies on said ansatz, which we then make rigorous via a more detailed spectral analysis. The ansatz we use is to assume that non-constant perturbations of the constant steady state have the following form at arbitrarily small times \[\tilde{u}=u_{0}\mathrm{e}^{\mathrm{i}q_{n}x+\lambda t},\quad\tilde{k}=k_{0} \mathrm{e}^{\mathrm{i}q_{n}x+\lambda t},\quad u\approx\tilde{u}+1,\quad k \approx\tilde{k}+\dfrac{\rho}{\mu+\beta}, \tag{3.4}\] where \(u_{0},k_{0},\lambda\in\mathbb{R}\) are constants and \(q_{n}=\sqrt{l_{n}}=n\pi/L\) for \(n\in\mathbb{N}\). These particular wavenumbers, \(q_{n}\), are chosen as they satisfy the periodic boundary conditions. Then, neglecting nonlinear terms and applying Fourier theory, the PDEs in system (1.8.a) become \[\lambda\left(\begin{array}{c}\tilde{u}\\ \tilde{k}\end{array}\right)=M_{n}\left(\begin{array}{c}\tilde{u}\\ \tilde{k}\end{array}\right), \tag{3.5}\] where \[M_{n}=\left(\begin{array}{cc}-l_{n}d+f^{\prime}(1)&-l_{n}\alpha C_{n}(G)\\ g^{\prime}(1)-\frac{\beta\rho}{\mu+\beta}&-\mu-\beta\end{array}\right), \tag{3.6}\] and \(C_{n}(G)\) (the Fourier coefficient of \(G\)) is defined in (1.12). Stability requires that the trace of \(M_{n}\) is negative and the determinant positive. For the determinant to be positive, we require \[\alpha C_{n}(G)[g^{\prime}(1)(\mu+\beta)-\beta\rho]+(\mu+\beta)^{2}\left(d- \frac{f^{\prime}(1)}{l_{n}}\right)>0. \tag{3.7}\] Condition (3.7) holds when \(\alpha C_{n}(G)\) is small or zero, since \(f^{\prime}(1)<0\) by (H0); in particular, (3.7) is true for all \(n\) if \(\alpha\) (positive or negative) is sufficiently close to \(0\). For the trace to be negative, we require \[f^{\prime}(1)<l_{n}d+\mu+\beta, \tag{3.8}\] which is always true, as the right-hand side is positive and \(f^{\prime}(1)<0\) by (H0).
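The sign conditions on the trace and determinant of \(M_{n}\) are also easy to check numerically. The following is a minimal Python sketch (an illustration only, not part of the analysis) that assembles \(M_{n}\) from (3.6) and tests both conditions mode by mode; the parameter values, and the top-hat coefficient \(C_{n}(G)=\sin(nR)/(2\pi nR)\) computed later in (5.4), anticipate the illustrative choices of Section 5 and are assumptions here.

```python
import numpy as np

# Illustrative parameters in the spirit of Section 5: f(u) = u(1 - u) and
# g(u) = 2*rho*u^2/(1 + u^2), so that f'(1) = -1 and g'(1) = rho.
d, mu, beta, rho, L, R, alpha = 1.0, 1.0, 1.0, 5.0, np.pi, 2.5, 10.0
fp1, gp1 = -1.0, rho

for n in range(1, 11):
    l_n = (n * np.pi / L) ** 2                 # l_n = q_n^2
    C_n = np.sin(n * R) / (2 * np.pi * n * R)  # top-hat coefficient, cf. (5.4)
    M_n = np.array([[-l_n * d + fp1, -l_n * alpha * C_n],
                    [gp1 - beta * rho / (mu + beta), -(mu + beta)]])
    stable = (np.trace(M_n) < 0) and (np.linalg.det(M_n) > 0)
    print(f"n = {n:2d}: trace = {np.trace(M_n):+8.3f}, "
          f"det = {np.linalg.det(M_n):+9.3f}, stable = {stable}")
```

Since \(C_{n}(G)\to 0\) as \(n\to\infty\) while the diffusive contribution to the determinant grows like \(l_{n}\), only finitely many modes can violate (3.7), so a finite scan of this kind suffices in practice.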
As long as Equations (3.7)-(3.8) are satisfied, system (1.8.a) will be stable to perturbations of the exponential functional form given in Equation (3.4) at wavenumber \(q_{n}\). A similar process gives the analogous result for system (1.8.b), which we leave as an exercise for the reader. In the next section, we generalise this result to arbitrary perturbations, for both systems (1.8.a) and (1.8.b) (see Theorems 1.3 and 1.4). ### Spectral analysis We now provide a detailed spectral analysis to confirm that the insights in Section 3.2 hold. Equation (3.6) is the eigenvalue problem to be examined here, but it requires further justification, which is provided in Lemmas 3.3 and 3.4. This ensures that the Fourier analysis utilized is robust. Of note is the symmetry of the kernel \(G(\cdot)\) about the origin, which guarantees that the coefficients \(C_{n}(G)\) are real-valued. If \(G\) is not even, then \(C_{n}(G)\) could be complex-valued, and may lead to a Hopf bifurcation. We do not explore this any further in the present work. Finally, Theorem 3.6 shows that, in addition to the eigenvalues from (3.6), there is an essential spectral point. In this case, that essential spectral point is negative and so does not affect stability. In classical reaction-diffusion systems, an essential spectrum does not occur, but for such a coupled PDE-ODE system it may, and so we rule out this possibility. The linearized equation of system (1.8.a) at a constant steady state \(U_{*}=(u_{*},k_{*})\) is given by \[\begin{cases}\widetilde{u}_{t}=d\widetilde{u}_{xx}+f_{u*}\widetilde{u}+\alpha u _{*}(G*\widetilde{k})_{xx},&x\in(-L,L),\;t>0,\\ \widetilde{k}_{t}=h_{u*}\widetilde{u}+h_{k*}\widetilde{k},&x\in(-L,L),\;t>0, \\ \widetilde{u}(-L,t)=\widetilde{u}(L,t),&t>0,\\ \widetilde{u}_{x}(-L,t)=\widetilde{u}_{x}(L,t),&t>0.\end{cases} \tag{3.9}\] Define the linearized operator \(\mathcal{L}_{*}(\alpha):X\to Y\) in (3.9) by \[\mathcal{L}_{*}(\alpha)\left[\begin{array}{c}\phi\\ \psi\end{array}\right]=\left(\begin{array}{cc}d\phi_{xx}+f_{u*}\phi+\alpha u _{*}(G*\psi)_{xx}\\ h_{u*}\phi+h_{k*}\psi\end{array}\right). \tag{3.10}\] For further spectral analysis, we first recall the following definitions and give some lemmas. **Definition 3.1**: _[_Magal and Ruan_,_ 2018_, Definition 2.2.1]_ _Let \(A:\mathcal{D}(A)\subset X\to X\) be a linear operator on a \(\mathbb{K}\)-Banach space \(X\) with \(\mathbb{K}=\mathbb{R}\) or \(\mathbb{C}\). The resolvent set \(\rho(A)\) of \(A\) is the set of all points \(\lambda\in\mathbb{K}\) such that \(\lambda I-A\) is a bijection from \(\mathcal{D}(A)\) onto \(X\) and the inverse \((\lambda I-A)^{-1}\), called the resolvent of \(A\), is a bounded linear operator from \(X\) into itself._ **Definition 3.2**: _[_Magal and Ruan_,_ 2018_, Definition 4.2.1]_ _Let \(A:\mathcal{D}(A)\subset X\to X\) be a linear operator on a complex Banach space \(X\). The spectrum of the operator \(A\) is defined as the complement of the resolvent set \(\sigma(A)=\mathbb{C}\setminus\rho(A)\). Consider the following three conditions:_ 1. \((\lambda I-A)^{-1}\) _exists;_ 2. \((\lambda I-A)^{-1}\) _is bounded;_ 3. _the domain of_ \((\lambda I-A)^{-1}\) _is dense in_ \(X\)_._ _The spectrum \(\sigma(A)\) can be further decomposed into three disjoint subsets._ 1. _The point spectrum is the set_ \[\sigma_{p}(A):=\{\lambda\in\sigma(A):\mathcal{N}(\lambda I-A)\neq\{0\}\}.\] _Elements of the point spectrum_ \(\sigma_{p}(A)\) _are called eigenvalues._
_If_ \(\lambda\in\sigma_{p}(A)\)_, elements_ \(x\in\mathcal{N}(\lambda I-A)\) _are called eigenvectors or eigenfunctions. The dimension of_ \(\mathcal{N}(\lambda I-A)\) _is the multiplicity of_ \(\lambda\)_._ 2. _The continuous spectrum is the set_ \[\sigma_{c}(A):=\{\lambda\in\sigma(A):(1)\text{ and }(3)\text{ hold but }(2)\text{ does not}\}.\] 3. _The residual spectrum is the set_ \[\sigma_{r}(A):=\{\lambda\in\sigma(A):(\lambda I-A)^{-1}\text{ exists but }\overline{\mathcal{R}(\lambda I-A)}\neq X\}.\] _Furthermore, we have the following spectrum decomposition:_ \[\sigma(A)=\sigma_{p}(A)\cup\sigma_{c}(A)\cup\sigma_{r}(A).\] **Lemma 3.3**: _Assume \(G\) satisfies (H0), and \(\phi\in H^{2}_{per}(-L,L)\). Then_ \[(G\ast\phi)_{xx}=G\ast(\phi_{xx}). \tag{3.11}\] **Proof 5**: _By using the symmetry of \(G\), and the periodicity of \(G\) and \(\phi\), we get_ \[(G\ast\phi)_{xx}= \frac{1}{2L}\int_{-L}^{L}G_{xx}(x-y)\phi(y)dy=\frac{1}{2L}\int_{- L}^{L}G_{yy}(x-y)\phi(y)dy\] \[= \frac{1}{2L}(G_{y}(x+L)\phi(-L)-G_{y}(x-L)\phi(L)-\int_{-L}^{L}G_ {y}(x-y)\phi^{\prime}(y)dy)\] \[= \frac{1}{2L}(G_{y}(x+L)\phi(-L)-G_{y}(x-L)\phi(L)-G(x-L)\phi^{ \prime}(L)+G(x+L)\phi^{\prime}(-L)+\int_{-L}^{L}G(x-y)\phi^{\prime\prime}(y)dy)\] \[= \frac{1}{2L}(G^{\prime}(x+L)\phi(-L)-G^{\prime}(x-L)\phi(L)-G(x- L)\phi^{\prime}(L)+G(x+L)\phi^{\prime}(-L))+G\ast(\phi_{xx})\] \[= G\ast(\phi_{xx}).\] **Lemma 3.4**: _Assume \(G\) satisfies (H0). Then_ \[G\ast\phi_{n}=C_{n}(G)\phi_{n},\;\;G\ast\phi_{-n}=C_{n}(G)\phi_{-n} \tag{3.12}\] _where \(C_{n}(G)\) is defined in (1.12) and \(\phi_{n},\phi_{-n}\) are defined in (1.9)._ **Proof 6**: _For \(x\in[-L,L]\),_ \[G\ast\phi_{n}(x)= \phi_{n}\ast G(x)=\frac{1}{2L}\int_{-L}^{L}\phi_{n}(x-y)G(y)dy= \frac{1}{2L}\int_{-L}^{L}e^{\frac{in\pi}{L}(x-y)}G(y)dy\] \[= \left(\frac{1}{2L}\int_{-L}^{L}e^{-\frac{in\pi}{L}y}G(y)dy\right)e^{\frac{in\pi }{L}x}=\left(\frac{1}{2L}\int_{-L}^{L}\cos\left(\frac{n\pi}{L}y\right)G(y)dy\right)e^{\frac{in\pi}{L}x}\] \[= C_{n}(G)\phi_{n}(x).\] _Note that \(C_{n}\in\mathbb{R}\) as \(G\) is an even function from (H0). Thus the eigenspace corresponding to the eigenvalue \(\lambda=C_{n}(G)\) is_ \[V_{n}:=\operatorname{span}\left\{\cos\left(\frac{n\pi}{L}x\right),\;\sin\left( \frac{n\pi}{L}x\right)\right\}.\] Following a similar approach to the proof of [Ducrot et al., 2018, Proposition 2.1], we obtain the following lemma. **Lemma 3.5**: _The spectrum of the linear operator \(\mathcal{A}:D(\mathcal{A})\subset L^{2}_{per}(-L,L)\to L^{2}_{per}(-L,L)\) defined as_ \[\begin{cases}D(\mathcal{A})=H^{2}_{per}(-L,L),\\ \mathcal{A}\phi=a\phi^{\prime\prime}+b(G*\phi^{\prime\prime}),\end{cases}\] _is_ \[\sigma(\mathcal{A})=\{\mu_{n}=\mu_{-n}:=-l_{n}(a+bC_{n}(G)),\;n\in\mathbb{N}_{ 0}\},\] _and the corresponding eigenfunctions are \(\phi_{n}(x)\) and \(\phi_{-n}(x)\) for \(n\in\mathbb{N}_{0}\), where \(a,b\) are constants, and \(l_{n}\), \(\phi_{n}\), \(\phi_{-n}\) are defined in (1.9)._ Now we can determine the spectral set of the linearized operator \(\mathcal{L}_{*}(\alpha)\). **Theorem 3.6**: _Assume that assumptions (H0)-(H2) are satisfied. Let \(l_{n}\) and \(\phi_{n}\) be the eigenvalues and eigenfunctions defined in (1.9)._
_Then_ \[\sigma(\mathcal{L}_{*}(\alpha))=\sigma_{p}(\mathcal{L}_{*}(\alpha))=\{\lambda _{n}^{\pm}\}_{n\in\mathbb{Z}}\bigcup\{h_{k*}\}, \tag{3.13}\] _where_ \[\begin{split}&\lambda_{n}^{\pm}=\lambda_{-n}^{\pm}=\frac{B_{n} \pm\sqrt{B_{n}^{2}-4C_{n}}}{2},\\ & B_{n}=Tr(J_{*})-dl_{n},\;\;C_{n}=Det(J_{*})+(\alpha u_{*}h_{u*}C_ {n}(G)-dh_{k*})l_{n},\end{split} \tag{3.14}\] _for \(n\in\mathbb{N}_{0}\), and_ \[\begin{split}&(\phi_{n,\pm},\psi_{n,\pm})=\left(\phi_{n}(x),- \frac{h_{u*}}{h_{k*}-\lambda_{n}^{\pm}}\phi_{n}(x)\right),\\ &(\phi_{-n,\pm},\psi_{-n,\pm})=\left(\phi_{-n}(x),-\frac{h_{u*}} {h_{k*}-\lambda_{n}^{\pm}}\phi_{-n}(x)\right)\end{split} \tag{3.15}\] _are the eigenfunctions corresponding to \(\lambda_{n}^{\pm}\) and \(\lambda_{-n}^{\pm}\), where \(\phi_{n}\), \(\phi_{-n}\), \(h_{u*}\), \(h_{k*}\), \(Tr(J_{*})\), \(Det(J_{*})\) and \(C_{n}(G)\) are defined in (1.9), (3.2), (3.3) and (1.12), respectively. Furthermore, \(\lambda_{n}^{\pm}\) and \(\lambda_{-n}^{\pm}\) are eigenvalues of \(\mathcal{L}_{*}(\alpha)\) of finite multiplicity, and \(h_{k*}\) is an eigenvalue of infinite multiplicity._ **Proof 7**: _For \(\lambda\in\mathbb{C}\) and \((\xi,\eta)\in Y\), we consider the resolvent equation of \(\mathcal{L}_{*}(\alpha)\), which is_ \[\begin{cases}d\phi_{xx}+f_{u*}\phi+\alpha u_{*}(G*\psi)_{xx}=\lambda\phi+\xi,\\ h_{u*}\phi+h_{k*}\psi=\lambda\psi+\eta,\\ \phi(-L)=\phi(L),\;\phi^{\prime}(-L)=\phi^{\prime}(L).\end{cases} \tag{3.16}\] _If \(\lambda\neq h_{k*}\), from the second equation of (3.16), we have_ \[\psi=\frac{\eta-h_{u*}\phi}{h_{k*}-\lambda}. \tag{3.17}\] _Substituting (3.17) into the first equation of (3.16) and combining with (3.11) and (3.12), we get_ \[(f_{u*}-\lambda)\phi+(d\phi_{xx}-\alpha u_{*}\frac{h_{u*}}{h_{k*}-\lambda}(G* \phi^{\prime\prime}))=\xi. \tag{3.18}\] _Equation (3.18) is uniquely solvable if and only if_ \[(f_{u*}-\lambda)\notin\sigma(\mathcal{A}) \tag{3.19}\] _holds, where \(\mathcal{A}\) is the operator defined in Lemma 3.5 with \(a=d\) and \(b=-\alpha u_{*}\frac{h_{u*}}{h_{k*}-\lambda}\). Then from Lemma 3.5, (3.19) is equivalent to_ \[\frac{(f_{u*}-\lambda)(h_{k*}-\lambda)}{d(h_{k*}-\lambda)-\alpha u_{*}h_{u*}C_ {n}(G)}\notin\{l_{n}\}_{n\in\mathbb{N}_{0}}. \tag{3.20}\] _It follows that \(\mathcal{L}_{*}(\alpha)-\lambda I\) has a bounded inverse \((\mathcal{L}_{*}(\alpha)-\lambda I)^{-1}\) when (3.20) is satisfied. Otherwise, \(\lambda\) satisfies the following characteristic equation:_ \[\lambda^{2}-(Tr(J_{*})-dl_{n})\lambda+Det(J_{*})+(\alpha u_{*}h_{u*}C_{n}(G)-dh _{k*})l_{n}=0. \tag{3.21}\] _Therefore, \(\lambda^{\pm}_{\pm n}\) in (3.14) are the roots of (3.21) with \(\mathcal{R}e\lambda^{-}_{\pm n}\leq\mathcal{R}e\lambda^{+}_{\pm n}\), and (3.15) are the eigenfunctions corresponding to \(\lambda^{\pm}_{\pm n}\)._ _If \(\lambda=h_{k*}\), we consider_ \[(\mathcal{L}_{*}(\alpha)-h_{k*}I)\left(\begin{array}{c}\phi\\ \psi\end{array}\right)=\left(\begin{array}{c}0\\ 0\end{array}\right), \tag{3.22}\] _that is_ \[\begin{cases}d\phi_{xx}+f_{u*}\phi+\alpha u_{*}(G*\psi)_{xx}=h_{k*}\phi,\\ h_{u*}\phi=0,\\ \phi(-L)=\phi(L),\phi^{\prime}(-L)=\phi^{\prime}(L).\end{cases} \tag{3.23}\] _Clearly, \(\phi=0\), and any non-zero \(\psi\) with \((G*\psi)_{xx}=0\) then solves (3.23), so there exist non-zero solutions to (3.22). Then \(h_{k*}\) is also an eigenvalue of \(\mathcal{L}_{*}(\alpha)\), and \(\dim\ker(\mathcal{L}_{*}(\alpha)-h_{k*}I)=\infty\)._ Note that \(Tr(J_{*})<0\), \(Det(J_{*})>0\) from the stability of \(U_{*}\), and \(l_{n}\geq 0\), which yields \(\mathcal{R}e\lambda^{-}_{\pm n}<0\).
In other words, the eigenvalues \(h_{k*}\) and \(\{\lambda^{-}_{\pm n}\}_{n\in\mathbb{N}}\) of \(\mathcal{L}_{*}\) all have negative real parts, and for \(n=0\), we also have \[\lambda_{0}=\frac{Tr(J_{*})+\sqrt{Tr(J_{*})^{2}-4Det(J_{*})}}{2}<0.\] On the other hand, the sign of \(\lambda^{+}_{\pm n}\) depends on the magnitude of \(\alpha\). Recalling \(\alpha_{n}\) defined in (1.13), we immediately have the following proposition. **Proposition 3.7**: _Assume that assumptions (H0)-(H2) are satisfied, and \(n\in\mathbb{N}\) is such that \(C_{n}(G)\neq 0\). Let \(\mathcal{L}_{*}(\alpha)\) and \(\alpha_{n}\) be defined in (3.10) and (1.13), respectively. Then \(0\) is an eigenvalue of \(\mathcal{L}_{*}(\alpha)\) when \(\alpha=\alpha_{n}\), with multiplicity two for \(n\in\mathbb{N}\), while all other eigenvalues have non-zero real parts. Furthermore,_ \[\mathcal{N}(\mathcal{L}_{*}(\alpha_{n}))=\operatorname{span}\left\{\left(1,- \frac{h_{u*}}{h_{k*}}\right)\phi_{n}(x),\,\left(1,-\frac{h_{u*}}{h_{k*}} \right)\phi_{-n}(x)\right\}. \tag{3.24}\] **Proof 8**: _From Lemma 3.4, \(C_{n}(G)\) is real-valued. Substituting (1.13) into (3.14) yields \(\lambda^{+}_{\pm n}=0\) for \(n\in\mathbb{N}\). The conclusion on the eigenfunctions follows from Lemma 3.5._ We can now prove Theorems 1.3 and 1.4. **Proof 9** (Proof of Theorem 1.3): _From the assumptions, \(h_{k*}<0\). We also know that \(\mathcal{R}e\lambda^{-}_{\pm n}<0\). Finally, for \(\alpha_{l}<\alpha<\alpha_{r}\), all \(\lambda^{+}_{\pm n}\) are negative. Hence \(U_{*}\) is locally asymptotically stable with respect to (1.8.a). When \(\alpha<\alpha_{l}\) or \(\alpha>\alpha_{r}\), at least one of the \(\lambda^{+}_{\pm n}\) is positive, and thus \(U_{*}\) is unstable._ The proof of Theorem 1.4 is similar, repeating the same analysis. Equation (1.8.b) has two constant steady states \(\widehat{U}_{0}=(0,0)\) and \(\widehat{U}_{*}=(\widehat{u}_{*},\widehat{k}_{*})=(1,\frac{\rho\kappa}{\mu+ \beta+\rho})\). Letting \(\widehat{h}(u,k):=g(u)(\kappa-k)-(\mu+\beta u)k\), we have \[\widehat{h}_{u*}=\frac{\kappa[g^{\prime}(1)(\mu+\beta)-\beta\rho]}{\mu+\beta+\rho}, \,\widehat{h}_{k*}=-(\mu+\beta+\rho). \tag{3.25}\] The other steps are similar to those in the proof of Theorem 1.3. ## 4 Bifurcation analysis In this section, we prove the existence of non-constant steady state solutions of (1.8.a) through the Bifurcation from Simple Eigenvalue Theorem Crandall and Rabinowitz (1971). From (3.24), the multiplicity of the zero eigenvalue is two, so we restrict the solutions to be even functions in order to apply the bifurcation theorem. For completeness, we first recall the abstract bifurcation theorem, the definition of a \(K\)-simple eigenvalue and a perturbation result. Consider an abstract equation \(F(\lambda,u)=0\), where \(F:\mathbb{R}\times X\to Y\) is a nonlinear differentiable mapping, and \(X,Y\) are Banach spaces. Crandall and Rabinowitz (1971) obtained the following classical Bifurcation from Simple Eigenvalue Theorem. **Theorem 4.1**: _[_Crandall and Rabinowitz_,_ _1971_, Theorem 1.17]_ _Suppose that \(\lambda_{0}\in\mathbb{R}\) and \(F:\mathbb{R}\times X\to Y\) is a twice continuously differentiable mapping and that_ 1. \(F(\lambda,0)=0\) _for_ \(\lambda\in\mathbb{R}\)_,_ 2. \(\dim(\mathcal{N}(F_{u}(\lambda_{0},0)))=\operatorname{codim}(\mathcal{R}(F_{u}( \lambda_{0},0)))=1\)_,_ 3.
\(F_{\lambda u}(\lambda_{0},0)[\phi_{0}]\notin\mathcal{R}(F_{u}(\lambda_{0},0))\) _where_ \(\mathcal{N}(F_{u}(\lambda_{0},0))=\operatorname{span}\{\phi_{0}\}\subset X\)_._ _Let \(Z\) be any complement of \(\operatorname{span}\{\phi_{0}\}\) in \(X\). Then there exist an open interval \(\widehat{I}\) containing \(0\) and continuous functions \(\lambda:\widehat{I}\to\mathbb{R}\), \(z:\widehat{I}\to Z,\) such that \(\lambda(0)=\lambda_{0}\), \(z(0)=0,\) and \(u(s)=s\phi_{0}+sz(s)\) satisfies \(F(\lambda(s),u(s))=0\). Moreover, \(F^{-1}(\{0\})\) near \((\lambda_{0},0)\) consists precisely of the curve \(u=0\) and the curve \(\{(\lambda(s),u(s)):s\in\widehat{I}\}\)._ **Definition 4.2**: _[_Crandall and Rabinowitz, 1973_, Definition 1.2]_ _Let \(B(X,Y)\) denote the set of bounded linear maps of \(X\) into \(Y\), and let \(T,K\in B(X,Y)\). Then \(\mu\in\mathbb{R}\) is a \(K\)-simple eigenvalue of \(T\) if_ \[\dim\mathcal{N}(T-\mu K)=\operatorname{codim}\mathcal{R}(T-\mu K)=1,\] _and if \(\mathcal{N}(T-\mu K)=\operatorname{span}\{\phi_{0}\}\), \(K\phi_{0}\notin\mathcal{R}(T-\mu K)\)._ **Theorem 4.3**: _[_Crandall and Rabinowitz, 1973_, Theorem 1.16]_ _Let \(\{(\lambda(s),u(s)):s\in\widehat{I}\}\) be the curve of nontrivial solutions in Theorem 4.1. Then there exist continuously differentiable functions \(r:(\lambda_{0}-\varepsilon,\lambda_{0}+\varepsilon)\to\mathbb{R}\), \(z:(\lambda_{0}-\varepsilon,\lambda_{0}+\varepsilon)\to X\), \(\mu:(-\delta,\delta)\to\mathbb{R}\), \(w:(-\delta,\delta)\to X\), such that_ \[\begin{split}& F_{u}(\lambda,0)z(\lambda)=r(\lambda)Kz(\lambda), \ \lambda\in(\lambda_{0}-\varepsilon,\lambda_{0}+\varepsilon),\\ & F_{u}(\lambda(s),u(s,\cdot))w(s)=\mu(s)Kw(s),\ \ s\in(-\delta, \delta),\end{split} \tag{4.1}\] _where \(r(\lambda_{0})=\mu(0)=0\), \(z(\lambda_{0})=w(0)=\phi_{0}\), and \(K:X\to Y\) is the inclusion map with \(K(u)=u\). Moreover, near \(s=0\) the functions \(\mu(s)\) and \(-s\lambda^{\prime}(s)r^{\prime}(\lambda_{0})\) have the same zeros and, whenever \(\mu(s)\neq 0\), the same sign, and they satisfy_ \[\lim_{s\to 0}\frac{-s\lambda^{\prime}(s)r^{\prime}(\lambda_{0})}{\mu(s)}=1. \tag{4.2}\] To apply the bifurcation theorems, we define a nonlinear mapping \(F:\mathbb{R}\times X\to Y\) by \[F(\alpha,U)=\left(\begin{array}{c}du_{xx}+\alpha(u(G*k)_{x})_{x}+f(u)\\ g(u)-(\mu+\beta u)k\end{array}\right),\] where \(U=(u,k)\); then the Fréchet derivative of \(F\) at \((\alpha,U)=(\alpha_{n},U_{*})\) is \[\partial_{U}F(\alpha_{n},U_{*})\left(\begin{array}{c}\phi\\ \psi\end{array}\right)=\left(\begin{array}{c}d\phi_{xx}+f_{u*}\phi+\alpha_{ n}u_{*}(G*\psi)_{xx}\\ h_{u*}\phi+h_{k*}\psi\end{array}\right)=\mathcal{L}_{*}(\alpha_{n})\left( \begin{array}{c}\phi\\ \psi\end{array}\right). \tag{4.3}\] For simplicity, we denote \(\mathcal{L}_{*}(\alpha_{n})\) by \(\mathcal{L}_{n}\) in the following. Let \[\begin{split} X^{s}&=\{h\in X:h(-x)=h(x),x\in(-L,L)\},\\ Y^{s}&=\{h\in Y:h(-x)=h(x),x\in(-L,L)\}.\end{split}\] We consider the restriction of \(F:\mathbb{R}\times X^{s}\to Y^{s}\), and the restriction of \(\mathcal{L}_{n}:X^{s}\to Y^{s}\). Denote by \(\mathcal{N}^{s}(\mathcal{L}_{n})\) the kernel space of the operator \(\mathcal{L}_{n}\) in \(X^{s}\), and by \(\mathcal{N}^{s}(\mathcal{L}_{n}^{*})\) the kernel space of the adjoint operator \(\mathcal{L}_{n}^{*}\) in \(X^{s}\).
Then \(X^{s}\) and \(Y^{s}\) have the following decompositions: \[X^{s}=\mathcal{N}^{s}(\mathcal{L}_{n})\oplus X_{1}^{s},\ Y^{s}=\mathcal{N}^{s} (\mathcal{L}_{n}^{*})\oplus Y_{1}^{s},\] where \[\begin{split}&\mathcal{N}^{s}(\mathcal{L}_{n})=\operatorname{ span}\left\{\left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{L}x \right)\right\},\\ &\mathcal{N}^{s}(\mathcal{L}_{n}^{*})=\operatorname{ span}\left\{\left(1,r_{n}\right)\cos\left(\frac{n\pi}{L}x\right)\right\}\text{ with }r_{n}=\frac{dl_{n}-f_{u*}}{h_{u*}},\\ & X_{1}^{s}=\left\{\left(h_{1},h_{2}\right)\in X^{s}:\int_{-L}^{L} \left(h_{1}-\frac{h_{u*}}{h_{k*}}h_{2}\right)\cos\left(\frac{n\pi}{L}x\right) dx=0\right\},\\ & Y_{1}^{s}=\mathcal{R}^{s}(\mathcal{L}_{n})=\left\{\left(h_{1},h_{2} \right)\in Y^{s}:\int_{-L}^{L}(h_{1}+r_{n}h_{2})\cos\left(\frac{n\pi}{L}x \right)dx=0\right\}.\end{split}\] Hence, \(\dim(\mathcal{N}^{s}(\mathcal{L}_{n}))=\mathrm{codim}(\mathcal{R}^{s}(\mathcal{L}_ {n}))=1\). We also have \[\partial_{\alpha U}F(\alpha_{n},U_{*})\left(\begin{array}{c}\phi\\ \psi\end{array}\right)=\left(\begin{array}{c}(G*\psi)_{xx}\\ 0\end{array}\right),\] thus \[\partial_{\alpha U}F(\alpha_{n},U_{*})\left[\left(1,-\frac{h_{u*}}{h_{k*}} \right)\cos\left(\frac{n\pi}{L}x\right)\right]^{T}=\left(-\frac{h_{u*}}{h_{k*} }\frac{\partial^{2}}{\partial x^{2}}\left(G*\cos\left(\frac{n\pi}{L}x\right) \right),0\right)^{T}\notin\mathcal{R}^{s}\left(\mathcal{L}_{n}\right),\] since \[\frac{h_{u*}}{h_{k*}}l_{n}C_{n}(G)\int_{-L}^{L}\cos^{2}\left(\frac{n\pi}{L}x \right)dx\neq 0\] as long as \(C_{n}(G)\neq 0\). Now by applying Theorem 4.1, we obtain the existence of non-constant steady-state solutions of (1.8.a) (Theorem 1.5) and (1.8.b) (Theorem 1.6). Near a bifurcation point \(\alpha=\alpha_{n}\), it follows from Shi [1999] that the sign of \(\alpha_{n}^{\prime}(0)\), or that of \(\alpha_{n}^{\prime\prime}(0)\) when \(\alpha_{n}^{\prime}(0)=0\), determines the bifurcation direction. If \(\alpha_{n}^{\prime}(0)\neq 0\), then a transcritical bifurcation occurs and a non-trivial solution exists when \(\alpha(\neq\alpha_{n})\) is close to the bifurcation point \(\alpha_{n}\). If \(\alpha_{n}^{\prime}(0)=0\) and \(\alpha_{n}^{\prime\prime}(0)\neq 0\), then a pitchfork bifurcation occurs at \(\alpha=\alpha_{n}\). The pitchfork bifurcation is forward if \(\alpha_{n}^{\prime\prime}(0)>0\), in which case there are two (respectively, zero) non-trivial solutions for \(\alpha>\alpha_{n}\) (respectively, \(\alpha<\alpha_{n}\)), and it is backward if \(\alpha_{n}^{\prime\prime}(0)<0\). Since \[\left\langle\zeta,F_{UU}(\alpha_{n},U_{*})\left[\left(1,-\frac{h_{u*}}{h_{k*} }\right)\cos\left(\frac{n\pi}{L}x\right)\right]^{2}\right\rangle\] \[=\int_{-L}^{L}\left[\left(f_{uu*}+r_{n}\left(h_{uu*}-2h_{uk*}\frac{h_{u*}}{h_{ k*}}\right)\right)\frac{1+\cos\left(\frac{2n\pi}{L}x\right)}{2}+2\alpha_{n}l_{n}C_{n}(G )\frac{h_{u*}}{h_{k*}}\cos\left(\frac{2n\pi}{L}x\right)\right]\cos\left(\frac {n\pi}{L}x\right)dx=0,\] where \(f_{uu*}\stackrel{{\triangle}}{{=}}f_{uu}(U_{*})\), \(h_{uu*}\stackrel{{\triangle}}{{=}}h_{uu}(U_{*})\), \(h_{uk*}\stackrel{{\triangle}}{{=}}h_{uk}(U_{*})=-\beta\), and \(\zeta\in(Y^{s})^{*}\) satisfies \(\mathcal{N}(\zeta)=\mathcal{R}(\mathcal{L}_{n})\). Then \[\alpha_{n}^{\prime}(0)=-\frac{\left\langle\zeta,F_{UU}(\alpha_{n},U_{*})\left[ \left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{L}x\right)\right]^{ 2}\right\rangle}{2\left\langle\zeta,F_{\alpha U}(\alpha_{n},U_{*})\left[ \left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{L}x\right)\right] \right\rangle}=0.
\tag{4.4}\] We further calculate \(\alpha_{n}^{\prime\prime}(0)\) as in Shi [1999], \[\alpha_{n}^{\prime\prime}(0)=-\frac{\left\langle\zeta,F_{UUU}\left(\alpha_{n},U_{*}\right)\left[\left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{ L}x\right)\right]^{3}\right\rangle}{3\left\langle\zeta,F_{\alpha U}\left(\alpha_{n},U_{*} \right)\left[\left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{L}x \right)\right]\right\rangle}-\frac{\left\langle\zeta,F_{UU}(\alpha_{n},U_{*}) \left[\left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left(\frac{n\pi}{L}x\right) \right]\left[\Theta\right]\right\rangle}{\left\langle\zeta,F_{\alpha U}\left( \alpha_{n},U_{*}\right)\left[\left(1,-\frac{h_{u*}}{h_{k*}}\right)\cos\left( \frac{n\pi}{L}x\right)\right]\right\rangle}, \tag{4.5}\] where \(\Theta=(\Theta_{1},\Theta_{2})\) is the unique solution of \[F_{UU}\left(\alpha_{n},U_{*}\right)\left[\left(1,-\frac{h_{u*}}{h_{k*}} \right)\cos\left(\frac{n\pi}{L}x\right)\right]^{2}+F_{U}\left(\alpha_{n},U_{*} \right)\left[\Theta\right]=0. \tag{4.6}\] From Shi et al. [2021] and (4.4), we assume \(\Theta=(\Theta_{1},\Theta_{2})\) has the following form \[\Theta_{1}=\Theta_{1}^{1}+\Theta_{1}^{2}\cos\left(\frac{2n\pi}{L}x\right),\ \ \Theta_{2}=\Theta_{2}^{1}+\Theta_{2}^{2}\cos\left(\frac{2n\pi}{L}x\right). \tag{4.7}\] Combining (4.6) and (4.7), after calculation, we have \[\Theta_{1}^{1}=-\frac{f_{uu*}}{2f_{u*}},\ \ \ \ \ \ \ \ \ \Theta_{2}^{1}=\frac{h_{u*}f_{uu*}-f_{u*}\left(h_{uu*}-2h_{uk*}\frac{h_{u*}}{h_{k*}} \right)}{2f_{u*}h_{k*}},\] \[\Theta_{1}^{2}=-\frac{h_{k*}\left(\frac{f_{uu*}}{2}+2\alpha_{n}l_{n}C_{n}(G) \frac{h_{u*}}{h_{k*}}\right)+2\alpha_{n}u_{*}l_{n}C_{n}(G)\left(h_{uu*}-2h_{ uk*}\frac{h_{u*}}{h_{k*}}\right)}{h_{k*}(f_{u*}-4dl_{n})+4\alpha_{n}u_{*}h_{u*}l_{n}C_{n}(G)},\] with an analogous expression for \(\Theta_{2}^{2}\). Hence, by (4.5), \(\alpha_{n}^{\prime\prime}(0)\) can be calculated explicitly; we evaluate the resulting expression for each specific choice of \(g(\cdot)\) in Section 5.
## 5 Analysis of the model with a top-hat detection function In this section, we study some specific cases to demonstrate our analytical results for different growth functions \(g(\cdot)\) which exhibit different rates of growth for large arguments. Depending on the functional form of the memory uptake \(g(\cdot)\), we establish a number of monotone/nonmonotone properties of the bifurcation values \(\alpha_{n}(R)\). In each subsection we plot the relevant bifurcation curves \(\alpha_{l}(R)\), \(\alpha_{r}(R)\), along with a depiction of the corresponding steady state profiles near and far away from these critical values. For the numerical simulations we use a pseudo-spectral method with a forward-Euler time-stepping scheme. Trajectories are run until subsequent time steps are within a tolerance of \(10^{-6}\). Note that the spatial domain chosen is \((0,2\pi)\), equivalent to choosing \(\Omega=(-\pi,\pi)\). To this end, let \(L=\pi\) and \(f(u)=u(1-u)\). In all cases, we choose \(d=\mu=\beta=1.0\) and \(\rho=5\). We then choose the following three cases of \(g(u)\) to analyze equation (1.8.a) and equation (1.8.b) with the top-hat detection function defined in (1.7): \[\text{(i) }g(u)=\frac{2\rho u^{2}}{1+u^{2}};\qquad\text{(ii) }g(u)=\frac{2\rho u^{2}}{1+u};\qquad\text{(iii) }g(u)=\rho u^{2}.\] As previously noted in Remark 1.2, cases (i) and (ii) have a global weak solution for either problem (1.8.a) or (1.8.b). In case (iii), a global weak solution is only guaranteed by Theorem 1.1 for problem (1.8.b). ### Case (i) in equation (1.8.a) In this case, \(g(u)=\frac{2\rho u^{2}}{1+u^{2}}\); clearly, \(f\) and \(g\) satisfy assumptions (H1)-(H2).
One can calculate that \((u_{*},k_{*})=\left(1,\frac{\rho}{\mu+\beta}\right)\) and \[\begin{split}& f_{u*}=-1,\;f_{uu*}=-2,\;f_{uuu*}=0,\;g_{u*}=\rho,\;h_{ u*}=\frac{\rho\mu}{\mu+\beta},\\ & h_{k*}=-(\mu+\beta),\;h_{uu*}=g_{uu*}=-\rho,\;h_{uk*}=-\beta,\;h _{kk*}=0,\\ & h_{uuu*}=g_{uuu*}=h_{uuk*}=h_{ukk*}=h_{kkk*}=0.\end{split} \tag{5.1}\] We consider the following model with the top-hat detection function: \[\begin{cases}u_{t}=du_{xx}+\alpha(u\overline{k}_{x})_{x}+u(1-u),&x\in(- \pi,\pi),\;t>0,\\ k_{t}=\frac{2\rho u^{2}}{1+u^{2}}-(\mu+\beta u)k,&x\in(-\pi,\pi),\;t>0, \end{cases} \tag{5.2}\] subject to periodic boundary conditions, where \(G(x)\) is defined as in (1.7) such that \(0<R<\pi\). From (1.4), we have \[\overline{k}(x)=\frac{1}{2\pi}\int_{x-R}^{x+R}\frac{1}{2R}k(y)dy,\;-\pi\leqslant x \leqslant\pi, \tag{5.3}\] and for \(G\) in (1.7), we have \[C_{n}(G)=\frac{\sin(nR)}{2\pi nR}. \tag{5.4}\] As in (3.10), the linearized operator at \((\alpha,U_{*})\) is \[\mathcal{L}_{*}(\alpha)\left(\begin{array}{c}\phi\\ \psi\end{array}\right)=\partial_{U}F(\alpha,U_{*})\left(\begin{array}{c}\phi \\ \psi\end{array}\right)=\left(\begin{array}{c}d\phi_{xx}-\phi+\frac{\alpha}{4 \pi R}\left(\psi_{x}(x+R)-\psi_{x}(x-R)\right)\\ \frac{\rho\mu}{\mu+\beta}\phi-(\mu+\beta)\psi\end{array}\right). \tag{5.5}\] From Theorem 3.6, the spectrum of \(\mathcal{L}_{*}(\alpha)\) consists of \(h_{k*}=-(\mu+\beta)<0\) and the eigenvalues \(\lambda_{n}^{\pm}\) which satisfy the characteristic equation \[\lambda^{2}+(1+\mu+\beta+dn^{2})\lambda+(\mu+\beta)+\left(\alpha\frac{\rho\mu }{\mu+\beta}\frac{\sin(nR)}{2\pi nR}+d(\mu+\beta)\right)n^{2}=0, \tag{5.6}\] where \(l_{n}\) is defined in (1.9), \(n\in\mathbb{Z}\). When \(n=0\), from (3.3), all eigenvalues of (5.6) have negative real parts; hence the constant solution \(U_{*}\) is locally asymptotically stable with respect to the non-spatial dynamics. Note that (5.6) is an even function of \(n\), so we consider \(n\in\mathbb{N}\) below. From (1.13), (1.14) and (5.4), we obtain \[\alpha_{n}^{R} =-\frac{Det(J_{*})-dh_{k*}n^{2}}{u_{*}h_{u*}n^{2}\frac{\sin(nR)}{2\pi nR }}=\frac{-2\pi nR(\mu+\beta)^{2}}{\rho\mu\sin(nR)}\left(d+\frac{1}{n^{2}} \right), \tag{5.7}\] \[\Sigma^{+} =\left\{n\in\mathbb{N}:nR\in\cup_{j=0}^{\infty}(2j\pi,(2j+1)\pi) \right\},\] \[\Sigma^{-} =\left\{n\in\mathbb{N}:nR\in\cup_{j=0}^{\infty}((2j+1)\pi,(2j+2) \pi)\right\},\] \[\alpha_{l} =-\frac{2\pi(\mu+\beta)^{2}}{\rho\mu}\min_{n\in\Sigma^{+}}\frac{ nR}{\sin(nR)}\left(d+\frac{1}{n^{2}}\right),\] \[\alpha_{r} =-\frac{2\pi(\mu+\beta)^{2}}{\rho\mu}\max_{n\in\Sigma^{-}}\frac{ nR}{\sin(nR)}\left(d+\frac{1}{n^{2}}\right).\] In Figure 1 we numerically compute the values \(|\alpha_{l}|\) and \(\alpha_{r}\) and plot them with respect to the perceptual radius \(R\). Now we can apply Theorems 1.3 and 1.5 to (5.2) to obtain the following results. **Theorem 5.1**: _Let \(\alpha_{n}^{R},\Sigma^{+},\Sigma^{-},\alpha_{l},\alpha_{r}\) be defined in (5.7). Then the constant steady state solution \(U_{*}=(1,\rho/(\mu+\beta))\) is locally asymptotically stable with respect to (5.2) when \(\alpha_{l}<\alpha<\alpha_{r}\) and is unstable when \(\alpha<\alpha_{l}\) or \(\alpha>\alpha_{r}\)._
_Moreover, non-constant steady state solutions of (5.2) bifurcate from the branch of constant solutions \(\Gamma_{0}=\{(\alpha,U_{*}):\alpha\in\mathbb{R}\}\) near \(\alpha=\alpha_{n}^{R}\), and these solutions are on a curve \(\Gamma_{n}=\{(\alpha_{n}(s),u_{n}(s,\cdot),k_{n}(s,\cdot)):|s|<\delta\}\) such that \(\alpha_{n}(0)=\alpha_{n}^{R}\) and \(\alpha_{n}^{\prime}(0)=0\). Moreover, the following monotonicity properties hold._ 1. _Suppose_ \(n\in\Sigma^{+}\) _so that_ \(\frac{\sin(nR)}{nR}>0\)_. Then_ \(\alpha_{n}(R)<0\)_, and_ \(\alpha_{n}(R)\) _(in particular,_ \(\alpha_{l}\)_) is monotonically increasing with respect to_ \(\rho\)_, is monotonically decreasing with respect to_ \(d\) _and_ \(\beta\)_, and is not monotone with respect to_ \(\mu\)_._ 2. _Suppose_ \(n\in\Sigma^{-}\) _so that_ \(\frac{\sin(nR)}{nR}<0\)_. Then_ \(\alpha_{n}(R)>0\)_, and_ \(\alpha_{n}(R)\) _(in particular,_ \(\alpha_{r}\)_) is monotonically increasing with respect to_ \(d\) _and_ \(\beta\)_, is monotonically decreasing with respect to_ \(\rho\)_, and is not monotone with respect to_ \(\mu\)_._ After calculation, \[\alpha_{n}^{\prime\prime}(0)=\frac{-2n^{4}-\frac{143}{24}n^{2}-\frac{5}{24}}{ -\frac{5}{4}n^{2}\frac{\sin(nR)}{2\pi nR}}.\] Therefore, at \(\alpha=\alpha_{r}\) the pitchfork bifurcation is forward and the bifurcating solutions are locally asymptotically stable with respect to (5.2), while at \(\alpha=\alpha_{l}\) the pitchfork bifurcation is backward and the bifurcating solutions are locally asymptotically stable with respect to (5.2). The monotonicity properties described above can be understood intuitively. In case (i), we consider cases of aggregation (\(\alpha<0\)), and so a _decreasing_ behaviour requires higher rates of advection to destabilize the constant steady state, while an _increasing_ behaviour allows for destabilization of the constant steady state at lower advection rates. As is generally understood for diffusion-advection models, diffusion has a stabilizing effect, and higher rates of diffusion therefore require comparably high magnitudes of advection to destabilize the constant steady state. Similarly, an increased value of \(\beta\), a 'rate of safe return', also requires an increased magnitude of advection to destabilize the constant steady state. This suggests that the population cannot return to previously visited locations too quickly if patterns are to persist. In case (ii), we flip the sign of the advection rate and consider the segregation case (\(\alpha>0\)), in which case the direction of the monotonicities also switches, but the understanding of this behaviour is identical to case (i). Perhaps most interesting is the nonmonotone behaviour with respect to the memory decay rate \(\mu\). In fact, in this case \(\alpha_{l}\) (\(\alpha_{r}\)) is concave down (up), and so there is a critical value \(\mu^{*}>0\) so that the rate of advection required to destabilize the constant steady state is minimal. This is in contrast to Theorem 5.2 in case (ii), where monotonicity with respect to \(\beta\) is lost. Figure 1 demonstrates the more complex relationship between the perceptual radius \(R\) and the sizes of \(\alpha_{r}(R)\) and \(\alpha_{l}(R)\). Of note is the nonmonotone behaviour, particularly for smaller perceptual radii. This wavelike behaviour is most pronounced for \(\alpha_{r}(R)\). It is also easy to see that \(|\alpha_{l}(R)|<\alpha_{r}(R)\) when \(0<R<\pi/2\) (indeed this holds for \(R<2.2\) from Figure 1).
But when \(R\) is larger than \(2.2\), either \(|\alpha_{l}(R)|\) or \(\alpha_{r}(R)\) could be the larger one. This in general shows that the advection rate needed to destabilize the positive equilibrium is larger when the perceptual radius is larger. When the perceptual radius is less than half of the domain size, the attractive advection rate needed to destabilize the positive equilibrium is larger than the repulsive one. In Figures 2-3, we plot the solution profiles just before, just after, and far beyond the analytically calculated bifurcation points \(\alpha_{l}^{*}\) and \(\alpha_{r}^{*}\), respectively. ### Case (ii) in equation (1.8.a) Let \(g(u)=\dfrac{2\rho u^{2}}{1+u}\) in equation (1.8.a). Similar to case (i), we have \[\begin{split}&(u_{*},k_{*})=\left(1,\dfrac{\rho}{\mu+\beta}\right),\;f_{u*}=-1,\;f_{uu*}=-2,\;f_{uuu*}=0,\;g_{u*}=\dfrac{3\rho}{2},\\ & h_{u*}=\dfrac{3\rho\mu+\rho\beta}{2(\mu+\beta)},\;h_{k*}=-(\mu+ \beta),\;h_{uu*}=g_{uu*}=\dfrac{\rho}{2},\;h_{uk*}=-\beta,\\ & h_{kk*}=0,\;h_{uuu*}=g_{uuu*}=-\dfrac{3\rho}{4},\;h_{uuk*}=h_{ukk*}=h_ {kkk*}=0,\end{split} \tag{5.8}\] and \[\alpha_{n}(R)=-\dfrac{Det(J_{*})-dh_{k*}n^{2}}{u_{*}h_{u*}n^{2}\dfrac{\sin(nR)} {2\pi nR}}=\dfrac{-2(\mu+\beta)^{2}}{\rho(3\mu+\beta)\frac{\sin(nR)}{2\pi nR} }\left(d+\dfrac{1}{n^{2}}\right). \tag{5.9}\] In Figure 4, we again numerically compute \(|\alpha_{l}|\) and \(\alpha_{r}\) and plot them with respect to the perceptual radius \(R\). In Figures 5-6, we again plot the solution profiles just before, just after, and far beyond the analytically calculated bifurcation points \(\alpha_{l}^{*}\) and \(\alpha_{r}^{*}\). In this case the stability of the constant steady-state is the same as in Theorem 5.1 (which we omit here), while the monotonicity of \(\alpha_{n}(R)\) with respect to the parameters is slightly different; we state the result below. Figure 1: The bifurcation curves for Section 5.1 for the perceptual radius \(R\) versus the advection rate \(\alpha\). In the subsequent Figures 2-3, we fix \(R=2.5\) and use the bifurcation curves to test values near and far from the critical values \(\alpha_{r}^{*}\) and \(\alpha_{\ell}^{*}\) where the constant steady state is expected to be destabilized. **Theorem 5.2**: _Let \(\alpha_{n}(R)\) be defined in (5.9)._ 1. _Suppose_ \(n\in\Sigma^{+}\) _so that_ \(\frac{\sin(nR)}{nR}>0\)_. Then_ \(\alpha_{n}(R)<0\)_, and_ \(\alpha_{n}(R)\) _(in particular,_ \(\alpha_{l}\)_) is monotonically increasing with respect to_ \(\rho\)_, monotonically decreasing with respect to_ \(d\)_, and is not monotone with respect to either_ \(\mu\) _or_ \(\beta\)_._ 2. _Suppose_ \(n\in\Sigma^{-}\) _so that_ \(\frac{\sin(nR)}{nR}<0\)_. Then_ \(\alpha_{n}(R)>0\)_, and_ \(\alpha_{n}(R)\) _(in particular,_ \(\alpha_{r}\)_) is monotonically increasing with respect to_ \(d\)_, monotonically decreasing with respect to_ \(\rho\)_, and is not monotone with respect to either_ \(\mu\) _or_ \(\beta\)_._ This theorem, similar to Theorem 5.1 for case (i), retains the expected monotonicity properties with respect to \(\rho\) and \(d\), while we lose monotonicity with respect to \(\beta\). Indeed, the curves \(\alpha_{l}\) and \(\alpha_{r}\) are concave with respect to each parameter, suggesting the existence of critical values \(\mu^{*}>0\) and \(\beta^{*}>0\) so that the magnitude of advection required to destabilize the constant steady state is minimal. This highlights a key difference caused by the choice of memory uptake \(g(\cdot)\).
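Before turning to case (iii), we remark that the bifurcation curves in Figures 1 and 4 are straightforward to reproduce from the closed forms (5.7) and (5.9). The following minimal Python sketch (an illustration under the parameter choices \(d=\mu=\beta=1\), \(\rho=5\) used throughout this section; the truncation level is an assumption) evaluates \(\alpha_{n}(R)\) for case (i) over a finite range of \(n\) and extracts \(\alpha_{l}\) and \(\alpha_{r}\); wavenumbers with \(\sin(nR)\approx 0\) are skipped, since \(C_{n}(G)\) vanishes there.

```python
import numpy as np

def alpha_n_case_i(n, R, d=1.0, mu=1.0, beta=1.0, rho=5.0):
    # Bifurcation values (5.7) for case (i) with the top-hat kernel
    return (-2 * np.pi * n * R * (mu + beta) ** 2
            / (rho * mu * np.sin(n * R)) * (d + 1.0 / n ** 2))

def alpha_l_r(R, n_max=500):
    n = np.arange(1, n_max + 1, dtype=float)
    keep = np.abs(np.sin(n * R)) > 1e-8   # drop modes with C_n(G) ~ 0
    vals = alpha_n_case_i(n[keep], R)
    # alpha_l: largest negative alpha_n (n in Sigma^+);
    # alpha_r: smallest positive alpha_n (n in Sigma^-)
    return vals[vals < 0].max(), vals[vals > 0].min()

for R in (0.5, 1.5, 2.5):
    a_l, a_r = alpha_l_r(R)
    print(f"R = {R}: alpha_l ~ {a_l:.2f}, alpha_r ~ {a_r:.2f}")
```

Replacing the prefactor by the one in (5.9) or (5.10) gives the analogous curves for cases (ii) and (iii).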
### Case (iii) in equation (1.8.b) In this case, we choose \(g(u)=\rho u^{2}\) in equation (1.8.b). From (3.25) and (1.15), the bifurcation point \(\widehat{\alpha}_{n}(R)\) has the following expression: \[\widehat{\alpha}_{n}(R)=\frac{-(\rho+\mu+\beta)^{2}}{\kappa\rho(2\mu+\beta) \frac{\sin(nR)}{2\pi nR}}\left(d-\frac{f^{\prime}(1)}{l_{n}}\right). \tag{5.10}\] Similar to Theorem 5.1, we have the following. Figure 2: The steady state solutions corresponding to Section 5.1 for the negative \(\alpha\) (aggregation) case. We test \(\sim 0.1\) before, \(\sim 0.1\) after, and twice the value of the critical value \(\alpha_{\ell}^{*}\) given in Figure 1. **Theorem 5.3**: _Let \(\alpha_{n}(R),\Sigma^{+},\Sigma^{-},\alpha_{l},\alpha_{r}\) be defined in (5.10) and (1.16). Then there are non-constant steady-state solutions that bifurcate from the constant solution \((1,\frac{\rho}{\rho+\mu+\beta})\) near \(\widehat{\alpha}_{n}(R)\) of system (1.8.b). Moreover, the constant solution \((1,\frac{\rho}{\rho+\mu+\beta})\) is locally asymptotically stable when \(\widehat{\alpha}_{l}<\widehat{\alpha}<\widehat{\alpha}_{r}\) and unstable when \(\widehat{\alpha}<\widehat{\alpha}_{l}\) or \(\widehat{\alpha}>\widehat{\alpha}_{r}\)._ **Theorem 5.4**: _Let \(\widehat{\alpha}_{n}(R)\) be defined in (5.10)._ 1. _Suppose_ \(n\in\Sigma^{+}\) _so that_ \(\frac{\sin(nR)}{nR}>0\)_. Then_ \(\widehat{\alpha}_{n}(R)<0\)_, and_ \(\widehat{\alpha}_{n}(R)\) _(in particular,_ \(\widehat{\alpha}_{l}\)_) is monotonically increasing with respect to_ \(\kappa\)_, monotonically decreasing with respect to_ \(d\)_, and is not monotone with respect to any of_ \(\rho\)_,_ \(\mu\) _or_ \(\beta\)_;_ 2. _Suppose_ \(n\in\Sigma^{-}\) _so that_ \(\frac{\sin(nR)}{nR}<0\)_. Then_ \(\widehat{\alpha}_{n}(R)>0\)_, and_ \(\widehat{\alpha}_{n}(R)\) _(in particular,_ \(\widehat{\alpha}_{r}\)_) is monotonically increasing with respect to_ \(d\)_, monotonically decreasing with respect to_ \(\kappa\)_, and is not monotone with respect to any of_ \(\rho\)_,_ \(\mu\) _or_ \(\beta\)_._ We again compare to Theorems 5.1-5.2: the monotonicity with respect to \(d\) remains, while quadratic growth of the memory uptake function renders all other previously monotone dependencies nonmonotone. In case (iii), we also have a new parameter \(\kappa\), which is the theoretical maximal memory capacity of the organism. It is biologically reasonable, therefore, for an increase in this memory capacity to decrease the magnitude of advection required to destabilize the constant steady state. In Figure 7, we plot the bifurcation curves \(\alpha_{l}(R)\) and \(\alpha_{r}(R)\). Figures 8-9 again show the solution profiles as done in cases (i) and (ii). Figure 3: The steady state solutions corresponding to Section 5.1 for the positive \(\alpha\) (segregation) case. We test \(\sim 0.1\) before, \(\sim 0.1\) after, and twice the value of the critical value \(\alpha_{r}^{*}\) given in Figure 1. ## 6 Discussion The role of spatial memory in driving the movement of animals has long been of interest to both empirical ecologists Fagan et al. (2013) and mathematical modellers Wang and Salmaniw (2023a). In this work, we consider the incorporation of a nonlocal advection term in the PDE to model movement in response to remembered space use, where the memory map is described dynamically by an additional ODE. The nonlocal advection term is crucial from both biological and mathematical standpoints Painter et al. (2023).
Biologically, it more accurately captures the essence of how organisms sense their surrounding environment and make movement decisions based on that information Martinez-Garcia et al. (2020). This is a useful step forward in our mathematical representation of animal movement ecology, making the modelling formulation more applicable to what is observed in the natural world. Mathematically, however, nonlocality introduces technical difficulties which deserve a careful and robust study. One of the significant contributions of this paper is the establishment of a well-posedness result, proving the existence and uniqueness of a global solution and ruling out the possibility of a finite-time blow-up. In particular, we showed the existence and uniqueness of a solution even when considering the discontinuous top-hat detection function. This result broadens the existing literature in a crucial way, providing answers to some open questions found in Wang and Salmaniw (2023a). Prior to this work, the existence of solutions for this specific class of models remained an open question. Our result not only bridges this gap but also opens doors for more complex models incorporating different types of detection functions. Another significant contribution lies in our robust bifurcation and spectral analyses. Previous efforts, motivated more directly by the ecological application, have often relied solely on a linear stability analysis Briscoe et al. (2002); Potts and Lewis (2016b), which can be insufficient in scenarios where the essential spectrum is nonempty. This does not occur in classical reaction-diffusion systems, but may be the case when nonlocal advection is introduced. Moreover, the comprehensive approach used here reveals a more nuanced understanding of the system's stability, describing more qualitative features of the solution profile near these critical bifurcation points. From this bifurcation analysis, we establish a number of monotonicity and non-monotonicity results for the critical bifurcation parameters, with the monotonic properties depending on the functional form chosen for the memory uptake rate \(g(\cdot)\). In the special cases considered here, a bounded functional form has the most monotonicity properties, while a roughly linear or quadratic functional form appears to remove most of the monotonicity properties. This suggests the existence of critical values at which the magnitude of advection required to destabilize is minimized, an interesting feature that deserves future study. For example, is the critical value dependent on whether the population aggregates or segregates? Our numerical simulations using a pseudo-spectral method further complement our analytical results. These numerical methods are particularly well-suited for dealing with nonlocal advection-diffusion problems, demonstrating some of the interesting dynamics found in these models. While this work provides new insights and fills existing gaps in the literature, further studies are needed to explore more general functional forms, higher-dimensional spaces, and domains with a physical boundary.

Figure 4: The bifurcation curves for Section 5.2 for the perceptual radius \(R\) versus the advection rate \(\alpha\). In the subsequent Figures 5-6, we fix \(R=2.5\) and use the bifurcation curves to test values near and far from the critical values \(\alpha_{r}^{*}\) and \(\alpha_{\ell}^{*}\) where the constant steady state is expected to be destabilized.
From a biological perspective, there are various aspects to memory at play that are not modelled here Fagan et al. (2013). In reality, animals' advective tendencies will not simply be towards (or away from) areas they have previously visited. Rather, they will assess the quality of those places - e.g. whether they contain access to food or shelter, or whether they have had aggressive or favourable encounters there - and adjust their advective tendencies accordingly Lewis et al. (2021). Our work paves the way for analysing these more detailed and realistic memory effects. One tricky, yet important, feature will be the inclusion of heterogeneous landscapes, for example where some areas are better than others for foraging or hiding from predators Van Moorter et al. (2009); Merkle et al. (2014). Analysis of nonlinear PDEs often takes place on a homogeneous environment, but to connect better to the ecological community, theory on pattern formation in heterogeneous environments will be of fundamental importance Krause et al. (2020). Additionally, it will be important to account for between-population interactions via multi-species models with explicit inclusion of memory processes Potts and Lewis (2019). A possible way into this would be to analyse existing models of territory formation, some of which are simply multi-species extensions of the model analysed here Potts and Lewis (2016a,b), by placing these on solid mathematical foundations through developing existence theory, and gaining greater insights into territorial pattern formation through rigorous spectral and bifurcation analyses.

Figure 5: The steady state solutions corresponding to Section 5.2 for the negative \(\alpha\) (aggregation) case. We test \(\sim 0.1\) before, \(\sim 0.1\) after, and twice the value of the critical value \(\alpha_{\ell}^{*}\) given in Figure 4.

## 7 Acknowledgments The authors thank Mark Lewis for his insightful comments on model development. DL was partially supported by a HIT PhD Scholarship and the University of Alberta. YS is supported by NSERC grant PDF-578181-2023. JRP acknowledges support of Engineering and Physical Sciences Research Council (EPSRC) grant EP/V002988/1. JS is partially supported by the U.S. National Science Foundation grants OCE-2207343 and DMS-185359. HW gratefully acknowledges support from the Natural Sciences and Engineering Research Council of Canada (Discovery Grant RGPIN-2020-03911 and NSERC Accelerator Grant RGPAS-2020-00090) and the Canada Research Chairs program (Tier 1 Canada Research Chair Award).
2302.02161
Reflective ability and its energy dependence
We introduce the notion of reflective ability and discuss its energy dependence using rational form of unitarization. Correspondence with the phase of exponential unitarization is traced. Increase of the reflective ability of interaction region starts at the LHC energies. Numerical estimates are given.
S. M. Troshin, N. E. Tyurin
2023-02-04T13:21:09Z
http://arxiv.org/abs/2302.02161v2
# Reflective ability and its energy dependence ###### Abstract We introduce the notion of reflective ability and discuss its energy dependence using rational form of unitarization. Correspondence with the phase of exponential unitarization is traced. Increase of the reflective ability of interaction region starts at the LHC energies. Numerical estimates are given. Keywords: Elastic scattering; Unitarity; Reflective ability ## 1 Introduction. Reflective ability Introduction of the notion of reflective ability aims to clarify and provide another test for the asymptotics of hadron interactions. The study of the dynamics of elastic hadron scattering is considered a relevant tool for achieving that purpose. The elastic scattering element of the \(S\)-matrix in the impact parameter space, denoted in what follows as \(S\) (\(S\equiv S(s,b)\)), is a real function in the case of a purely imaginary approximation for the elastic scattering amplitude. So, this function can be either positive or negative. Nonnegative values of \(S\) correspond to absorptive scattering, while negative ones were interpreted as a result of reflective scattering [1], by analogy with optics [2]. The reflective scattering mode can be associated with the presence of a central core in the hadron structure. Unitarity provides the constraint \(|S|\leq 1\). It is quite natural to introduce the reflective ability, when \(S<0\), as \[R(s)\equiv|S(s,0)|. \tag{1}\] This definition corresponds to the absorptive ability definition when \(S\geq 0\). Reflective ability is due to central geometric elastic scattering, while absorptive ability is about peripheral shadow elastic scattering [3]. We consider here the reflective ability \(R(s)\) on the basis of rational unitarization and demonstrate that a monotonic increase is its most probable behavior. ## 2 Rational unitarization vs exponential one Unitarization of an "input amplitude" is a commonly used approach to obtain a final amplitude consistent with unitarity. A recent discussion of the unitarization method and the choice of input for this procedure has been given in [4]. Indeed, unitarization is a mapping of some input quantity onto the unitarity circle. We discuss two kinds of mapping, the rational and the exponential ones. There are also hybrid approaches, which we will not consider here. Under the rational unitarization, the function \(S(s,b)\) is expressed through the "input amplitude" \(U(s,b)\) by the following ratio: \[S(s,b)=\frac{1+iU(s,b)}{1-iU(s,b)}, \tag{2}\] which is a one-to-one transform between the functions \(S\) and \(U\). Eq. (2) maps the upper half-plane of the complex domain of \(U\)-variation allowed by unitarity onto the unit circle of the explicitly unitary \(S\)-variation domain (Fig. 1). In mathematics, it is known as the Cayley transform, a particular case of the Mobius transform. It should be noted that the exponential unitarization \[S(s,b)=\exp[2i\delta(s,b)] \tag{3}\] with \(\delta(s,b)\equiv\delta_{R}(s,b)+i\delta_{I}(s,b)\) is not a one-to-one transform. It corresponds to \(S\neq 0\) for any finite value of \(\delta\) (Fig. 2). Both mappings are conformal, and due to unitarity \(\mbox{Im}U\geq 0\) as well as \(\mbox{Im}\delta\geq 0\). The expression for the phase \(\delta(s,b)\) through the function \(U(s,b)\) and the respective phase features have been discussed in [1].
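As a quick sanity check of the mapping (2), the following minimal sketch (our illustration, not code from the paper) verifies numerically that inputs with \(\mbox{Im}U\geq 0\) are mapped into the closed unit disk \(|S|\leq 1\), as unitarity requires:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points from the upper half-plane, Im U >= 0 (allowed by unitarity)
U = rng.normal(size=1000) + 1j * np.abs(rng.normal(size=1000))

# Cayley transform of Eq. (2): maps the upper half-plane onto the unit disk
S = (1 + 1j * U) / (1 - 1j * U)

assert np.all(np.abs(S) <= 1 + 1e-12)  # |S| <= 1 holds for every sampled point
```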
Thus, we consider the rational unitarization since it is a simple, one-to-one transform, provides a smooth transition to the reflective scattering mode, and is consistent with physically motivated damping of radiation [5]. The corresponding phase behavior is also traced. We refer to the purely imaginary case of the scattering, i.e., under the replacement \(U\to iU\) we rewrite Eq. (2) in the simplified form \[S(s,b)=\frac{1-U(s,b)}{1+U(s,b)}. \tag{4}\] The basic principles of the \(U\)-matrix construction have recently been discussed in [4]. Arguments based on the geometrical models of hadron collisions were also given there. The factorized form \[U(s,b)=g(s)\omega(b), \tag{5}\] with the dependence \(g(s)\sim s^{\lambda}\), was proposed. The function \(\omega(b)\) can be interpreted as a convolution of the two hadron matter distributions in the transverse plane, with the dependence \(\omega(b)=\exp(-\mu b)\), \(\mu>0\), in the geometrical models of hadron collisions.

Figure 1: One-to-one mapping by the rational unitarization, Eq. (2). The LHC energies correspond to \(S\simeq 0\).

Figure 2: One-way mapping by the exponential unitarization, Eq. (3).

## 3 Energy dependence of reflective ability The reflective scattering (\(S<0\)) appears first at \(b=0\), i.e., when \(U(s,0)>1\) in Eq. (4) at \(s>s_{r}\) as the energy increases, and the reflective ability \(R(s)\) has the form \[R(s)=[g(s)-1]/[g(s)+1]. \tag{6}\] The respective energy dependence of the function \(R(s)\) at energies beyond the LHC energy region is given by Eq. (6). Since the function \(g(s)\) increases with energy, the reflective ability \(R(s)\) becomes nonvanishing and also increases, with \(R(s)\to 1\) and \(dR(s)/ds\to 0\) at \(s\rightarrow\infty\). The reflective scattering requires \(\delta_{R}(s,0)=\pi/2\) at \(s>s_{r}\). Fig. 3 shows the critical behavior of \(\delta_{R}(s,0)\). The dependence of the phase imaginary part, Eq. (3), in the reflective region is related to the reflective ability \(R(s)\) of the interaction region: \[\delta_{I}(s,0)=-\frac{1}{2}\ln R(s). \tag{7}\] Its decreasing energy dependence results from the increase of the reflective ability \(R(s)\) and is sketched in Fig. 4. The increase of the reflective ability implies a slowdown of the mean multiplicity growth at energies beyond the LHC [7], with a respective enhancement of the elastic scattering at large transferred momenta (deep-elastic scattering) [3]. It also positively correlates with the energy dependence of the ratio of the experimentally observable quantities \(Y(s)\equiv\sigma_{tot}(s)/16\pi B(s)\), where \(\sigma_{tot}(s)\) is the total cross-section and \(B(s)\) is the slope of the differential cross-section of elastic scattering at \(-t=0\). Both these quantities are integrals over the impact parameter, asymptotically proportional to \(\ln^{2}s\). The ratio \(Y(s)\) was suggested to be interpreted as an effective interaction intensity (since the ratio effectively eliminates the effect of the interaction radius increase in the total cross-section growth) and is expected to cross the black disk limit value \(1/2\) at \(\sqrt{s}\simeq 10^{8}\) GeV [8]. Its observed increasing energy dependence at available energies is just another indication of the remoteness of the asymptotic regime and is connected to the increase of the reflective ability.
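For a rough numerical illustration of Eqs. (6) and (7), one can take the factorized input (5) at \(b=0\) with an assumed power-like normalization \(g(s)=(s/s_{r})^{\lambda}\), so that \(g(s_{r})=1\); the values of \(\lambda\) and \(\sqrt{s_{r}}\) below are assumptions for illustration only:

```python
import numpy as np

lam = 0.25        # assumed growth exponent in g(s) ~ s**lam
sqrt_s_r = 13e3   # GeV; the reflective mode sets in around the LHC energy

def R_reflective(sqrt_s):
    """Reflective ability R(s) from Eq. (6), for s > s_r."""
    g = (sqrt_s / sqrt_s_r) ** (2 * lam)   # g(s) = (s / s_r)**lam
    return (g - 1) / (g + 1)

def delta_I(sqrt_s):
    """Imaginary part of the phase at b = 0, Eq. (7)."""
    return -0.5 * np.log(R_reflective(sqrt_s))

for sqrt_s in [2e4, 1e6, 1e10]:  # GeV
    print(f"sqrt(s) = {sqrt_s:.0e} GeV: R = {R_reflective(sqrt_s):.3f}, "
          f"delta_I = {delta_I(sqrt_s):.3f}")
```

The output shows the qualitative picture described above: \(R(s)\) rises monotonically toward unity while \(\delta_{I}(s,0)\) decreases toward zero.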
The connection of the reflective ability \(R(s)\) with the function \(Y(s)\) turns into an approximate equality in the case of a Fermi-like impact parameter form (flat at small and moderate impact parameters and zero at large ones) of the elastic profile function. Such a form generates multiple dips and bumps in the respective differential cross-section of nucleons' elastic scattering at very high energies. We can say that this form is common for the elastic scattering of nuclei at current energies [9]; nuclei elastic scattering at current energies thus provides a window onto hadron elastic scattering at asymptotic energies. One can estimate the energy values at which the reflective ability becomes close to its asymptotic value - unity - where the relation \[R(s)\simeq Y(s) \tag{8}\] takes place. It should happen at energies around the value of \(\sqrt{s}\simeq 10^{10}\) TeV, which corresponds to the asymptotic scattering regime where the asymptotic theorems, in particular \(\sigma_{tot}(s)\sim\ln^{2}s\), are fulfilled. Eqs. (4) and (5) imply that the reflective domain of the interaction region enlarges with energy, spreading into the periphery of impact parameters at a rate of \(\sim\ln s\). It should be noted that the absorptive scattering mode does not imply the appearance of a reflective domain in the hadron interaction picture at any energy. This is not surprising, since absorption does not cover the whole region of amplitude variation allowed by unitarity [10].

Figure 3: Critical behavior of the scattering phase real part \(\delta_{R}(s,0)\) with energy.

## Conclusion Quantitative analysis of the LHC experimental data provides evidence for the appearance of the reflective ability (\(S<0\)) at the energy \(\sqrt{s}=13\) TeV [11], which corresponds to the energy value \(\sqrt{s_{r}}\). Indirect information on the reflective ability of the interaction region can, in principle, be extracted from the differential cross-section of the deep-elastic scattering. The scattering in the deep-elastic region, i.e., at large transferred momenta at sufficiently high energies, is sensitive to the reflective ability of the interaction region arising due to the presence of an inner core in the proton structure. The decrease of the imaginary part \(\delta_{I}(s,0)\) with energy should not be interpreted as an increase of the interaction region transparency (due to deepening of the local minimum of the inelastic overlap function at \(b=0\)); instead, this deepening means an increase of the reflective ability, which corresponds to repulsion in hadron interaction dynamics due to the presence of an inner hadron core (it, with reservations, corresponds to the negative Wigner time delay \(Q\) [5, 12]). We turn to the interpretation of the reflective ability increase. It was proposed [13] to associate the appearance of the reflective scattering mode with the formation of a color-conducting medium in the intermediate state of the hadron collision, occurring at sufficiently high energies and small impact parameters, regarding temperature as depending on the initial energy and impact parameter of the collision. This medium is treated as consisting of free colored objects. Therefore, a color-conducting medium emerges instead of the color-insulating medium present at lower energies.

Figure 4: Energy dependence of the scattering phase imaginary part \(\delta_{I}(s,0)\) in the reflective region at \(s>s_{r}\).
Using an analogy with the scattering of electromagnetic waves by metals1, one can correlate the energy increase of the reflective ability with an increase of the color conductivity of the deconfined medium. The above analogy is based on the similarity of gluons and photons, i.e., on the replacement of the electromagnetic field of QED by a chromomagnetic field of QCD. Footnote 1: The reflective ability of a metal is proportional to its electric conductivity due to the presence of free electrons. The appearance of a color-conducting medium is associated with the critical dependence of the phase real part \(\delta_{R}(s,0)\), while _increase of the reflective ability_ is associated with the decrease of the imaginary part of the scattering phase \(\delta_{I}(s,0)\) at energies in the region just beyond the LHC.
2303.15734
Adaptive Background Music for a Fighting Game: A Multi-Instrument Volume Modulation Approach
This paper presents our work to enhance the background music (BGM) in DareFightingICE by adding an adaptive BGM. The adaptive BGM consists of five different instruments playing a classical music piece called "Air on G-String." The BGM adapts by changing the volume of the instruments. Each instrument is connected to a different element of the game. We then run experiments to evaluate the adaptive BGM by using a deep reinforcement learning AI that only uses audio as input (Blind DL AI). The results show that the performance of the Blind DL AI improves while playing with the adaptive BGM as compared to playing without the adaptive BGM.
Ibrahim Khan, Thai Van Nguyen, Chollakorn Nimpattanavong, Ruck Thawonmas
2023-03-28T05:08:55Z
http://arxiv.org/abs/2303.15734v3
# Adaptive Background Music for a Fighting Game: A Multi-Instrument Volume Modulation Approach ###### Abstract This paper presents our work to enhance the background music (BGM) in DareFightingICE by adding an adaptive BGM. The adaptive BGM consists of five different instruments playing a classical music piece called "Air on G-String." The BGM adapts by changing the volume of the instruments. Each instrument is connected to a different element of the game. We then run experiments to evaluate the adaptive BGM by using a deep reinforcement learning AI that only uses audio as input (Blind DL AI). The results show that the performance of the Blind DL AI improves while playing with the adaptive BGM as compared to playing without the adaptive BGM. adaptive BGM, rule-based adaptive background music, background music. ## I Introduction As the popularity of video games keeps growing, video game developers are trying to improve their games to leave a positive impression on the players by enhancing their experience [1]. These improvements take many different forms, such as gameplay-related changes, more visually pleasing graphics, or more enjoyable music and sound effects. Regardless of the type of improvement, the ultimate goal is to enhance the overall experience of the players. Our work is based on the DareFightingICE Competition, which has two tracks: the Sound Design Track and the AI Track [2]. It ran at the 2022 IEEE Conference on Games (CoG 2022). The platform for the competition is DareFightingICE - an enhanced version of FightingICE [3] - with an enhanced sound design and a new AI interface that can give audio data to AIs. This research focuses on video game music and how to use it to enhance players' experience. An immersive gaming experience is greatly aided by BGM [5, 6]. In fighting games, BGM should change in response to the dynamic action, creating suspense and excitement as the players engage in combat. Fighting game BGM is typically implemented using pre-composed songs that loop, which fall short of accurately capturing the intensity of combat. As a result, the target genre in this research is fighting games. In fighting games, players go against another player or a computer player in a one-versus-one fight, using different attacks and abilities to overcome the opponent. These fighting games are two-and-a-half-dimensional, which means that the players can only move in two dimensions (left or right, while in some fighting games the players can perform an attack or dodge that takes them out of 2D). In addition, because fighting games are fairly simple compared to other genres of video games, they are open to a larger audience. This research uses DareFightingICE. The goal of this research is to create an adaptive BGM by modifying the BGM of DareFightingICE, and to evaluate the performance of the adaptive BGM by using a deep learning AI that only uses audio as input (Blind DL AI) [4]. The contributions of our work are as follows: 1. We are the first group to focus on adaptive music in fighting games. 2. We are also the first group to use multiple instruments in the adaptation of a BGM. 3. Our research focuses on giving players information about the state of the game through the BGM. ## II Related Work Since our research touches on two topics, audio in video games and adaptive audio in video games, this section is divided into two parts. ### _Sound in Video games_ The significance of music in video games has been extensively studied.
Video game music has been proven to have an impact on everyday living, attitude, and other aspects [7]. A person's attachment to the music featured in a game can result in these effects. It has been established that voice-overs in video games - audio dialogue acted by the player character or by non-playable characters - are more entertaining for the players [8]. These voice-overs also make the information given to the game's players easier to recall. The background soundtrack also plays a significant part in a game. It was found that players did better when there was background music playing than when there was not [9]. Similarly, it has been noted that for video games to be engaging for players, the music performed within them must match the atmosphere or tone of the game [10]. A poor illustration of this would be to play soothing music as the game reaches its conclusion. Moving from background music to more focused sound effects and auditory signals, it has been observed that directional or 3D sound effects give players more information about their surroundings, including where other players are located [11, 12]. Steps and other types of actions are represented by these 3D sound elements. This section's focus thus far has been on audio in video games. In contrast, there has been very little study of sound designs in video games. A sound design refers to a collection of sound effects together with the source code for its timing-control algorithm. Even fewer studies have produced findings that are helpful in developing an effective sound design. Despite the lack of study, sound designs for game categories like first-person shooters and real-time strategy have been suggested [13]. For their respective game categories, these two sound designs have distinctive sound effects. ### _Adaptive Audio in Video Games_ There has been some research on adaptive music in video games in the past. One study made an adaptive music system keeping players' emotions in mind, and it was shown to be better than the original music in the game [14]. Another study on adaptive music found that players value adaptive music over linear music [15]. Lastly, another study on adaptive music changed the tempo of the music by taking into account players' actions and the state of the game, and found that doing so enhanced players' experience by making the game more immersive and enjoyable [16]. Previous research has shown the importance of both music and adaptive music in video games. However, there has been little to no research when it comes to adaptive music in fighting games. ### _DareFightingICE Platforms_ The adaptive BGM is created in the aforementioned fighting game platform DareFightingICE. Since there has been a recent update to the DareFightingICE platform from version 5.2 (used in the 2022 DareFightingICE Competition) to version 6.0, we include both versions in our research. The reason is to show that our adaptive BGM works well on both of them. #### Ii-C1 DareFightingICE 5.2 DareFightingICE version 5.2 was the official version of the DareFightingICE Competition, which was first held at CoG 2022. This version was an upgrade of the FightingICE platform and added an enhanced sound design with players without vision in mind. The platform also provided audio data to AIs, leading to the new Blind AI Track of the competition.
Version 5.2 added a new function, getAudioData, to the original interface; it provides audio data at each frame of the game, making it easier for AIs to access audio data. At each frame, audio data are sampled with a length of 16.67 ms; notice that DareFightingICE runs at 60 frames per second, as does FightingICE. The stereo sound in the 2D format used in DareFightingICE helps accurately depict the location of both players. Consequently, a two-channel sound format is offered to enable AIs to perceive audio that might aid them in locating the positions of both players and those of projectiles. #### Ii-C2 DareFightingICE 6.0 This version of DareFightingICE has revamped the communication interface between AIs and the game system by using the open-source remote procedure call framework gRPC instead of Py4J. This change has resulted in up to 65% reduced latency, improved stability, and the elimination of missed frames compared to the Py4J interface [17]. This version will be used in the 2023 DareFightingICE Competition, with another editable source code file called "Play.java" included for the Sound Design Track. This file is made available to give users the ability to modify the BGM at run time. ## III Rule-Based Adaptive BGM In this research, we propose an adaptive BGM that adapts to player actions and the players' in-game positions. The proposed adaptive BGM consists of five different instruments playing a classical music piece called "Air on G-String." This music was selected for this research because we found the contrast of fast-paced action with calming music to be a good combination, a technique used in some famous and popular movies such as X-Men: Days of Future Past (the Quicksilver bullet-stopping scene). The five different instruments are the piano, the cello, the flute, the violin, and the ukulele. The BGM adjusts by altering the loudness of the instruments. This adjustment happens when the game elements to which these instruments are connected change. The elements in use are both players' health points (HP), energy points (EP), and the distance between the two players (PD). The HP is the number of hits a player can take before losing, and the EP is the energy in points that the players need to perform different attacks. The design of the proposed adaptive BGM is illustrated in Fig. 1. In Fig. 1, there are five instruments playing together to compose the BGM, and every instrument is connected to a different game element. The violin and the piano are connected to player one's HP and EP, respectively; and the flute and the ukulele are connected to player two's HP and EP, respectively. Lastly, the cello is connected to PD. In Fig. 2, the instruments connected to the HP of both players are at maximum volume when HP is maximum, and the volume is turned down as the HP gets lower. The ones connected to players' EP work in the same way, i.e., when EP is low the volume is low and when EP is high the volume is high. For the cello, the volume is at maximum when PD is close to zero, meaning the players are very close to each other, and it becomes lower the further away the players move from each other. The proposed adaptive BGM is designed this way to give useful information to both human and AI players. The maximum HP for one player per round is 400, the maximum EP is 300, and the maximum PD is 800 pixels horizontally. The HP and EP of both players and their PD are divided into different levels to decrease or increase the volume of the instruments gradually.
The levels of the instruments, as well as of the players' HP, EP, and PD, are empirically selected due to the lack of existing research when it comes to adaptive BGM using volume modulation. The levels for HP are 400, 300, 250, 200, 150, 100, and 50, which are connected to the instrument's volume levels of 75%, 60%, 55%, 40%, 35%, 25%, and 10%, respectively. For the EP of both players, the instruments' volume levels are the same as for HP, but the EP levels are 300, 250, 200, 150, 100, and 50. For PD, the levels are 800, 600, 500, 400, 300, 60, and 0, which are connected to the instrument's volume levels of 75%, 60%, 50%, 40%, 30%, 20%, and 10%. The reason for the abrupt change from 300 to 60 is for the players without vision to know without a doubt that the opponent player is near them. Lastly, participants in the 2023 DareFightingICE Competition who want to modify or change the adaptive BGM1 just have to change the five instruments playing the BGM; if they want to use fewer than five instruments, they can decide which rules to ignore and which to select. Figure 3 shows a code snippet that they can modify. In Fig. 3, the play2() function takes five parameters as follows: Footnote 1: code of the adaptive BGM: [https://tinyurl.com/adaptiveBGM](https://tinyurl.com/adaptiveBGM) 1. The first parameter is the audio source of the instrument. 2. The second parameter is the audio buffer that stores the audio file that will play on the audio source. 3. The third parameter is the horizontal location in the game where the audio source will play. 4. The fourth parameter is the vertical location in the game. 5. The last parameter indicates whether the audio source should be looped or not. ## IV Evaluation For the evaluation of the adaptive BGM, we conducted an objective evaluation, the details of which follow. For the objective evaluation, we use the aforementioned Blind DL AI. Our method for this evaluation is to train the Blind DL AI with and without the adaptive BGM and then compare the performance of the AIs. We hypothesize that if the performance of the Blind DL AI is better with the adaptive BGM, then the adaptive BGM is giving useful information, and since the AI we are using can only take audio data as input, we believe that players without vision will also be able to use the same information to their advantage. To test our adaptive BGM, we used the submissions to the 2022 DareFightingICE Competition Sound Design Track and trained the Blind DL AI as mentioned above. We trained the Blind DL AI for 900 rounds on DareFightingICE 5.2 (1 game has 3 rounds) against MCTSAI65, a weakened version of a sample Monte-Carlo tree search AI [18]. For DareFightingICE 6.0, we implement a new weak version of the MCTS AI, which fixes the problem of MCTSAI65 performing differently in different environments, and call it MCTSAI23i. This version of the MCTS AI has similar performance to the previous MCTSAI65 against our Blind DL AI, as shown in Table I. We trained the Blind DL AI against MCTSAI23i for 900 rounds for each sound design. The difference between MCTSAI65 and MCTSAI23i is that for MCTSAI65 the MCTS execution time was changed to 6.5 ms, while for MCTSAI23i we limit the number of iterations MCTS can do for each frame. We introduce this change because we found that MCTSAI65 performed differently in different environments.

Fig. 1: Rule-Based Adaptive BGM.

Fig. 2: Adaptive BGM Rules.

Fig. 3: Adaptive BGM Code Example.
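As a complement to the Java snippet in Fig. 3, the following minimal Python sketch (our illustration, not the platform's code; the helper name map_to_volume is ours) shows how the HP thresholds described in Section III translate into instrument volumes; EP and PD use analogous tables:

```python
# HP thresholds and the corresponding volume levels from Section III
HP_LEVELS  = [400, 300, 250, 200, 150, 100, 50]
HP_VOLUMES = [0.75, 0.60, 0.55, 0.40, 0.35, 0.25, 0.10]

def map_to_volume(hp, levels=HP_LEVELS, volumes=HP_VOLUMES):
    """Return the volume for the highest threshold that hp still reaches."""
    for level, vol in zip(levels, volumes):
        if hp >= level:
            return vol
    return volumes[-1]  # below the lowest threshold, stay at the minimum

assert map_to_volume(400) == 0.75  # full HP -> maximum volume
assert map_to_volume(120) == 0.25  # 100 <= 120 < 150
assert map_to_volume(10) == 0.10   # nearly defeated -> quietest
```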
As for a subjective evaluation, i.e., an evaluation of our adaptive BGM by human players: since the performance of the Blind DL AI was similar to the performance of human players in previous work [2], we decided to go with the objective evaluation only. We believe that both players with and without vision will also be able to use the same information to their advantage. We used the same environments as in the 2022 DareFightingICE Competition. More specifically, six computers were used that have the same specification, i.e., CPU: Intel(R) Xeon(R) W-2135 @ 3.70 GHz, RAM: 16 GB, GPU: NVIDIA Quadro P1000 with 4 GB VRAM, and OS: Windows 10. ### _Results_ We conducted experiments on three different sound designs: the default, runner-up, and winner sound designs of the 2022 DareFightingICE Competition, on both versions of DareFightingICE. The experiment structure is that the Blind DL AI fights against MCTSAI65 for version 5.2 and against MCTSAI23i for version 6.0 for each sound design, first with the linear BGM and then with the BGM replaced by the adaptive one. The AI is trained from scratch for 900 rounds, and since the Blind DL AI has three different audio encoders [3] as options, the experiment was run on each encoder per sound design: one-dimensional Convolutional Neural Network (1D-CNN), Fast Fourier Transform (FFT), and Mel-Spectrogram (Mel-Spec). We then evaluate the performance of each trained Blind DL AI by making it fight against each of the aforementioned opponent AIs for 90 rounds. The ratio of the number of wins2 over 90 rounds, Eqn. (1), and the average HP difference at the end of a round between the trained AI and its opponent, Eqn. (2), are then calculated. The equations and details above are taken from previous work [4]. Footnote 2: In the game, the round winner is either the one with a non-zero HP while its opponent’s HP has reached zero or the one with the higher HP when the round-length limit of 60 s has been reached. \[win_{ratio}=\frac{\text{winning rounds}}{\text{total rounds}} \tag{1}\] \[avgHP_{diff}=\frac{\sum_{r}\left(HP_{r}^{self}-HP_{r}^{opp}\right)}{\text{total rounds}} \tag{2}\] As shown by the results in Tables II-VII, the Blind DL AI performs better with the adaptive BGM in all sound designs across both versions of DareFightingICE. For each encoder, the Blind DL AI also performs better with our adaptive BGM. This supports our hypothesis that the adaptive BGM is giving useful information. For version 5.2, the best encoder is 1D-CNN for the winner sound design. It is undefeated in testing. This encoder is also the most stable and consistent among all other encoders for all the sound designs for version 5.2. Version 6.0's best encoder is FFT for the runner-up sound design, as it has the highest performance among all encoders for all sound designs. We call this encoder the best for version 6.0 because it is also consistent in testing. ### _Behavior of Blind DL AIs_ This section describes the behavior3 of different Blind DL AIs for different sound designs in both versions. Only the behavior of Blind DL AIs trained on the adaptive BGM is described. For each sound design, the encoder with the best result is chosen. The summary of the different behaviors is shown in Table VIII. Footnote 3: Link to the videos of Blind DL AIs: [https://tinyurl.com/abm4fg](https://tinyurl.com/abm4fg).
Figure 4 shows a sample of different time steps from a fight between the best Blind DL AI from version 6.0 and MCTSAI23i, and it also shows the change in volume of the different instruments at these time steps. #### Vi-B1 DareFightingICE 5.2 The behavior of the Blind DL AIs trained on DareFightingICE version 5.2 is described in this section. Default Sound Design - For the default sound design, the best encoder is FFT. This Blind DL AI tends to play a moderate style in terms of attack and defense. It tries to jump to avoid the fireball attack from the opponent AI. It attacks by jumping and trying to hit the opponent with kicks. The best move this Blind DL AI makes is countering MCTSAI65's jump attacks by performing the uppercut move. This uppercut move seems to be the most used move by this AI. Runner-up Sound Design - For the runner-up sound design, the best encoder is 1D-CNN. This Blind DL AI plays in a very defensive style. It tends not to move from its location much. It lets the opponent come closer and then attacks by crouching and punching. This combo of crouch punches seems to be the most used attack by this AI. It seems that through training, the AI has learned that crouch punching is effective against MCTSAI65. Although it does not try to avoid the fireball attack from the opponent, it can still overwhelm the opponent by trapping them in a corner and spamming crouch punches. Winner Sound Design - For the winner sound design, the best encoder is 1D-CNN. Like the runner-up sound design AI, this AI tends to play defensively. It does not move much apart from some jump attacks. Its favorite combination of attacks is crouch kicks, as this is the most used attack by the AI. This AI is the strongest among all of the Blind DL AIs trained. It tries to avoid the fireball attack by jumping. Also, it counters MCTSAI65's jumping attack with its own jumping attack, or, if it has enough energy, it performs the lightning uppercut. This AI is undefeated in testing. #### Vi-B2 DareFightingICE 6.0 The behavior of the Blind DL AIs trained on DareFightingICE version 6.0 is described in this section. Default Sound Design - For the default sound design in DareFightingICE version 6.0, the best encoder is FFT. This Blind DL AI is the weakest among the selected AIs. It tends to play a more aggressive role. It attacks by jumping toward the opponent. The most used attack by this AI is the heavy punch. It tends to do a quick sit-stand movement repeatedly. It also does not try to avoid the fireball attack. What makes this AI unique is that it performs the throw attack, which lifts the opponent into the air. Runner-up Sound Design - For the runner-up sound design in DareFightingICE version 6.0, the best encoder is FFT. This Blind DL AI is the best for DareFightingICE version 6.0. Its behavior is similar to the default sound design AI of DareFightingICE version 6.0. The difference is that this AI tends to attack slowly compared to the other Blind DL AIs. The most used attack combination of this AI is a punch followed by a kick. It also does the throw move and the lightning uppercut move. It does not try to avoid the fireball attack. Winner Sound Design - For the winner sound design in DareFightingICE version 6.0, the best encoder is 1D-CNN. This Blind DL AI is also defensive. Its most used attack is the uppercut. This AI tries to defend itself from MCTSAI23i's jump attacks with either uppercuts or jumping punches. It scarcely uses the lightning uppercut attack. It also does not try to avoid the fireball attack.
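For concreteness, the two evaluation metrics of Eqns. (1) and (2) can be sketched as follows (our illustration, not the competition's evaluation code; the round-result format is an assumption):

```python
def win_ratio(results):
    """Eq. (1): fraction of rounds won; results is a list of (self_hp, opp_hp)
    pairs recorded at the end of each round."""
    wins = sum(1 for self_hp, opp_hp in results if self_hp > opp_hp)
    return wins / len(results)

def avg_hp_diff(results):
    """Eq. (2): average end-of-round HP difference, trained AI minus opponent."""
    return sum(self_hp - opp_hp for self_hp, opp_hp in results) / len(results)

rounds = [(120, 0), (0, 35), (60, 10)]  # toy example: three rounds
print(win_ratio(rounds))    # 2/3: two rounds won
print(avg_hp_diff(rounds))  # (120 - 35 + 50) / 3 = 45.0
```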
## V Conclusions This paper presented a rule-based adaptive background music (BGM) system that consists of five different instruments playing a classical music piece called "Air on G-String." The proposed adaptive BGM adapts by changing the volume of the instruments. Each instrument is connected to a different element of the game. The paper also showed that the performance of a deep reinforcement learning AI using only audio data as its input, called Blind DL AI, improved while playing with the adaptive BGM as compared to playing without it. We believe that both players with and without vision will also be able to use the same information given by the proposed adaptive BGM to their advantage, just like the Blind DL AI does. In the future, we are planning to use a deep learning approach for the adaptation of the BGM. This is because in the current work we used the performance of the Blind DL AI when fighting against a version of a Monte-Carlo tree search AI to assess our adaptive BGM, and we did not do any aesthetic evaluation of it. We believe that an adaptive BGM using deep learning would achieve better performance from the Blind DL AI as compared to the rule-based one, as we plan to use the Blind DL AI's performance as a part of the reward function. As for aesthetic evaluation, in our current rule-based adaptive BGM we decided the rules taking the resulting BGM's aesthetics into account; however, with a deep learning approach, we anticipate that an explicit aesthetic evaluation of the BGM would be needed.
2310.01679
Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features
The vast majority of techniques to train fair models require access to the protected attribute (e.g., race, gender), either at train time or in production. However, in many important applications this protected attribute is largely unavailable. In this paper, we develop methods for measuring and reducing fairness violations in a setting with limited access to protected attribute labels. Specifically, we assume access to protected attribute labels on a small subset of the dataset of interest, but only probabilistic estimates of protected attribute labels (e.g., via Bayesian Improved Surname Geocoding) for the rest of the dataset. With this setting in mind, we propose a method to estimate bounds on common fairness metrics for an existing model, as well as a method for training a model to limit fairness violations by solving a constrained non-convex optimization problem. Unlike similar existing approaches, our methods take advantage of contextual information -- specifically, the relationships between a model's predictions and the probabilistic prediction of protected attributes, given the true protected attribute, and vice versa -- to provide tighter bounds on the true disparity. We provide an empirical illustration of our methods using voting data. First, we show our measurement method can bound the true disparity up to 5.5x tighter than previous methods in these applications. Then, we demonstrate that our training technique effectively reduces disparity while incurring lesser fairness-accuracy trade-offs than other fair optimization methods with limited access to protected attributes.
Hadi Elzayn, Emily Black, Patrick Vossler, Nathanael Jo, Jacob Goldin, Daniel E. Ho
2023-10-02T22:30:25Z
http://arxiv.org/abs/2310.01679v1
# Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features ###### Abstract The vast majority of techniques to train fair models require access to the protected attribute (e.g., race, gender), either at train time or in production. However, in many important applications this protected attribute is largely unavailable. In this paper, we develop methods for measuring and reducing fairness violations in a setting with limited access to protected attribute labels. Specifically, we assume access to protected attribute labels on a small subset of the dataset of interest, but only probabilistic estimates of protected attribute labels (e.g., via Bayesian Improved Surname Geocoding) for the rest of the dataset. With this setting in mind, we propose a method to estimate bounds on common fairness metrics for an existing model, as well as a method for training a model to limit fairness violations by solving a constrained non-convex optimization problem. Unlike similar existing approaches, our methods take advantage of contextual information - specifically, the relationships between a model's predictions and the probabilistic prediction of protected attributes, given the true protected attribute, and vice versa - to provide tighter bounds on the true disparity. We provide an empirical illustration of our methods using voting data. First, we show our measurement method can bound the true disparity up to 5.5x tighter than previous methods in these applications. Then, we demonstrate that our training technique effectively reduces disparity while incurring lesser fairness-accuracy trade-offs than other fair optimization methods with limited access to protected attributes. ## 1 Introduction In both the private and public sectors, organizations are facing increased pressure to ensure they use equitable machine learning systems, whether through legal obligations or social norms (FCRA; ECOA; U.S. E.O., 2021; House, 2022; Hill, 2020). For instance, in 2022, Meta Platforms agreed to build a system for measuring and mitigating racial disparity in advertising to settle a lawsuit filed by the U.S. Department of Housing and Urban Development under the Fair Housing Act (Austin, Jr., 2022b; Isaac, 2022). Similarly, recent Executive Orders in the United States (U.S. E.O., 2021; U.S. E.O., 2023) direct government agencies to measure and mitigate disparity resulting from or exacerbated by their programs, including in the "design, develop[ment], acqui[sition], and us[e] [of] artificial intelligence and automated systems" (U.S. E.O., 2023). Yet both companies (Andrus et al., 2021) and government agencies (U.S. E.O., 2021) rarely collect or have access to individual-level data on race and other protected attributes on a comprehensive basis. Given that the majority of algorithmic fairness tools which could be used to monitor and mitigate racial bias require demographic attributes (Bird et al., 2020; Bellamy et al., 2018), the limited availability of protected attribute data makes assessing algorithmic fairness and training fairness-constrained systems difficult. In this paper, we address this problem by introducing methods for _1) measuring_ fairness violations in, and _2) training_ fair models on, data with limited access to protected attribute labels.
We assume access to protected attribute labels on only a small subset of the dataset of interest, along with probabilistic estimates of protected attribute labels-- for example, estimates generated using Bayesian Improved Surname Geocoding (BISG) (Imai and Khanna, 2016)--for the rest of the dataset. We leverage this limited labeled data to establish whether certain relationships between the model's predictions, the probabilistic protected attributes, and the ground truth protected attributes hold. Given these conditions, our first main result (Theorem 1) shows that we can bound a range of common fairness metrics, from above and below, over the full dataset with easily computable (un)fairness estimators calculated using the _probabilistic_ estimates of the protected attribute. We expound on these conditions, define the fairness estimators, and introduce this result in Section 2. To train fair models, we leverage our results on measuring fairness violations to bound disparity during learning; we enforce the upper bound on unfairness _calculated with the probabilistic protected attribute_ (measured on the full training set) as a surrogate fairness constraint, while also enforcing the conditions required to ensure the estimators accurately bound disparity in the model's predictions (calculated on the labeled subset), as constraints during training. We leverage recent work in constrained learning with non-convex losses (Chamon et al., 2022) to ensure bounded fairness violations with near-optimal performance at prediction time. We note that our data access setting is common across a variety of government and business contexts: first, estimating race using BISG is standard practice in government and industry (CFPB, 2014; Fiscella and Fremont, 2006; Koh et al., 2011; Austin, Jr., 2022a,b). Although legal constraints or practical barriers often prevent collecting a full set of labels for protected attributes, companies and agencies can and do obtain protected attribute labels for subsets of their data. For example, companies such as Meta have started to roll out surveys asking for voluntary disclosure of demographic information to assess disparities (Austin, Jr., 2022a). Another method for obtaining a subset of protected attribute data is to match data to publicly available administrative datasets containing protected attribute labels for a subset of records, as in, e.g. Elzayn et al. (2023). While our approach has stronger data requirements than recent work in similar domains (Kallus et al., 2022; Wang et al., 2020) in that a subset of it must have protected attribute labels, many important applications satisfy this requirement. The advantage to using this additional data is substantially tighter bounds on disparity: in our empirical applications, we find up to 5.5x tighter bounds for fairness metrics, and up to 5 percentage points less of an accuracy penalty when enforcing the same fairness bound during training. 
In sum, we present the following contributions: _1)_ We introduce a new method of bounding ground truth fairness violations across a wide range of fairness metrics in datasets with limited access to protected attribute data (Section 2); _2)_ We introduce a new method of training models with near-optimal and near-feasible bounded unfairness with limited protected attribute data (Section 3); _3)_ We show the utility of our approaches, including comparisons to a variety of baselines and other approaches, on various datasets relevant for assessing disparities in regulated contexts: we focus on voter registration data, commonly used to estimate racial disparities in voter turnout (U.S. DOJ, 2023) (Section 4) with additional datasets presented in Appendix F. ## 2 Methodology for Measurement In this section, we formally introduce our problem setting and notation, define the types of fairness metrics we can measure and enforce with our techniques, and define the _probabilistic_ and _linear_ estimators of disparity for these metrics. We then introduce our first main result: given certain relationships between the protected attribute, model predictions, and probabilistic estimates of protected attribute in the data, we can upper and lower bound the true fairness violation for a given metric using the linear and probabilistic estimators respectively. ### Notation and Preliminaries **Setting and Datasets.** We wish to learn a model of an outcome \(Y\) based on individuals' features \(X\). Individuals have a special binary protected class feature \(B\in\{0,1\}\) which is usually unobserved, and _proxy variables_\(Z\subset X\) which may be correlated with \(B\). Our primary dataset, called the _learning dataset_, is \(\mathcal{D}\coloneqq\mathcal{D}_{U}\cup\mathcal{D}_{L}\), where \(\mathcal{D}_{U}\) (the _unlabeled set_) consists of observations \(\{(X_{i},Y_{i},Z_{i})\}_{i=1}^{n_{U}}\) and \(\mathcal{D}_{L}\) (the _labeled set_) additionally includes \(B\) and so consists of \(\{(X_{i},Y_{i},Z_{i},B_{i})\}_{i=1}^{n_{L}}\). An _auxiliary dataset_\(\{(Z,B)\}_{i=1}^{n_{A}}\) allows us to learn an estimate of \(b_{i}\coloneqq\Pr[B_{i}|Z_{i}]\); except where specified, we abstract away from the auxiliary dataset and assume access to \(b\). When considering learning, we assume a _hypothesis class_ of models \(\mathcal{H}\) which map \(X\) either directly to \(Y\) or a superset (e.g. \([0,1]\) rather than \(\{0,1\}\)), and consider models parameterized by \(\theta\), i.e. \(h_{\theta}\in\mathcal{H}\). An important random variable that we will use is the _conditional covariance_ of random variables. In particular, for random variables \(Q,R,S,T\), we write \(C_{Q,R|S,T}\coloneqq\text{Cov}(Q,R|S,T)\). **Notation.** For a given estimator \(\theta\) and random variable \(X\), we use \(\hat{\theta}\) to denote the sample estimator and \(\hat{X}\) to denote a prediction of \(X\). We use \(\bar{X}\) to indicate the sample average of a random variable taken over an appropriate dataset. In some contexts we use group-specific averages, which we indicate with a superscript. For example, we use \(\bar{b}^{B_{i}}\) to denote the sample average of \(b\) among individuals who have protected class feature \(B\) equal to \(B_{i}\). We will indicate a generic conditioning event using the symbol \(\mathcal{E}\), and overloading it, we will write \(\mathcal{E}_{i}\) as an indicator, i.e. \(1\) when \(\mathcal{E}\) is true for individual \(i\) and \(0\) otherwise. 
In the learning setting, \(\mathcal{E}_{i}\) will depend on our choice of model \(h\); when we want to emphasize this, we write \(\mathcal{E}_{i}(h)\). We will also use the \((\cdot)\) notation to emphasize dependence on context more generally, e.g. \(C_{f,b|B}(h_{\theta})\) is the covariance of \(f\) and \(b\) conditional on \(B\) under \(h_{\theta}\). **Fairness Metrics.** In this paper, we focus on measuring and enforcing a group-level _fairness metric_ that can be expressed as the difference across groups of some function of the outcome and the prediction, possibly conditioned on some event. More formally: **Definition 1**.: A _fairness metric_\(\mu\) is an operator associated with a function \(f\) and an event \(\mathcal{E}\) such that \[\mu(\mathcal{D})\coloneqq\mathbb{E}_{\mathcal{D}}[f(\hat{Y},Y)|\mathcal{E},B =1]-\mathbb{E}_{\mathcal{D}}[f(\hat{Y},Y)|\mathcal{E},B=0],\] where the distribution \(\mathcal{D}\) corresponds to the process generating \((X,Y,\hat{Y})\). Many common fairness metrics can be expressed in this form by defining an appropriate event \(\mathcal{E}\) and function \(f\). For instance, _demographic parity_ in classification (Calders et al., 2009; Zafar et al., 2017; Zliobaite, 2015) corresponds to letting \(\mathcal{E}\) be the generically true event and \(f\) be simply the indicator \(\mathbf{1}[\hat{Y}=1]\). False positive rate parity (Chouldechova, 2017; Corbett-Davies & Goel, 2018) corresponds to letting \(\mathcal{E}\) be the event that \(Y=0\) and letting \(f(\hat{Y},Y)=\mathbf{1}[\hat{Y}\neq Y]\). True positive rate parity (Hardt et al., 2016) (also known as "equality of opportunity") corresponds to letting \(\mathcal{E}\) be the event that \(Y=1\) and \(f(\hat{Y},Y)=\mathbf{1}[\hat{Y}\neq Y]\). For simplicity, we have defined a fairness metric as a scalar and assume it is conditioned over a single event \(\mathcal{E}\). It is easy to extend this definition to multiple events (e.g. for the fairness metric known as equalized odds) by considering a set of events \(\{\mathcal{E}_{j}\}\) and keeping track of \(\mathbb{E}_{\mathcal{D}}[f_{j}(\hat{Y},Y)|\mathcal{E}_{j},B]\) for each. For clarity, we demonstrate how many familiar notions of fairness can be written in the form of Definition 1 in Appendix A.5. There are other metrics that cannot be written in this form; we do not consider those here. ### Fairness Metric Estimators Our first main result is that we can bound fairness metrics of the form described above over a dataset with linear and probabilistic fairness estimates, given that certain conditions hold on the relationships between model predictions, predicted protected attribute, and the ground truth protected attribute. In order to understand this result, we define the _probabilistic_ and _linear_ estimators. Intuitively, the probabilistic estimator is the population estimate of the given disparity metric weighted by each observation's probability of being in the relevant demographic group. Formally: **Definition 2** (Probabilistic Estimator).: For fairness metric \(\mu\) with function \(f\) and event \(\mathcal{E}\), the probabilistic estimator of \(\mu\) for a dataset \(\mathscr{D}\) is given by \[\widehat{D}_{\mu}^{P}:=\frac{\sum_{i\in\mathcal{E}}b_{i}f(\hat{Y}_{i},Y_{i})}{ \sum_{i\in\mathcal{E}}b_{i}}-\frac{\sum_{i\in\mathcal{E}}(1-b_{i})f(\hat{Y}_{i},Y_{i})}{\sum_{i\in\mathcal{E}}(1-b_{i})}.\] It is assumed that at least one observation in the dataset has had \(\mathcal{E}\) occur. 
Meanwhile, the linear disparity metric is the coefficient on the probabilistic estimate \(b\) in a linear regression of \(f(\hat{Y},Y)\) on \(b\) and a constant among individuals in \(\mathcal{E}\). For example, in the case of demographic parity, where \(f(\hat{Y},Y)=\hat{Y}\), it is the coefficient on \(b\) in the linear regression of \(\hat{Y}\) on \(b\) and a constant over the entire sample. Using the well-known form of the regression coefficient (see, e.g., Angrist and Pischke (2009)), we define the linear estimator as: **Definition 3** (Linear Estimator).: For a fairness metric \(\mu\) with function \(f\) and associated event \(\mathcal{E}\), the linear estimator of \(\mu\) for a dataset \(\mathscr{D}\) is given by: \[\widehat{D}_{\mu}^{L}:=\frac{\sum_{i\in\mathcal{E}}\left(f(\hat{Y}_{i},Y_{i})-\overline{f(\hat{Y},Y)}\right)(b_{i}-\overline{b})}{\sum_{i\in\mathcal{E}}(b_{i}-\overline{b})^{2}}\] where \(\overline{\cdot}\) represents the sample mean among observations with event \(\mathcal{E}\). We define \(D_{\mu}^{P}\) and \(D_{\mu}^{L}\) to be the asymptotic limits of the probabilistic and linear estimators, respectively, as an independent and identically distributed (i.i.d.) sample grows large. ### Bounding Fairness with Disparity Estimates Our main result proves that when certain covariance conditions between model predictions, predicted demographic attributes, and true demographic attributes hold, we can guarantee that the linear and probabilistic estimators of disparity calculated with the _probabilistic_ protected attribute serve as upper and lower bounds on _true_ disparity. This result follows from the following proposition: **Proposition 1**.: Suppose that \(b\) is a probabilistic estimate of a demographic trait (e.g. race) given some observable characteristics \(Z\) and conditional on event \(\mathcal{E}\), so that \(b=\Pr[B=1|Z,\mathcal{E}]\). Define \(D_{\mu}^{P}\) as the asymptotic limit of the probabilistic disparity estimator, \(\widehat{D}_{\mu}^{P}\), and \(D_{\mu}^{L}\) as the asymptotic limit of the linear disparity estimator, \(\widehat{D}_{\mu}^{L}\). Then: \[D_{\mu}^{P}=D_{\mu}-\frac{\mathbb{E}[\mathrm{Cov}(f(\hat{Y},Y),B|b,\mathcal{E})]}{\text{Var}(B|\mathcal{E})} \tag{1}\] and \[D_{\mu}^{L}=D_{\mu}+\frac{\mathbb{E}[\mathrm{Cov}(f(\hat{Y},Y),b|B,\mathcal{E})]}{\text{Var}(b|\mathcal{E})}. \tag{2}\] Since the variances are positive, the probabilistic and linear estimators serve as bounds on disparity when \(C_{f,b|B,\mathcal{E}}\) and \(C_{f,B|b,\mathcal{E}}\) are either both positive or both negative, since they are effectively separated from the true disparity by these values: if they are both positive, then \(D_{\mu}^{L}\) serves as an upper bound and \(D_{\mu}^{P}\) serves as a lower bound; if they are both negative, then \(D_{\mu}^{P}\) serves as an upper bound and \(D_{\mu}^{L}\) serves as a lower bound. Formally, **Theorem 1**.: Suppose that \(\mu\) is a fairness measure with function \(f\) and conditioning event \(\mathcal{E}\) as described above, and that \(\mathbb{E}[\mathrm{Cov}(f(\hat{Y},Y),b|B,\mathcal{E})]>0\) and \(\mathbb{E}[\mathrm{Cov}(f(\hat{Y},Y),B|b,\mathcal{E})]>0\). Then, \[D_{\mu}^{P}\leq D_{\mu}\leq D_{\mu}^{L}.\] Proposition 1 and Theorem 1, which we prove in Appendix A, subsume and generalize a result from Elzayn et al. (2023).
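To make Definitions 2 and 3 concrete, the following minimal sketch (our illustration, not the authors' released code; the synthetic data, variable names, and the demographic-parity choice \(f(\hat{Y},Y)=\hat{Y}\) are assumptions) computes both estimators with NumPy and reports the resulting interval from Theorem 1:

```python
import numpy as np

def probabilistic_estimator(f_vals, b, event):
    # D^P (Definition 2): b-weighted vs. (1-b)-weighted means of f among observations in E
    f, b = f_vals[event], b[event]
    return f @ b / b.sum() - f @ (1 - b) / (1 - b).sum()

def linear_estimator(f_vals, b, event):
    # D^L (Definition 3): OLS coefficient on b in a regression of f on b and a constant
    f, b = f_vals[event], b[event]
    bc = b - b.mean()
    return bc @ (f - f.mean()) / (bc @ bc)

# Synthetic example for demographic parity: f(Y_hat, Y) = Y_hat, and E always holds.
rng = np.random.default_rng(0)
n = 100_000
b = rng.uniform(size=n)                            # BISG-style estimates of Pr[B = 1 | Z]
B = rng.binomial(1, b)                             # true (usually unobserved) attribute
y_hat = rng.binomial(1, 0.2 + 0.3 * b + 0.2 * B)   # predictions correlated with b and B
event = np.ones(n, dtype=bool)

lo = probabilistic_estimator(y_hat, b, event)
hi = linear_estimator(y_hat, b, event)
true_dd = y_hat[B == 1].mean() - y_hat[B == 0].mean()
print(f"true DD = {true_dd:.3f}, bounds = [{lo:.3f}, {hi:.3f}]")
```

In this synthetic example both conditional covariances of Theorem 1 are positive by construction, so the printed interval should contain the true demographic disparity up to sampling error; on real data the covariance signs must first be checked on the labeled subset.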
These results define the conditions under which \(D_{\mu}^{L}\) and \(D_{\mu}^{P}\), easily computable quantities, serve as bounds on ground truth fairness violations -- and as we show in Section 4.2, this allows us to bound the specified fairness metrics in practice when measuring predictions in existing models whenever these conditions hold. However, as we demonstrate in the next section, this also provides us with a simple method to bound fairness violations when training machine learning models. ## 3 Methodology for Training We now combine our fairness estimators with existing constrained learning approaches to develop a methodology for training fair models when only a small subset labeled with ground-truth protected characteristics is available. The key idea of our approach is to enforce an upper bound on the magnitude of fairness violations computed with the _probabilistic_ protected attributes (\(\widehat{D}_{\mu}^{L}\)), while also leveraging the small labeled subset to enforce the _covariance constraints_ referenced in Theorem 1. This way, since satisfaction of the covariance constraints guarantees that \(\widehat{D}_{\mu}^{L}\) serves as a bound on unfairness, we ensure bounded fairness violations in models trained with probabilistic protected characteristic labels. Due to space constraints, we defer discussion of the mathematical framework underlying these ideas to Appendix B. **Problem Formulation** In an ideal setting, given access to ground truth labels on the full dataset, we could simply minimize the expected risk subject to the constraint that -- whichever fairness metric we have adopted -- the magnitude of fairness violations does not exceed a given threshold \(\alpha\). However, in settings where we only have access to a small labeled subset of data, training a model by directly minimizing the expected risk subject to fairness constraints on the labeled subset may result in poor performance, particularly for complicated learning problems. Instead, we propose enforcing an upper bound on the disparity estimator as a _surrogate_ fairness constraint. Recall that Theorem 1 describes conditions under which the linear estimator upper or lower bounds the true disparity; if we can _enforce_ these conditions in our training process using the smaller _labeled_ dataset, then our training process provides the fairness guarantees desired while leveraging the information in the full dataset. To operationalize this idea, we recall that Theorem 1 characterizes two cases in which the linear estimator could serve as an upper bound in magnitude: in the first case, both residual covariance terms are positive, and \(D_{\mu}\leq D_{\mu}^{L}\); in the second, both are negative, and \(D_{\mu}^{L}\leq D_{\mu}\)1. Minimizing risk while satisfying these constraints in each case separately gives the following two problems: Footnote 1: Note that as a result of Proposition 1, when \(C_{f,b|B,\mathcal{E}}\) and \(C_{f,B|b,\mathcal{E}}\) are both positive, the true fairness metric is necessarily positive, and symmetrically for negative values. **Problem 1.A.** \[\min_{h\in\mathcal{H}}\mathbb{E}[L(h(X),Y)]\text{ s.t. }D_{\mu}^{L}\leq\alpha;\ \mathbb{E}[C_{f,B|b,\mathcal{E}}]\geq 0;\,\mathbb{E}[C_{f,b|B,\mathcal{E}}]\geq 0\] **Problem 1.B.** \[\min_{h\in\mathcal{H}}\mathbb{E}[L(h(X),Y)]\text{ s.t. }-\alpha\leq D_{\mu}^{L};\ \mathbb{E}[C_{f,B|b,\mathcal{E}}]\leq 0;\,\mathbb{E}[C_{f,b|B,\mathcal{E}}]\leq 0\]
To select the most accurate solution with bounded fairness violation, we take the sub-problem solution achieving the lowest risk: \[h^{*}\in\operatorname*{argmin}_{h\in\{h^{*}_{1.A},\,h^{*}_{1.B}\}}\mathbb{E}[L(h(X),Y)],\] where \(h^{*}_{1.A}\) and \(h^{*}_{1.B}\) solve Problems 1.A and 1.B respectively. By construction, \(h^{*}\) is feasible, and so satisfies \(|D_{\mu}(h^{*})|\leq\alpha\); moreover, while \(h^{*}\) may not be the lowest-loss predictor such that \(|D_{\mu}|\leq\alpha\), it is the best predictor which admits the linear estimator as an upper bound on the magnitude of the disparity. In other words, it is the best model for which we can _guarantee_ fairness using our measurement technique. **Remark.** Note that the second covariance constraint (associated with the lower bound, i.e. the probabilistic estimator) in each problem is necessary to rule out solutions whose disparity falls far outside the desired range with the opposite sign; otherwise, a solution to Problem 1.A could have \(D_{\mu}<-\alpha\), a solution to Problem 1.B could have \(D_{\mu}>\alpha\), and the ultimate \(h^{*}\) selected could be infeasible with respect to the desired fairness constraint. (Note also that as a consequence, the probabilistic estimator will also serve as a _lower bound_ for the magnitude of disparity under the selected model.) **Empirical Problem** The problems above are over the full population, but in practice we usually only have samples. We thus now turn to the question of how we can solve the optimization problem with probabilistic fairness constraints empirically. We focus on the one-sided Problem 1.A for brevity; the other side follows similarly. The empirical analogue of Problem 1.A is the following: **Problem 2.A.** \[\min_{h_{\theta}\in\mathcal{H}}\frac{1}{n_{\mathscr{D}}}\sum_{i=1}^{n_{\mathscr{D}}}L(h_{\theta}(X_{i}),Y_{i})\text{ s.t. }\widehat{D}_{\mu}^{L}(h_{\theta})\leq\alpha;\ \widehat{C}_{f,b|B,\mathcal{E}}(h_{\theta})\geq 0;\ \widehat{C}_{f,B|b,\mathcal{E}}(h_{\theta})\geq 0\] **Solving the empirical problem.** While Problem 2.A is a constrained optimization problem, it is not, except in special cases, a convex problem. Despite this, recent results (Chamon and Ribeiro, 2020; Chamon et al., 2022) have shown that under relatively mild conditions, a primal-dual learning algorithm can be used to obtain approximate solutions with good performance guarantees.2 In particular, if we define the _empirical Lagrangian_ as: Footnote 2: For the special case of linear regression with mean-squared error losses, we provide a closed-form solution to the primal problem. This can be used for a heuristic solution with appropriate dual weights. \[\widehat{\mathcal{L}}(\theta,\vec{\mu})=\frac{1}{n_{\mathscr{D}}}\sum_{i=1}^{n_{\mathscr{D}}}L(h_{\theta}(X_{i}),Y_{i})+\mu_{L}\left(\widehat{D}_{\mu}^{L}(h_{\theta})-\alpha\right)-\mu_{b|B}\widehat{C}_{f,b|B,\mathcal{E}}-\mu_{B|b}\widehat{C}_{f,B|b,\mathcal{E}} \tag{3}\] (where \(\widehat{C}_{f,b|B,\mathcal{E}}\) and \(\widehat{C}_{f,B|b,\mathcal{E}}\) are as in Problem 2.A), the optimization problem can be viewed as a min-max game between a primal player selecting \(\theta\) and a dual player selecting \(\vec{\mu}\geq 0\), i.e. \(\max_{\vec{\mu}\geq 0}\min_{\theta}\widehat{\mathcal{L}}(\theta,\vec{\mu})\). Formally, Algorithm 1 in the appendix provides pseudocode for a primal-dual learner, similar to those of Chamon et al. (2022) and Cotter et al. (2019), specialized to our setting.
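As a concrete illustration of this min-max scheme (ours, not Algorithm 1 itself; the logistic model, learning rates, quantile binning of \(b\) to approximate conditioning on \(b\), and all tensor names are assumptions of this sketch), one can alternate a gradient step on \(\theta\) with a projected ascent step on the dual variables:

```python
import torch

def linear_disparity(f, b):
    # empirical D^L: OLS coefficient on b in a regression of f on b and a constant
    bc = b - b.mean()
    return (bc * (f - f.mean())).sum() / (bc * bc).sum()

def mean_cond_cov(f, u, groups):
    # empirical E[Cov(f, u | groups)]: size-weighted within-group covariances
    out = f.sum() * 0.0
    for g in groups.unique():
        m = groups == g
        if m.sum() > 1:
            out = out + m.float().mean() * ((f[m] - f[m].mean()) * (u[m] - u[m].mean())).mean()
    return out

# X, Y: full training data (Y float in {0,1}); Xl, bl, Bl: labeled subset with
# BISG-style estimates bl and true attributes Bl. Demographic-parity case: f = h(X).
def train(X, Y, b, Xl, bl, Bl, alpha, T=2000, lr=1e-2, lr_dual=1e-1):
    theta = torch.zeros(X.shape[1], requires_grad=True)
    mu = torch.zeros(3)                                  # dual variables, kept nonnegative
    opt = torch.optim.SGD([theta], lr=lr)
    b_bins = torch.bucketize(bl, torch.quantile(bl, torch.linspace(0.1, 0.9, 9)))
    for _ in range(T):
        f_all, f_lab = torch.sigmoid(X @ theta), torch.sigmoid(Xl @ theta)
        loss = torch.nn.functional.binary_cross_entropy(f_all, Y)
        g = torch.stack([
            linear_disparity(f_all, b) - alpha,          # D^L(theta) <= alpha
            -mean_cond_cov(f_lab, bl, Bl),               # C_{f,b|B} >= 0 (labeled subset)
            -mean_cond_cov(f_lab, Bl.float(), b_bins),   # C_{f,B|b} >= 0 (b discretized)
        ])
        lagrangian = loss + (mu * g).sum()
        opt.zero_grad(); lagrangian.backward(); opt.step()
        mu = (mu + lr_dual * g.detach()).clamp(min=0.0)  # projected dual ascent
    return theta.detach()
```

Per Theorem 2 below, the formal guarantees attach to the average iterate; in practice (Section 4.3) a best feasible iterate is selected instead.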
Adapting and applying Theorem 3 of Chamon et al. (2022) then provides the following guarantee: **Theorem 2.** Let \(\mathcal{H}\) have VC-dimension \(d\), be _decomposable_, and finely cover its convex hull. Assume that \(y\) takes on a finite number of values, the induced distribution \(x|y\) is non-atomic for all \(y\), and Problem 2.A has a feasible solution. Then if Algorithm 1 is run for \(T\) iterations, and \(\tilde{\theta}\) is selected by uniformly drawing \(t\in\{1,\dots,T\}\), the following holds with probability \(1-\delta\): for each target constraint \(\ell\in\{D_{\mu}^{L},C_{f,b|B,\mathcal{E}},C_{f,B|b,\mathcal{E}}\}\), \[\mathbb{E}[\ell(h_{\tilde{\theta}})]\leq c_{\ell}+\mathcal{O}\left(\frac{d\log N}{\sqrt{N}}\right)+\mathcal{O}\left(\frac{1}{T}\right)\text{ and }\mathbb{E}[L(h_{\tilde{\theta}},y)]\leq P^{*}+\mathcal{O}\left(\frac{d\log N}{\sqrt{N}}\right)\] where \(c_{\ell}\) is the threshold of the corresponding constraint and \(P^{*}\) is the optimal value of Problem 2.A. The theorem provides an _average-iterate_ guarantee of approximate feasibility and optimality when a solution is drawn from the empirical distribution. Note that it is not a priori obvious whether our bounds remain informative over this empirical distribution, but we show in Appendix A that the covariance conditions holding on average imply that our bounds hold on average: **Proposition 2.** Suppose \(\tilde{\theta}\) is drawn from the empirical distribution produced by Algorithm 1. If \[\mathbb{E}_{\tilde{\theta}}\left[\mathbb{E}[\mathrm{Cov}(f(h_{\tilde{\theta}}(X),Y),B\mid b,\mathcal{E})\mid\tilde{\theta}]\right]\geq 0\quad\text{and}\quad\mathbb{E}_{\tilde{\theta}}\left[\mathbb{E}[\mathrm{Cov}(f(h_{\tilde{\theta}}(X),Y),b\mid B,\mathcal{E})\mid\tilde{\theta}]\right]\geq 0,\] then \(\mathbb{E}[D_{\mu}(h_{\tilde{\theta}})]\leq\mathbb{E}[D_{\mu}^{L}(h_{\tilde{\theta}})]\). **Remark.** Combining Theorem 2 and Proposition 2 guarantees that a randomized classifier with parameters drawn according to the empirical distribution from Algorithm 1 will approximately meet our disparity bound goals _on average_. Without stronger assumptions, this is all that can be said; this is a general limitation of game-based empirical optimization methods, since they correspond to equilibrium discovery, and only mixed-strategy equilibria are guaranteed to exist. In practice, however, researchers applying similar methods select the final or best feasible iterate of their model, and often find feasible solutions with good performance (Cotter et al., 2019; Wang et al., 2020); thus in our results section, we compare our best-iterate performance to other methods. ## 4 Empirical Evaluation We now turn to experiments of our disparity measurement and fairness-enforcing training methods on predicting voter turnout. We provide additional experiments on the COMPAS dataset (Angwin et al., 2016), as well as on simulated data, in Appendix F and Appendix G, respectively. ### Data **L2 Dataset.** The L2 dataset provides demographic, voter, and consumer data from across the United States collected by the company L2. Here, we consider the task of predicting voter turnout for the general election in 2016 and measuring model fairness violations with respect to Black and non-Black voters. This application is particularly relevant since race/ethnicity information is often not fully available (Imai & Khanna, 2016), and much of voting rights law hinges on determining whether there exists racially polarized voting and/or racial disparities in turnout (Barber & Holbein, 2022).
We focus on the six states with self-reported race labels (North Carolina, South Carolina, Florida, Georgia, Louisiana, and Alabama). We set \(Y=1\) if an individual voted in the 2016 election and \(Y=0\) otherwise; refer to Appendix C.1 for a detailed description of this dataset. **Race Probabilities.** The L2 dataset provides information on voters' first names, last names, and census block group, allowing the use of the Bayesian Improved (Firstname and) Surname Geocoding method (BISG/BIFSG) for estimating race probabilities (Elliott et al., 2008; Elliott et al., 2009; Imai & Khanna, 2016). We obtain our priors from the 2010 decennial Census at the census block group level. The AUC of BISG/BIFSG across the six states we investigate in the L2 data ranges from 0.85 to 0.90. Further details on how we implement BISG/BIFSG for the L2 data and its performance can be found in Appendix C.2. ### Measurement In this section, we showcase our method of bounding true disparity when race is unobserved. Given _1)_ model predictions on a dataset with probabilistic race labels and _2)_ true race labels for a small subset of that data, we attempt to obtain bounds on three disparity measures: demographic disparity (DD), false positive rate disparity (FPRD), and true positive rate disparity (TPRD). #### 4.2.1 Experimental Design and Comparisons. **Setup.** To simulate measurement of fairness violations on predictions from a pre-trained model with limited access to the protected attribute, we first train unconstrained logistic regression models with an 80/20 split of the available L2 data for each state. Then, in order to simulate realistic data access conditions, we measure fairness violations on a random subsample of the test set (\(n=150,000\)), with 1% (\(n=1,500\)) of this sample including ground truth race labels to constitute the labeled subset. We do this by first checking the covariance conditions on the labeled subset, and then calculating \(\widehat{D}_{L}\) and \(\widehat{D}_{P}\) on the entire set of \(150,000\) examples sampled from the test set. We also compute standard errors for our estimators as specified by the procedure in Appendix Section B. To evaluate our method, we measure true fairness violations on the \(150,000\) examples sampled from the test set, and check whether we do in fact bound the true fairness violations within standard error. Further information about our unconstrained models can be found in Appendix Section D.1. We present our results in Figure 1. **Comparisons.** We compare our method of estimating fairness violations using probabilistic protected characteristic labels to the method described in Kallus et al. (2022), which is one of the only comparable methods in the literature; we will refer to it as KDC from here on. Details of KDC and our implementation can be found in Appendix Section D.2. Figure 1: Comparison of our method of bounding true disparity (blue) to the method proposed in Kallus et al. (2022) (grey), using a logistic regression model to predict voter turnout in six states. Only a small subset (here, \(n=1,500\), i.e. 1%) of the data contains information on true race. The grey dot represents true disparity. Both methods successfully bound true disparity within its 95% standard errors, but our estimators provide tighter bounds. #### 4.2.2 Results Figure 1 compares our method of estimating disparity (blue) with KDC (grey) for the three disparity measures and all six states.
This figure shows estimates when training a logistic regression model, and Figure 5 in the Appendix shows similar results for training random forests. Across all experiments, both KDC's and our estimators always bound true disparity. However, we observe two crucial differences: _1)_ our bounds are markedly tighter (3.8x smaller on average, and as much as 5.5x smaller) than KDC's, and as a result _2)_ our bounds almost always indicate the direction of true disparity. When they do not, the ambiguity is due to the standard error, which shrinks with more data. By contrast, KDC's bounds consistently span \([-0.5,0.5]\), providing limited utility even for directional estimates. ### Training In this section, we demonstrate the efficacy of our approach to training fairness-constrained machine learning models. Following our algorithm in Section 3, we train models while enforcing both covariance conditions necessary for the fairness bounds to hold, and also constrain the upper bound on the absolute value of disparity, \(\widehat{D}_{\mu}^{L}\), to be below some bound \(\alpha\). We find that our method _1)_ results in lower true disparity on the test set than using the labeled subset alone, or using prior methods to bound disparity; _2)_ more frequently reaches the target bound than other techniques; and _3)_ often incurs less of an accuracy trade-off when enforcing the same bound on disparity compared to related techniques. #### 4.3.1 Experimental Design and Comparisons. **Experimental Design.** We demonstrate our technique by training logistic regression models to make predictions with bounded DD, FPRD, and TPRD across a range of bounds. We include results for neural network models in Appendix Section E.7. We train these models on the data from Florida within the L2 dataset, as it has the largest unconstrained disparity among the six states; see Figure 1. We report the mean and standard deviations of our experimental results over ten trials. For each trial, we split our data (\(n=150,000\)) into train and test sets, with an 80/20 split. From the training set, we subsample the labeled subset so that it is 1% of the total data (\(n=1,500\)). To enforce fairness constraints during training, we solve the empirical Problem 2.A and its symmetric analogue, which enforces negative covariance conditions and \(\widehat{D}_{\mu}^{L}\) as a (negative) lower bound. We use the labeled subset to enforce adherence to the covariance conditions during training. We use the remainder of the training data, as well as the labeled subset, to enforce the constraint on \(\widehat{D}_{\mu}^{L}\) during training. As noted in Section 3, our method theoretically guarantees a near-optimal, near-feasible solution _on average_ over \(\theta^{(1)},\dots,\theta^{(T)}\). However, following Wang et al. (2020), for each of these sub-problems, we select the best iterate \(\theta^{(t)}\) which satisfies the bound on \(\widehat{D}_{\mu}^{L}\) on the training set and the covariance constraints on the labeled subset, and which achieves the lowest loss on the training set, as sketched below. We report our results on the solution between these two sub-problems that is feasible and has the lowest loss. We present the accuracy and resulting disparity of model predictions on the test set after constraining fairness violations during training for a range of metrics (DD, FPRD, TPRD), across a range of bounds (0.04, 0.06, 0.08, 0.10) for our method as well as three comparisons, described below, in Figure 2. Further details about the experimental setup can be found in Appendix Section E.1.
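A minimal sketch of this best-feasible-iterate selection rule (our illustration; the `history` structure and its dictionary keys are assumptions, not the paper's code):

```python
def select_best_iterate(history, alpha):
    """Pick the lowest-train-loss iterate satisfying D^L <= alpha on the training
    set and both covariance constraints on the labeled subset.
    history: list of dicts with keys 'theta', 'train_loss', 'DL', 'cov_bB', 'cov_Bb'."""
    feasible = [h for h in history
                if h['DL'] <= alpha and h['cov_bB'] >= 0 and h['cov_Bb'] >= 0]
    return min(feasible, key=lambda h: h['train_loss']) if feasible else None

# Run once per sub-problem (Problem 2.A and its mirror) and keep the feasible
# candidate with the lower loss, as described above.
```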
**Comparisons.** We compare our results for enforcing fairness constraints with probabilistic protected attribute labels to the following methods: _1)_ A model trained _only_ on the labeled subset with true race labels, enforcing a fairness constraint over those labels. This serves to motivate the utility of using a larger dataset with noisy labels when a smaller dataset with true labels exists on the same distribution. To implement this method, we use the non-convex constrained optimization technique from Chamon et al. (2022) to enforce bounds on fairness violations calculated directly on ground-truth race labels, as we describe in greater detail in Appendix E.2. _2)_ We compare with a recent method by Wang et al. (2020) for enforcing fairness constraints on data with noisy protected attributes and a labeled auxiliary set, which is based on an extension of Kallus et al. (2022)'s disparity measurement method. This method guarantees that the relevant disparity metrics will be satisfied within the specified slack, which we take as a bound. However, their implementation does not consider DD; further details on this method can be found in Appendix Section E.3. _3)_ We compare with a method for enforcing fairness with incomplete demographic labels introduced by Mozannar et al. (2020), which essentially modifies Agarwal et al. (2018)'s fair training approach to only enforce a fairness constraint on the available demographically labeled data. This method also guarantees that the relevant disparity metrics will be satisfied within specified slack, which we modify to be comparable to our bound. Details on this approach can be found in Appendix E.4. In Appendix Section E.6, we also compare to two other models: _1)_ an "oracle" model trained to enforce a fairness constraint over the ground-truth race labels on the whole dataset; and _2)_ a naive model which ignores label noise and enforces disparity constraints directly on the probabilistic race labels, thresholded to be in \((0,1)\). #### 4.3.2 Results We display our results in Figure 2, with additional results in Sections E and G of the Appendix. Looking at the top row of the figure, we find that our method, in all instances, reduces disparity further than training on the labeled subset alone (blue vs. orange bars in Figure 2), than using Wang et al. (2020) (blue versus green bars in Figure 2), and than using Mozannar et al. (2020) (blue versus pink bars in Figure 2). Second, our method satisfies the target fairness bound on the test set more often than the other methods (12 out of 12 times, as opposed to 0, 1, and 0 for the labeled subset, Wang, and Mozannar, respectively). In other words, the disparity bounds our method learns on the train set generalize better to the test set than those of the comparison methods. We note that deviations from the enforced bound on the test set, when they arise, are due to generalization error in enforcing constraints from the train to the test set, and because our training method guarantees _near_-feasible solutions. The bottom row of the figure shows how our method performs with respect to accuracy in comparison to other methods. The results here are more variable; however, we note that this dataset seems to exhibit a steep fairness-accuracy tradeoff--yet despite our method reducing disparity much further than all other methods (indeed, being the only method that reliably bounds the resulting disparity in the test set), we often perform comparably or slightly better.
For example, when mitigating TPRD, our method mitigates disparity much more than Mozannar et al. (2020) and Wang et al. (2020), yet outperforms both with respect to accuracy. In the case of FPRD, while our method does exhibit worse accuracy, these sets of experiments also exhibit the largest difference in disparity reduction between our method and the other methods, which may make such an accuracy difference inevitable. Similarly, the accuracy discrepancy between the labeled subset method and our method is reasonable given the fairness-accuracy trade-off. Figure 2: Mean and standard deviation of resulting disparity (top, y-axis) and accuracy (bottom, y-axis) on the test set after enforcing the target fairness bounds (x-axis) with our method (blue), using only the labeled subset with true labels (orange), and Wang et al. (2020) (green), over ten trials. On the top row, we fade bars when the mean does not meet the desired bound, which is indicated by the dotted blue lines. The dashed grey line in all plots indicates disparity from the unconstrained model. ## 5 Related Work While there are many methods available for training models with bounded fairness violations (Agarwal et al., 2018; Hardt et al., 2016; Bellamy et al., 2018), the vast majority of them require access to the protected attribute at training or prediction time. While there are other works which assume access _only_ to noisy protected attribute labels (Wang et al., 2020), _no_ protected attribute labels (Lahoti et al., 2020), or even a labeled subset of protected attribute labels but without an auxiliary set to generate probabilistic protected attribute estimates (Jung et al., 2022), very few works mirror our data access setting. One exception, from which we draw inspiration, is Elzayn et al. (2023); that work studies in detail the policy-relevant question of whether Black U.S. taxpayers are audited at higher rates than non-Black taxpayers, and uses a special case of our Theorem 1 (for measurement _only_). In this paper, we formalize and extend their technique to bound a wide array of fairness constraints, and introduce methods to _train_ fair models given this insight. Within the set of techniques with a different data access paradigm, we differ from many in that we leverage information about the relationship between probabilistic protected attribute labels, ground truth protected attribute, and model predictions to measure and enforce our fairness bounds. Thus, while we do require the covariance conditions to hold in order to enforce our fairness bounds, we note that these are requirements we can _enforce_ during training, unlike assumptions over noise models as in other approaches to bounding true disparity with noisy labels (Blum and Stangl, 2019; Jiang and Nachum, 2020; Celis et al., 2021). Intuitively, leveraging some labeled data can allow us to incur less of an accuracy trade-off when training fair models, as demonstrated by our comparison to Wang et al. (2020). In this case, using this data means we do not have to protect against every perturbation within a given distance of the distribution, as with distributionally robust optimization (DRO). Instead, we need only enforce constraints during optimization; in our experimental setting, we see that this can lead to a lower fairness-accuracy trade-off.
2307.02967
Stationary fluctuations of run-and-tumble particles
We study the stationary fluctuations of independent run-and-tumble particles. We prove that the joint densities of particles with a given internal state converge to an infinite dimensional Ornstein-Uhlenbeck process. We also consider an interacting case, where the particles are subjected to exclusion. We then study the fluctuations of the total density, which is a non-Markovian Gaussian process, and obtain its covariance in closed form. By considering small noise limits of this non-Markovian Gaussian process, we obtain in a concrete example a large deviation rate function containing memory terms.
Frank Redig, Hidde van Wiechen
2023-07-06T13:06:00Z
http://arxiv.org/abs/2307.02967v2
# Equilibrium fluctuations of run-and-tumble particles ###### Abstract In this paper we study the stationary fluctuations of independent run-and-tumble particles. We prove that the joint densities of particles with a given internal state converge to an infinite dimensional Ornstein-Uhlenbeck process. We also discuss an interacting case, where the particles are subjected to exclusion. We then study the fluctuations of the total density, which is a non-Markovian Gaussian process. By considering small noise limits of this process, we obtain in a concrete example a large deviation rate function containing memory terms. ## 1 Introduction In this paper we consider a system of independent run-and-tumble particles on \(\mathbb{Z}\) and study the stationary fluctuations of its empirical distribution. Because particles have positions and internal states (which determine the direction in which they move and/or their rate of hopping over lattice edges), the hydrodynamic limit is a system of linear reaction-diffusion equations, describing the macroscopic joint evolution of the densities of particles with a given internal state. In this sense, the paper can be viewed as a study of macroscopic properties of the multi-layer particle systems which we studied in [9]. The study of hydrodynamic limits and fluctuations around the hydrodynamic limit for particles with internal states, or alternatively, multi-layer systems is quite recent, and to our knowledge at present only a limited set of results is known: see [3], [5], [4]. In our paper we prove that the fluctuation fields converge to a system of stochastic partial differential equations where the drift is determined by the hydrodynamic limit, and where the noise has both a conservative part coming from the transport of particles with a given internal state as well as a non-conservative part coming from the flipping of internal states. Because the system of independent particles has a simple dual consisting of independent particles with reversed velocities, the covariance of the fluctuation fields can also be computed via this duality, both in the stationary and in the non-stationary setting. After having dealt with independent particles, we indicate how to deal with interacting particles such as layered exclusion processes, where duality can still be used. One of our motivations for studying fluctuation fields of particles with internal states is to understand fluctuation properties of the total density, i.e., disregarding the internal states of the particles. The configuration of the number of particles at each site is no longer Markovian, and therefore understanding its fluctuations and large deviations is of interest because memory effects enter, i.e., the limiting Gaussian field describing the total density does not satisfy a Markovian SPDE. We give one example where we can explicitly characterize the large deviations of the limiting SPDE in the small noise limit. These large deviations give an indication of the large deviations of the total density of particles. The latter can of course also be obtained via a contraction principle from the large deviations of the joint densities of particles with a given internal state. However, the large deviation rate function obtained via this contraction principle is very indirect, and therefore in this paper we preferred not to follow this road, in order to obtain a more explicit form of the rate function in which memory terms become manifest. The rest of our paper is organized as follows.
In Section 2 we introduce the run-and-tumble particle model and state preliminary results on ergodic measures, duality and the hydrodynamic limit, the latter of which will be proven in Appendix A. In Section 2.4 we state the main result of this paper, Theorem 3.1, on the fluctuations of the model, and also give an idea why the result should hold using the covariance structure. In Section 4 we then give a formal proof of this theorem and end with a generalization to a multi-layer version of the symmetric exclusion process in Section 4.3. Lastly, in Section 5 we study the hydrodynamic limit and the fluctuations of the total density of particles, and we end this section with a large deviations result for the limiting fluctuation process. ## 2 Basic notations and definitions In this paper we will look at the run-and-tumble particle process, which is a process designed to model active particles. Let \(V:=\mathbb{Z}\times S\), with \(S\subset\mathbb{Z}\) a finite set. The set \(V\) is the state space of a single run-and-tumble particle. We see elements \(v=(x,\sigma)\in V\) as particles with position \(x\in\mathbb{Z}\) and internal state \(\sigma\in S\). The dynamics of a single run-and-tumble particle are as follows: 1. At rate \(\kappa N^{2}\) the particle performs a nearest neighbor jump, i.e., \((x,\sigma)\to(x\pm 1,\sigma)\). 2. At rate \(\lambda N\) the particle performs an active jump in the direction of its internal state, i.e., \((x,\sigma)\to(x+\sigma,\sigma)\). 3. At rate \(c(\sigma,\sigma^{\prime})\) the particle changes its internal state from \(\sigma\) to \(\sigma^{\prime}\), i.e., \((x,\sigma)\to(x,\sigma^{\prime})\). Here we assume that the rates \(\big{\{}c(\sigma,\sigma^{\prime})\bigm{|}\sigma,\sigma^{\prime}\in S\big{\}}\) are irreducible and symmetric, i.e., \(c(\sigma,\sigma^{\prime})=c(\sigma^{\prime},\sigma)\). The run-and-tumble particle process is the process of configurations consisting of independent run-and-tumble particles. More precisely, it is a Markov process \(\{\eta_{t}:t\geq 0\}\) on the state space \(\Omega:=\mathbb{N}^{V}\) consisting of independent random walkers on \(V\), where every particle has the dynamics described above. From the dynamics we can write down the following generator \(L_{N}\), working on local functions, i.e., functions \(f:\Omega\to\mathbb{R}\) which only depend on a finite number of sites in \(V\): \[L_{N}f(\eta) =\kappa N^{2}\sum_{(x,\sigma)\in V}\eta(x,\sigma)\left(f\big{(} \eta^{(x,\sigma)\to(x+1,\sigma)}\big{)}+f\big{(}\eta^{(x,\sigma)\to(x-1,\sigma )}\big{)}-2f(\eta)\right)\] \[\quad+\lambda N\sum_{(x,\sigma)\in V}\eta(x,\sigma)\left(f\big{(} \eta^{(x,\sigma)\to(x+\sigma,\sigma)}\big{)}-f(\eta)\right)\] \[\quad+\sum_{(x,\sigma)\in V}\sum_{\sigma^{\prime}\in S}\eta(x, \sigma)c(\sigma,\sigma^{\prime})\left(f\big{(}\eta^{(x,\sigma)\to(x,\sigma^{ \prime})}\big{)}-f(\eta)\right).\] Here \(\eta(x,\sigma)\) denotes the number of particles at site \((x,\sigma)\in V\) in the configuration \(\eta\), and \(\eta^{(x,\sigma)\to(y,\sigma^{\prime})}\) denotes the configuration \(\eta\) where a single particle has moved from \((x,\sigma)\) to \((y,\sigma^{\prime})\), if possible. ### 2.1 Scaling limit of single particle dynamics We denote by \(\mathscr{L}_{N}\) the generator of a single run-and-tumble particle, since it will make numerous appearances throughout this paper.
This generator works on the space of test functions on the space \(\mathbb{R}\times S\), denoted by \(C^{\infty}_{c,S}\), which is defined as \[C^{\infty}_{c,S}:=\left\{\phi:\mathbb{R}\times S\to\mathbb{R}:\phi(\cdot,\sigma)\in C^{\infty}_{c}(\mathbb{R})\text{ for all }\sigma\in S\right\}.\] The generator \(\mathscr{L}_{N}\) is as follows: \[\mathscr{L}_{N}\phi(x,\sigma) =\kappa N^{2}(\phi(x+\tfrac{1}{N},\sigma)+\phi(x-\tfrac{1}{N},\sigma)-2\phi(x,\sigma))+\lambda N(\phi(x+\tfrac{\sigma}{N},\sigma)-\phi(x,\sigma))\] \[\quad+\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})(\phi(x,\sigma^{\prime})-\phi(x,\sigma)).\] We denote by \(S^{N}_{t}\) the Markov semigroup corresponding to this generator. Through Taylor approximations around \(\phi(x,\sigma)\), it is easy to see that \(\mathscr{L}_{N}\phi\to A\phi\) uniformly as \(N\to\infty\), where \(A\) is the differential operator given by \[A\phi(x,\sigma)=\big{(}\kappa\partial_{xx}+\sigma\lambda\partial_{x}\big{)}\,\phi(x,\sigma)+\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})\big{(}\phi(x,\sigma^{\prime})-\phi(x,\sigma)\big{)}. \tag{1}\] As a consequence we can also write that \(S_{t}^{N}\phi\to e^{tA}\phi\) uniformly for all \(\phi\in C_{0,S}\), i.e., the function space consisting of functions \(\phi:\mathbb{R}\times S\to\mathbb{R}\) such that \(\phi(\cdot,\sigma)\in C_{0}(\mathbb{R})\) for all \(\sigma\in S\). The operator \(A\) above is also an operator on (a subset of) the Hilbert space \(L^{2}(\mathrm{d}x\times|\cdot|_{S})\), where \(|\cdot|_{S}\) is the counting measure on \(S\). The inner product on this Hilbert space, denoted by \(\langle\langle\cdot,\cdot\rangle\rangle\), is the following: \[\langle\langle\phi,\psi\rangle\rangle:=\sum_{\sigma\in S}\int_{\mathbb{R}}\phi(x,\sigma)\psi(x,\sigma)\,\mathrm{d}x.\] Later on we will need the adjoint of the operator \(A\) with respect to this inner product, which reads, for \(\phi\in C_{c,S}^{\infty}\), \[A^{*}\phi(x,\sigma)=\big{(}\kappa\partial_{xx}-\sigma\lambda\partial_{x}\big{)}\,\phi(x,\sigma)+\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})\big{(}\phi(x,\sigma^{\prime})-\phi(x,\sigma)\big{)}. \tag{2}\] ### 2.2 Basic properties of independent run-and-tumble particles Before we state the theorem on the equilibrium fluctuations, we first give an overview of some known results on run-and-tumble particles. #### 2.2.1 Stationary ergodic product measures We define the measures \(\mu_{\rho}\), with \(\rho>0\), as the product Poisson measure with density \(\rho\), i.e. \[\mu_{\rho}:=\bigotimes_{(x,\sigma)\in V}\mathrm{Pois}(\rho).\] In [9] it is proved that these measures are ergodic with respect to the run-and-tumble particle process. For this reason, when looking at the equilibrium fluctuations, we will start the process \(\{\eta_{t}:t\geq 0\}\) from the measure \(\mu_{\rho}\). #### 2.2.2 Duality **Definition 2.1**.: We say that two Markov processes \(\{\eta_{t}\ \big{|}\ t\geq 0\}\) and \(\{\xi_{t}\ \big{|}\ t\geq 0\}\) on the state spaces \(\Omega\) and \(\Omega^{\prime}\) respectively, are _dual_ to one another with respect to a duality function \(D:\Omega\times\Omega^{\prime}\to\mathbb{R}\) if \[\mathbb{E}_{\eta}\left[D(\xi,\eta_{t})\right]=\widehat{\mathbb{E}}_{\xi}\left[D(\xi_{t},\eta)\right]<\infty, \tag{3}\] where \(\mathbb{E}_{\eta}\) denotes the expectation in \(\{\eta_{t}\ \big{|}\ t\geq 0\}\) starting from \(\eta\) and \(\widehat{\mathbb{E}}_{\xi}\) the expectation in the dual process \(\{\xi_{t}\ \big{|}\ t\geq 0\}\) starting from \(\xi\).
In [9] it is proved that the run-and-tumble particle process is dual to its time-reversed process, in which the active jumps are in the reverse direction, i.e., the process corresponding to the following generator: \[\widehat{L}_{N}f(\eta) =\kappa N^{2}\sum_{(x,\sigma)\in V}\eta(x,\sigma)\left(f\big{(} \eta^{(x,\sigma)\to(x+1,\sigma)}\big{)}+f\big{(}\eta^{(x,\sigma)\to(x-1,\sigma )}\big{)}-2f(\eta)\right)\] \[\quad+\lambda N\sum_{(x,\sigma)\in V}\eta(x,\sigma)\left(f\big{(} \eta^{(x,\sigma)\to(x-\sigma,\sigma)}\big{)}-f(\eta)\right)\] \[\quad+\sum_{(x,\sigma)\in V}\sum_{\sigma^{\prime}\in S}\eta(x, \sigma)c(\sigma,\sigma^{\prime})\left(f\big{(}\eta^{(x,\sigma)\to(x,\sigma^{ \prime})}\big{)}-f(\eta)\right).\] The duality function is then given by \[D(\xi,\eta)=\prod_{(x,\sigma)\in V}\frac{\eta(x,\sigma)!}{\xi(x,\sigma)!(\eta( x,\sigma)-\xi(x,\sigma))!}\cdot I\big{(}\xi(x,\sigma)\leq\eta(x,\sigma)\big{)},\] where \(I\) denotes the characteristic function. In our paper we will mostly need this duality relation in the form of duality with a single dual particle, i.e., \[\mathbb{E}_{\eta}[\eta_{t}(x,\sigma)]=\widehat{\mathbb{E}}_{(x,\sigma)}[\eta( \widehat{X}_{t},\widehat{\sigma}_{t})],\] where \((\frac{\widehat{X}_{t}}{N},\widehat{\sigma}_{t})\) is the process corresponding to the generator \(\widehat{\mathscr{L}}_{N}\) given by \[\widehat{\mathscr{L}}_{N}\phi(x,\sigma) =\kappa N^{2}(\phi(x+\tfrac{1}{N},\sigma)+\phi(x-\tfrac{1}{N},\sigma)-2\phi(x,\sigma))+\lambda N(\phi(x-\tfrac{\sigma}{N},\sigma)-\phi(x,\sigma))\] \[\quad+\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})(\phi(x,\sigma^{\prime})-\phi(x,\sigma)).\] We denote the corresponding Markov semigroup of this process by \(\widehat{S}_{t}^{N}\). By a Taylor expansion, we obtain that \(\widehat{\mathscr{L}}_{N}\phi\to A^{*}\phi\) uniformly as \(N\to\infty\) for all \(\phi\in C^{\infty}_{c,S}\), and therefore we are able to write for all \(\phi\in C_{0,S}\) that \(\widehat{S}_{t}^{N}\phi\to e^{tA^{*}}\phi\) uniformly. ### 2.3 Hydrodynamic limit In this section we briefly state the hydrodynamic limit of the run-and-tumble particle process. Given a function \(\rho:\mathbb{R}\times S\to\mathbb{R}\) such that \(\rho(\cdot,\sigma)\in C^{2}_{b}(\mathbb{R})\) for all \(\sigma\in S\), we start by defining the product Poisson measures \(\mu_{\rho}^{N}\) for every \(N\in\mathbb{N}\) as follows: \[\mu_{\rho}^{N}:=\bigotimes_{(x,\sigma)\in V}\mathrm{Pois}\big{(}\rho(\tfrac{x }{N},\sigma)\big{)}.\] Furthermore, for every \(N\in\mathbb{N}\), the process \(\{\eta_{t}^{N}:t\geq 0\}\) is the run-and-tumble particle process started from \(\eta_{0}^{N}\sim\mu_{\rho}^{N}\). We can now define the empirical measures of the process, denoted by \(\pi^{N}=\big{\{}\pi_{t}^{N}:t\geq 0\big{\}}\), as follows: \[\pi_{t}^{N}:=\frac{1}{N}\sum_{(x,\sigma)\in V}\eta_{t}^{N}(x,\sigma)\delta_{( \tfrac{x}{N},\sigma)},\] where \(\delta\) is the Dirac measure.
For every \(t\geq 0\), \(\pi_{t}^{N}\) is a Radon measure on \(\mathbb{R}\times S\) such that for \(\phi\in C^{\infty}_{c,S}\) we have \[\pi_{t}^{N}(\phi):=\big{\langle}\phi,\pi_{t}^{N}\big{\rangle}=\frac{1}{N} \sum_{(x,\sigma)\in V}\eta_{t}^{N}(x,\sigma)\phi(\tfrac{x}{N},\sigma).\] We then have the following. **Theorem 2.1**.: _For every \(t\geq 0\), \(\varepsilon>0\) and \(\phi\in C^{\infty}_{c,S}\), we have that_ \[\lim_{N\to\infty}\mathbb{P}\left(\left|\pi_{t}^{N}(\phi)-\sum_{\sigma\in S} \int\rho_{t}(x,\sigma)\phi(x,\sigma)\,\mathrm{d}x\right|>\varepsilon\right)=0,\] _where \(\rho_{t}(x,\sigma)\) solves the PDE \(\dot{\rho}_{t}=A^{*}\rho_{t}\) with initial condition \(\rho_{0}(x,\sigma)=\rho(x,\sigma)\)._ This theorem is actually a corollary of an even stronger theorem, which shows convergence of the trajectories \(\pi^{N}\) in the path space \(D([0,T];\mathbf{M})\) equipped with the Skorokhod topology, where \(\mathbf{M}\) is the space of Radon measures on \(\mathbb{R}\times S\). Let \(\pi=\{\pi_{t}:t\geq 0\}\) denote the trajectory of measures on \(\mathbb{R}\times S\) such that for all \(t\geq 0,\phi\in C^{\infty}_{c,S}\) we have that \(\langle\phi,\pi_{t}\rangle=\langle\langle\phi,\rho_{t}\rangle\rangle\), where \(\rho_{t}\) solves the PDE in the above theorem. The trajectory \(\pi\) is then the unique continuous path in \(D([0,T];\mathbf{M})\) such that for all \(\phi\in C^{\infty}_{c,S}\) \[\mathscr{M}_{t}^{\phi}(\pi)=\pi_{t}(\phi)-\pi_{0}(\phi)-\int_{0}^{t}\pi_{s}(A \phi)\,\mathrm{d}s=0. \tag{4}\] **Theorem 2.2**.: _For any \(N\in\mathbb{N}\), let \(P^{N}\) be the law of the process \(\pi^{N}\). Then \(P^{N}\to\delta_{\pi}\) weakly in \(D([0,T];\mathbf{M})\) for any \(T>0\), with \(\pi\) the unique continuous path solving (4)._ For the sake of self-consistency, the proof of Theorem 2.2 is given in the appendix. The method of proof is that of Seppäläinen in [10, Chapter 8]. ### 2.4 Stationary fluctuations For every \(N\in\mathbb{N}\), we define the fluctuation field \(Y^{N}:=\{Y^{N}_{t}:t\geq 0\}\) as \[Y^{N}_{t}=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}\big{(}\eta_{t}(x,\sigma)- \rho\big{)}\delta_{(\frac{x}{N},\sigma)}. \tag{5}\] This process takes values in the space of distributions on \(\mathbb{R}\times S\), denoted by \((C^{\infty}_{c,S})^{*}\). We expect the fluctuation field \(Y^{N}\) to converge weakly to a generalized stationary Ornstein-Uhlenbeck process. Before we can state the result, we first recall some basic definitions concerning space-time white noise (see e.g. [6] for a detailed account). **Definition 2.2**.: A random distribution \(\mathscr{W}\) is called a _white noise_ on \(\mathbb{R}\times S\) if \(\{\langle\phi,\mathscr{W}\rangle:\phi\in C^{\infty}_{c,S}\}\) is jointly centered Gaussian with covariance \[\mathbb{E}[\langle\phi,\mathscr{W}\rangle\,\langle\psi,\mathscr{W}\rangle]= \left\langle\langle\phi,\psi\rangle\right\rangle.\] We denote by \(\mathrm{d}\mathscr{W}_{t}\) the time-differential of space-time white noise. This object is such that, when paired with a test function \(\phi\in C^{\infty}_{c,S}\) and integrated over time, it gives a Brownian motion, i.e., \[\int_{0}^{t}\left\langle\phi,\mathrm{d}\mathscr{W}_{s}\right\rangle=B\big{(} \left\langle\left\langle\phi,\phi\right\rangle\right\rangle t\big{)},\] where \(B(\cdot)\) is a standard Brownian motion on \(\mathbb{R}\). We denote by \(\frac{\mathrm{d}\mathscr{W}_{t}}{\mathrm{d}t}\) the corresponding space-time white noise.
This random space-time distribution is such that for all \(\phi:[0,T]\times\mathbb{R}\times S\to\mathbb{R}\), with \(\phi(t,\cdot)\) a test function for every \(t\), the pairings \(\langle\phi,\frac{\mathrm{d}\mathscr{W}_{t}}{\mathrm{d}t}\rangle\) are jointly Gaussian with covariance \[\mathbb{E}\left[\langle\phi,\frac{\mathrm{d}\mathscr{W}_{t}}{\mathrm{d}t} \rangle\langle\psi,\frac{\mathrm{d}\mathscr{W}_{t}}{\mathrm{d}t}\rangle\right] =\int_{0}^{T}\left\langle\langle\phi(t,\cdot),\psi(t,\cdot)\rangle\right\rangle \mathrm{d}t.\] **Remark 2.1**.: In physics language, a white noise on \(\mathbb{R}\times S\) is a Gaussian field \(W(x,\sigma)\) with covariance \(\delta(x-y)\delta_{\sigma,\sigma^{\prime}}\), and a space-time white noise on \(\mathbb{R}\times S\) is a Gaussian field \(W(t,x,\sigma)\) with covariance \(\delta(t^{\prime}-t)\delta(x-y)\delta_{\sigma,\sigma^{\prime}}\). ## 3 Main theorem We are now ready to state the main theorem of this paper. **Theorem 3.1**.: _For every \(N\in\mathbb{N}\), let \(Q^{N}\) denote the law of the process \(Y^{N}\). Then \(Q^{N}\to Q\) weakly in \(D([0,T];(C^{\infty}_{c,S})^{*})\) for any \(T>0\), where \(Q\) is the law of the stationary Gaussian process \(Y\) satisfying the following SPDE:_ \[\mathrm{d}Y_{t}=A^{*}Y_{t}\,\mathrm{d}t+\sqrt{2\kappa\rho}\,\partial_{x}\, \mathrm{d}\mathscr{W}_{t}+\sqrt{2\rho\Sigma}\,\mathrm{d}\widetilde{\mathscr{W}}_{t}. \tag{6}\] Here \(\mathrm{d}\mathscr{W}_{t}\) and \(\mathrm{d}\widetilde{\mathscr{W}}_{t}\) are the time-differentials of two independent space-time white noises on the space \(\mathbb{R}\times S\), and \(\Sigma\) is the operator working on test functions \(\phi\in C^{\infty}_{c,S}\) as \[(\Sigma\phi)(x,\sigma)=-\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime}) \big{(}\phi(x,\sigma^{\prime})-\phi(x,\sigma)\big{)}.\] It is easy to check that for \(\phi,\psi\in C^{\infty}_{c,S}\) we have \(\left\langle\left\langle\Sigma\phi,\psi\right\rangle\right\rangle=\left\langle \left\langle\phi,\Sigma\psi\right\rangle\right\rangle\); hence the operator is self-adjoint (and non-negative, by the symmetry of the rates \(c(\sigma,\sigma^{\prime})\)), and we can define its square root \(\sqrt{\Sigma}\). Furthermore, the process \(\partial_{x}\,\mathrm{d}\mathscr{W}_{t}\) is defined as the process of distributions such that for all \(\phi\in C^{\infty}_{c,S}\) \[\left\langle\phi,\partial_{x}\,\mathrm{d}\mathscr{W}_{t}\right\rangle=-\left \langle\partial_{x}\phi,\mathrm{d}\mathscr{W}_{t}\right\rangle.\] The rigorous meaning of the SPDE in (6) is that the mapping \(\phi\mapsto Y_{t}(\phi)\) is the solution of the following martingale problem: for every \(\phi\in C^{\infty}_{c,S}\), the following two processes \[\begin{split}\mathscr{M}^{\phi}_{t}(Y)&=Y_{t}(\phi )-Y_{0}(\phi)-\int_{0}^{t}Y_{s}(A\phi)\,\mathrm{d}s,\\ \mathscr{N}^{\phi}_{t}(Y)&=\mathscr{M}^{\phi}_{t}(Y) ^{2}-2t\kappa\rho\left\langle\left\langle\partial_{x}\phi,\partial_{x}\phi \right\rangle\right\rangle-2t\rho\left\langle\left\langle\phi,\Sigma\phi \right\rangle\right\rangle\end{split} \tag{7}\] are martingales with respect to the filtration \(\mathscr{F}_{t}=\sigma(Y_{s}:0\leq s\leq t)\). ### 3.1 Stationary covariance of the fluctuation fields Before we give the proof of Theorem 3.1, we will first compare the covariance structure of the limiting process of \(Y^{N}\) with the covariance structure of the process solving the SPDE in (6). This covariance uniquely characterizes the process.
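To make the operator \(A\) and the semigroup \(e^{tA}\), which governs the covariance computed below, concrete, here is a small numerical sketch (ours, purely illustrative; the parameter values, the two-state choice \(S=\{-1,+1\}\) with flip rate \(c(\sigma,-\sigma)=\gamma\), and the periodic truncation of \(\mathbb{R}\) are all assumptions) that approximates \(e^{tA}\phi\) by explicit finite differences:

```python
import numpy as np

# Assumed parameters: kappa, lambda, and a single flip rate gamma for S = {-1, +1}.
kappa, lam, gamma = 1.0, 0.5, 1.0
L, M = 10.0, 400                     # periodic truncation of R to [-L, L) with M cells
T, dt = 0.5, 1e-4                    # horizon and explicit Euler step (dt << dx^2 / kappa)
x = np.linspace(-L, L, M, endpoint=False)
dx = x[1] - x[0]
phi = {s: np.exp(-x**2) for s in (+1, -1)}   # smooth, effectively compactly supported

def A(psi):
    # A psi = kappa d_xx psi + sigma*lam d_x psi + gamma (psi(., -sigma) - psi(., sigma)),
    # cf. (1), discretized with central differences on the periodic grid
    out = {}
    for s in (+1, -1):
        d2 = (np.roll(psi[s], -1) - 2 * psi[s] + np.roll(psi[s], 1)) / dx**2
        d1 = (np.roll(psi[s], -1) - np.roll(psi[s], 1)) / (2 * dx)
        out[s] = kappa * d2 + s * lam * d1 + gamma * (psi[-s] - psi[s])
    return out

for _ in range(int(T / dt)):         # phi <- phi + dt * A phi, so phi ~ e^{TA} phi_0
    Aphi = A(phi)
    phi = {s: phi[s] + dt * Aphi[s] for s in (+1, -1)}
# phi[+1], phi[-1] now approximate (e^{TA} phi_0)(., sigma); the total mass
# sum_sigma int phi dx is conserved, since drift and flip terms integrate to zero.
```

With \(\rho\) given, Proposition 3.2 below states that the stationary time-covariance equals \(\rho\left\langle\left\langle e^{tA}\phi,\psi\right\rangle\right\rangle\), which such a scheme lets one evaluate numerically.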
**Proposition 3.2**.: _For all \(\phi,\psi\in C_{c}^{\infty}(\mathbb{R}\times S)\)_ \[\lim_{N\to\infty}\mathbb{E}[Y_{t}^{N}(\phi)Y_{0}^{N}(\psi)]= \mathbb{E}[Y_{t}(\phi)Y_{0}(\psi)]=\rho\cdot\left\langle\left\langle e^{tA} \phi,\psi\right\rangle\right\rangle.\] Proof.: If \(Y\) is a solution to the SPDE in (6), then we can write \[Y_{t}(\phi)=\mathscr{M}_{t}^{\phi}(Y)+Y_{0}(\phi)+\int_{0}^{t}Y_{ s}(A\phi)\,\mathrm{d}s,\] where \(\mathscr{M}_{t}^{\phi}(Y)\) is a martingale with respect to the filtration \(\mathscr{F}_{t}=\sigma\left(Y_{s}:0\leq s\leq t\right)\) such that \(\mathscr{M}_{0}^{\phi}(Y)=0\). By the martingale property we have that \[\mathbb{E}[\mathscr{M}_{t}^{\phi}(Y)Y_{0}(\psi)]=\mathbb{E} \big{[}\mathbb{E}[\mathscr{M}_{t}^{\phi}(Y)Y_{0}(\psi)|\mathscr{F}_{0}]\big{]} =\mathbb{E}\big{[}Y_{0}(\psi)\mathbb{E}[\mathscr{M}_{t}^{\phi}(Y)| \mathscr{F}_{0}]\big{]}=0,\] and so \[\mathbb{E}[Y_{t}(\phi)Y_{0}(\psi)]=\mathbb{E}[Y_{0}(\phi)Y_{0}( \psi)]+\int_{0}^{t}\mathbb{E}[Y_{s}(A\phi)Y_{0}(\psi)]\,\mathrm{d}s.\] From this we get the following differential equation \[\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}[Y_{t}(\phi)Y_{0}(\psi)] \bigg{|}_{t=0}=\mathbb{E}[Y_{0}(A\phi)Y_{0}(\psi)].\] Therefore, if \(Y\) is a solution of (6), we see that \[\mathbb{E}[Y_{t}(\phi)Y_{0}(\psi)]=\mathbb{E}[Y_{0}(e^{tA}\phi)Y_{ 0}(\psi)]=\rho\cdot\left\langle\left\langle e^{tA}\phi,\psi\right\rangle \right\rangle.\] On the other hand, for any \(N\in\mathbb{N}\) we have that \[\mathbb{E}_{\eta}\left[Y_{t}^{N}(\phi)Y_{0}^{N}(\psi)\right] =\frac{1}{N}\sum_{(x,\sigma)\in V}\sum_{(y,\sigma^{\prime})\in V} \phi(\tfrac{x}{N},\sigma)\psi(\tfrac{y}{N},\sigma^{\prime})\int\mathbb{E}_{ \eta}\left[(\eta_{t}(x,\sigma)-\rho)(\eta(y,\sigma^{\prime})-\rho)\right] \mathrm{d}\mu_{\rho}(\eta)\] \[=\frac{1}{N}\sum_{(x,\sigma)\in V}\sum_{(y,\sigma^{\prime})\in V }\phi(\tfrac{x}{N},\sigma)\psi(\tfrac{y}{N},\sigma^{\prime})\int\widehat{ \mathbb{E}}_{(x,\sigma)}\left[(\eta(\widehat{X}_{t},\widehat{\sigma}_{t})- \rho)(\eta(y,\sigma^{\prime})-\rho)\right]\mathrm{d}\mu_{\rho}(\eta)\] \[=\frac{1}{N}\sum_{(x,\sigma)\in V}\sum_{(y,\sigma^{\prime})\in V }\phi(\tfrac{x}{N},\sigma)\psi(\tfrac{y}{N},\sigma^{\prime})\widehat{\mathbb{ E}}_{(x,\sigma)}\left[\mathrm{Cov}_{\mu_{\rho}}\left(\eta(\widehat{X}_{t}, \widehat{\sigma}_{t}),\eta(y,\sigma^{\prime})\right)\right] \tag{8}\] Now note that, because \(\mu_{\rho}\) is a product of Poisson measures, the covariance term is equal to \(\rho\) if and only if \((\widehat{X}_{t},\widehat{\sigma}_{t})=(y,\sigma^{\prime})\) and zero otherwise. Therefore \[\sum_{(y,\sigma^{\prime})\in V}\psi(\tfrac{y}{N},\sigma^{\prime}) \widehat{\mathbb{E}}_{(x,\sigma)} \left[\mathrm{Cov}_{\mu_{\rho}}\left(\eta(\widehat{X}_{t}, \widehat{\sigma}_{t}),\eta(y,\sigma^{\prime})\right)\right]\] \[=\rho\sum_{(y,\sigma^{\prime})\in V}\psi(\tfrac{y}{N},\sigma^{ \prime})\widehat{\mathbb{E}}_{(x,\sigma)}\left[I\left((\widehat{X}_{t}, \widehat{\sigma}_{t})=(y,\sigma^{\prime})\right)\right]\] \[=\rho\cdot(\widehat{S}_{t}^{N}\psi)(\tfrac{x}{N},\sigma) \tag{9}\] Here \(\widehat{S}_{t}^{N}\) is the semigroup of the Markov process \((\tfrac{\widehat{X}_{t}}{N},\widehat{\sigma}_{t})\), for which we have the following uniform convergence \(\widehat{S}_{t}^{N}\psi\to e^{tA^{*}}\psi\) (see section 2.2.2). 
By now combining (8) and (9), we find that \[\mathbb{E}_{\eta}\left[Y_{t}^{N}(\phi)Y_{0}^{N}(\psi)\right]=\rho\cdot\frac{1}{N}\sum_{(x,\sigma)\in V}\phi(\tfrac{x}{N},\sigma)(\widehat{S}_{t}^{N}\psi)(\tfrac{x}{N},\sigma)\to\rho\cdot\left\langle\left\langle\phi,e^{tA^{*}}\psi\right\rangle\right\rangle=\rho\cdot\left\langle\left\langle e^{tA}\phi,\psi\right\rangle\right\rangle,\] which concludes the proof. ## 4 Proof of stationary fluctuations In this section we prove Theorem 3.1, following the line of proof of Van Ginkel and Redig in [11]. We start by introducing the Dynkin martingales of \(Y_{t}^{N}(\phi)\). For every \(\phi\in C_{c,S}^{\infty}\) and \(N\in\mathbb{N}\), let \(\{\mathscr{F}_{t}^{N}:t\geq 0\}\) be the filtration generated by \(\{Y_{t}^{N}:t\geq 0\}\). Since \(Y_{t}^{N}(\phi)\) is a function of the Markov process \(\eta_{t}\), the following processes \[\begin{split}\mathscr{M}_{t}^{N,\phi}(Y^{N})&=Y_{t }^{N}(\phi)-Y_{0}^{N}(\phi)-\int_{0}^{t}L_{N}Y_{s}^{N}(\phi)\,\mathrm{d}s,\\ \mathscr{N}_{t}^{N,\phi}(Y^{N})&=\mathscr{M}_{t}^{N, \phi}(Y^{N})^{2}-\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{N})\,\mathrm{d}s\end{split} \tag{10}\] are \(\mathscr{F}_{t}^{N}\)-martingales, where \(\Gamma_{s}^{N,\phi}\) is the Carré du champ operator given by \[\Gamma_{s}^{N,\phi}(Y^{N}):=L_{N}\big{(}Y_{s}^{N}(\phi)^{2}\big{)}-2Y_{s}^{N} (\phi)L_{N}Y_{s}^{N}(\phi). \tag{11}\] Our goal for this section is to show that in the limit as \(N\to\infty\), we can substitute \(\mathscr{M}_{t}^{\phi}(Y^{N})\) and \(\mathscr{N}_{t}^{\phi}(Y^{N})\) for \(\mathscr{M}_{t}^{N,\phi}(Y^{N})\) and \(\mathscr{N}_{t}^{N,\phi}(Y^{N})\) respectively. We do so in Propositions 4.1 and 4.4. **Proposition 4.1**.: _For all \(\phi\in C_{c,S}^{\infty}\) we have_ \[\lim_{N\to\infty}\mathbb{E}\left[\left|\mathscr{M}_{t}^{N,\phi}(Y^{N})- \mathscr{M}_{t}^{\phi}(Y^{N})\right|^{2}\right]=0.\] Proof.: First of all, note that by definition \[\mathbb{E}\left[\left|\mathscr{M}_{t}^{N,\phi}(Y^{N})-\mathscr{M}_{t}^{\phi} (Y^{N})\right|^{2}\right]=\mathbb{E}\left[\left|\int_{0}^{t}L_{N}Y_{s}^{N}( \phi)\,\mathrm{d}s-\int_{0}^{t}Y_{s}^{N}(A\phi)\,\mathrm{d}s\right|^{2}\right].\] For a given \((x,\sigma)\in V\) we have that \[L_{N}\eta(x,\sigma) =\kappa N^{2}[\eta(x+1,\sigma)+\eta(x-1,\sigma)-2\eta(x,\sigma)]\] \[\quad+\lambda N[\eta(x-\sigma,\sigma)-\eta(x,\sigma)]\] \[\quad+\sum_{\sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})[\eta(x,\sigma^{\prime})-\eta(x,\sigma)].\] In particular, summing by parts, we find that \(\sum_{(x,\sigma)\in V}\big{(}L_{N}\eta(x,\sigma)\big{)}\phi(\frac{x}{N},\sigma)=\sum_{(x,\sigma)\in V}\eta(x,\sigma)(\mathscr{L}_{N}\phi)(\frac{x}{N},\sigma)\), where we remind the reader that \(\mathscr{L}_{N}\) is the generator of a single run-and-tumble particle on the rescaled space \(\frac{1}{N}\mathbb{Z}\times S\). Therefore \[L_{N}Y_{s}^{N}(\phi)=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}\big{(}L_{N}\eta_{s}(x,\sigma)\big{)}\phi(\tfrac{x}{N},\sigma)=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}\eta_{s}(x,\sigma)\cdot(\mathscr{L}_{N}\phi)(\tfrac{x}{N},\sigma).\] Now using that for any \(\phi\in C_{c,S}^{\infty}\) we have \[\sum_{(x,\sigma)\in V}\rho\cdot(\mathscr{L}_{N}\phi)(\tfrac{x}{N},\sigma)=0,\] we are able to write \[L_{N}Y_{s}^{N}(\phi)=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}(\eta_{s}(x,\sigma)-\rho)\cdot(\mathscr{L}_{N}\phi)(\tfrac{x}{N},\sigma).\] Since \(\mathscr{L}_{N}\phi\to A\phi\) uniformly, where \(A\) is defined in (1), we have that \[L_{N}Y_{s}^{N}(\phi)=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}(\eta_{s}(x,\sigma)-\rho)\cdot(A\phi)(\tfrac{x}{N},\sigma)+R_{1}(\phi,N,s), \tag{12}\]
where \(R_{1}(\phi,N,s)\) is an error term produced by the Taylor approximations. Since \(\phi\) is compactly supported, if we define \(V_{\phi}^{N}:=\{(x,\sigma)\in V:\phi(\frac{x}{N},\sigma)\neq 0\}\), then \(|V_{\phi}^{N}|=\mathcal{O}(N)\). Furthermore, the error term is bounded in the following way: \[|R_{1}(\phi,N,s)|\leq\frac{1}{N^{\frac{3}{2}}}\sum_{(x,\sigma)\in V_{\phi}^{N }}(\eta_{s}(x,\sigma)-\rho)(\kappa||\partial_{xxx}\phi||_{\infty}+\lambda \sigma^{2}||\partial_{xx}\phi||_{\infty}). \tag{13}\] Therefore we find that for every \(\phi\in C^{\infty}_{c,S}\) and \(s\geq 0\), \[\mathbb{E}\left[R_{1}(\phi,N,s)^{2}\right]\leq\frac{1}{N^{3}}\mathbb{E}\left[ \sum_{(x,\sigma),(y,\sigma^{\prime})\in V^{N}_{\phi}}(\eta_{s}(x,\sigma)-\rho)(\eta_{s}(y,\sigma^{\prime})-\rho)(\kappa||\partial_{xxx}\phi||_{\infty}+\lambda\sigma^{2}||\partial_{xx}\phi||_{\infty})^{2}\right]=\frac{1}{N^{3}}\sum_{(x,\sigma),(y,\sigma^{\prime})\in V^{N}_{\phi}}\mathrm{Cov}\big{(}\eta_{s}(x,\sigma),\eta_{s}(y,\sigma^{\prime})\big{)}(\kappa||\partial_{xxx}\phi||_{\infty}+\lambda\sigma^{2}||\partial_{xx}\phi||_{\infty})^{2}. \tag{14}\] Since we are starting the process \(\eta_{t}\) from the invariant product measure \(\mu_{\rho}\), we have that \[\mathrm{Cov}\big{(}\eta_{s}(x,\sigma),\eta_{s}(y,\sigma^{\prime})\big{)}=\rho\cdot I\big{(}(x,\sigma)=(y,\sigma^{\prime})\big{)}. \tag{15}\] Therefore, \[\mathbb{E}\left[R_{1}(\phi,N,s)^{2}\right]\leq\frac{1}{N^{3}}|V^{N}_{\phi}|\rho(\kappa||\partial_{xxx}\phi||_{\infty}+\lambda\sigma^{2}||\partial_{xx}\phi||_{\infty})^{2}\to 0,\] where we used the fact that \(|V^{N}_{\phi}|=\mathcal{O}(N)\). Note that the above convergence is uniform in \(s\), and therefore by dominated convergence we find that \[\lim_{N\to\infty}\mathbb{E}\left[\left|\mathscr{M}^{N,\phi}_{t}(Y^{N})-\mathscr{M}^{\phi}_{t}(Y^{N})\right|^{2}\right]\leq\lim_{N\to\infty}t\cdot\int_{0}^{t}\mathbb{E}\left[R_{1}(\phi,N,s)^{2}\right]\mathrm{d}s=0,\] which concludes the proof. The substitution of \(\mathscr{N}_{t}^{N,\phi}(Y^{N})\) is a bit more work and requires a fourth moment estimate. We start by proving two lemmas; the proof of the substitution result in Proposition 4.4 follows immediately from them. **Lemma 4.2**.: _For all \(\phi\in C^{\infty}_{c,S}\) we have the following:_ \[\lim_{N\to\infty}\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})^{2}-\mathscr{M}^{\phi}_{t}(Y^{N})^{2}\right)^{2}\right]=0. \tag{16}\] Proof.: We start with the following factorization and application of Hölder's inequality (with \(p=q=2\)): \[\begin{split}\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})^{2}-\mathscr{M}^{\phi}_{t}(Y^{N})^{2}\right)^{2}\right]&=\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})-\mathscr{M}^{\phi}_{t}(Y^{N})\right)^{2}\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})+\mathscr{M}^{\phi}_{t}(Y^{N})\right)^{2}\right]\\ &\leq\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})-\mathscr{M}^{\phi}_{t}(Y^{N})\right)^{4}\right]^{\frac{1}{2}}\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})+\mathscr{M}^{\phi}_{t}(Y^{N})\right)^{4}\right]^{\frac{1}{2}}.\end{split} \tag{17}\] We will first show that the first expectation in the last line vanishes as \(N\to\infty\), and afterwards we will show that the second expectation is uniformly bounded in \(N\).
Note that by (12) \[\mathbb{E}\left[\left(\mathscr{M}^{N,\phi}_{t}(Y^{N})-\mathscr{M}^{\phi}_{t}(Y ^{N})\right)^{4}\right]=\mathbb{E}\left[\left(\int_{0}^{t}\left[R_{1}(\phi,N,s )\right]\mathrm{d}s\right)^{4}\right]\leq t^{3}\int_{0}^{T}\mathbb{E}\left[R_ {1}(\phi,N,s)^{4}\right]\mathrm{d}s.\] Using the bound in (13) we find that \[\mathbb{E}\left[R_{1}(\phi,N,s)^{4}\right] \leq\frac{1}{N^{6}}\sum_{\begin{subarray}{c}(x_{i},\sigma_{i}) \in V^{N}_{\phi}\\ 1\leq i\leq 4\end{subarray}}\mathbb{E}\left[\prod_{i=1}^{4}(\eta_{s}(x_{i}, \sigma_{i})-\rho)\right](\kappa||\partial_{xxx}\phi||_{\infty}+\lambda\sigma^{ 2}||\partial_{xx}\phi||_{\infty})^{4} \tag{18}\] \[=\frac{1}{N^{6}}\left(|V^{N}_{\phi}|(3\rho^{2}+\rho)+|V^{N}_{\phi }|^{2}\rho^{2}\right)(\kappa||\partial_{xxx}\phi||_{\infty}+\lambda\sigma^{2}|| \partial_{xx}\phi||_{\infty})^{4},\] where we used that for \((x,\sigma),(y,\sigma^{\prime})\in V\) with \(x\neq y\) and \(\sigma\neq\sigma^{\prime}\), \[\mathbb{E}\left[(\eta_{s}(x,\sigma)-\rho)^{4}\right]=3\rho^{2}+\rho,\quad\ \mathbb{E}\left[(\eta_{s}(x,\sigma)-\rho)^{2}(\eta_{s}(y,\sigma^{\prime})-\rho )^{2}\right]=\rho^{2}.\] From (18) it follows that \(R_{1}(\phi,N,s,\sigma)\xrightarrow{L^{4}}0\) uniformly in \(s\), hence we find that \[\mathbb{E}\left[\left(\mathscr{M}_{t}^{N,\phi}(Y^{N})-\mathscr{M}_{t}^{\phi}(Y^ {N})\right)^{4}\right]\leq t^{3}\int_{0}^{t}\mathbb{E}\left[R_{1}(\phi,N,s)^{ 4}\right]\mathrm{d}s\to 0.\] To now show that the second expectation in the last line of (17) is uniformly bounded in \(N\), note that \[\mathbb{E}\left[\left(\mathscr{M}_{t}^{N,\phi}(Y^{N})+\mathscr{M}_{t}^{\phi}(Y ^{N})\right)^{4}\right]\leq 8\left(\mathbb{E}\left[\left(\mathscr{M}_{t}^{N, \phi}(Y^{N})\right)^{4}\right]+\mathbb{E}\left[\left(\mathscr{M}_{t}^{\phi}(Y ^{N})\right)^{4}\right]\right), \tag{19}\] and similarly \[\mathbb{E}\left[\left(\mathscr{M}_{t}^{\phi}(Y^{N})\right)^{4}\right]\leq 2 7\left(\mathbb{E}\left[Y_{t}^{N}(\phi)^{4}\right]+\mathbb{E}\left[Y_{0}^{N}( \phi)^{4}\right]+\mathbb{E}\left[\left(\int_{0}^{t}Y_{s}^{N}(A\phi)\, \mathrm{d}s\right)^{4}\right]\right). \tag{20}\] Now we need to show that three expectations on the right-hand-side are uniformly bounded. For the first expectation, we find that \[\mathbb{E}\left[Y_{t}^{N}(\phi)^{4}\right]\leq\frac{1}{N^{2}}\cdot\sum_{(x_{1},\sigma_{1})\in V_{\phi}^{N}}\cdots\sum_{(x_{4},\sigma_{4})\in V_{\phi}^{N}} \mathbb{E}\left[\prod_{i=1}^{4}(\eta_{t}(x_{i},\sigma_{i})-\rho)\right]||\phi ||_{\infty}.\] Since \(\mathbb{E}[\eta_{t}(x,\sigma)-\rho]=0\), we only get non-zero contributions when all \((x_{i},\sigma_{i})\) are equal or when we have two distinct pairs. Therefore \[\mathbb{E}\left[Y_{t}^{N}(\phi)^{4}\right]\leq\frac{1}{N^{2}}\left(|V_{\phi}^ {N}|(3\rho^{2}+\rho)+3|V_{\phi}^{N}|(|V_{\phi}^{N}|-1)\rho^{2}\right)||\phi||_ {\infty}=\mathcal{O}(1), \tag{21}\] hence it is uniformly bounded, and similar approaches can be used for \(\mathbb{E}\left[Y_{0}^{N}(\phi)^{4}\right]\) and \(\mathbb{E}\left[Y_{s}^{N}(A\phi)^{4}\right]\). The fact that the last expectation in (20) is uniformly bounded now follows from an application of Holder's inequality, namely \[\mathbb{E}\left[\left(\int_{0}^{t}Y_{s}^{N}(A\phi)\,\mathrm{d}s\right)^{4} \right]\leq t^{3}\int_{0}^{T}\mathbb{E}\left[\left(Y_{s}^{N}(A\phi)\right)^{4 }\right]\mathrm{d}s.\] Therefore we know that \(\mathbb{E}\left[\left(\mathscr{M}_{t}^{\phi}(Y^{N})\right)^{4}\right]\) is uniformly bounded. 
The proof for \(\mathbb{E}\left[\left(\mathscr{M}_{t}^{N,\phi}(Y^{N})\right)^{4}\right]\) works the same way if we use that \[\mathbb{E}\left[\left(L_{N}Y_{s}^{N}(\phi)\right)^{4}\right]=8\left(\mathbb{E }\left[\left(Y_{s}^{N}(A\phi)\right)^{4}\right]+\mathbb{E}\left[R_{1}(\phi,N,t,\sigma)^{4}\right]\right),\] where by (18) we already know that \(\mathbb{E}\left[R_{1}(\phi,N,t,\sigma)^{4}\right]\) is uniformly bounded. Hence we can conclude that (16) holds. **Lemma 4.3**.: _For all \(\phi\in C_{c,S}^{\infty}\) we have the following_ \[\lim_{N\to\infty}\mathbb{E}\left[\left(\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{N}) \,\mathrm{d}s-2t\kappa\rho\left\langle\left\langle\partial_{x}\phi,\partial_{ x}\phi\right\rangle\right\rangle-2t\rho\left\langle\left\langle\phi,\Sigma\phi \right\rangle\right\rangle\right)^{2}\right]=0.\] Proof.: Using the following general derivation for a Markov process with transition rates \(r(\eta,\eta^{\prime})\) \[L_{N}f^{2}(\eta)-2f(\eta)\cdot L_{N}f(\eta) =\sum_{\eta^{\prime}\in\Omega}r(\eta,\eta^{\prime})\Big{(}\big{(} f^{2}(\eta^{\prime})-f^{2}(\eta)\big{)}-2\big{(}f(\eta)f(\eta^{\prime})-f^{2}( \eta)\big{)}\Big{)} \tag{22}\] \[=\sum_{\eta^{\prime}\in\Omega}r(\eta,\eta^{\prime})\big{(}f(\eta^{ \prime})-f(\eta)\big{)}^{2},\] we find that \[\Gamma_{s}^{N,\phi}(Y^{N}) =\kappa N\sum_{(x,\sigma)\in V}\eta_{s}(x,\sigma)\left(\phi( \tfrac{x+1}{N})-\phi(\tfrac{x}{N},\sigma)\right)^{2}+(\phi(\tfrac{x-1}{N})- \phi(\tfrac{x}{N},\sigma))^{2}\big{)}\] \[\quad+\lambda\sum_{(x,\sigma)\in V}\eta_{s}(x,\sigma)\left(\phi( \tfrac{x+\sigma}{N})-\phi(\tfrac{x}{N},\sigma)\right)^{2}\] \[\quad+\frac{1}{N}\sum_{(x,\sigma)\in V}\sum_{\sigma^{\prime}\in S }c(\sigma,\sigma^{\prime})\eta_{s}(x,\sigma)\big{(}\phi(x,\sigma^{\prime})-\phi (x,\sigma))^{2}.\] Making use of Taylor approximations again, we can write \[\begin{split}\Gamma_{s}^{N,\phi}(Y^{N})&=\frac{2\kappa}{N }\sum_{(x,\sigma)\in V}\eta_{s}(x,\sigma)\left(\partial_{x}\phi(\tfrac{x}{N}, \sigma)\right)^{2}+\frac{1}{N}\sum_{(x,\sigma)\in V}\sum_{\sigma^{\prime}\in S }c(\sigma,\sigma^{\prime})\eta_{s}(x,\sigma)(\phi(\tfrac{x}{N},\sigma^{ \prime})-\phi(\tfrac{x}{N},\sigma)^{2}\\ &\qquad+R_{2}(\phi,s,N),\end{split} \tag{23}\] with \(R_{2}(\phi,s,N)\) the error term, which is bounded as follows \[|R_{2}(\phi,s,N)|\leq\kappa\frac{1}{N^{3}}\sum_{(x,\sigma)\in V_{\phi}^{N}} \eta_{s}(x,\sigma)\kappa||\partial_{xx}\phi||_{\infty}+\frac{1}{N^{2}}\sum_{( x,\sigma)\in V_{\phi}^{N}}\eta_{s}(x,\sigma)\lambda\sigma||\phi^{\prime}||_{ \infty}.\] In particular, following the line of thought leading to (14), we can again deduce that \(R_{2}(\phi,s,N)\xrightarrow{L^{2}}0\). 
Therefore, for the expectation we find that \[\begin{split}\mathbb{E}\left[\Gamma_{s}^{N,\phi}(Y^{N})\right]& =\frac{2\kappa\rho}{N}\sum_{(x,\sigma)\in V}(\partial_{x}\phi( \tfrac{x}{N},\sigma))^{2}+\frac{2\rho}{N}\sum_{\sigma^{\prime}\in S}c(\sigma, \sigma^{\prime})(\phi(\tfrac{x}{N},\sigma^{\prime})-\phi(\tfrac{x}{N},\sigma)^ {2}+\mathbb{E}\left[R_{2}(\phi,s,N)\right]\\ &\to 2\kappa\rho\left\langle\left\langle\partial_{x}\phi,\partial_{x} \phi\right\rangle\right\rangle+2\rho\left\langle\left\langle\phi,\Sigma\phi \right\rangle\right\rangle,\end{split} \tag{24}\] and for the variance \[\begin{split}\operatorname{Var}\left[\Gamma_{s}^{N,\phi}(Y^{N}) \right]&\leq\frac{C(\phi,s)}{N^{2}}\sum_{(x,\sigma),(y,\sigma^{ \prime})\in V_{\phi}^{N}}\operatorname{Cov}\bigl{(}\eta_{s}(x,\sigma),\eta_{s} (y,\sigma^{\prime})\bigr{)}\\ &=\frac{C(\phi,s)}{N^{2}}|V_{\phi}^{N}|\rho\to 0,\end{split} \tag{25}\] with \(C(\phi,s)\) some constant and where we have used (15) for the equality. Since the variance converges to zero, this means that \(\Gamma_{s}^{N,\phi}(Y^{N})\) converges to its mean in \(L^{2}\). Therefore \[\begin{split}\lim_{N\to\infty}&\mathbb{E}\left[ \left(\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{N})\,\mathrm{d}s-2t\kappa\rho\left\langle \left\langle\partial_{x}\phi,\partial_{x}\phi\right\rangle\right\rangle-2t \rho\left\langle\left\langle\phi,\Sigma\phi\right\rangle\right\rangle\right)^ {2}\right]\\ &\leq\lim_{N\to\infty}\int_{0}^{t}\mathbb{E}\left[\left(\Gamma_{s}^ {N,\phi}(Y^{N})-2\kappa\rho\left\langle\left\langle\partial_{x}\phi,\partial_{ x}\phi\right\rangle\right\rangle-2\rho\left\langle\left\langle\phi,\Sigma\phi \right\rangle\right\rangle\right)^{2}\right]\mathrm{d}s\\ &=0,\end{split}\] where we used dominated convergence for the last equality. **Proposition 4.4**.: _For all \(\phi\in C_{c,S}^{\infty}\)_ \[\lim_{N\to\infty}\mathbb{E}\left[\left|\mathscr{N}_{t}^{N,\phi}(Y^{N})- \mathscr{N}_{t}^{\phi}(Y^{N})\right|^{2}\right]=0.\] Proof.: We have that \[\begin{split}\mathbb{E}\left[\left|\mathscr{N}_{t}^{N,\phi}(Y^{N })-\mathscr{N}_{t}^{\phi}(Y^{N})\right|^{2}\right]&\leq 2\mathbb{E}\left[\left( \mathscr{M}_{t}^{N,\phi}(Y^{N})^{2}-\mathscr{M}_{t}^{\phi}(Y^{N})^{2}\right)^{ 2}\right]\\ &\quad+2\mathbb{E}\left[\left(\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{ N})\,\mathrm{d}s-2t\kappa\rho\left\langle\left\langle\partial_{x}\phi,\partial_{x} \phi\right\rangle\right\rangle-2t\rho\left\langle\left\langle\phi,\Sigma\phi \right\rangle\right\rangle\right)\right].\end{split}\] The proof now follows from Lemma 4.2 and 4.3. ### Tightness In this section we will show the tightness of the collection \(\{Y^{N}:N\in\mathbb{N}\}\). **Proposition 4.5**.: \(\{Y^{N}:N\in\mathbb{N}\}\) _is tight in \(D([0,T];(C_{c,S}^{\infty})^{*})\)._ Proof.: Since \(C^{\infty}_{c,S}\) is a nuclear space, by Mitoma [8, Theorem 4.1] it suffices to prove that for a fixed \(\phi\in C^{\infty}_{c,S}\) we have that \(\{Y^{N}(\phi):N\in\mathbb{N}\}\) is tight in the path space \(D([0,T];\mathbb{R})\). 
Aldous' criterion tells us that it suffices to show the following two things: **A.1**: For all \(t\in[0,T]\) and \(\varepsilon>0\) there exists a compact \(K(t,\varepsilon)\in\mathbb{R}\) such that \[\sup_{N\in\mathbb{N}}\mathbb{P}\big{(}Y^{N}_{t}(\phi)\notin K(t,\varepsilon) \big{)}\leq\varepsilon.\] **A.2**: For all \(\varepsilon>0\) \[\lim_{\delta\to 0}\limsup_{N\to\infty}\sup_{\begin{subarray}{c}\tau\in \mathscr{T}_{T}\\ \theta\leq\delta\end{subarray}}\mathbb{P}\big{(}|Y^{N}_{\tau}(\phi)-Y^{N}_{ \tau+\theta}(\phi)|>\varepsilon\big{)}=0,\] with \(\mathscr{T}_{T}\) the set of all stopping times bounded by \(T\). Fix \(t\in[0,T]\) and \(\phi\in C^{\infty}_{c,S}\). Then, for every \(\sigma\in S\) we have that \[\mathbb{E}[Y^{N}_{t}(\phi)]=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V }\mathbb{E}\left[\eta_{t}(x,\sigma)-\rho\right]\phi(\tfrac{x}{N},\sigma)=0,\] \[\operatorname{Var}[Y^{N}_{t}(\phi)]=\frac{1}{\sqrt{N}}\sum_{(x, \sigma)\in V}\operatorname{Var}\left[\eta_{t}(x,\sigma)-\rho\right]\phi( \tfrac{x}{N},\sigma)=\frac{1}{N}\rho\sum_{(x,\sigma)\in V}\phi^{2}(\tfrac{x}{ N},\sigma).\] By the central limit theorem, we therefore see that every \(Y^{N}_{t}(\phi)\) converges in distribution to the normal distribution \(\mathcal{N}\big{(}0,\rho\left\langle\left\langle\phi,\phi\right\rangle\right\rangle \big{)}\). This implies the tightness of the real-valued random variables \(\{Y^{N}_{t}(\phi):N\in\mathbb{N}\}\), and therefore also **A.1**. To prove **A.2**, we note that for every bounded stopping time \(\tau\in\mathscr{T}_{T}\) we have that \[Y^{N}_{\tau}(\phi)=\mathscr{M}^{N,\phi}_{\tau}(Y^{N})+Y^{N}_{0}(\phi)+\int_{0 }^{\tau}L_{N}Y^{N}_{s}(\phi)ds,\] with \(\mathscr{M}^{N,\phi}_{\tau}(Y^{N})\) the Dynkin martingale of \(Y^{N}_{\tau}(\phi)\). Using the Markov inequality, we can then deduce that \[\mathbb{P}\big{(}|Y^{N}_{\tau}(\phi)-Y^{N}_{\tau+\theta}(\phi)|>\varepsilon \big{)} \leq\frac{1}{\varepsilon^{2}}\mathbb{E}\left[\big{(}Y^{N}_{\tau}( \phi)-Y^{N}_{\tau+\theta,\sigma}(\phi)\big{)}^{2}\right]\] \[\leq\frac{2}{\varepsilon^{2}}\left(\mathbb{E}\left[\left( \mathscr{M}^{N,\phi}_{\tau}(Y^{N})-\mathscr{M}^{N,\phi}_{\tau+\theta}(Y^{N}) \right)^{2}\right]+\mathbb{E}\left[\left(\int_{\tau}^{\tau+\theta}L_{N}Y^{N}_ {s}(\phi)ds\right)^{2}\right]\right). \tag{26}\] For the integral term, note that by the Cauchy-Schwarz inequality and Fubini we have that \[\mathbb{E}\left[\left(\int_{\tau}^{\tau+\theta}L_{N}Y^{N}_{s}( \phi)dr\right)^{2}\right] \leq\sqrt{\theta}\cdot\left(\mathbb{E}\left[\int_{0}^{T+\theta} \left(L_{N}Y^{N}_{s}(\phi)\right)^{2}ds\right]\right)^{\frac{1}{2}} \tag{27}\] \[=\sqrt{\theta}\cdot\left(\int_{0}^{T+\theta}\mathbb{E}\left[ \left(L_{N}Y^{N}_{s}(\phi)\right)^{2}\right]ds\right)^{\frac{1}{2}}.\] In the proof of Lemma 4.2 we have shown that \(\{L_{N}Y^{N}_{s}(\phi):N\in\mathbb{N}\}\) is uniformly bounded in \(L^{4}\), hence it is also uniformly bounded in \(L^{2}\), i.e. \[C:=\sup_{N\in\mathbb{N}}\mathbb{E}\left[\left(L_{N}Y^{N}_{s}(\phi)\right)^{2} \right]<\infty. \tag{28}\] Combining (27) and (28), we find that \[\lim_{\delta\to 0}\limsup_{N\to\infty}\sup_{\begin{subarray}{c}\tau\in \mathscr{T}_{T}\\ \theta\leq\delta\end{subarray}}\mathbb{E}\left[\left(\int_{\tau}^{\tau+\theta}L _{N}Y^{N}_{s}(\phi)dr\right)^{2}\right]\leq\lim_{\delta\to 0}\sqrt{\delta CT}=0. 
\tag{29}\] For the martingale, by the martingale property we have that \[\mathbb{E}\left[\mathscr{M}_{\tau}^{N,\phi}(Y^{N})\mathscr{M}_{\tau+\theta}^{N, \phi}(Y^{N})\right]=\mathbb{E}\left[\left(\mathscr{M}_{\tau}^{N,\phi}(Y^{N}) \right)^{2}\right],\] hence we see that \[\mathbb{E}\left[\left(\mathscr{M}_{\tau}^{N,\phi}(Y^{N})-\mathscr{M}_{\tau+ \theta}^{N,\phi}(Y^{N})\right)^{2}\right]=\mathbb{E}\left[\left(\mathscr{M}_{ \tau+\theta}^{N,\phi}(Y^{N})\right)^{2}-\left(\mathscr{M}_{\tau}^{N,\phi}(Y^{N })\right)^{2}\right].\] Since \(\mathbb{E}\left[\mathscr{M}_{0,\sigma}^{N,\phi}(Y^{N})\right]=0\), we can use that \[\mathbb{E}\left[\left(\mathscr{M}_{t}^{N,\phi}(Y^{N})\right)^{2}\right]= \mathbb{E}\left[\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{N})\right]ds,\] because \(\int_{0}^{t}\Gamma_{s}^{N,\phi}(Y^{N})ds\) is the quadratic variation of the process \(\mathscr{M}_{t}^{N,\phi}(Y^{N})\). Furthermore, \(\mathbb{E}\left[\left(\Gamma_{s}^{N,\phi}(Y^{N})\right)^{2}\right]\) is uniformly bounded since \(\Gamma_{s}^{N,\phi}(Y^{N})\) converges in \(L^{2}\), hence \[\sup_{N\in\mathbb{N}}\mathbb{E}\left[\left(\mathscr{M}_{\tau}^{N, \phi}(Y^{N})-\mathscr{M}_{\tau+\theta}^{N,\phi}(Y^{N})\right)^{2}\right] =\sup_{N\in\mathbb{N}}\mathbb{E}\left[\int_{\tau}^{\tau+\theta} \Gamma_{s}^{N,\phi}(Y^{N})\right]ds,\] \[\leq\sqrt{\theta}\cdot\left(\int_{0}^{T+\theta}\sup_{N\in \mathbb{N}}\mathbb{E}\left[\left(\Gamma_{s}^{N,\phi}(Y^{N})\right)^{2}\right] ds\right)^{\frac{1}{2}}<\infty,\] where we used Cauchy Schwarz in the second line. From this we can again conclude that \[\lim_{\delta\to 0}\limsup_{N\to\infty}\sup_{\begin{subarray}{c}\tau\in \mathscr{D}\\ \theta\leq\delta\end{subarray}}\mathbb{E}\left[\left(\mathscr{M}_{\tau}^{N, \phi}(Y^{N})-\mathscr{M}_{\tau+\theta}^{N,\phi}(Y^{N})\right)^{2}\right]=0. \tag{30}\] Combining (29) and (30) with (26), we indeed find that (**A.2**) holds. ### Uniqueness of limits By the tightness, there exists a subsequence \(N_{k}\) and a process \(Y\in D([0,T];(C_{c,S}^{\infty})^{*})\) such that \(Y^{N_{k}}\to Y\) in distribution. **Lemma 4.6**.: _For each \(\phi\in C_{c,S}^{\infty}\) we have that \(t\mapsto Y_{t}(\phi)\) is a.s. continuous._ Proof.: We define the following functions \[w_{\delta}(X)=\sup_{|t-s|<\delta}|X_{t}-X_{s}|,\ \ \ \ \ w_{\delta}^{\prime}(X)= \inf_{\begin{subarray}{c}0=t_{0}<t_{1}<\ldots<t_{r}=1\\ t_{1}-t_{i-1}>\delta\end{subarray}}\max_{1\leq i\leq r}\ \sup_{t_{i-1}\leq s<t\leq t_{i}}|X_{t}-X_{s}|,\] then we have the following inequality \[w_{\delta}(X)\leq 2w_{\delta}^{\prime}(X)+\sup_{t}|X_{t}-X_{t^{-}}|. \tag{31}\] In [1], Aldous shows that the second part of Aldous' criterion, as stated in **A.2**, implies the following: for all \(\varepsilon>0\) and all \(\sigma\in S\) we have that \[\lim_{\delta\to 0}\limsup_{N\to\infty}\mathbb{P}(w_{\delta}^{\prime}(Y^{N}( \phi))\geq\varepsilon)=0. \tag{32}\] Now note that \[\sup_{t}\left|Y_{t}^{N}(\phi)-Y_{t^{-}}^{N}(\phi)\right|\leq\sup_{t}\frac{1}{ \sqrt{N}}\sum_{v\in V}|(\eta_{t}(v)-\eta_{t^{-}}(v))\phi(v)|\leq\frac{1}{\sqrt {N}}||\phi||_{\infty}\to 0, \tag{33}\] where we used that there can be at most one jump between the times \(t\) and \(t^{-}\) for the second inequality. Therefore, by combining (32) and (33) with (31) we can conclude that \[\lim_{\delta\to 0}\limsup_{N\to\infty}\mathbb{P}(w_{\delta}(Y^{N}(\phi))\geq \varepsilon)=0.\] Therefore we find that \(t\mapsto Y_{t}(\phi)\) is a.s. continuous. Finally we show that \(Y\) solves the martingale problem in (7). 
**Proposition 4.7**.: _For every \(\phi\in C^{\infty}_{c,S}\) the processes \(\mathscr{M}^{\phi}_{t}(Y)\) and \(\mathscr{N}^{\phi}_{t}(Y)\) defined in (7) are martingales with respect to the filtration \(\{\mathscr{F}_{t}:t\geq 0\}\) generated by \(Y\)._ Proof.: Fix arbitrary \(n\in\mathbb{N}\), \(s\geq 0\), \(0\leq s_{1}\leq...\leq s_{n}\leq s\), \(\psi_{1},...,\psi_{n}\in C^{\infty}_{c,S}\) and \(\Psi\in C_{b}(\mathbb{R}^{n})\), and define the function \(\mathcal{I}:D([0,T];(C^{\infty}_{c,S})^{*})\to\mathbb{R}\) as \[\mathcal{I}(X):=\Psi\left(X_{s_{1}}(\psi_{1}),...,X_{s_{n}}(\psi_{n})\right).\] To show that \(\mathscr{M}^{\phi}_{t}(Y)\) and \(\mathscr{N}^{\phi}_{t}(Y)\) are \(\mathscr{F}_{t}\)-martingales, it suffices to show that \[\lim_{k\to\infty}\mathbb{E}\left[\mathscr{M}^{N_{k},\phi}_{t}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right]=\mathbb{E}\left[\mathscr{M}^{\phi}_{t}(Y) \mathcal{I}(Y)\right],\quad\lim_{k\to\infty}\mathbb{E}\left[\mathscr{N}^{N_{ k},\phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\right]=\mathbb{E}\left[\mathscr{N}^{ \phi}_{t}(Y)\mathcal{I}(Y)\right],\] with \(\mathscr{M}^{N,\phi}_{t}\) and \(\mathscr{N}^{N,\phi}_{t}\) the Dynkin martingales defined in (10). Namely, by the martingale property we then have that \[\mathbb{E}\left[\mathscr{M}^{\phi}_{t}(Y)\mathcal{I}(Y)\right]=\lim_{k\to \infty}\mathbb{E}\left[\mathscr{M}^{N_{k},\phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{ N_{k}})\right]=\lim_{k\to\infty}\mathbb{E}\left[\mathscr{M}^{N_{k},\phi}_{s}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right]=\mathbb{E}\left[\mathscr{M}^{\phi}_{s}(Y) \mathcal{I}(Y)\right],\] and analogous for \(\mathscr{N}^{\phi}_{t}(Y)\). We start by proving \(\mathscr{M}^{\phi}_{t}(Y)\) is a martingale. First of all, note that from Proposition 4.1 we can conclude \[\lim_{k\to\infty}\mathbb{E}\left[\mathscr{M}^{N_{k},\phi}_{t}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right]=\lim_{k\to\infty}\mathbb{E}\left[\mathscr{M}^{ \phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\right].\] Furthermore, in Lemma 4.2 we have shown that the process \(\mathscr{M}^{\phi}_{t}(Y^{N})\) is uniformly bounded in \(L^{4}\), hence it is also uniformly bounded in \(L^{2}\), therefore \[\sup_{k\in\mathbb{N}}\mathbb{E}\left[\left|\mathscr{M}^{\phi}_{t}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right|^{2}\right]\leq||\Psi||_{\infty}^{2}\sup_{k\in \mathbb{N}}\sum_{\sigma\in S}\mathbb{E}\left[\left(\mathscr{M}^{\phi}_{t}(Y^{ N_{k}})\right)^{2}\right]<\infty.\] This implies that we have uniform integrability of \(\mathscr{M}^{\phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\). It now suffices to show that \(\mathscr{M}^{\phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\) converges to \(\mathscr{M}^{\phi}_{t}(Y)\mathcal{I}(Y)\) in distribution. Note that because the path space \(D([0,T];(C^{\infty}_{c,S})^{*}_{S})\) is not metrizable, we are unable to use the Portmanteau theorem. However, in [11] a method is introduced to work around this. We define the following two mappings \(P_{1}:D([0,T];(C^{\infty}_{c,S})^{*})\to D([0,T];\mathbb{R})^{n+2}\) and \(P_{2}:D([0,T];\mathbb{R})^{n+2}\to\mathbb{R}\) as \[P_{1}(Y):=(Y(\cdot),Y(A\cdot),Y(\cdot),...,Y(\cdot))\] and \[P_{2}(X):=\left(X^{1}_{t}(\phi)-X^{1}_{0}(\phi)-\int_{0}^{t}X^{2}_{s}(\phi)ds \right)\Psi(X^{3}_{s_{1}}(\psi_{1}),...,X^{n+2}_{s_{n}}(\psi_{n})).\] Notice first of all that we have \(P_{2}\circ P_{1}=\mathscr{M}^{\phi}_{t}\cdot\mathcal{I}\). By Jakubowski [7, Theorem 1.7], we know that each of the components of \(P_{1}\) is continuous, therefore \(P_{1}\) itself is continuous. 
Now we need to prove that the set of discontinuities of \(P_{2}\) has measure \(0\) under law of \(P_{1}(Y)\). Afterwards we are able to use the Portmanteau theorem to conclude that \[\mathscr{M}^{\phi}_{t}(Y^{N})\cdot\mathcal{I}(Y^{N})=P_{2}(P_{1}(Y^{N}))\to P _{2}(P_{1}(Y))=\mathscr{M}^{\phi}_{t}(Y)\cdot\mathcal{I}(Y)\] in distribution. By Lemma 4.6, we know that \(Y(\phi)\) is a.s. continuous for every \(\phi\in C^{\infty}_{c,S}\), and therefore also \(P_{1}(Y)\). Let \(X^{m}\) be a sequence in \(D([0,T];\mathbb{R}^{S})^{n+2}\) such that \(X^{m}\to X\) in the Skorokhod topology, where \(X\in D([0,T];\mathbb{R})^{n+2}\) is continuous. The latter assumption actually tells us that \(X^{m}\to X\) uniformly, and therefore it is easy to see that \(P_{2}(X^{m})\to P_{2}(X)\). So under the law of \(P_{1}(Y)\), we have that \(P_{2}\) is a.s. continuous. This finishes the proof that \(\mathscr{M}^{\phi}_{t}(Y)\) is an \(\mathscr{F}_{t}\)-martingale. The proof that \(\mathscr{N}^{\phi}_{t}(Y)\) is a martingale works in the same way. First we note that by Proposition 4.4 we have that \[\lim_{k\to\infty}\mathbb{E}\left[\mathscr{N}^{N_{k},\phi}_{t}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right]=\lim_{k\to\infty}\mathbb{E}\left[\mathscr{N}^{ \phi}_{t}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\right].\] Therefore we only need to show that \[\sup_{k\in\mathbb{N}}\mathbb{E}\left[\left|\mathscr{N}_{t}^{\phi}(Y^{N_{k}}) \mathcal{I}(Y^{N_{k}})\right|^{2}\right]<\infty. \tag{34}\] Afterwards the convergence of \(\mathscr{N}_{t}^{\phi}(Y^{N_{k}})\mathcal{I}(Y^{N_{k}})\) to \(\mathscr{N}_{t}^{\phi}(Y)\mathcal{I}(Y)\) in distribution follows from the same arguments as above. To see that (34) holds, note that \[\mathbb{E}\left[\left(\mathscr{N}_{t}^{\phi}(Y^{N_{k}})\right)^{2}\right]\leq 2 \mathbb{E}\left[\left(\mathscr{M}_{t}^{\phi}(Y^{N_{k}})\right)^{4}\right]+8t^{2 }\rho^{2}\left(\kappa\left\langle\left\langle\partial_{x}\phi,\partial_{x} \phi\right\rangle\right\rangle+\left\langle\left\langle\phi,\Sigma\phi\right \rangle\right\rangle\right)^{2},\] In the proof of Lemma 4.2, we have already shown that \(\mathbb{E}\left[\left(\mathscr{M}_{t}^{\phi}(Y^{N})\right)^{4}\right]\) is uniformly bounded in \(N\), hence the result follows. ### Fluctuations of interacting multi-layer systems: The multi-layer SEP The multi-layer symmetric exclusion process, or multi-layer SEP, is a generalization of the symmetric exclusion process on \(\mathbb{Z}\) to the multi-layered setting on \(\mathbb{Z}\times S\). For this process we look at configurations \(\eta\in\{0,1,...,\alpha\}^{V}\) with \(\alpha\in\mathbb{N}\), i.e., there are at most \(\alpha\) particles per site \(v\in V\). Instead of having an active components on every layer \(\sigma\in S\) like the run-and-tumble particle system, multi-layer SEP switches to a different diffusion coefficient, denoted by \(\kappa_{\sigma}\), between the layers. 
The generator of this process is then as follows \[L_{N}^{SEP}f(\eta) =N^{2}\sum_{(x,\sigma)\in V}\kappa_{\sigma}\sum_{|x-y|=1}\eta(x, \sigma)\left(\alpha-\eta(y,\sigma)\right)\left(f\big{(}\eta^{(x,\sigma)\to(y, \sigma)}\big{)}-f(\eta)\right)\] \[\qquad+\sum_{(x,\sigma)\in V}\sum_{\sigma^{\prime}\in S}c( \sigma,\sigma^{\prime})\eta(x,\sigma)(\alpha-\eta(x,\sigma^{\prime}))\left(f \big{(}\eta^{(x,\sigma)\to(x,\sigma^{\prime})}\big{)}-f(\eta)\right).\] In [9] it is proved that this process is self-dual and has ergodic measures given by product Binomial measures \(\nu_{\rho}=\bigotimes_{v\in V}\mathrm{Bin}(\alpha,\rho)\) where \(\rho\in(0,1)\) is constant. The corresponding single-particle generator is then given by \[\mathscr{L}_{N}^{SEP}\phi(x,\sigma)=\alpha\kappa_{\sigma}\big{(}(\phi(x+ \tfrac{1}{N},\sigma)+\phi(x-\tfrac{1}{N},\sigma)-2\phi(x,\sigma)\big{)}+\sum_{ \sigma^{\prime}\in S}c(\sigma,\sigma^{\prime})\big{(}\phi(x,\sigma^{\prime})- \phi(x,\sigma)\big{)},\] and \(\mathscr{L}_{N}^{SEP}\phi\to B\phi\) uniformly, where \[(B\phi)(x,\sigma)=\frac{\alpha\kappa_{\sigma}}{2}\partial_{xx}\phi(x,\sigma)+ \sum_{\sigma^{\prime}\in S}\alpha c(\sigma,\sigma^{\prime})\big{(}\phi(x, \sigma^{\prime})-\phi(x,\sigma)\big{)}.\] Since we took the rates \(c(\sigma,\sigma^{\prime})\) symmetric, this operator is self-adjoint in the Hilbert space \(L^{2}(\mathrm{d}x\times|\cdot|_{S})\). Using the same line of proof as earlier in this section, we can find a result for the fluctuations of this process where the limiting process satisfies the SPDE \[\mathrm{d}Y_{t}=BY_{t}\,\mathrm{d}t+\sqrt{2\rho(\alpha-\rho)K}\partial_{x}\, \mathrm{d}\mathscr{W}_{t}+\sqrt{2\rho(\alpha-\rho)\Sigma}\,\mathrm{d}\tilde{ \mathscr{W}}_{t}. \tag{35}\] Here \(K\) is the operator given by \((K\phi)(x,\sigma)=\kappa_{\sigma}\phi(x,\sigma)\). Note in the noise terms the appearance of the terms \(\rho(\alpha-\rho)\) instead of \(\rho\) as in (6). This comes from the fact that for \((x,\sigma)\neq(y,\sigma^{\prime})\) \[\mathbb{E}_{\nu_{\rho}}[\eta_{s}(x,\sigma)(\alpha-\eta_{s}(y,\sigma^{\prime})] =\rho(\alpha-\rho),\] which plays a role in the calculation of the expectation of the Carre du champ operator. ## 5 Scaling limits of the total density When we sum over layers, i.e., the \(\sigma\)-variable, then the configuration giving the total amount of particles per site is of course no longer a Markov process. Therefore, both in the hydrodynamic limit as well as in the fluctuations we expect to see memory terms. In the hydrodynamic limit and in the fluctuations these memory effects appear in the form of higher order time derivatives. Finally, when looking at the small-noise limit of the fluctuations, we obtain large deviations of Schilder's type for Gaussian processes, and we will also see memory terms in the corresponding large deviation rate function. In this section we make these memory term effects explicit in the simplest possible setting. From now on we assume that \(S=\{-1,1\}\) and that \(c(1,-1)=c(-1,1)=\gamma\). In this section we want to find properties of the fluctuations of the total density of particles, where we sum up the particles in both layers. 
This produces an empirical measure and fluctuation field on \(\mathbb{R}\) given by \[\zeta_{t}^{N}=\frac{1}{N}\sum_{(x,\sigma)\in V}\eta_{t}^{N}(x,\sigma)\delta_{ \frac{x}{N}},\qquad Z_{t}^{N}=\frac{1}{\sqrt{N}}\sum_{(x,\sigma)\in V}(\eta_{t }(x,\sigma)-\rho)\delta_{\frac{x}{N}}.\] ### Hydrodynamic equation for the total density From Theorem 2.1 we can deduce that \(\xi_{t}^{N}\) converges in probability to \(\varrho_{t}(x)\,\mathrm{d}x\), where the density \(\varrho_{t}(x)\) is the sum of the densities on both layers, i.e., \(\varrho_{t}(x)=\rho_{t}(x,1)+\rho_{t}(x,-1)\) with \(\rho_{t}(x,\sigma)\) the solution to the hydrodynamic equation \(\dot{\rho}_{t}=A^{*}\rho_{t}\). We can rewrite this equation as a system of PDE's given by \[\begin{cases}\dot{\rho}_{t}(x,1)=\left(\frac{\kappa}{2}\partial_{xx}-\lambda \partial_{x}\right)\rho_{t}(x,1)+\gamma(\rho_{t}(x,-1)-\rho_{t}(x,1)),\\ \\ \dot{\rho}_{t}(x,-1)=\left(\frac{\kappa}{2}\partial_{xx}+\lambda\partial_{x} \right)\rho_{t}(x,-1)+\gamma(\rho_{t}(x,1)-\rho_{t}(x,-1)).\end{cases}\] Summing up both equations gives us a PDE for the total density \(\varrho(x)\). This PDE also depends on the difference of the densities, which we will denote by \(\Delta_{t}(x):=\rho(x,1)-\rho(x,-1)\), and therefore we get a new system of PDE's \[\begin{cases}\dot{\varrho}_{t}(x)=\frac{\kappa}{2}\partial_{xx}\varrho_{t}(x )-\lambda\partial_{x}\Delta_{t}(x),\\ \\ \dot{\Delta}_{t}(x)=\frac{\kappa}{2}\partial_{xx}\Delta_{t}(x)-\lambda\partial _{x}\varrho_{t}(x)-2\gamma\Delta_{t}(x).\end{cases} \tag{36}\] From this system we can actually find a closed equation for \(\varrho(x)\). Namely, by first taking a second derivative in time of the upper equation we find that \[\ddot{\varrho}_{t}(x)=\frac{\kappa}{2}\partial_{xx}\dot{\varrho}_{t}(x)- \lambda\partial_{x}\dot{\Delta}_{t}(x)=\frac{\kappa}{2}\partial_{xx}\dot{ \varrho}_{t}(x)-\lambda\partial_{x}\left(\frac{\kappa}{2}\partial_{xx}\Delta_ {t}(x)-\lambda\partial_{x}\varrho_{t}(x)-2\gamma\Delta_{t}(x)\right).\] Now we use that from the upper equation in (36) we also have that \(-\lambda\partial_{x}\Delta_{t}(x)=\dot{\varrho}_{t}(x)-\frac{\kappa}{2} \partial_{xx}\varrho_{t}(x)\), in order to find the following equation \[\ddot{\varrho}_{t}(x)-(\kappa\partial_{xx}+2\gamma)\dot{\varrho}_{t}(x)= \left((\lambda^{2}-\gamma\kappa)\partial_{xx}-\frac{\kappa^{2}}{4}(\partial _{x})^{4}\right)\varrho_{t}(x).\] ### Fluctuations of the total density For the analysis of the fluctuation field of the total density we will be a bit more precise. We first set up a framework where we can rigorously talk about the different distributions coming from the SPDE given in (6) corresponding to both layers. For that, we start by defining a fluctuation field for each layer individually. \[Y_{t,\sigma}^{N}=\frac{1}{\sqrt{N}}\sum_{x\in\mathbb{Z}}(\eta_{t}(x,\sigma)- \rho)\delta_{\frac{x}{N}}.\] The relation between these fluctuation fields and \(Z_{t}^{N}\) is immediate, namely for every \(\phi\in C_{c}^{\infty}\) we have that \[\left\langle\phi,Z_{t}^{N}\right\rangle=\left\langle\phi,Y_{t,1}^{N}\right\rangle +\left\langle\phi,Y_{t,-1}^{N}\right\rangle. \tag{37}\] However, there is also a direct relation between the fluctuation fields on both layers and the fluctuation field \(Y_{t}^{N}\) on \(\mathbb{R}\times S\) defined in (5): for any \(\phi\in C_{c,S}^{\infty}\) the following holds \[\left\langle\phi,Y_{t}^{N}\right\rangle=\left\langle\phi(\cdot,1),Y_{t,1}^{N} \right\rangle+\left\langle\phi(\cdot,-1),Y_{t,-1}^{N}\right\rangle. 
\tag{38}\] In this way \(Y_{t}^{N}\), but more importantly its limiting process \(Y_{t}\), can be interpreted as a column vector of distributions, \(Y_{t}=\left(Y_{t,1}\quad Y_{t,-1}\right)^{T}\), working on a row vector of functions, \(\phi=\left(\phi(\cdot,1)\quad\phi(\cdot,-1)\right)\). With this in mind, we can look at the vector representation of the measure \(A^{*}Y_{t}\). We have that \[\left\langle\phi,A^{*}Y_{t}\right\rangle=\left\langle A\phi,Y_{t}\right\rangle =\left\langle(\tfrac{\kappa}{2}\partial_{xx}+\lambda\partial_{x}) \phi(\cdot,1),Y_{t,1}\right\rangle+\left\langle\phi(\cdot,1),\gamma(Y_{t,-1}-Y _{t,1})\right\rangle\] \[\qquad+\left\langle(\tfrac{\kappa}{2}\partial_{xx}-\lambda \partial_{x})\phi(\cdot,-1),Y_{t,-1}\right\rangle+\left\langle\phi(\cdot,-1), \gamma(Y_{t,1}-Y_{t,-1})\right\rangle\] \[=\left\langle\phi(\cdot,1),(\tfrac{\kappa}{2}\partial_{xx}- \lambda\partial_{x})Y_{t,1}+\gamma(Y_{t,-1}-Y_{t,1})\right\rangle\] \[\qquad+\left\langle\phi(\cdot,-1),(\tfrac{\kappa}{2}\partial_{ xx}+\lambda\partial_{x})Y_{t,-1}+\gamma(Y_{t,1}-Y_{t,-1})\right\rangle.\] Therefore \(A^{*}Y_{t}\) corresponds to the following vector of distributions \[A^{*}Y_{t}=\begin{pmatrix}(\tfrac{\kappa}{2}\partial_{xx}-\lambda\partial_{x} )Y_{t,1}+\gamma(Y_{t,-1}-Y_{t,1})\\ (\tfrac{\kappa}{2}\partial_{xx}+\lambda\partial_{x})Y_{t,-1}+\gamma(Y_{t,1}-Y _{t,-1})\end{pmatrix}.\] In a similar way we can find a vector representation of the noise part in the SPDE (6), namely \[\sqrt{2\kappa\rho}\partial_{x}\,\mathrm{d}\mathscr{W}_{t}+\sqrt{ 2\rho\Sigma}\,\mathrm{d}\tilde{\mathscr{W}}_{t} =\sqrt{2\kappa\rho}\partial_{x}\begin{pmatrix}\mathrm{d}W_{t,1}\\ \mathrm{d}W_{t,-1}\end{pmatrix}+\sqrt{2\rho\Sigma}\begin{pmatrix}\mathrm{d} \tilde{W}_{t,1}\\ \mathrm{d}\tilde{W}_{t,-1}\end{pmatrix}\] \[=\begin{pmatrix}\sqrt{2\kappa\rho}\partial_{x}\,\mathrm{d}W_{t,1} +\sqrt{\gamma\rho}\left(\mathrm{d}\tilde{W}_{t,-1}-\mathrm{d}\tilde{W}_{t,1} \right)\\ \sqrt{2\kappa\rho}\partial_{x}\,\mathrm{d}W_{t,-1}+\sqrt{\gamma\rho}\left( \mathrm{d}\tilde{W}_{t,1}-\mathrm{d}\tilde{W}_{t,-1}\right)\end{pmatrix},\] where all the \(\mathrm{d}W_{t,i},\mathrm{d}\tilde{W}_{t,i}\) are independent space-time white noises on \(\mathbb{R}\). In this notation, the SPDE in (6) actually gives us a system of SPDE's given by \[\begin{cases}\mathrm{d}Y_{t,1}=\left[\tfrac{\kappa}{2}\partial_{xx}Y_{t,1}- \lambda\partial_{x}Y_{t,1}+\gamma\left(Y_{t,-1}-Y_{t,1}\right)\right]\mathrm{d }t+\sqrt{2\kappa\rho}\partial_{x}\,\mathrm{d}W_{t,1}+\sqrt{\gamma\rho}\left( \mathrm{d}\tilde{W}_{t,-1}-\mathrm{d}\tilde{W}_{t,1}\right),\\ \\ \mathrm{d}Y_{t,-1}=\left[\tfrac{\kappa}{2}\partial_{xx}Y_{t,-1}+\lambda \partial_{x}Y_{t,-1}+\gamma\left(Y_{t,1}-Y_{t,-1}\right)\right]\mathrm{d}t+ \sqrt{2\kappa\rho}\partial_{x}\,\mathrm{d}W_{t,-1}+\sqrt{\gamma\rho}\left( \mathrm{d}\tilde{W}_{t,1}-\mathrm{d}\tilde{W}_{t,-1}\right).\end{cases}\] Now we are able to sum up these equations to get an SPDE for the fluctuation process of the total density \(Z_{t}\). Just like in the hydrodynamic limit, this will again depend on the difference of the two processes above, denoted by \(R_{t}:=Y_{t,1}-Y_{t,-1}\). 
This gives us the following system of coupled SPDE's \[\begin{cases}\mathrm{d}Z_{t}=\left[\tfrac{\kappa}{2}\partial_{xx}Z_{t}- \lambda\partial_{x}R_{t}\right]\mathrm{d}t+2\sqrt{\kappa\rho}\partial_{x}\, \mathrm{d}W_{t,Z},\\ \\ \mathrm{d}R_{t}=\left[\tfrac{\kappa}{2}\partial_{xx}R_{t}-\lambda\partial_{x}Z _{t}-2\gamma R_{t}\right]\mathrm{d}t+2\sqrt{\kappa\rho}\partial_{x}\,\mathrm{d }W_{t,R}+2\sqrt{2\gamma\rho}\,\mathrm{d}\tilde{W}_{t},\end{cases} \tag{39}\] where \[W_{t,Z}=\frac{1}{\sqrt{2}}\left(W_{t,1}+W_{t,-1}\right),\quad\ W_{t,R}=\frac{1 }{\sqrt{2}}\left(W_{t,1}-W_{t,-1}\right),\quad\tilde{W}_{t}=\frac{1}{\sqrt{2} }\left(\tilde{W}_{t,1}-\mathrm{d}\tilde{W}_{t,-1}\right),\] which are all independent space-time white noises on \(\mathbb{R}\). **Remark 5.1**.: It is clear is that \(Z_{t}\) and \(R_{t}\) are (non-Markovian) Gaussian processes. Therefore, we can characterize \(Z_{t}\) through its covariances. Using (37) and (38), we can actually relate this covariance to the covariance structure of \(Y_{t}\), which we have already calculated in Proposition 3.2. In order to do so, for a given \(\phi,\psi\in C_{c}^{\infty}\) we define the functions \(\bar{\phi},\bar{\psi}\in C_{c,S}^{\infty}\) by setting \(\bar{\phi}(x,\sigma)=\phi(x)\) and \(\bar{\psi}(x,\sigma)=\psi(x)\). The covariance can then be computed as follows: \[\mathbb{E}[\left\langle\phi,Z_{t}\right\rangle\left\langle\psi,Z_{0}\right\rangle] =\mathbb{E}[\left(\left\langle\bar{\phi}(\cdot,1),Y_{t,1}\right\rangle+ \left\langle\bar{\phi}(\cdot,-1),Y_{t,-1}\right\rangle\right)\left(\left\langle \bar{\psi}(\cdot,1),Y_{0,1}\right\rangle+\left\langle\bar{\psi}(\cdot,-1),Y_{0,-1 }\right\rangle\right)]\] \[=\mathbb{E}[\left\langle\bar{\phi},Y_{t}\right\rangle\left\langle \bar{\psi},Y_{0}\right\rangle]\] \[=\rho\cdot\left\langle\left\langle e^{tA}\bar{\phi},\bar{\psi} \right\rangle\right\rangle.\] This covariance strongly resembles the covariance of a stationary Ornstein Uhlenbeck process, but notice that the semigroup \(e^{tA}\) works on the "extended" functions \(\bar{\phi},\bar{\psi}\), which corresponds to the non-Markovianity of the process \(\{Z_{t},t\geq 0\}\). In the case of \(\kappa=0\) the noise term vanishes in the upper equation of (39) and therefore we we can solve the system explicitly. Namely, we then find that \[\begin{cases}\mathrm{d}Z_{t}=-\lambda\partial_{x}R_{t}\,\mathrm{d}t,\\ \\ \mathrm{d}R_{t}=-\left[\lambda\partial_{x}Z_{t}+2\gamma R_{t}\right]\mathrm{d}t +2\sqrt{\gamma\rho}\,\mathrm{d}\tilde{W}_{t}.\end{cases}\] Just like for the hydrodynamic limit of the total density, by now taking a second derivative in time in the first equation we find that \(\mathrm{d}^{2}Z_{t}=-\lambda\partial_{x}\,\mathrm{d}R_{t}\,\mathrm{d}t\). By now filling in \(\mathrm{d}R_{t}\) from the lower equation, we have that \[\frac{\mathrm{d}^{2}Z_{t}}{\mathrm{d}t^{2}} =\lambda^{2}\partial_{xx}Z_{t}-2\gamma\lambda\partial_{x}R_{t}+2 \lambda\sqrt{\gamma\rho}\partial_{x}\frac{\mathrm{d}\tilde{W}_{t}}{\mathrm{d}t}\] \[=\lambda^{2}\partial_{xx}Z_{t}+2\gamma\frac{\mathrm{d}Z_{t}}{ \mathrm{d}t}+2\lambda\sqrt{\gamma\rho}\partial_{x}\frac{\mathrm{d}\tilde{W}_{ t}}{\mathrm{d}t}. 
\tag{40}\] ### Large deviations of the limiting fluctuations From the expression given in (40) we are also able to obtain the rate function for the large deviations of \(Z_{t}^{(\epsilon)}\) in the small noise regime, where we add a factor \(\varepsilon\) before the noise \(\tilde{W}_{t}\) which we will send to zero, i.e., we are interested in the large deviations of Schilder type for the family of Gaussian process given by \[\frac{\mathrm{d}^{2}Z_{t}^{(\epsilon)}}{\mathrm{d}t^{2}}=\lambda^{2}\partial_{ xx}Z_{t}^{(\epsilon)}+2\gamma\frac{\mathrm{d}Z_{t}^{(\epsilon)}}{\mathrm{d}t}+ \varepsilon 2\lambda\sqrt{\gamma\rho}\partial_{x}\frac{\mathrm{d}\tilde{W}_{t}}{ \mathrm{d}t}. \tag{41}\] We use that \[\mathbb{P}\left(\varepsilon\partial_{x}\frac{\mathrm{d}\tilde{W}_{t}}{ \mathrm{d}t}\asymp\Gamma(t,x)\right)\asymp\exp\left(-\varepsilon^{-2}\frac{1 }{2}\int_{0}^{T}||\Gamma(t,\cdot)||_{H_{-1}}^{2}\,\mathrm{d}t\right),\] which has to be interpreted in the sense of the large deviation principle in the space of space-time distributions. The rate function in the above equation can be derived from the log-moment-generating function of a space-time white noise on \(\mathbb{R}\), which for a test function \(\phi\in C_{c}^{\infty}([0,T]\times\mathbb{R})\) is equal to \[\Lambda(\phi)=\lim_{\varepsilon\to 0}\varepsilon^{2}\log\left(\mathbb{E}[e^{ \varepsilon^{-1}\left(\phi,\partial_{x}\frac{\mathrm{d}\tilde{W}_{t}}{ \mathrm{d}t}\right)}]\right)=\frac{1}{2}\left\langle\partial_{x}\phi,\partial _{x}\phi\right\rangle_{L^{2}(\mathbb{R}\times[0,T])}.\] The Legendre transform of \(\Lambda\) then yields the rate function, \[\Lambda^{*}(\Gamma(t,x)) =\sup_{\phi\in C_{c}^{\infty}([0,T]\times\mathbb{R})}\left\{ \left\langle\phi,\Gamma\right\rangle_{L^{2}([0,T]\times\mathbb{R})}-\frac{1}{ 2}\left\langle\partial_{x}\phi,\partial_{x}\phi\right\rangle_{L^{2}([0,T] \times\mathbb{R})}\right\}\] \[=\frac{1}{2}\int_{0}^{T}||\Gamma(t,\cdot)||_{H^{-1}}^{2}\, \mathrm{d}t.\] As a consequence, we obtain the large deviation principle for the random space-time distribution \(Z_{t}^{(\varepsilon)}\), namely from (41) it follows that \[\mathbb{P}\left(Z_{t}^{(\varepsilon)}\asymp\Gamma(t,x)\right) =\mathbb{P}\left(\varepsilon\partial_{x}\frac{\mathrm{d}\tilde{W} _{t}}{\mathrm{d}t}\asymp\frac{1}{2\lambda\sqrt{\gamma\rho}}\left(\tilde{\Gamma }(t,x)-2\gamma\dot{\Gamma}(t,x)-\lambda^{2}\partial_{xx}\Gamma(t,x)\right)\right) \tag{42}\] \[\asymp\exp\left(-\varepsilon^{-2}\frac{1}{4\lambda\sqrt{\gamma \rho}}\int_{0}^{T}\left||\tilde{\Gamma}(t,\cdot)-2\gamma\dot{\Gamma}(t,\cdot)- \lambda^{2}\partial_{xx}\Gamma(t,\cdot)\right||_{H_{-1}}^{2}\,\mathrm{d}t \right).\] **Acknowledgements** The authors would like to thank Christian Maes for helpful discussions.
2303.01741
On the residual Monge-Ampère mass of plurisubharmonic functions with symmetry in $\mathbb{C}^2$
The aim of this paper is to study the residual Monge-Amp\`{e}re mass of a plurisubharmonic function with isolated singularity at the origin in $\mathbb{C}^2$. We prove that the residual mass is zero if its Lelong number is zero at the origin, provided that it is $S^1$-invariant. This result answers the zero mass conjecture raised by Guedj and Rashkovskii in this special case. More generally, we obtain an estimate on the residual mass by the maximal Lelong number and Lelong number at the origin.
Long Li
2023-03-03T07:02:21Z
http://arxiv.org/abs/2303.01741v2
# On the residual Monge-Ampere mass of plurisubharmonic functions with symmetry in \(\mathbb{C}^{2}\) ###### Abstract. The aim of this paper is to study the residual Monge-Ampere mass of a plurisubharmonic function with isolated singularity at the origin in \(\mathbb{C}^{2}\). We proved that the residual mass is zero if its Lelong number is zero at the origin, provided that it is \(S^{1}\)-invariant and radially regular. This result answers the zero mass conjecture raised by Guedj and Rashkovskii in this special case. ## 1. Introduction Let \(D\) be a bounded domain in \(\mathbb{C}^{n}\), and \(u\) a \(C^{2}\)-continuous plurisubharmonic function on \(D\). Then the Monge-Ampere operator operates on \(u\) and equals the following positive measure as \[\mathrm{MA}(u):=(dd^{c}u)^{n}\geq 0, \tag{1.1}\] where \(d:=\partial+\bar{\partial}\) and \(d^{c}:=\frac{i}{2}(\bar{\partial}-\partial)\). This operator has great importance in pluripotential theory. However, it is fully non-linear and can not be defined for all plurisubharmonic functions on \(D\), cf. [7], [20] and [29]. However, there are several ways to define the Monge-Ampere measure for a plurisubharmonic function \(u\), if extra conditions have been assumed. For instance, Bedford and Talyor [2] have shown that \((dd^{c}u)^{n}\) is well defined, if \(u\) is further in \(L^{\infty}_{loc}(D)\). Later Demailly [11] extended this definition to all plurisubharmonic functions whose unbounded locus are relatively compact in \(D\). In particular, the operator \((dd^{c})^{n}\) acts well on plurisubharmonic functions with isolated singularity. For simplicity, we take \(D\) as the unit ball \(B_{1}\) in \(\mathbb{C}^{n}\). Let \(u\) be a plurisubharmonic function on \(B_{1}\) that is locally bounded outside the origin. Then Guedj and Rashkovskii (Question 7, [17]) raised the following question: **Conjecture 1.1**.: _Assume that \((dd^{c}u)^{n}\) has a Dirac mass at the origin. Does it imply that \(u\) has a positive Lelong number at the origin?_ The atomic mass of \((dd^{c}u)^{n}\) at the origin is called the _residual Monge-Ampere mass_ of \(u\), and we can write it as \[\tau_{u}(0):=\frac{1}{\pi^{n}}\mathrm{MA}(u)(\{0\}). \tag{1.2}\] Here the normalization is chosen in such a way that we have \(\tau_{\log|z|}(0)=1\). Denote the Lelong number of \(u\) at the origin by \(\nu_{u}(0)\). Then the above Conjecture (1.1) can be rephrased as follows. **Conjecture 1.2**.: \(\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0\) For this reason, this problem is also called the _zero mass conjecture_ for plurisubharmonic functions. In history, there have been many works that contribute to this problem, cf. [8], [25], [26], [19], [6] and [16]. In particular, Rashkovskii [25] confirmed this conjecture, provided with toric symmetry on \(u\). In this paper, we will study a more general symmetry called circular symmetry for plurisubharmonic functions. Let \((z_{1},\cdots,z_{n})\) be the complex Euclidean coordinate on \(\mathbb{C}^{n}\), and then there is a natural \(S^{1}\)-action on it as \[z\to e^{i\theta}z:=(e^{i\theta}z_{1},\cdots,e^{i\theta}z_{n}), \tag{1.3}\] for all \(\theta\in\mathbb{R}\). A domain is balanced if it is invariant under this \(S^{1}\)-action. We say that a function \(u\) on a balanced domain \(D\) is circular symmetric, or \(S^{1}\)-invariant if \[u(e^{i\theta}z)=u(z),\] for all \(z\in D\). Then it is apparent that an \(S^{1}\)-invariant function also has toric symmetry. 
In fact, there is a deep connection between \(S^{1}\)-invariant plurisubharmonic functions and _the Schwarz symmetrization_ technique in classical analysis. Berman and Berndtsson [4] proved that the Schwarz symmetrization of any \(S^{1}\)-invariant plurisubharmonic function is also plurisubharmonic. Moreover, the Lelong number at the origin is always increasing under this symmetrization, cf. [23]. On the other hand, this \(S^{1}\)-action is highly related to _the Hopf-fiberation_ of the unit sphere \(S^{2n-1}\) in \(\mathbb{R}^{2n}\cong\mathbb{C}^{n}\), cf. [15]. As the first attempt to utilize this geometric picture, we will restrict to \(\mathbb{C}^{2}\) in this paper, where the structure of the Hopf-fiberation \(S^{1}\hookrightarrow S^{3}\xrightarrow{p}S^{2}\) is fully understood. For this reason, the domain \(D\) is assumed to be the unit ball \(B_{1}\subset\mathbb{C}^{2}\) from now on. Then we introduce the family \(\mathcal{F}(B_{1})\) (Definition (2.1)) as a collection of all \(S^{1}\)-invariant plurisubharmonic functions on \(B_{1}\) that is \(L^{\infty}_{loc}\) outside the origin. In order to perform calculus, we further introduce the family \(\mathcal{F}^{\infty}(B_{1})\) (Definition (2.2)) as a sub-collection of \(\mathcal{F}(B_{1})\) that is \(C^{2}\)-continuous outside the origin, and then we first confirm the zero mass conjecture for this family. **Theorem 1.3** (Theorem (5.5)).: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0.\] The key observation is a decomposition formula (Theorem (4.4)) for the complex Monge-Ampere mass. It decomposes the measure \((dd^{c}u)^{2}\) on the ball \(B_{R}\) into two integrals on the boundary \(S_{R}:=\partial B_{R}\). The first integral corresponds to the so called _pluricomplex energy_ on \(\mathbb{CP}^{1}\) (Section (7.2)), and the second integral is a kind of \(L^{2}\)-Lelong number (Section (5.1)). Furthermore, it is possible to loose the \(C^{2}\)-regularity condition in the family \(\mathcal{F}^{\infty}(B_{1})\), by utilizing the slicing theory of currents, cf. [12], [14], [28]. To this purpose, we further introduce a sub-collection of the family \(\mathcal{F}(B_{1})\) that are _radially regular_ (Definition (6.2)) functions on the punctured ball \(B_{1}^{*}\). Roughly speaking, a function \(u\in\mathcal{F}(B_{1})\) is radially regular if the directional derivative \(r\partial_{r}u\) is \(L^{\infty}_{loc}\), and the second order derivative \((r\partial_{r})^{2}u\) is \(L^{1}_{loc}\) in \(B^{*}_{1}\). Then the following theorem confirms the zero mass conjecture for this sub-collection. **Theorem 1.4** (Theorem (6.5)).: _For any radially regular function \(u\in\mathcal{F}(B_{1})\), we have_ \[\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0.\] There is another point of view to look at a function \(u\in\mathcal{F}(B_{1})\) that transforms this local problem to a global one. First we recall a few basic facts in Kahler geometry, cf. [27], [13], [10] and [24]. Consider a sub-geodesic ray in the space of Kahler potentials on \(\mathbb{CP}^{1}\). It is actually a local plurisubharmonic function \(u\) on the product space \(\mathbb{D}^{*}\times\mathbb{CP}^{1}\) that is \(S^{1}\)-invariant in the argument direction of \(\mathbb{D}^{*}\). Moreover, it is a geodesic ray if the following _homogeneous complex Monge-Ampere equation_ holds \[(dd^{c}u)^{2}=0, \tag{1.4}\] on the product \(\mathbb{D}^{*}\times\mathbb{CP}^{1}\). 
On the other hand, we note that the punctured disk \(\mathbb{D}^{*}\) acts on \(B^{*}_{1}\subset\mathbb{C}^{2}\) in a natural way. Then the punctured ball \(B^{*}_{1}\) can be thought of as a non-trivial \(\mathbb{D}^{*}\)-fiberation over \(\mathbb{CP}^{1}\), i.e. we have the following fiber bundle structure \[\mathbb{D}^{*}\hookrightarrow B^{*}_{1}\overset{p}{\to}\mathbb{CP}^{1}. \tag{1.5}\] Comparing with the manifold \(\mathbb{D}^{*}\times\mathbb{CP}^{1}\), we have a simpler total space since the Euclidean metric on \(B^{*}_{1}\) is flat. However, the fiberation structure corresponds to the Hopf-fiberation that is more complicated. In particular, the usual complex structure on \(B^{*}_{1}\subset\mathbb{C}^{2}\) is no longer a product of the complex structures on \(\mathbb{D}^{*}\) and \(\mathbb{CP}^{1}\). In this way, a function \(u\in\mathcal{F}(B_{1})\) can be viewed as a sub-geodesic ray on this non-trivial \(\mathbb{D}^{*}\)-bundle (Definition (7.1)), and it is a geodesic ray on this bundle if equation (1.4) holds on \(B^{*}_{1}\). This observation leads us to a new understanding about the decomposition formula and the zero mass conjecture. In fact, the decomposition formula can be fit into an energy picture as shown in Theorem (7.3). There a sub-geodesic ray corresponds to a convex energy functional on \((-\infty,0)\), and geodesic rays are exactly the affine ones. The zero mass conjecture has also been rephrased under this picture, and it describes simultaneous zero asymptotic behaviors of two energy functionals as in Theorem (7.5). Finally, we would like to point out that the decomposition formula (Theorem (4.4)) is very likely to be generalized to all dimensions. Then Theorem (1.3) and (1.4) can be proved in \(\mathbb{C}^{n}\) in a similar manner. However, a counter-example has been contructed by Chi Li [22] to a stronger version of the zero mass conjecture (Question 8, [17]) raised by Demailly. This example is actually an \(S^{1}\)-invariant plurisubharmonic function in \(\mathbb{C}^{2}\), whose singularities occur on a polar set in \(\mathbb{CP}^{1}\) (as the base of the fiberation). Therefore, it is also possible to find counter-examples in \(\mathbb{C}^{2}\) to the zero mass conjecture, if we lose enough regularities for an \(S^{1}\)-invariant plurisubharmonic function. **Acknowledgment:** The author is very grateful to Prof. Xiuxiong Chen and Prof. Mihai Paun for their continuous support and encouragement in mathematics. This problem has been raised to the author when he was studying in Fourier Institute, Grenoble. It is also a great pleasure to thank Chengjian Yao, Xiaojun Wu, Jian Wang for lots of useful discussions. ## 2. **Plurisubharmonic functions with isolated singularity** Denote \(z:=(z_{1},z_{2})\) by the complex Euclidean coordinate on \(\mathbb{C}^{2}\). There is a natural \(S^{1}\)-action on it as \[z\to e^{i\theta}z,\] where \(e^{i\theta}z:=(e^{i\theta}z_{1},e^{i\theta}z_{2})\), and \(\theta\) is an arbitrary real number. We say that a domain \(D\) is balanced if it is invariant under this action. Moreover, a function \(u\) on a balanced domain is said to be \(S^{1}\)-invariant if for all \(z\in D\) \[u(e^{i\theta}z)=u(z).\] Assume that the origin \(0\in\mathbb{C}^{2}\) is contained in a balanced domain \(D\), and we denote \(D^{*}\) by the set \(D-\{0\}\). Consider a plurisubharmonic function \(u\) on \(D\), and we adapt to the following definition. 
**Definition 2.1**.: _A plurisubharmonic function \(u\) on \(D\) belongs to the family \(\mathcal{F}(D)\), if it is \(S^{1}\)-invariant and \(L^{\infty}_{loc}\) on \(D^{*}\)._ We say that \(u\) has an isolated singularity at the origin, if \(u\in\mathcal{F}(D)\) and \(u(0)=-\infty\). In order to preform calculus, we also introduce the following collection of functions with better regularities. **Definition 2.2**.: _A plurisubharmonic function \(u\) on \(D\) belongs to the family \(\mathcal{F}^{\infty}(D)\), if it is \(S^{1}\)-invariant and \(C^{2}\)-continuous on \(D^{*}\)._ We note that a function \(u\in\mathcal{F}^{\infty}(D)\) also belongs to the family \(\mathcal{F}(D)\). By shrinking \(D\) to a smaller balanced domain if necessary, a function \(u\in\mathcal{F}(D)\), or \(\mathcal{F}^{\infty}(D)\) always has an upper bound. After adjusting a constant, we can further assume the following normalization condition \[\sup_{D}u\leq-1,\] for all \(u\in\mathcal{F}(D)\), or \(\mathcal{F}^{\infty}(D)\). ### The residual mass In the following, we will focus on the local behavior of a plurisubharmonic function \(u\) near the origin. To this purpose, it is enough to consider the balanced domain \(D\) as a small ball centered at the origin. Let \(B_{R}\subset\mathbb{C}^{2}\) be the open ball with radius \(R\) centered at the origin, and \(B_{R}^{*}:=B_{R}-\{0\}\) be the corresponding punctured ball. Denote its boundary as \(S_{R}:=\partial B_{R}\), and then \(S_{R}\) is actually a 3-sphere in \(\mathbb{R}^{4}\). Obviously, these two domains \(B_{R}\), \(B_{R}^{*}\) are balanced for each \(R>0\), and then we can consider plurisubharmonic functions in the family \(\mathcal{F}(B_{R})\) and \(\mathcal{F}^{\infty}(B_{R})\). Thanks to Demailly's work [11], the complex Monge-Ampere measure of \(u\in\mathcal{F}(B_{1})\) or \(\mathcal{F}^{\infty}(B_{1})\) is well defined, namely, the following wedge product is a bidegree-\((2,2)\) closed positive current \[\operatorname{MA}(u):=dd^{c}u\wedge dd^{c}u,\] and then it is also a positive Borel measure on \(B_{1}\). (Here we have used \(dd^{c}=i\partial\bar{\partial}\)). Fixing an \(R\in(0,1)\), we take this measure on the ball as \[\operatorname{MA}(u)(B_{R}):=\int_{B_{R}}(dd^{c}u)^{2}=\int\chi_{B_{R}}(dd^{c} u)^{2}, \tag{2.1}\] where \(\chi_{B_{R}}\) is the characteristic function of the ball \(B_{R}\). Then it builds a decreasing sequence of non-negative real numbers as \(R\to 0\). Thanks to the dominated convergence theorem, this limit is exactly the residual Monge-Ampere mass of \(u\) at the origin, i.e. we have \[\tau_{u}(0)=\frac{1}{\pi^{2}}\lim_{R\to 0}\operatorname{MA}(u)(B_{R}). \tag{2.2}\] In order to calculate the measure in equation (2.1), we first observe the following analogue of the Portemanteau theorem. **Lemma 2.3**.: _Suppose \(u_{j}\) is a sequence of smooth plurisubharmonic functions on \(B_{1}\), decreasing to \(u\in\mathcal{F}^{\infty}(B_{1})\). Then we have_ \[\operatorname{MA}(u)(B_{R})=\lim_{j\to+\infty}\operatorname{MA}(u_{j})(B_{R}), \tag{2.3}\] _for all \(R\in(0,1)\)._ Proof.: It is enough to prove the following two inequalities. First, we claim \[\operatorname{MA}(u)(\overline{B}_{R})\geq\limsup_{j\to+\infty}\operatorname{ MA}(u_{j})(\overline{B}_{R}), \tag{2.4}\] on any closed ball \(\overline{B}_{R}\) in \(B_{1}\). Second, we claim \[\operatorname{MA}(u)(B_{R})\leq\liminf_{j\to+\infty}\operatorname{MA}(u_{j})(B _{R}), \tag{2.5}\] on any open ball \(B_{R}\subsetneq B_{1}\). 
Then we have \[\operatorname{MA}(u)(\overline{B}_{R})=\operatorname{MA}(u)(B_{R}), \tag{2.6}\] since \(u\) is \(C^{2}\)-continuous near the boundary \(S_{R}\). Hence our result follows from equation (2.4) and (2.5). To prove the first claim, we observe that there exists a sequence of smooth cut off functions \(\chi_{k}\), such that \(\chi_{k}=1\) on \(\overline{B}_{R}\) and \(\chi_{k}=0\) outside of \(B_{R+\frac{1}{k}}\). Therefore, we have for a fixed \(k\) \[\limsup_{j\to+\infty}\operatorname{MA}(u_{j})(\overline{B}_{R}) = \limsup_{j\to+\infty}\int\chi_{\overline{B}_{R}}(dd^{c}u_{j})^{2} \tag{2.7}\] \[\leq \limsup_{j\to+\infty}\int\chi_{k}(dd^{c}u_{j})^{2}\] \[= \int\chi_{k}(dd^{c}u)^{2}\leq\operatorname{MA}(u)(B_{R+\frac{1}{k }}).\] The equality on the last line of the above equation follows from the convergence \((dd^{c}u_{j})^{2}\to(dd^{c}u)^{2}\) in the sense of currents, cf. [11]. Finally, the inequality (equation (2.4)) follows by taking \(k\to+\infty\) in equation (2.7). The second claim (equation (2.5)) can also be proved in a similar way. **Remark 2.4**.: _For a general \(u\in\mathcal{F}(B_{1})\), Lemma (2.3) may fail to be true for all \(R\in(0,1)\). However, if the Monge-Ampere measure of \(u\) has no mass on the boundary of a ball, namely, we assume_ \[\int_{S_{R}}(dd^{c}u)^{2}=0,\] _for a fixed \(R\), then equation (2.6) still holds, and the convergence (equation (2.3)) follows from the same argument._ Another advantage to deal with the family \(\mathcal{F}^{\infty}(B_{R})\) is that we can perform integration by parts on the current \((dd^{c}u)^{2}\) as follows. **Proposition 2.5**.: _For a plurisubharmonic function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[\int_{B_{R}}(dd^{c}u)^{2}=\int_{S_{R}}d^{c}u\wedge dd^{c}u, \tag{2.8}\] _for all \(R\in(0,1)\)._ Proof.: Let \(\rho(z):=\rho(|z|)\in C_{0}^{\infty}(\mathbb{C}^{2})\) be a non-negative function, satisfying \(\rho(r)=0\) for \(r\geq 1\), and \(\int_{\mathbb{C}^{2}}\rho\ d\lambda=1\). For each \(\varepsilon>0\) small, we rescale it as \[\rho_{\varepsilon}(z):=\varepsilon^{-4}\rho(z/\varepsilon).\] Then we have the following standard regularization of \(u\in\mathcal{F}^{\infty}(B_{1})\) by convolution, namely, we have on any closed ball contained in \(B_{1}\) \[u_{\varepsilon}(z): = (u*\rho_{\varepsilon})(z) \tag{2.9}\] \[= \int_{|z-y|\leq\varepsilon}\rho_{\varepsilon}(z-y)u(y)d\lambda(y)\] \[= \int_{|w|\leq 1}u(z-\varepsilon w)\rho(w)d\lambda(w).\] Hence \(u_{\varepsilon}(z)\) is a sequence of smooth plurisubharmonic functions, decreasing to \(u(z)\) as \(\varepsilon\to 0\). Moreover, we have \(u_{\varepsilon}\to u\) uniformly in \(C^{2}\)-norm on any compact subset in \(B_{1}^{*}\). Due to Stokes' theorem, we compute \[\int_{B_{R}}(dd^{c}u_{\varepsilon})^{2}=\int_{S_{R}}d^{c}u_{\varepsilon}\wedge dd ^{c}u_{\varepsilon}, \tag{2.10}\] on any ball \(B_{R}\subsetneq B_{1}\). Thanks to Lemma (2.3), The L.H.S. of equation (2.10) converges to the measure \(\operatorname{MA}(u)(B_{R})\). Moreover, the R.H.S. of this equation converges to the desired integral \[\int_{S_{R}}d^{c}u\wedge dd^{c}u,\] since \(u_{\varepsilon}\to u\) uniformly in \(C^{2}\)-norm near the sphere \(S_{R}\). Then our result follows. Take an \(S^{1}\)-action on the regularization \(u_{\varepsilon}(z)\), we note that it is also invariant under this action. 
In fact, if put \(w^{\prime}=e^{-i\theta}w\), then we have \[u_{\varepsilon}(e^{i\theta}z) = \int u(e^{i\theta}z-\varepsilon w)\rho(w)d\lambda(w)\] \[= \int u\{e^{i\theta}(z-\varepsilon w^{\prime})\}\rho(w^{\prime})d \lambda(w^{\prime})\] \[= \int u(z-\varepsilon w^{\prime})\rho(w^{\prime})d\lambda(w^{ \prime})=u_{\varepsilon}(z). \tag{2.11}\] Therefore, the following result holds. **Corollary 2.6**.: _For any function \(u\in\mathcal{F}(B_{1})\), there exists a sequence of \(S^{1}\)-invariant smooth plurisubharmonic function \(u_{j}\) decreasing pointwise to \(u\), possible on a slightly smaller ball._ In order to illustrate the use of Proposition (2.5), let us consider a simpler case. Suppose \(u\) is a subharmonic function on the unit disk \(\mathbb{D}\subset\mathbb{C}\), which is also \(C^{2}\)-continuous on \(\mathbb{D}^{*}\). Then equation (2.8) reduces to \[\int_{|z|<R}dd^{c}u=\int_{|z|=R}d^{c}u, \tag{2.12}\] for all \(R\in(0,1)\). Utilizing the polar coordinate \(z=re^{i\theta}\) on \(\mathbb{C}^{*}\), we have the computation \[d^{c}u=-\operatorname{Im}(\bar{\partial}u)=\frac{1}{2}\left\{(r\partial_{r}u) d\theta-(r^{-1}\partial_{\theta}u)dr\right\}. \tag{2.13}\] Denote the circular average of \(u\) by \[\hat{u}(r):=\frac{1}{2\pi}\int_{0}^{2\pi}u(re^{i\theta})d\theta,\] Then it follows \[\frac{1}{\pi}\int_{|z|<R}dd^{c}u=r\partial_{r}\hat{u}|_{r=R}\to\nu_{u}(0),\] as \(R\to 0\). Therefore, we prove that the residual mass of \(u\) at the origin is zero if its Lelong number is in \(\mathbb{C}\). ## 3. The Hopf-coordinates In this section, we are going to compute the integral on the R.H.S. of equation (2.8). It boils down to calculate the following 3-form on the 3-sphere \[d^{c}u\wedge dd^{c}u|_{S_{R}}, \tag{3.1}\] for \(u\in\mathcal{F}^{\infty}(B_{1})\), and \(R\in(0,1)\). First, we note that the \(S^{1}\)-action also performs on this 3-sphere, and it induces the Hopf fiberation. The Hopf fiberation \(S^{1}\hookrightarrow S^{3}\xrightarrow{p}S^{2}\) is an example of non-trivial \(S^{1}\)-fiber bundle over \(S^{2}\). It can be illustrated via the following real coordinates. Write the unit 3-sphere as \[S^{3}:=\{x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}=1\},\] where \((x_{1},y_{1},x_{2},y_{2})\in\mathbb{R}^{4}\) is the Euclidean coordinate. Let \(\theta\in[0,\pi],\varphi\in[0,2\pi]\) be the spherical coordinate of the unit 2-sphere \(S^{2}\subset\mathbb{R}^{3}\). Define the following coordinate for \(\eta\in[0,4\pi]\) \[x_{1}=\cos\left(\frac{\eta+\varphi}{2}\right)\sin\left(\frac{\theta}{2}\right), \quad y_{1}=\sin\left(\frac{\eta+\varphi}{2}\right)\sin\left(\frac{\theta}{2} \right),\] \[x_{2}=\cos\left(\frac{\eta-\varphi}{2}\right)\cos\left(\frac{\theta}{2}\right), \quad y_{2}=\sin\left(\frac{\eta-\varphi}{2}\right)\cos\left(\frac{\theta}{2} \right).\] It is clear that \(\eta\) is the direction under the \(S^{1}\)-action, and the Hopf fiberation \(p:S^{3}\to S^{2}\) is the submersion \[\left(2(x_{1}x_{2}+y_{1}y_{2}),\ 2(x_{2}y_{1}-x_{1}y_{2}),\ x_{2}^{2}+y_{2}^{2 }-x_{1}^{2}-y_{1}^{2}\right). \tag{3.2}\] This leads us to introduce the following _real Hopf-coordinate_ \[(r,\eta,\theta,\varphi)\] for all \(r\in\mathbb{R}_{+}\), \(\theta\in[0,\pi]\) and \(\eta,\varphi\in\mathbb{R}\) to represent a point in \(\mathbb{R}^{4}-\{0\}\). 
Then the change of variables is \[x_{1}=r\cos\left(\frac{\eta+\varphi}{2}\right)\sin\left(\frac{ \theta}{2}\right),\quad\ y_{1}=r\sin\left(\frac{\eta+\varphi}{2}\right)\sin \left(\frac{\theta}{2}\right),\] \[x_{2}=r\cos\left(\frac{\eta-\varphi}{2}\right)\cos\left(\frac{ \theta}{2}\right),\quad\ y_{2}=r\sin\left(\frac{\eta-\varphi}{2}\right)\cos \left(\frac{\theta}{2}\right).\] It follows that a 3-sphere \(S_{R}\) as the boundary of the ball \(B_{R}\) can be written as \[S_{R}:=\{x_{1}^{2}+y_{1}^{2}+x_{2}^{2}+y_{2}^{2}=R^{2}\}\subset\mathbb{R}^{4},\] for a fixed \(R>0\). Moreover, if the angle \(\varphi\) varies in \([0,2\pi)\) and \(\eta\) in \([0,4\pi)\), then this coordinate runs over all the points on the 3-sphere \(S_{R}\) exactly once. ### Complex version There is another way to view the Hopf fiberation through the complex coordinates. Let \(z:=(z_{1},z_{2})\) be the complex Euclidean coordinate on \(\mathbb{C}^{2}\), and then the unit 3-sphere \(S^{3}\) can be characterized as \[S^{3}:=\{|z_{1}|^{2}+|z_{2}|^{2}=1\}.\] Topologically, the 2-sphere \(S^{2}\) can be identified with the extended complex plane \(\mathbb{C}_{\infty}:=\mathbb{C}\bigcup\{\infty\}\) via the stereographic projection. Moreover, the complex projective line \(\mathbb{CP}^{1}\) can also be identified with \(\mathbb{C}_{\infty}\) via the continuous map \(f:S^{3}\to S^{2}\) as \[f:(z_{1},z_{2})\to\frac{z_{1}}{z_{2}}\in\mathbb{C}_{\infty}.\] Then we can define the fiber map \(p\) in the fiber bundle \(S^{1}\hookrightarrow S^{3}\xrightarrow{p}S^{2}\) as \[p:(z_{1},z_{2})\to[z_{1}:z_{2}]\in\mathbb{CP}^{1},\] and the pre-image of each point in \(\mathbb{CP}^{1}\) is a great circle in \(S^{3}\). In order to illustrate the idea, we first introduce the following easier version. It is well known that \(\mathbb{CP}^{1}\) can be covered by two holomorphic coordinate charts, consisting of \[U_{1}:=\mathbb{CP}^{1}-[1:0],\quad U_{2}:=\mathbb{CP}^{1}-[0:1],\] where \(U_{1}\) is identified with \(\mathbb{C}_{\infty}-\{\infty\}\) and \(U_{2}\) with \(\mathbb{C}_{\infty}-\{0\}\). Write their corresponding holomorphic coordinates as \(\zeta:=z_{1}/z_{2}\) and \(\xi=\zeta^{-1}\). Then we give the following two homeomorphisms as the local trivializations of the fiber map \(p\): define \(\psi_{1}:\mathbb{C}\times S^{1}\to p^{-1}(\mathbb{C})\) and \(\psi_{2}:(\mathbb{C}_{\infty}-\{0\})\times S^{1}\to p^{-1}(\mathbb{C}_{\infty}- \{0\})\) as \[\psi_{1}(\zeta,\eta^{\prime})=\left(\frac{\zeta e^{i\eta^{\prime}}}{(1+|\zeta|^ {2})^{1/2}},\ \frac{e^{i\eta^{\prime}}}{(1+|\zeta|^{2})^{1/2}}\right), \tag{3.3}\] and \[\psi_{2}(\xi,\eta^{\prime})=\left(\frac{e^{i\eta^{\prime}}}{(1+|\xi|^{2})^{1/2 }},\ \frac{\xi e^{i\eta^{\prime}}}{(1+|\xi|^{2})^{1/2}}\right), \tag{3.4}\] for all \(\zeta,\xi\in\mathbb{C}\) and \(\eta^{\prime}\in\mathbb{R}\). It is apparent that we have \(p\circ\psi_{1}(\zeta,\eta^{\prime})=[\zeta:1]\) on \(U_{1}\) and \(p\circ\psi_{2}(\xi,\eta^{\prime})=[1:\xi]\) on \(U_{2}\). Then these two local trivializations describe the Hopf fiberation, and more details can be found in Section (8.1). ### Another version Next, we can write the complex variables of \(\mathbb{C}^{2}\) in terms of the real Hopf-coordinate as \[z_{1}=(r\sin(\theta/2))e^{\frac{i}{2}(\eta+\varphi)},\quad z_{2}=(r\cos(\theta /2))e^{\frac{i}{2}(\eta-\varphi)}, \tag{3.5}\] for all \(r>0\), \(\theta\in[0,\pi]\), \(\varphi\in[0,2\pi]\) and \(\eta\in[0,4\pi]\). 
Moreover, the complex variable on \(\mathbb{C}_{\infty}\) is \[\zeta:=\frac{z_{1}}{z_{2}}=\tan(\theta/2)e^{i\varphi}. \tag{3.6}\] and we obtain the following change of variables \[z_{1}=re^{\frac{i}{2}\eta}\frac{(\zeta\cdot|\zeta|)^{1/2}}{(1+|\zeta|^{2})^{1/ 2}},\quad z_{2}=re^{\frac{i}{2}\eta}\frac{(\bar{\zeta}/|\zeta|)^{1/2}}{(1+| \zeta|^{2})^{1/2}}. \tag{3.7}\] In fact, they can be viewed as different local trivializations of the Hopf-fiberation with the fiber map \(p\). Denote \(\ell_{+},\ell_{-}\) by the following half circles on \(S^{2}\cong\mathbb{CP}^{1}\) \[\ell_{+}:=\left\{[t:1]\in\mathbb{CP}^{1};\ \ t\in[0,+\infty]\right\};\] \[\ell_{-}:=\left\{[t:1]\in\mathbb{CP}^{1};\ \ t\in[-\infty,0]\right\},\] and define two holomorphic coordinate charts as \[V_{1}:=S^{2}-\ell_{+};\quad V_{2}:=S^{2}-\ell_{-}.\] Similarly, we have another two half circles as \[\jmath_{+}:=\left\{[e^{i\theta^{\prime}}:1]\in\mathbb{CP}^{1};\ \ \theta^{\prime}\in[0,\pi]\right\};\] \[\jmath_{-}:=\left\{[e^{i\theta^{\prime}}:1]\in\mathbb{CP}^{1};\ \ \theta^{\prime}\in[\pi,2\pi]\right\},\] and another two charts are defined as \[V_{3}:=S^{2}-\jmath_{+};\quad V_{4}:=S^{2}-\jmath_{-}.\] It is apparent that each of the charts \(V_{i},i=1,2,3,4\) can be identified to the slit plane \(\mathbb{C}_{+}\) (or \(\mathbb{C}_{-}\)) via the stereographic projections, and they together cover the whole sphere \(S^{2}\). Then we can introduce homeomorphisms \(\psi_{i}^{\prime}\) between \(V_{i}\times S^{1}\) and \(p^{-1}(V_{i})\) for all \(i=1,2,3,4\), and they will build the local trivializations for the fiber map \(p\) in a different way. In particular, equation (3.7) gives the map \(\psi_{1}^{\prime}:V_{1}\times S^{1}\to p^{-1}(V_{1})\) as \[\psi_{1}^{\prime}(\zeta,\eta):=\left(e^{\frac{i}{2}\eta}\frac{(\zeta\cdot|\zeta| )^{1/2}}{(1+|\zeta|^{2})^{1/2}},\quad e^{\frac{i}{2}\eta}\frac{(|\zeta|/\zeta)^ {1/2}}{(1+|\zeta|^{2})^{1/2}}\right). \tag{3.8}\] Therefore, we introduce the following coordinate \[(r,\eta,\zeta,\bar{\zeta})\] for all \(r>0\), \(\eta\in\mathbb{R}\) and \(\zeta\in\mathbb{C}_{\infty}\) to represent a point in \(\mathbb{C}^{2}-\{0\}\), and refer it as the _complex Hopf-coordinate_. We note that the coordinate \(\zeta\) (under the trivialization \(\psi_{1}^{\prime}\)) is no longer continuous across the line \(\ell_{+}\), since its fractional power \(\zeta^{1/2}\) is multi-valued. However, we do not worry about this problem if the function \(u\) is \(S^{1}\)-invariant, and the reason is as follows. In fact, the trivialization \(\psi_{2}^{\prime}\) on \(V_{2}\times S^{1}\) will take another analytic branch of the two-valued holomorphic function \(\zeta^{1/2}\), and then we can write \[\psi_{2}^{\prime}(\zeta,\eta):=\left(e^{\frac{i}{2}\eta}\frac{e^{i\pi}(\zeta \cdot|\zeta|)^{1/2}}{(1+|\zeta|^{2})^{1/2}},\quad e^{\frac{i}{2}\eta}\frac{e^{ -i\pi}(|\zeta|/\zeta)^{1/2}}{(1+|\zeta|^{2})^{1/2}}\right). \tag{3.9}\] Hence it follows \(u\circ\psi_{1}=u\circ\psi_{2}\) for all \((\zeta,\eta)\) in the overlapping area. In other words, the function \(u\) is periodic in the angle \(\varphi\) direction with period \(2\pi(\)instead of \(4\pi!)\). 
**Remark 3.1**.: _In fact, there is no difference between the two trivializations \(\psi_{1}\) and \(\psi_{1}^{\prime}\) for any \(S^{1}\)-invariant function, since we can rewrite \(\psi_{1}^{\prime}\) as follows from equation (3.6)_ \[\psi_{1}^{\prime}(\zeta,\eta):=\left(e^{-\frac{i}{2}\varphi}\frac{\zeta e^{ \frac{i}{2}\eta}}{(1+|\zeta|^{2})^{1/2}},\quad e^{-\frac{i}{2}\varphi}\frac{e ^{\frac{i}{2}\eta}}{(1+|\zeta|^{2})^{1/2}}\right), \tag{3.10}\] _and then it follows \(u\circ\psi_{1}=u\circ\psi_{1}^{\prime}\) for all \((\zeta,\eta)\) in the overlapping area._ ## 4. The decomposition formula Now we are going to perform local computations near a point \(b\in S_{R}\), under the complex Hopf-coordinate. It is noted that we will directly calculate on \(\bar{z}_{1}^{2}\) and \(\bar{z}_{2}^{2}\) in the following, and the fractional power \(\zeta^{1/2}\) will not be used essentially. ### The \(1\)-form Recall that we have \[z_{1}=re^{\frac{i}{2}\eta}\frac{(\zeta\cdot|\zeta|)^{1/2}}{(1+|\zeta|^{2})^{1/ 2}},\quad z_{2}=re^{\frac{i}{2}\eta}\frac{(\bar{\zeta}/|\zeta|)^{1/2}}{(1+| \zeta|^{2})^{1/2}}, \tag{4.1}\] and then the following relations exist: \[z_{1}\cdot\bar{z}_{2}=r^{2}\frac{\zeta}{1+|\zeta|^{2}},\quad z_{1}\cdot z_{2} =r^{2}\frac{e^{i\eta}|\zeta|}{1+|\zeta|^{2}}, \tag{4.2}\] and \[|z_{1}|^{2}+|z_{2}|^{2}=r^{2}, \tag{4.3}\] together with \[|z_{2}|^{2}-|z_{1}|^{2}=r^{2}\frac{1-|\zeta|^{2}}{1+|\zeta|^{2}}=r^{2}\cos\theta. \tag{4.4}\] The first goal is to calculate the following 1-form \[d^{c}u = \frac{i}{2}(\bar{\partial}u-\partial u)=-\operatorname{Im}(\bar {\partial}u)\] \[= -\operatorname{Im}\left(\frac{\partial u}{\partial\bar{z}_{1}}d \bar{z}_{1}\right)-\operatorname{Im}\left(\frac{\partial u}{\partial\bar{z}_{2 }}d\bar{z}_{2}\right). \tag{4.5}\] The first term on the R.H.S. of the above equation can be computed as \[\bar{z}_{1}=re^{-\frac{i}{2}\eta}\frac{(\bar{\zeta}\cdot|\zeta|)^{1/2}}{(1+| \zeta|^{2})^{1/2}},\quad\bar{z}_{1}^{2}=r^{2}e^{-i\eta}\frac{\bar{\zeta}\cdot| \zeta|}{(1+|\zeta|^{2})}, \tag{4.6}\] and it follows \[2\bar{z}_{1}d\bar{z}_{1} = 2re^{-i\eta}\frac{\bar{\zeta}\cdot|\zeta|}{(1+|\zeta|^{2})}dr- ir^{2}e^{-i\eta}\frac{\bar{\zeta}\cdot|\zeta|}{(1+|\zeta|^{2})}d\eta\] \[+ r^{2}e^{-i\eta}\left\{\partial_{\zeta}\left(\frac{\bar{\zeta} \cdot|\zeta|}{1+|\zeta|^{2}}\right)d\zeta+\partial_{\bar{\zeta}}\left(\frac{ \bar{\zeta}\cdot|\zeta|}{1+|\zeta|^{2}}\right)d\bar{\zeta}\right\}, \tag{4.7}\] and then we have \[\partial_{\zeta}\left(\frac{\bar{\zeta}\cdot|\zeta|}{1+|\zeta|^{2}}\right)= \frac{\bar{\zeta}^{2}}{2|\zeta|}\cdot\frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})^{2 }}, \tag{4.8}\] and \[\partial_{\bar{\zeta}}\left(\frac{\bar{\zeta}\cdot|\zeta|}{1+|\zeta|^{2}} \right)=\frac{|\zeta|}{2}\cdot\frac{3+|\zeta|^{2}}{(1+|\zeta|^{2})^{2}}. 
\tag{4.9}\] Combing equations (4.8), (4.9) with (4.7), we obtain \[2\bar{z}_{1}d\bar{z}_{1} = 2re^{-i\eta}\frac{\bar{\zeta}\cdot|\zeta|}{(1+|\zeta|^{2})}dr- ir^{2}e^{-i\eta}\frac{\bar{\zeta}\cdot|\zeta|}{(1+|\zeta|^{2})}d\eta\] \[+ r^{2}e^{-i\eta}\frac{\bar{\zeta}^{2}}{2|\zeta|}\cdot\frac{1-| \zeta|^{2}}{(1+|\zeta|^{2})^{2}}d\zeta\] \[+ r^{2}e^{-i\eta}\frac{|\zeta|}{2}\cdot\frac{3+|\zeta|^{2}}{(1+| \zeta|^{2})^{2}}d\bar{\zeta}, \tag{4.10}\] and it follows \[4\ d\bar{z}_{1} = 4e^{-\frac{i}{2}\eta}\frac{(\bar{\zeta}\cdot|\zeta|)^{1/2}}{(1+| \zeta|^{2})^{1/2}}dr-2ire^{-\frac{i}{2}\eta}\frac{(\bar{\zeta}\cdot|\zeta|)^{1 /2}}{(1+|\zeta|^{2})^{1/2}}d\eta\] \[+ re^{-\frac{i}{2}\eta}\left(\frac{\bar{\zeta}}{|\zeta|}\right)^{ \frac{3}{2}}\cdot\frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})^{3/2}}d\zeta\] \[+ re^{-\frac{i}{2}\eta}\left(\frac{\zeta}{|\zeta|}\right)^{\frac{1 }{2}}\cdot\frac{3+|\zeta|^{2}}{(1+|\zeta|^{2})^{3/2}}d\bar{\zeta}. \tag{4.11}\] In fact, we can divide \(\bar{z}_{1}^{2}\) in equation (4.10), and obtain \[4\ d\bar{z}_{1}=\bar{z}_{1}\left\{4r^{-1}dr-2id\eta+\frac{1}{\zeta}\cdot\frac {1-|\zeta|^{2}}{(1+|\zeta|^{2})}d\zeta+\frac{1}{\zeta}\cdot\frac{3+|\zeta|^{2 }}{(1+|\zeta|^{2})}d\bar{\zeta}\right\}. \tag{4.12}\] Similarly, we have \[\bar{z}_{2}=re^{-\frac{i}{2}\eta}\frac{(\zeta/|\zeta|)^{1/2}}{(1+|\zeta|^{2})^{1/ 2}},\quad\bar{z}_{2}^{2}=r^{2}e^{-i\eta}\frac{\zeta/|\zeta|}{(1+|\zeta|^{2})}, \tag{4.13}\] and then it gives \[2\bar{z}_{2}d\bar{z}_{2} = 2re^{-i\eta}\frac{\zeta/|\zeta|}{(1+|\zeta|^{2})}dr-ir^{2}e^{-i \eta}\frac{\zeta/|\zeta|}{(1+|\zeta|^{2})}d\eta\] \[+ r^{2}e^{-i\eta}\left\{\partial_{\zeta}\left(\frac{\zeta/|\zeta| }{1+|\zeta|^{2}}\right)d\zeta+\partial_{\bar{\zeta}}\left(\frac{\zeta/|\zeta|} {1+|\zeta|^{2}}\right)d\bar{\zeta}\right\}. \tag{4.14}\] Moreover, we have \[\partial_{\zeta}\left(\frac{\zeta/|\zeta|}{1+|\zeta|^{2}}\right)=\frac{1}{2| \zeta|}\cdot\frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})^{2}}, \tag{4.15}\] and \[\partial_{\bar{\zeta}}\left(\frac{\zeta/|\zeta|}{1+|\zeta|^{2}}\right)=- \frac{\zeta^{2}}{2|\zeta|^{3}}\cdot\frac{1+3|\zeta|^{2}}{(1+|\zeta|^{2})^{2}}. \tag{4.16}\] It follows \[2\bar{z}_{2}d\bar{z}_{2} = 2re^{-i\eta}\frac{\zeta/|\zeta|}{(1+|\zeta|^{2})}dr-ir^{2}e^{-i \eta}\frac{\zeta/|\zeta|}{(1+|\zeta|^{2})}d\eta\] \[+ r^{2}e^{-i\eta}\frac{1}{2|\zeta|}\cdot\frac{1-|\zeta|^{2}}{(1+| \zeta|^{2})^{2}}d\zeta\] \[- r^{2}e^{-i\eta}\frac{\zeta^{2}}{2|\zeta|^{3}}\cdot\frac{1+3| \zeta|^{2}}{(1+|\zeta|^{2})^{2}}d\bar{\zeta}, \tag{4.17}\] and we further simplify as \[4\ d\bar{z}_{2}=\bar{z}_{2}\left\{4r^{-1}dr-2id\eta+\frac{1}{\zeta}\cdot \frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})}d\zeta-\frac{1}{\bar{\zeta}}\cdot\frac{1 +3|\zeta|^{2}}{(1+|\zeta|^{2})}d\bar{\zeta}\right\}. \tag{4.18}\] Next we are going to use the chain rule as follows \[\frac{\partial u}{\partial\bar{z}_{1}}=\frac{\partial u}{\partial r}\frac{ \partial r}{\partial\bar{z}_{1}}+\frac{\partial u}{\partial\zeta}\frac{ \partial\zeta}{\partial\bar{z}_{1}}+\frac{\partial u}{\partial\bar{\zeta}} \frac{\partial\bar{\zeta}}{\partial\bar{z}_{1}},\] and \[\frac{\partial u}{\partial\bar{z}_{2}}=\frac{\partial u}{\partial r}\frac{ \partial r}{\partial\bar{z}_{2}}+\frac{\partial u}{\partial\zeta}\frac{ \partial\zeta}{\partial\bar{z}_{2}}+\frac{\partial u}{\partial\bar{\zeta}} \frac{\partial\bar{\zeta}}{\partial\bar{z}_{2}}.\] Since \(\zeta\) is a holomorphic function of \(z\), it is clear that \(\partial\zeta/\partial\bar{z}_{1}=0\), and \(\partial\zeta/\partial\bar{z}_{2}=0\). 
Moreover, it follows from equation (4.3) that \(\partial r/\partial\bar{z}_{1}=z_{1}/2r\) and \(\partial r/\partial\bar{z}_{2}=z_{2}/2r\). Hence we have \[4\,{\rm Im}\left(\frac{\partial u}{\partial r}\frac{\partial r} {\partial\bar{z}_{1}}d\bar{z}_{1}\right)\] \[= (r^{-1}\partial_{r}u)|z_{1}|^{2}\left\{-d\eta+\frac{1-|\zeta|^{2} }{2(1+|\zeta|^{2})}\,{\rm Im}\left(\frac{d\zeta}{\zeta}\right)+\frac{3+|\zeta| ^{2}}{2(1+|\zeta|^{2})}\,{\rm Im}\left(\frac{d\bar{\zeta}}{\bar{\zeta}}\right)\right\}\] \[= -(r^{-1}\partial_{r}u)|z_{1}|^{2}\left\{d\eta+{\rm Im}\left(\frac{ d\zeta}{\zeta}\right)\right\}, \tag{4.19}\] where we used the equality \(\operatorname{Im}(\zeta^{-1}d\zeta)=-\operatorname{Im}(\bar{\zeta}^{-1}d\bar{\zeta})\). Similarly, it follows \[4\operatorname{Im}\left(\frac{\partial u}{\partial r}\frac{ \partial r}{\partial\bar{z}_{2}}d\bar{z}_{2}\right)\] \[= (r^{-1}\partial_{r}u)|z_{2}|^{2}\left\{-d\eta+\frac{1-|\zeta|^{2} }{2(1+|\zeta|^{2})}\operatorname{Im}\left(\frac{d\zeta}{\zeta}\right)-\frac{1 +3|\zeta|^{2}}{2(1+|\zeta|^{2})}\operatorname{Im}\left(\frac{d\bar{\zeta}}{ \zeta}\right)\right\}\] \[= -(r^{-1}\partial_{r}u)|z_{2}|^{2}\left\{d\eta-\operatorname{Im} \left(\frac{d\zeta}{\zeta}\right)\right\}.\] Combing with equation (4.19) and (4.20), we further have \[-4\operatorname{Im}\left(\frac{\partial u}{\partial r}\frac{ \partial r}{\partial\bar{z}_{1}}d\bar{z}_{1}\right)-4\operatorname{Im}\left( \frac{\partial u}{\partial r}\frac{\partial r}{\partial\bar{z}_{2}}d\bar{z}_{ 2}\right)\] \[= (r\partial_{r}u)\left\{d\eta-\frac{1-|\zeta|^{2}}{1+|\zeta|^{2}} \operatorname{Im}\left(\frac{d\zeta}{\zeta}\right)\right\},\] and note that the R.H.S. of equation (4.21) equals to \[(r\partial_{r}u)(d\eta-\cos\theta d\varphi) \tag{4.22}\] in the real Hopf-coordiante. Furthermore, it is straightforward to have \[\frac{\partial\bar{\zeta}}{\partial\bar{z}_{1}}=\frac{1}{\bar{z}_{2}},\quad \frac{\partial\bar{\zeta}}{\partial\bar{z}_{2}}=-\frac{\bar{z}_{1}}{(\bar{z}_ {2})^{2}}.\] Then we obtain \[4\frac{\partial u}{\partial\bar{\zeta}}\frac{\partial\bar{\zeta }}{\partial\bar{z}_{1}}d\bar{z}_{1}\] \[= (\bar{\zeta}\partial_{\bar{\zeta}}u)\left\{4r^{-1}dr-2id\eta+ \frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})}\left(\frac{d\zeta}{\zeta}\right)+\frac{3 +|\zeta|^{2}}{(1+|\zeta|^{2})}\left(\frac{d\bar{\zeta}}{\bar{\zeta}}\right) \right\},\] and \[4\frac{\partial u}{\partial\bar{\zeta}}\frac{\partial\bar{\zeta }}{\partial\bar{z}_{2}}d\bar{z}_{2}\] \[= -(\bar{\zeta}\partial_{\bar{\zeta}}u)\left\{4r^{-1}dr-2id\eta+ \frac{1-|\zeta|^{2}}{(1+|\zeta|^{2})}\left(\frac{d\zeta}{\zeta}\right)-\frac{ 1+3|\zeta|^{2}}{(1+|\zeta|^{2})}\left(\frac{d\bar{\zeta}}{\bar{\zeta}}\right) \right\}.\] Moreover, we note the first three terms on the R.H.S. of equation (4.23) and (4.24) are only differ by a minus sign. Hence it follows \[-\operatorname{Im}\left(\frac{\partial u}{\partial\bar{\zeta}} \frac{\partial\bar{\zeta}}{\partial\bar{z}_{1}}d\bar{z}_{1}+\frac{\partial u }{\partial\bar{\zeta}}\frac{\partial\bar{\zeta}}{\partial\bar{z}_{2}}d\bar{z }_{2}\right)\] \[= -\operatorname{Im}\left(\partial_{\bar{\zeta}}u\cdot d\bar{ \zeta}\right)=\operatorname{Im}\left(\partial_{\zeta}u\cdot d\zeta\right).\] In conclusion, we obtain the following formula. 
**Lemma 4.1**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[4\ d^{c}u = (r\partial_{r}u)\left\{d\eta-\cos\theta\cdot\operatorname{Im}\left( \frac{d\zeta}{\zeta}\right)\right\}+4\operatorname{Im}(\partial_{\zeta}u\cdot d\zeta)\] \[= (ru_{r})d\eta+(2\sin\theta\cdot u_{\theta}-\cos\theta\cdot ru_{r} )d\varphi-\frac{2u_{\varphi}}{\sin\theta}d\theta. \tag{4.26}\] _where \(\cos\theta=(1-|\zeta|^{2})(1+|\zeta|^{2})^{-1}\)._ Proof.: Combine equation (4.21) and (4.25), and then the equality follows in the complex Hopf-coordinate. By utilizing the real Hopf-coordinate, we obtain the second line on the R.H.S. of equation (4.26), and one can check the following change of variables \[2\operatorname{Im}(\partial_{\zeta}u\cdot d\zeta)=\sin\theta(\partial_{ \theta}u)d\varphi-\frac{1}{\sin\theta}(\partial_{\varphi}u)d\theta,\] Then our result follows. ### The \(3\)-form In this section, we continue our computation on the complex hessian \(dd^{c}u\) for a \(u\in\mathcal{F}^{\infty}(B_{1})\) in the complex Hopf-coordinate. Again, we perform the calculation near a point on \(S_{R}\) for an \(R\in(0,1)\). Moreover, we only need to know the formula for its restriction as \[dd^{c}u|_{S_{R}}=d(d^{c}u)|_{S_{R}}.\] For the first term in equation (4.26) we have \[d\left\{(ru_{r})d\eta\right\}|_{S_{R}}=(ru_{r})_{,\zeta}d\zeta\wedge d\eta+( ru_{r})_{,\bar{\zeta}}d\bar{\zeta}\wedge d\eta, \tag{4.27}\] where we used the notation \[(ru_{r})_{,\zeta}=\frac{\partial(r\partial_{r}u)}{\partial\zeta},\ \ (ru_{r})_{,\bar{ \zeta}}=\frac{\partial(r\partial_{r}u)}{\partial\bar{\zeta}}.\] For the second term, we can write it as \[d\left\{(ru_{r})\cos\theta\frac{1}{2i}\left(\frac{d\bar{\zeta}}{\zeta}-\frac {d\zeta}{\zeta}\right)\right\}|_{S_{R}}. \tag{4.28}\] Then we have \[\partial_{\zeta}\cos\theta=\partial_{\zeta}\left(\frac{1-|\zeta|^{2}}{1+| \zeta|^{2}}\right)=\frac{-2\bar{\zeta}}{(1+|\zeta|^{2})^{2}}, \tag{4.29}\] and the following is clear \[d\left(\frac{d\zeta}{\zeta}\right)=0,\ \ \ d\left(\frac{d\bar{\zeta}}{\bar{ \zeta}}\right)=0. \tag{4.30}\] Thus equation (4.28) is equal to \[\frac{1}{2i}\left\{(ru_{r})_{,\zeta}\cos\theta\frac{\zeta}{| \zeta|^{2}}-(ru_{r})\frac{2}{(1+|\zeta|^{2})^{2}}\right\}d\zeta\wedge d\bar{\zeta}\] \[+ \frac{1}{2i}\left\{(ru_{r})_{,\bar{\zeta}}\cos\theta\frac{\bar{ \zeta}}{|\zeta|^{2}}-(ru_{r})\frac{2}{(1+|\zeta|^{2})^{2}}\right\}d\zeta\wedge d \bar{\zeta}\] \[= (ru_{r})\frac{2id\zeta\wedge d\bar{\zeta}}{(1+|\zeta|^{2})^{2}}- \operatorname{Re}(\zeta\cdot(ru_{r})_{,\zeta})\frac{\cos\theta}{|\zeta|^{2}} id\zeta\wedge d\bar{\zeta}, \tag{4.31}\] and then we obtain the following equality. **Lemma 4.2**.: _For any \(u\in{\mathcal{F}}^{\infty}(B_{1})\), we have_ \[4\ dd^{c}u|_{S_{R}} = (ru_{r})_{,\zeta}d\zeta\wedge d\eta+(ru_{r})_{,\bar{\zeta}}d\bar{ \zeta}\wedge d\eta\] \[+ \left\{4u_{,\zeta\bar{\zeta}}-\operatorname{Re}(\zeta\cdot(ru_{r} )_{,\zeta})\frac{\cos\theta}{|\zeta|^{2}}+\frac{2ru_{r}}{(1+|\zeta|^{2})^{2}} \right\}id\zeta\wedge d\bar{\zeta}.\] Proof.: It is left to compute the third term in equation (4.26), and we can perform like follows \[\frac{1}{2i}d(\partial_{\zeta}u\cdot d\zeta-\partial_{\bar{\zeta}}u\cdot d\bar {\zeta})|_{S_{R}}=\frac{\partial^{2}u}{\partial\zeta\partial\bar{\zeta}}\ id\zeta\wedge d\bar{\zeta},\] and then the equality follows. 
In terms of the real Hopf-coordinate, we observe that \[id\zeta\wedge d\bar{\zeta}=\frac{\sin\theta}{2\cos^{4}(\theta/2)}d\theta\wedge d\varphi,\] and then we obtain another way to describe the above \(2\)-form as \[4\ dd^{c}u|_{S_{R}} = (ru_{,r\theta})d\theta\wedge d\eta+(ru_{,r\varphi})d\varphi\wedge d\eta\] \[+ \left\{(ru_{r})\sin\theta-(ru_{,r\theta})\cos\theta\right\}d \theta\wedge d\varphi\] \[+ 2\left\{\sin\theta\cdot u_{,\theta\theta}+\cos\theta\cdot u_{ \theta}+(\sin\theta)^{-1}u_{,\varphi\varphi}\right\}d\theta\wedge d\varphi.\] Next we are going to compute the following \(3\)-form. Combing Lemma (4.1) with Lemma (4.2), it equals to \[16\ d^{c}u\wedge dd^{c}u|_{S_{R}}={\rm I}+{\rm II},\] where \[{\rm I}: = \left((r\partial_{r}u)\left\{d\eta-\cos\theta\cdot\operatorname{ Im}\left(\frac{d\zeta}{\zeta}\right)\right\}+4\operatorname{Im}(\partial_{\zeta}u \cdot d\zeta)\right)\] \[\wedge 2\operatorname{Re}\left\{(ru_{r})_{,\zeta}d\zeta\wedge d\eta \right\},\] and \[{\rm II}: = \left((r\partial_{r}u)\left\{d\eta-\cos\theta\cdot\operatorname{ Im}\left(\frac{d\zeta}{\zeta}\right)\right\}+4\operatorname{Im}(\partial_{\zeta}u \cdot d\zeta)\right)\] \[\wedge \left\{4u_{,\zeta\bar{\zeta}}-\operatorname{Re}(\zeta\cdot(ru_{ r})_{,\zeta})\frac{\cos\theta}{|\zeta|^{2}}+\frac{2ru_{r}}{(1+|\zeta|^{2})^{2}} \right\}id\zeta\wedge d\bar{\zeta}.\] (4.35) Then the first term is \[{\rm I} = d^{c}u\wedge\left\{(ru_{r})_{,\zeta}d\zeta\wedge d\eta+(ru_{r})_ {,\bar{\zeta}}d\bar{\zeta}\wedge d\eta\right\}\] \[= \frac{i}{2}(ru_{r})\frac{\cos\theta}{|\zeta|^{2}}\left\{\bar{ \zeta}\cdot(ru_{r})_{,\bar{\zeta}}+\zeta\cdot(ru_{r})_{,\zeta}\right\}d\zeta \wedge d\bar{\zeta}\wedge d\eta\] \[- 2i\left\{(\partial_{\bar{\zeta}}u)(ru_{r})_{,\zeta}+(\partial_{ \zeta}u)(ru_{r})_{,\bar{\zeta}}\right\}d\zeta\wedge d\bar{\zeta}\wedge d\eta\] \[= (ru_{r})\frac{\cos\theta}{|\zeta|^{2}}\operatorname{Re}\left\{ \zeta\cdot(ru_{r})_{,\zeta}\right\}id\zeta\wedge d\bar{\zeta}\wedge d\eta\] \[- 4\operatorname{Re}\left\{(\partial_{\bar{\zeta}}u)(ru_{r})_{, \zeta}\right\}id\zeta\wedge d\bar{\zeta}\wedge d\eta,\] and the second term is \[\mathrm{II} = d^{c}u\wedge\left\{4u_{,\zeta\bar{\zeta}}-\mathrm{Re}(\zeta\cdot( ru_{r})_{,\zeta})\frac{\cos\theta}{|\zeta|^{2}}+\frac{2ru_{r}}{(1+|\zeta|^{2})^{2}} \right\}id\zeta\wedge d\bar{\zeta}\] \[= -(ru_{r})\frac{\cos\theta}{|\zeta|^{2}}\,\mathrm{Re}\left\{\zeta \cdot(ru_{r})_{,\zeta}\right\}id\zeta\wedge d\bar{\zeta}\wedge d\eta\] \[+ \left\{4(ru_{r})u_{,\zeta\bar{\zeta}}+\frac{2(ru_{r})^{2}}{(1+| \zeta|^{2})^{2}}\right\}id\zeta\wedge d\bar{\zeta}\wedge d\eta.\] Observe that the first line on the R.H.S. of equation (4.36) cancels with the first line on the R.H.S. of equation (4.37), and then we eventually obtain the following formula. **Proposition 4.3**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have the following \(3\)-form on the \(3\)-sphere \(S_{R}\), in terms of the complex Hopf-coordinate as_ \[8\ d^{c}u\wedge dd^{c}u|_{S_{R}}\] \[= 2\left((ru_{r})_{u,\zeta\bar{\zeta}}-\mathrm{Re}\left\{(\partial _{\bar{\zeta}}u)(ru_{r})_{,\zeta}\right\}\right)id\zeta\wedge d\bar{\zeta} \wedge d\eta\] \[+ \frac{(ru_{r})^{2}}{(1+|\zeta|^{2})^{2}}id\zeta\wedge d\bar{ \zeta}\wedge d\eta.\] Moreover, we note that \(\zeta\) is actually a coordinate on \(\mathbb{C}_{\infty}\cong\mathbb{CP}^{1}\), and this leads us to consider the Fubini-Study metric on this Kahler manifold as \[\omega:=\frac{id\zeta\wedge d\bar{\zeta}}{2(1+|\zeta|^{2})^{2}},\] and it has volume \(\int_{\mathbb{CP}^{1}}\omega=\pi\). 
Then we can rewrite the above \(3\)-form in the following coordinate-free way \[8\ d^{c}u\wedge dd^{c}u|_{S_{R}}\] \[= 2(ru_{r}\cdot\Delta_{\omega}u)\omega\wedge d\eta\] \[- \{\langle\nabla u,\nabla(ru_{r})\rangle_{\omega}+\langle\nabla( ru_{r}),\nabla u\rangle_{\omega}\}\omega\wedge d\eta\] \[+ 2(ru_{r})^{2}\omega\wedge d\eta, \tag{4.39}\] where \(\nabla\) is the complex gradient for \(\zeta\), and the inner product is taken on \(\mathbb{CP}^{1}\) as \[\langle\nabla v,\nabla w\rangle_{\omega}=\mathrm{tr}_{\omega}(\partial_{\zeta }v\wedge\bar{\partial}_{\zeta}w).\] Next, we are going to use another change of variables as \(t:=\log r\in(-\infty,0)\), and then \(u\) can be rewritten as \[u_{t}(\zeta):=\hat{u}(t,\zeta)=u(e^{t},\zeta,\bar{\zeta}). \tag{4.40}\] Along each complex line through the origin of \(\mathbb{C}^{2}\), we have \[r\partial_{r}u=\partial_{t}\hat{u}=\dot{u}_{t}, \tag{4.41}\] where \(r=e^{t}\). Therefore, we obtain the following decomposition formula for the complex Monge-Ampere mass. **Theorem 4.4**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[\frac{1}{\pi}\int_{S_{R}}d^{c}u\wedge dd^{c}u=2\int_{\mathbb{CP}^{1}}(\dot{u} _{t}\Delta_{\omega}u_{t})\omega+\int_{\mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega, \tag{4.42}\] _where \(\dot{u}_{t}=\frac{du_{t}}{dt}|_{t=T}\) for \(e^{T}=R\)._ Proof.: The idea is to integrate both sides of equation (4.39) on the 3-sphere \(S_{R}\). Due to equation (3.9) and Remark (3.1), it is legal to perform the integration under the complex Hopf-coordinate \((\zeta,\eta)\in\mathbb{C}_{\infty}\times S^{1}\). After applying Fubini's Theorem, it boils down to take the following integration by parts on \(\mathbb{CP}^{1}\). \[- \int_{\mathbb{CP}^{1}}\partial_{\zeta}u\wedge\bar{\partial}_{ \zeta}(ru_{r})-\int_{\mathbb{CP}^{1}}\partial_{\zeta}(ru_{r})\wedge\bar{ \partial}_{\zeta}u\] \[= \int_{\mathbb{CP}^{1}}\partial\bar{\partial}_{\zeta}u\cdot(ru_{r} )+\int_{\mathbb{CP}^{1}}(ru_{r})\cdot\partial\bar{\partial}_{\zeta}u\] \[= 2\int_{\mathbb{CP}^{1}}(ru_{r}\cdot\Delta_{\omega}u)\omega,\] where Stoke's theorem is used in the first equality. Equipped with the other two terms in the R.H.S. of equation (4.39), and then our result follows. In order to illustrate the above computation in a clear way, we will also invoke the real Hopf-coordinate, and perform the integration under it. First we note again that the function \(u\) is periodic in the angle \(\varphi\) direction with period \(2\pi\), and this can be seen directly as follows. \[u\left(|z_{1}|e^{\frac{i}{2}(\eta+\varphi)+i\pi},|z_{2}|e^{\frac {i}{2}(\eta-\varphi)-i\pi}\right)\] \[= u\left(|z_{1}|e^{\frac{i}{2}(\eta+\varphi)+i\pi},|z_{2}|e^{\frac {i}{2}(\eta-\varphi)+i\pi}\right)\] \[= u\left(|z_{1}|e^{\frac{i}{2}(\eta+\varphi)},|z_{2}|e^{\frac{i}{2 }(\eta-\varphi)}\right).\] Then in the real Hopf-coordinate, we take the wedge product of equation (4.26) and (4.33), and obtain \[8\ d^{c}u\wedge dd^{c}u|_{S_{R}}\] \[= ru_{r}^{2}\left\{\frac{\partial_{\varphi}(u_{\varphi}\cdot u_{r} ^{-1})}{\sin\theta}+\sin\theta\cdot\partial_{\theta}(u_{\theta}\cdot u_{r}^{- 1})\right\}d\theta\wedge d\varphi\wedge d\eta\] \[+ ru_{r}^{2}\left\{\cos\theta\cdot(u_{\theta}\cdot u_{r}^{-1})+ \frac{r}{2}\sin\theta\right\}d\theta\wedge d\varphi\wedge d\eta.\] Next we will integrate the two sides of this equation on the 3-sphere \(S_{R}\). 
Thanks to Fubini's theorem, we can perform the integration by parts in the \(\varphi\)-direction as follows \[\int_{0}^{\pi}d\theta\int_{0}^{2\pi}\sin^{-1}\theta(ru_{r}^{2}) \partial_{\varphi}(u_{\varphi}\cdot u_{r}^{-1})d\varphi\] \[= \int_{0}^{\pi}d\theta\int_{0}^{2\pi}\sin^{-1}\theta\left\{(ru_{r} )u_{,\varphi\varphi}-ru_{\varphi}u_{,r\varphi}\right\}d\varphi\] \[= 2\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})\frac{u_{,\varphi\varphi}}{ \sin\theta}d\theta d\varphi.\] On the other hand, we can also perform the integration by parts in \(\theta\)-direction as follows. \[\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}\cos\theta(ru_{r}^{2})(u_{\theta }\cdot u_{r}^{-1})d\theta\] \[= \int_{0}^{2\pi}d\varphi\left(\sin\theta(u_{\theta}\cdot ru_{r}) \right)\big{|}_{0}^{\pi}-\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}\sin\theta \partial_{\theta}(u_{\theta}\cdot ru_{r})d\theta\] \[= -\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}\sin\theta\left\{r\partial _{r}(u_{\theta})^{2}+ru_{r}^{2}\cdot\partial_{\theta}(u_{\theta}\cdot u_{r}^{ -1})\right\}d\theta,\] and the first term on the R.H.S. of equation (4.47) reads as \[-\int_{0}^{2\pi}d\varphi\int_{0}^{\pi}\sin\theta\left\{2r(u_{ \theta}u_{,r\theta})\right\}d\theta\] \[= -\int_{0}^{2\pi}d\varphi\left(2\sin\theta(ru_{r}\cdot u_{\theta} )\right)\big{|}_{0}^{\pi}+2\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})\partial_{ \theta}(\sin\theta u_{\theta})d\theta d\varphi.\] Finally we combine equation (4.45), (4.47), (4.48) and (4.46) together to obtain \[8\int_{S_{R}}d^{c}u\wedge dd^{c}u\] \[= 8\pi\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})\left\{\begin{array}{ ll}\frac{u_{,\varphi\varphi}}{\sin\theta}+\partial_{\theta}(\sin\theta u_{\theta}) \end{array}\right\}d\theta d\varphi\] \[+ 2\pi\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})^{2}\sin\theta d\theta d\varphi\] \[= 8\pi\int_{S^{2}}(ru_{r}\cdot\Delta_{\Theta}u)d\sigma_{2}+2\pi \int_{S^{2}}(ru_{r})^{2}d\sigma_{2},\] where \(d\sigma_{2}=\sin\theta d\theta\wedge d\varphi\) is the area form of the unit 2-sphere \(S^{2}\), and \(\Delta_{\Theta}\) is the standard Laplacian on \(S^{2}\) w.r.t. the round metric, i.e. we have \[\Delta_{\Theta}u=\left\{\frac{1}{\sin\theta}\frac{\partial}{\partial\theta} \left(\sin\theta\frac{\partial u}{\partial\theta}\right)+\frac{1}{\sin^{2} \theta}\frac{\partial^{2}u}{\partial\varphi^{2}}\right\}. \tag{4.50}\] Then it is clear that equation (4.49) is equivalent to (4.42) after the change of variables of the real and complex Hopf-coordinates and \(r=e^{t}\). ## 5. The residual Monge-Ampere mass For the next step, we are going to estimate the two integrals on the R.H.S. of equation (4.42) as \(t\rightarrow-\infty\). First, we note that the last integral has a deep connection with the Lelong number of \(u\). ### The \(L^{2}\)-Lelong number On the one hand, it is well known that the following limit is exactly the Lelong number at the origin of a plurisubharmonic function \(u\) in \(\mathbb{C}^{2}\). \[\nu_{u}(0)=\lim_{r\to 0^{+}}\nu_{u}(0,r),\] if we take \[\nu_{u}(0,r):=r\partial_{r}^{-}\left(\frac{1}{2\pi^{2}}\int_{|\xi|=1}u(r\xi)\ d \sigma_{3}(\xi)\right), \tag{5.1}\] where \(d\sigma_{3}\) is the area form of the unit \(3\)-sphere \(S^{3}\). Moreover, the number \(\nu_{u}(0,r)\) is non-decreasing in the radial direction \(r\). In fact, it is a standard fact that the Lelong number of a plurisubharmonic function \(u\) is invariant under restriction to almost all complex directions in \(\mathbb{C}^{2}\). 
From now on, we assume that the plurisubharmonic function \(u\) is in the space \(\mathcal{F}(B_{1})\) with the normalization \(\sup_{B_{1}}u=-1\). Fixing a \(\zeta\in\mathbb{C}_{\infty}\), the complex line through the origin of \(\mathbb{C}^{2}\) in the direction \(\zeta\) can be written as \(\ell_{\zeta}:=(\lambda\zeta,\lambda)\) for all \(\lambda\in\mathbb{C}\) and \(\zeta\neq\infty\). If \(\zeta=\infty\), then we have \(\ell_{\infty}:=(\lambda,0)\). Then the following restriction \[u|_{\ell_{\zeta}}=u(\lambda\zeta,\lambda)=u(|\lambda|\zeta,|\lambda|)\] is actually a non-decreasing convex function of \(\log|\lambda|\) (see [4]), and hence is also a non-decreasing convex function of \[t=\log r=\log|\lambda|+\frac{1}{2}\log(1+|\zeta|^{2}),\] for all \(t\in(-\infty,0)\). In other words, the negative function \(u_{t}(\zeta)=\hat{a}(t,\zeta)\) (defined in equation (4.40)) is also non-decreasing and convex in \(t\). From equation (4.41) and (5.5), the Lelong number at zero of \(u|_{\ell_{\zeta}}\) is equal to \[\nu_{u|_{\ell_{\zeta}}}(0)=\lim_{t\to-\infty}\dot{u}_{t}(\zeta), \tag{5.2}\] and we further have \[\dot{u}_{t}\geq 0,\ \ and\ \ \ddot{u}_{t}\geq 0, \tag{5.3}\] for almost all \(t\in(-\infty,0)\) and \(\zeta\) fixed. Then the invariance of the Lelong number under restrictions reads as follows. **Lemma 5.1** (Remark (2.38), [18]).: _For a plurisubharmonic function \(u\), its Lelong number at the origin \(\nu_{u}(0)\) is equal to \(\nu_{u|_{\ell_{\zeta}}}(0)\) for almost everywhere \(\zeta\in\mathbb{C}_{\infty}\). Moreover, for such a \(\zeta\), it is the decreasing limit of \(\dot{u}_{t}(\zeta)\) as \(t\to-\infty\)._ Suppose that \(u\) is further in \(\mathcal{F}^{\infty}(B_{1})\). Recall that the total area of a unit \(3\)-sphere is \(2\pi^{2}\). Then we can rewrite the area form in the real Hopf-coordinate as \[8\ d\sigma_{3}=\sin\theta d\theta\wedge d\varphi\wedge d\eta, \tag{5.4}\] and then equation (5.1) can be re-written as \[\nu_{u}(0,r)=\frac{1}{4\pi}\int_{S^{2}}(r\partial_{r}u)\ d\sigma_{2}=\frac{1} {\pi}\int_{\mathbb{CP}^{1}}\dot{u}_{t}\omega. \tag{5.5}\] Then we can obtain an estimate on the last term of equation (4.42), and it will reveal that this term actually behaves like a kind of \(L^{2}\)_-Lelong number._ **Lemma 5.2**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[[\nu_{u}(0)]^{2}=\lim_{t\to-\infty}\frac{1}{\pi}\int_{\mathbb{CP}^{1}}(\dot{u }_{t})^{2}\omega. \tag{5.6}\] Proof.: For each fixed \(\zeta\), we denote \(v_{\infty}\) by the limit of the non-decreasing sequence \(\dot{u}_{t}(\zeta)\) as \[v_{\infty}=\lim_{t\to-\infty}\dot{u}_{t}(\zeta)\geq 0.\] Thanks to Lemma (5.1), it coincides with the Lelong number \(\nu_{u}(0)\) for almost everywhere \(\zeta\in\mathbb{CP}^{1}\). Take \[M_{A}:=\max_{\zeta\in\mathbb{CP}^{1}}\dot{u}_{-A}(\zeta), \tag{5.7}\] for some constant \(A>0\). Then we have \[\dot{u}_{t}(\zeta)\leq M_{A},\] for all \(t<-A\) and \(\zeta\in\mathbb{CP}^{1}\), and then our result follows from the dominated convergence theorem as \[\lim_{t\to-\infty}\frac{1}{\pi}\int_{\mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega= \frac{1}{\pi}\int_{\mathbb{CP}^{1}}v_{\infty}^{2}\omega=\nu_{u}^{2}(0). \tag{5.8}\] ### Laplacian estimates The next goal is to estimate the first term on the R.H.S. of equation (4.42). Before moving on, we need to take a closer look at the Laplacian \(\Delta_{\omega}\) as follows. 
First, it is a standard fact that the Laplacian \(\Delta_{e}\) with respect to the Euclidean coordinate in \(\mathbb{R}^{4}\) has the following decomposition in the hyperspherical coordinate \[\Delta_{e}=\Delta_{r}+r^{-2}\Delta_{\Xi}, \tag{5.9}\] where \(\Delta_{r}=r^{-3}\partial_{r}(r^{3}\partial_{r}\cdot)\) is the radial part, and \(\Delta_{\Xi}\) is the standard Laplacian on the unit 3-sphere. That is to say, this operator \(\Delta_{\Xi}\) only consists of derivatives in the directions perpendicular to the radial direction. In the real Hopf-coordinate, it is standard to compute (cf. [21]) \[\frac{1}{4}\ \Delta_{\Xi}=\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)+\frac{1}{\sin^ {2}\theta}\left(\frac{\partial^{2}}{\partial\varphi^{2}}+\frac{\partial^{2}} {\partial\eta^{2}}\right)+\frac{2\cos\theta}{\sin^{2}\theta}\frac{\partial^{ 2}}{\partial\varphi\partial\eta}. \tag{5.10}\] If the function \(u\) is also \(S^{1}\)-invariant, then we can compare equation (5.10) with (4.50), and obtain \[\Delta_{\Xi}u=4\ \Delta_{\Theta}u=2\ \Delta_{\omega}u. \tag{5.11}\] Therefore, we have the following Laplacian decomposition formula for any function \(u\in\mathcal{F}^{\infty}(B_{1})\) \[\Delta_{e}u=\Delta_{r}u+2r^{-2}\Delta_{\omega}u. \tag{5.12}\] **Lemma 5.3**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), there exists a positive constant \(M_{A}\) such that we have_ \[2\int_{\mathbb{CP}^{1}}\dot{u}_{t}(\Delta_{\omega}u_{t})\omega\leq M_{A}\int_{ \mathbb{CP}^{1}}(\ddot{u}_{t}+2\dot{u}_{t})\omega, \tag{5.13}\] _for all \(t\leq-A\)._ Proof.: Thanks to the Laplacian decomposition formula (equation (5.12)), the first integral on the R.H.S. of equation (4.42) can be rewritten as \[2\int_{\mathbb{CP}^{1}}\dot{u}_{t}(\Delta_{\omega}u_{t})\omega=\int_{\mathbb{CP}^ {1}}\dot{u}_{t}(r^{2}\Delta_{e}u)\omega-\int_{\mathbb{CP}^{1}}\dot{u}_{t}(r^{2} \Delta_{r}u)\omega. \tag{5.14}\] Moreover, the second term on the R.H.S. of equation (5.14) is \[\int_{\mathbb{CP}^{1}}\dot{u}_{t}(r^{2}\Delta_{r}u)\omega = \int_{\mathbb{CP}^{1}}(ru_{r})\left\{(r^{2}u_{,rr}+ru_{r})+2ru_{r} \right\}\omega\] \[= \int_{\mathbb{CP}^{1}}\dot{u}_{t}(\ddot{u}_{t}+2\dot{u}_{t})\omega. \tag{5.15}\] Thanks to equation (5.3), the integral in equation (5.15) is actually non-negative, and we obtain \[2\int_{\mathbb{CP}^{1}}\dot{u}_{t}(\Delta_{\omega}u_{t})\omega\leq\int_{ \mathbb{CP}^{1}}\dot{u}_{t}(r^{2}\Delta_{e}u)\omega. \tag{5.16}\] Furthermore, we have \(\Delta_{e}u\geq 0\) since \(u\) is plurisubharmonic on \(\mathbb{C}^{2}\). Take \(M_{A}\) as the maximum of \(\dot{u}_{-A}\) on \(\mathbb{CP}^{1}\) (equation (5.7)), and it follows for all \(t<-A\) \[2\int_{\mathbb{CP}^{1}}\dot{u}_{t}(\Delta_{\omega}u_{t})\omega \leq \int_{\mathbb{CP}^{1}}\dot{u}_{t}(r^{2}\Delta_{e}u)\omega\] \[\leq M_{A}\int_{\mathbb{CP}^{1}}(r^{2}\Delta_{r}u+2\Delta_{\omega}u)\omega\] \[= M_{A}\int_{\mathbb{CP}^{1}}(\ddot{u}_{t}+2\dot{u}_{t})\omega. \tag{5.17}\] Here we have used \[\int_{\mathbb{CP}^{1}}(\Delta_{\omega}u)\omega=0\] in the last equality in equation (5.17). Then our result follows. Now everything boils down to study the asymptotic behavior of the following integral \[\int_{\mathbb{CP}^{1}}\ddot{u}_{t}\omega=\frac{d}{dt}\left(\int_{\mathbb{CP}^ {1}}\dot{u}_{t}\omega\right), \tag{5.18}\] as \(t\to-\infty\). 
To this purpose, we introduce the following two non-negative functionals \[I_{u}(t):=\int_{\mathbb{CP}^{1}}\dot{u}_{t}\omega;\quad J_{u}(t):=\int_{ \mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega, \tag{5.19}\] and the negative functional as a primitive of \(I_{u}\) \[\mathcal{I}(u_{t}):=\int_{\mathbb{CP}^{1}}u_{t}\omega, \tag{5.20}\] for all \(t\in(-\infty,0)\). Then the following observation is crucial. **Proposition 5.4**.: _Suppose \(u\in\mathcal{F}^{\infty}(B_{1})\) has zero Lelong number at the origin. Then there exists a sequence \(t_{i}\in(-\infty,0)\) converging to \(-\infty\) such that_ \[\lim_{i\to+\infty}\frac{dI_{u}}{dt}(t_{i})=0.\] Proof.: First we note that \(\mathcal{I}(u_{t})\) is a non-decreasing convex function along \(t\in(-\infty,0)\). As its first derivative, \(I_{u}\) is a non-negative, non-decreasing function in \(t\). Moreover, we have \[I_{u}^{\prime}(t):=\frac{dI_{u}}{dt}(t)=\int_{\mathbb{CP}^{1}}\ddot{u}_{t} \omega\geq 0. \tag{5.21}\] Thanks to equation (5.5), we have its limit as a decreasing sequence in \(t\) \[\lim_{t\to-\infty}I_{u}(t)=\pi\nu_{u}(0)=0. \tag{5.22}\] Hence \(I_{u}\) is a non-negative \(C^{1}\)-continuous function decreasing to zero as \(t\to-\infty\). Then it is sufficient to prove \[\liminf_{t\to-\infty}I_{u}^{\prime}(t)=0. \tag{5.23}\] Suppose not, and then there exists an \(\varepsilon>0\) such that \[\liminf_{t\to-\infty}I_{u}^{\prime}(t)>\varepsilon, \tag{5.24}\] and then there exists a \(T<0\) such that for all \(t<T\) \[I_{u}^{\prime}(t)>\varepsilon/2. \tag{5.25}\] Hence the graph of \(I_{u}\) will be under the following straight line for all \(t<T\) \[y=\frac{\varepsilon}{2}x+I_{u}(T)-\frac{\varepsilon T}{2},\] but this implies \(I_{u}(t)<0\) for all \(t\) negative enough, which is a contradiction. Then we are ready to prove the main theorem. **Theorem 5.5**.: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), its residual Monge-Ampere mass \(\tau_{u}(0)\) is zero, if its Lelong number \(\nu_{u}(0)\) is zero at the origin._ Proof.: Thanks to Proposition (2.5) and the decomposition formula (equation (4.42)), we have \[\pi^{-1}\mathrm{MA}(u)(B_{r})=2\int_{\mathbb{CP}^{1}}(\dot{u}_{t}\Delta_{ \omega}u_{t})\omega+\int_{\mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega, \tag{5.26}\] for \(r=e^{t}\). Thanks to Lemma (5.9), we can further estimate \[\pi^{-1}\mathrm{MA}(u)(B_{r})\leq M_{A}\int_{\mathbb{CP}^{1}}(\ddot{u}_{t}+2 \dot{u}_{t})\omega+\int_{\mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega, \tag{5.27}\] for a uniform constant \(M_{A}\geq 0\) and all \(t<-A\). In fact, we can re-write the above estimate (equation (5.27) as \[\pi^{-1}\mathrm{MA}(u)(B_{r})\leq M_{A}\left\{I_{u}^{\prime}(t)+2I_{u}(t) \right\}+J_{u}(t), \tag{5.28}\] for \(r=e^{t}\). Due to Lemma (5.2) and equation (5.5), the zero Lelong number \(\nu_{u}(0)=0\) implies \[I_{u}(t)\to 0,\quad\text{and}\quad J_{u}(t)\to 0,\] as \(t\to-\infty\). Moreover, we infer from Proposition (5.4) that there exists a sequence \(t_{i}=\log r_{i}\) converging to \(-\infty\) satisfying \[\pi^{-1}\mathrm{MA}(u)(B_{r_{i}})\leq M_{A}\left\{I_{u}^{\prime}(t_{i})+2I_{u }(t_{i})\right\}+J_{u}(t_{i}), \tag{5.29}\] and the R.H.S. of equation (5.29) converges to zero as \(r_{i}\to 0\). However, the Monge-Ampere mass of \(u\) on the \(r\)-ball \(\operatorname{MA}(u)(B_{r})\) is non-decreasing in \(r\). Then it follows \[\tau_{u}(0)=\frac{1}{\pi^{2}}\lim_{r\to 0^{+}}\operatorname{MA}(u)(B_{r})=0. \tag{5.30}\] ## 6. 
Radially regular functions In order to perform the computation to obtain the decomposition formula (Theorem (4.4)), we have assumed a strong condition on the regularities of the function \(u\in\mathcal{F}^{\infty}(B_{1})\), namely, the \(C^{2}\)-regularities of \(u\) is required outside the singularity. In this section, we will loose this regularity condition, and prove the same result. Let \(D\) be a bounded domain in \(\mathbb{C}^{2}\), and \(r=e^{t}=|z|\). The weighted measure \(d\lambda_{t}\) is introduced as \[d\lambda_{t}:=r^{-4}d\lambda,\] where \(d\lambda\) is the Lebesgue measure on \(\mathbb{R}^{4}\cong\mathbb{C}^{2}\). We note that a non-negative function is integrable in the measure \(d\lambda\), if it is integrable in \(d\lambda_{t}\). The reverse is also true, if the domain \(D\) does not contain a neighborhood of the origin. From now on, we will take the domain \(D\) as the punctured unit ball \(B_{1}^{*}\), and use \(L^{1}_{loc}(B_{1}^{*})\) to denote the space of all locally \(L^{1}\)-functions on \(B_{1}^{*}\). In fact, there is no difference to use the measure \(d\lambda\) or \(d\lambda_{t}\) in this definition, since we can always consider the integrability on a smaller ball \(B_{|z_{0}|/3}(z_{0})\) for any point \(z_{0}\in B_{1}^{*}\). The following concept about weak derivatives is standard. **Definition 6.1**.: _Let \(u\) be a function in the space \(L^{1}_{loc}(B_{1}^{*})\). Then a function \(v\in L^{1}_{loc}(B_{1}^{*})\) is called the weak derivative of \(u\) in the \(t\)-direction if_ \[\int_{B_{1}^{*}}\chi vd\lambda_{t}=-\int_{B_{1}^{*}}u(r\partial_{r}\chi)d \lambda_{t} \tag{6.1}\] _for all test function \(\chi\in C^{1}_{0}(B_{1}^{*})\)._ In a similar way, the 2nd. order weak derivative \(w\in L^{1}_{loc}(B_{1}^{*})\) of \(u\) in the \(t\)-direction is defined as for all \(\chi\in C^{2}_{0}(B_{1}^{*})\) \[\int_{B_{1}^{*}}\chi wd\lambda_{t}=\int_{B_{1}^{*}}u(r\partial_{r})^{2}\chi d \lambda_{t}. \tag{6.2}\] Moreover, we will denote the weak derivative \(v\) (the 2nd. order weak derivative \(w\)) of \(u\) in the \(t\)-direction by \(\partial_{t}u\) (and \(\partial_{t}^{2}u\)). Then we are ready to introduce the following concept. **Definition 6.2**.: _For a constant \(T<0\), a function \(u\in\mathcal{F}(B_{1})\) is said to be radially regular from \(T\), if it satisfies the following two conditions:_ 1. _its two weak derivatives_ \(\partial_{t}u\) _and_ \(\partial_{t}^{2}u\) _in the_ \(t\)_-direction exist on_ \(B_{1}^{*}\)_;_ 2. _the right derivative_ \(\partial_{t}^{+}u(\zeta)|_{t=T}\) _on any complex line_ \(\ell_{\zeta}\) _has a uniform upper bound for all_ \(\zeta\in\mathbb{CP}^{1}\)_._ We will explain these two conditions in the above Definition as follows. ### Slicing Theory Next, we recall a few facts of slicing theory ([12], [14], [28]). First the radius function \(r:B_{1}^{*}\to(0,1)\) can be viewed as a proper smooth submersion of differentiable manifolds. In fact, if we identify \(B_{1}^{*}\) with \(S^{3}\times(0,1)\), then \(r\) is locally the projection map \[S^{3}\times(0,1)\to(0,1)\quad(\zeta,\bar{\zeta},\eta,r)\to r.\] Here we have used the complex Hopf-coordinate on \(S^{3}\). 
A function \(U\in L^{1}_{loc}(B_{1}^{*})\) can be viewed as a order zero locally flat current, and then we can define its slicing for \(r^{\prime}\in(0,1)\) as \[U_{r^{\prime}}:=U|_{S_{r^{\prime}}}.\] Thanks to Fubini's Theorem, this restriction of \(U\) to the fibers exists for almost everywhere \(r^{\prime}\in(0,1)\), and we have \(U_{r^{\prime}}\in L^{1}(S_{r^{\prime}})\). Moreover, its regularization \(U_{\varepsilon}:=U*\rho_{\varepsilon}\) (equation (2.9)) has the property that \(U_{\varepsilon,r^{\prime}}\to U_{r^{\prime}}\) in \(L^{1}(S_{r^{\prime}})\) for almost all \(r^{\prime}\in(0,1)\). For every smooth \(3\)-form \(\alpha\) compactly supported on \(B_{1}^{*}\), and smooth \(1\)-form \(\beta\) on \((0,1)\), we have the basic slicing formula \[\int_{B_{1}^{*}}U\alpha\wedge r^{*}\beta=\int_{r^{\prime}\in(0,1)}\left(\int_ {S_{r^{\prime}}}U_{r^{\prime}}(\zeta,\eta)\alpha|_{S_{r^{\prime}}}\right) \beta(r^{\prime}). \tag{6.3}\] Thanks to the condition-(i) in definition (6.2) and the slicing theory, a radially regular function \(u\in\mathcal{F}(B_{1})\) has the slicing on itself and its two weak derivatives. Now we will write for \(e^{t}=r\) \[u_{t}:=u|_{S_{r}},\quad\dot{u}_{t}:=\partial_{t}u|_{S_{r}},\quad\ddot{u}_{t}: =\partial_{t}^{2}u|_{S_{r}},\] as \(L^{1}\)-functions on \(S_{r}\) for almost everywhere \(r\in(0,1)\). Due to the \(S^{1}\)-symmetry, they can also be viewed as \(L^{1}\)-functions on \(\mathbb{CP}^{1}\). Then it is legal to introduce the following functionals for almost all \(t\). \[\mathcal{I}(u_{t}):=\int_{\mathbb{CP}^{1}}u_{t}\omega,\quad I_{u}(t):=\int_{ \mathbb{CP}^{1}}\dot{u}_{t}\omega,\quad I_{u}^{\prime}(t):=\int_{\mathbb{CP}^ {1}}\ddot{u}_{t}\omega.\] As we have explained in the Section (5.1), the functional \(\mathcal{I}(u_{t})\) is a negative, non-decreasing convex function along \(t\in(-\infty,0)\), and converges to \(-\infty\) as \(t\to-\infty\). Therefore, its first derivative \[\frac{d\mathcal{I}(u_{t})}{dt}\geq 0\] exists for almost all \(t\), and is a non-decreasing \(L^{\infty}_{loc}\)-function in \((-\infty,0)\). Furthermore, its second derivative \[\frac{d^{2}\mathcal{I}(u_{t})}{dt^{2}}\geq 0\] also exists for almost everywhere \(t\in(-\infty,0)\) as an \(L^{1}_{loc}\)-function. Then we have the following observation. **Lemma 6.3**.: _For a radially regular function \(u\in\mathcal{F}(B_{1})\), we have_ \[I_{u}(t)=\frac{d\mathcal{I}(u_{t})}{dt}, \tag{6.4}\] _as \(L^{\infty}_{loc}\)-functions, and_ \[I^{\prime}_{u}(t)=\frac{dI_{u}(t)}{dt}=\frac{d^{2}\mathcal{I}(u_{t})}{dt^{2}}, \tag{6.5}\] _as \(L^{1}_{loc}\)-functions on \((-\infty,0)\)._ Proof.: It is enough to prove equation (6.4) and (6.5) in the sense of distributions. In order to finish this, we will apply the basic slicing formula (equation (6.3)) as follows. Let \(\{\rho_{j}\}_{j=1}^{k}\) be a partition of unity of \(\mathbb{CP}^{1}\) as a compact complex manifold, and then we define a 3-form for each \(j=1,\cdots,k\) as \[\alpha_{j}:=\rho_{j}(\zeta)d\sigma_{3},\] where \(d\sigma_{3}\) is the area form of the unit 3-sphere (equation (5.4)). Moreover, take a smooth function \(g\) compactly supported in \((0,1)\), and then we define a 1-form on this interval as \[\beta:=(r\partial_{r}g)r^{-1}dr.\] We note that the measure \(d\lambda_{t}\) can be rewritten in the hyper-spherical coordinate as \(d\lambda_{t}=r^{-1}drd\sigma_{3}\). 
Then we obtain from equation (6.3) as \[\int_{B_{1}^{*}}u\ \alpha_{j}\wedge\beta=\int_{0}^{1}\left(\int_{S_{r}}u_{t }\rho_{j}(\zeta)d\sigma_{3}\right)(r\partial_{r}g)r^{-1}dr, \tag{6.6}\] for \(r=e^{t}\). Then the L.H.S. of equation (6.6) is equal to \[\int_{B_{1}^{*}}u\ r\partial_{r}\{\rho_{j}(\zeta)g(r)\}d\lambda_{t} = -\int_{B_{1}^{*}}\rho_{j}(\zeta)\partial_{t}ug(r)d\lambda_{t}\] \[= -\int_{0}^{1}\left(\int_{S_{r}}\dot{u}_{t}\rho_{j}(\zeta)d\sigma_ {3}\right)g(r)r^{-1}dr.\] Take \(\tilde{g}(t):=g(e^{t})\) as a test function on \((-\infty,0)\). Summing up with \(j\), we obtain \[\int_{0}^{1}\left(\int_{S_{r}}u_{t}d\sigma_{3}\right)\partial_{t}\tilde{g}dt= -\int_{0}^{1}\left(\int_{S_{r}}\dot{u}_{t}d\sigma_{3}\right)\tilde{g}(t)dt, \tag{6.8}\] and then equation (6.4) follows. By replacing \(u\) with \(\dot{u}_{t}\) in equation (6.6), we obtain in a similar way \[\int_{0}^{1}\left(\int_{S_{r}}\dot{u}_{t}d\sigma_{3}\right)\partial_{t}\tilde {g}dt=-\int_{0}^{1}\left(\int_{S_{r}}\ddot{u}_{t}d\sigma_{3}\right)\tilde{g}( t)dt, \tag{6.9}\] and then equation (6.5) also follows. ### The second condition Recall from Section (5.1) again that the restriction \(u|_{\ell_{\zeta}}\) to each complex line for any \(\zeta\in\mathbb{C}_{\infty}\) is also a negative, non-decreasing convex function along \(t\). Therefore, its right (or left) derivative exists for all \(t\in(-\infty,0)\), and its two-sides derivative \(\frac{du|_{\ell_{\zeta}}}{dt}\) exists for almost all \(t\), and is a non-negative, non-decreasing function along \(t\) as before. This derivative must coincide with the restriction of the weak derivative \(\partial_{t}u|_{\ell_{\zeta}}\) for almost all \((t,\zeta)\). In fact, the slicing theory can be applied to the other projection \((r,\eta,\zeta,\bar{\zeta})\to(\zeta,\bar{\zeta})\). Then the regularization \[\partial_{t}u_{\varepsilon}:=r\partial_{r}(u*\rho_{\varepsilon})=(r\partial_{ r}u*\rho_{\varepsilon})\] has the property that \(\partial_{t}u_{\varepsilon}|_{\ell_{\zeta}}\to\partial_{t}u|_{\ell_{\zeta}}\) in \(L^{1}_{loc}\) on the punctured disk \(\mathbb{D}^{*}\subset\ell_{\zeta}\) for almost all \(\zeta\in\mathbb{C}_{\infty}\). On the other hand, we certainly have \(\partial_{t}u_{\varepsilon}|_{\ell_{\zeta}}\to\frac{du|_{\ell_{\zeta}}}{dt}\) for almost all \((t,\zeta)\), and then the claim follows. Therefore, the condition-(ii) in definition (6.2) implies that there exists a uniform upper bound of the function \(\partial_{t}u\) in the punctured ball \(B_{R}^{*}\) with \(R=e^{T}\). Then \(\partial_{t}u\) is not merely \(L^{1}_{loc}\), but also \(L^{\infty}\) in this smaller domain. Hence we can also introduce the following \(L^{2}\)-Lelong number (or \(J_{u}\)-functional) as \[J_{u}(t):=\int_{\mathbb{CP}^{1}}(\dot{u}_{t})^{2}\omega,\] for almost everywhere \(t<T\). Then we will show that the crucial Lemma (5.3) holds for radially regular functions in a similar manner. **Proposition 6.4**.: _Suppose a function \(u\in\mathcal{F}(B_{1})\) is radially regular from \(T\). Then there exists a real number \(A>-T\) and a uniform constant \(M_{A}>0\), such that we have the following estimate for \(r=e^{t}\)_ \[\pi^{-1}\mathrm{MA}(u)(B_{r})\leq M_{A}\left\{\frac{dI_{u}}{dt}(t)+2I_{u}(t) \right\}+J_{u}(t), \tag{6.10}\] _for almost everywhere \(t\in(-\infty,-A]\)._ Proof.: Thanks to Corollary (2.6), the standard regularization \(u_{\varepsilon}:=u*\rho_{\varepsilon}\) builds a sequence of functions in \(\mathcal{F}^{\infty}(B_{1})\) decreasing to \(u\). 
According to condition-(ii), its first derivative \(\dot{u}_{\varepsilon,t}=\partial_{t}u*\rho_{\varepsilon}\) in the \(t\)-direction is a non-negative function with uniform upper bound on \(B_{R}\) with \(R=e^{T}\). In fact, take a real number \(A>-T\) such that the slicing \(\dot{u}_{-A}\) exists as an \(L^{\infty}\)-function, and \(\ddot{u}_{-A}\) exists as an \(L^{1}\)-function on the corresponding fiber. Denote \(M_{A}^{\prime}\) by the fiberwise maximum as \[M_{A}^{\prime}:=\sup_{\zeta\in\mathbb{CP}^{1}}\dot{u}_{-A}<+\infty.\] Then we can choose the constant \(M_{A}:=kM_{A}^{\prime}+1\) for \(k:=\sup_{|z|<1}\rho(z)\). Pick up an arbitrary constant \(B>A\). According to condition-(ii) and equation (2.9), this number \(M_{A}\) will be a uniform upper bound of \(\dot{u}_{\varepsilon,t}\) for all \(t\in(-B,-A]\) and all \(\varepsilon\) small enough (the choice of how small \(\varepsilon\) is may depend on \(B\)). Then Lemma (5.3) implies the following estimate \[2\int_{\mathbb{CP}^{1}}\dot{u}_{\varepsilon,t}(\Delta_{\omega}u_{\varepsilon, t})\omega\leq M_{A}\int_{\mathbb{CP}^{1}}(\ddot{u}_{\varepsilon,t}+2\dot{u}_{ \varepsilon,t})\omega, \tag{6.11}\] for all \(\varepsilon>0\) small and \(-B<t\leq-A\). On the one hand, thanks to the condition-(i) and slicing theory, the R.H.S. of equation (6.11) converges to \[M_{A}\int_{\mathbb{CP}^{1}}(\ddot{u}_{t}+2\dot{u}_{t})\omega = M_{A}\left\{I_{u}^{\prime}(t)+2I_{u}(t)\right\}\] \[= M_{A}\left\{\frac{dI_{u}}{dt}(t)+2I_{u}(t)\right\}. \tag{6.12}\] as \(\varepsilon\to 0\) for almost all \(-B<t\leq-A\). Here we have used Lemma (6.3) in the second equality of the above equation. Furthermore, we also have the convergence of the \(L^{2}\)-Lelong numbers for the same reason, i.e. we have \[J_{u_{\varepsilon}}(t)\to J_{u}(t), \tag{6.13}\] as \(\varepsilon\to 0\) for almost all \(-B<t\leq-A\). On the other hand, the decomposition formula (see Theorem (4.4)) implies the following estimate for \(r=e^{t}\) \[\pi^{-1}\mathrm{MA}(u_{\varepsilon})(B_{r})=2\int_{\mathbb{CP}^{1}}(\dot{u}_{ \varepsilon,t}\Delta_{\omega}u_{\varepsilon,t})\omega+\int_{\mathbb{CP}^{1}}( \dot{u}_{\varepsilon,t})^{2}\omega, \tag{6.14}\] for all \(-B<t\leq-A\) and \(\varepsilon\) small enough. Combining with equation (6.11), (6.12) and (6.13), we take \(\varepsilon\to 0\) in equation (6.14). Then the inequality in equation (6.10) follows from Lemma (2.3) and equation (2.5) for almost everywhere \(-B<t\leq-A\). Finally, take \(B\to+\infty\) and our result follows. Now we are ready to prove a stronger version of the main theorem. **Theorem 6.5**.: _For any radially regular function \(u\in\mathcal{F}(B_{1})\), its residual Monge-Ampere mass \(\tau_{u}(0)\) is zero, if its Lelong number \(\nu_{u}(0)\) is zero at the origin._ Proof.: As we have shown in the proof of Theorem (4.4), it boils down to prove \[\liminf_{r\to 0^{+}}\mathrm{MA}(u)(B_{r})=0, \tag{6.15}\] as a decreasing sequence of \(r\). Suppose not, and then there exists an \(R>0\) and \(\delta>0\) such that for all \(r<R\) \[\mathrm{MA}(u)(B_{r})\geq\delta. \tag{6.16}\] Thanks to the condition-(ii), we first note that Lemma (5.2) also works for a radially regular function \(u\in\mathcal{F}(B_{1})\). Therefore, we have the convergence of the \(J_{u}\)-functional as \[J_{u}(t)\to\pi[\nu_{u}(0)]^{2}=0, \tag{6.17}\] for \(t\to-\infty\). Moreover, Lemma (6.3) implies the convergence of the \(I_{u}\)-functional as \(t\to-\infty\). \[I_{u}(t)\to\pi[\nu_{u}(0)]=0. 
\tag{6.18}\] Hence Proposition (6.4) implies that there exists a real number \(\delta^{\prime}>0\) and \(A^{\prime}>A\) such that we have \[\frac{dI_{u}}{dt}(t)>\delta^{\prime}, \tag{6.19}\] for almost all \(t<-A^{\prime}\). As a non-negative, non-decreasing function along \(t\), the Fundamental Theorem of Calculus partially holds for the functional \(I_{u}\) as \[\int_{-B^{\prime}}^{-A^{\prime}}\frac{dI_{u}}{dt}(t)\leq I_{u}(-A^{\prime})-I_{ u}(-B^{\prime})\leq I_{u}(-A^{\prime}), \tag{6.20}\] for any \(B^{\prime}>A^{\prime}\). However, equation (6.19) implies \[\int_{-B^{\prime}}^{-A^{\prime}}\frac{dI_{u}}{dt}(t)\geq\delta^{\prime}(B^{ \prime}-A^{\prime}), \tag{6.21}\] and then we conclude with a contradiction if \(B^{\prime}\to+\infty\). Hence our result follows. ## 7. Energy Pictures In this section, we will introduce another point of view to look at the decomposition formula (Theorem (4.4)) and also the zero mass conjecture. For simplicity, we will assume that all functions under consideration are in the space \(\mathcal{F}^{\infty}(B_{1})\) in this section. The observation is that the function \(u_{t}\) actually defines a curve in the space of all quasi-plurisubharmonic functions on \(\mathbb{CP}^{1}\), and it can be thought of as a subgeodesic with non-trivial \(S^{1}\)-fibration in the space of Kahler potentials. Then the first term on the R.H.S. of equation (4.42) corresponds to the first variation of the so called _pluri-complex energy_. ### Subgeodesic on fiber bundles We recall some basic facts in Kahler geometry. Let \(X\) be a compact Riemann surface without boundary, and \(\omega_{0}\) a Kahler metric on this Riemann surface. Denote \(\mathbb{D}^{*}\) by the punctured unit disk in \(\mathbb{C}\), and it can be identified with the product \((0,1)\times S^{1}\) via the polar coordinate \((r,s)\), namely, we can write \[z:=re^{is}\] as a complex variable in \(\mathbb{D}^{*}\). Consider the product space \(Y:=\mathbb{D}^{*}\times X\), and the projection maps \(\pi_{1}:Y\to\mathbb{D}^{*}\) and \(\pi_{2}:Y\to X\). The pull back \(\pi_{2}^{*}\omega_{0}\) is a closed non-negative \((1,1)\)-form on \(Y\). Then a _subgeodesic ray_ in the space of Kahler potentials is an \(S^{1}\)-invariant \(\pi_{2}^{*}\omega_{0}\)-plurisubharmonic functions \(v\) on \(Y\). That is to say, if we take any local potential \(\Phi\) of \(\omega_{0}\), then the function \[V:=\pi_{2}^{*}\Phi+v\] is independent of the variable \(s\), and plurisubharmonic in the product manifold \((0,1)\times S^{1}\times X\). In other words, a subgeodesic \(V\) is locally an \(S^{1}\)-invariant plurisubharmonic function on the trivial \(\mathbb{D}^{*}\)-bundle of \(X\). Furthermore, a subgeodesic ray \(v\) is a _geodesic ray_, if it satisfies the following _homogeneous complex Monge-Ampere equation_ on \(Y\) \[(dd_{z,X}^{c}V)^{2}=(\pi_{2}^{*}\omega_{0}+dd_{z,X}^{c}v)^{2}=0. \tag{7.1}\] From now on, we put \(X=\mathbb{CP}^{1}\) and \(\omega_{0}\) the Fubini-Study metric on it. In fact, the projective space can be viewed as the moduli space of \(\mathbb{C}^{2}-\{0\}\), under the natural \(\mathbb{C}^{*}\)-action. The punctured disk \(\mathbb{D}^{*}\) acts in the same way on the punctured ball \(B_{1}^{*}\) in \(\mathbb{C}^{2}\), and then \(B_{1}^{*}\) can be thought of as a non-trivial \(\mathbb{D}^{*}\)-bundle of \(\mathbb{CP}^{1}\) via the Hopf-fiberation, i.e. 
we can write the bundle map as follows

\[\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}.\]

If we take a function \(u\in\mathcal{F}(B_{1})\), then it is naturally an \(S^{1}\)-invariant plurisubharmonic function on this non-trivial \(\mathbb{D}^{*}\)-bundle. Writing \(B_{1}^{*}\) as a product \((0,1)\times S^{3}\), a fiber \(\{r\}\times S^{3}\) is identified with the 3-sphere \(S_{r}\) for all \(r\in(0,1)\). Then the restriction \(u|_{S_{r}}\) can be viewed as a function on \(\mathbb{CP}^{1}\) via the Hopf fibration. In fact, this restriction \(u|_{S_{r}}\) is exactly the function \(u_{t}\) defined in equation (4.40), via the change of variables \(r=e^{t}\). If we further require \(u\in\mathcal{F}^{\infty}(B_{1})\), then the Laplacian decomposition formula (equation (5.12)) implies on each fiber \(\mathbb{CP}^{1}\times\{t\}\)

\[\frac{1}{2}\left(\ddot{u}_{t}+2\dot{u}_{t}\right)+\Delta_{\omega}u_{t}\geq 0. \tag{7.2}\]

Therefore, the function \(u_{t}\) is quasi-plurisubharmonic on each fiber \(\mathbb{CP}^{1}\times\{t\}\), but the lower bound of its complex hessian varies with respect to \(t\) and \(u\) itself. For this reason, we introduce the following definition.

**Definition 7.1**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we say that \(u_{t}\) is a \(C^{2}\)-continuous subgeodesic ray on the fiber bundle \(\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}\). Moreover, it is a geodesic ray on this fiber bundle if we have_

\[(dd^{c}u)^{2}=0,\]

_on \(B_{1}^{*}\)._

### Energy functionals

For a quasi-plurisubharmonic function on \(\mathbb{CP}^{1}\), the pluri-complex energy \(\mathcal{E}\) is defined as

\[\mathcal{E}(u_{t}):=\int_{\mathbb{CP}^{1}}(-u_{t})dd^{c}_{\zeta}u_{t}=-\int_{ \mathbb{CP}^{1}}u_{t}(\Delta_{\omega}u_{t})\omega.\]

It is well known that this energy is concave along a subgeodesic in the space of Kahler potentials. In fact, the push-forward of the Monge-Ampere measure is exactly the complex hessian of this energy along a sub-geodesic ray, cf. [3]. Next we will show that our decomposition formula is an analogue of this on a non-trivial fiber bundle. Let \(u_{t}\) be a \(C^{2}\)-subgeodesic ray on the fiber bundle \(\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}\). Then the first variation of this energy with respect to \(t\) can be computed as

\[\frac{d}{dt}\mathcal{E}(u_{t})=-2\int_{\mathbb{CP}^{1}}\dot{u}_{t}(\Delta_{ \omega}u_{t})\omega, \tag{7.3}\]

and this is exactly the negative of the first term on the R.H.S. of the decomposition formula (equation (4.42)). Take the following non-negative functional to represent the complex Monge-Ampere mass for \(r=e^{t}\)

\[M_{u}(t):=\pi^{-1}\cdot\operatorname{MA}(u)(B_{r}). \tag{7.4}\]

Then the decomposition formula can be rewritten as

\[-\frac{d}{dt}\mathcal{E}(u_{t})+J_{u}(t)=M_{u}(t), \tag{7.5}\]

for all \(t\in(-\infty,0)\). By invoking toric plurisubharmonic functions, we can take a closer look at this decomposition formula under the energy picture.

**Example 7.2**.: _Suppose \(u\in\mathcal{F}^{\infty}(B_{1})\) also has toric symmetry, i.e. we have_

\[u(z_{1},z_{2})=u(e^{i\theta_{1}}z_{1},e^{i\theta_{2}}z_{2}),\]

_for arbitrary \(\theta_{1},\theta_{2}\in\mathbb{R}\).
Then in the real Hopf-coordinate, we can write_ \[u(r,\eta,\theta,\varphi)=u(r,\theta).\] _On the one hand, equation (4.49) boils down to the following_ \[8\int_{S_{R}}d^{c}u\wedge dd^{c}u\] \[= 16\pi^{2}\int_{0}^{\pi}(ru_{r})\partial_{\theta}(\sin\theta u_{ \theta})d\theta+4\pi^{2}\int_{0}^{\pi}(ru_{r})^{2}\sin\theta d\theta. \tag{7.6}\] _Then the first variation of the energy \(\mathcal{E}\) with respect to \(t\) is_ \[-\frac{d}{dt}\mathcal{E}(u_{t})=2\pi\int_{0}^{\pi}(ru_{r})\partial_{\theta}( \sin\theta u_{\theta})d\theta,\] _and the \(L^{2}\)-Lelong number is_ \[J_{u}(t)=\frac{\pi}{2}\int_{0}^{\pi}(ru_{r})^{2}\sin\theta d\theta.\] _On the other hand, we can also compute the complete complex Hessian of \(u\) as follows_ \[4\ dd^{c}u = \left\{2\sin\theta u_{,r\theta}-\partial_{r}(ru_{r})\cos\theta \right\}dr\wedge d\varphi+\partial_{r}(ru_{r})dr\wedge d\eta\] \[+ \left\{(ru_{r})\sin\theta-(ru_{,r\theta})\cos\theta\right\}d \theta\wedge d\varphi\] \[+ 2\left\{\sin\theta\cdot u_{,\theta\theta}+\cos\theta\cdot u_{ \theta}\right\}d\theta\wedge d\varphi+(ru_{,r\theta})d\theta\wedge d\eta. \tag{7.7}\] _Therefore, we obtain a formula for the positive measure as_ \[8(dd^{c}u)^{2}\] \[= \left\{2\partial_{r}(ru_{r})\partial_{\theta}(\sin\theta u_{ \theta})-2r\sin\theta(u_{,r\theta})^{2}+\partial_{r}(ru_{r})(ru_{r})\sin\theta\right\}\] \[dr\wedge d\theta\wedge d\varphi\wedge d\eta \tag{7.8}\] _Then we continue to compute the second variation of the energy \(\mathcal{E}\) as_ \[-\frac{d^{2}\mathcal{E}(u_{t})}{dt^{2}} = 2\pi r\int_{0}^{\pi}\left\{\partial_{r}(ru_{r})\partial_{\theta} (\sin\theta u_{\theta})+(ru_{r})\partial_{\theta}(\sin\theta u_{,r\theta}) \right\}d\theta\] \[= 2\pi r\int_{0}^{\pi}\left\{\partial_{r}(ru_{r})\partial_{\theta} (\sin\theta u_{\theta})-r\sin\theta(u_{,r\theta})^{2}\right\}d\theta\] \[\geq -\pi r\int_{0}^{\pi}\dot{u}_{t}\ddot{u}_{t}\sin\theta d\theta. \tag{7.9}\] As we have expected, the last term on the R.H.S. of equation (7.9) is exactly equal to \(-dJ_{u}(t)/dt\). That is to say, the almost concavity of the \(\mathcal{E}\) functional along an \(S^{1}\)-fibered subgeodesic ray \(u_{t}\) is due to the positivity of the complex Monge-Ampere measure \((dd^{c}u)^{2}\). In fact, the functionals \(J\) and \(M\) are both non-negative, non-decreasing functions along \(t\in(-\infty,0)\). Take two new functionals \(\mathcal{M}\) and \(\mathcal{J}\) as primitives of \(M\) and \(J\), i.e. we have \[\frac{d}{dt}\mathcal{M}(u_{t})=M_{u}(t),\ \ \text{and}\ \ \ \ \frac{d}{dt}\mathcal{J}(u_{t})=J_{u}(t),\] for all \(t\in(-\infty,0)\). Then both functionals are non-decreasing, convex functions along \(t\). Therefore, we can rephrase the decomposition formula as follows. **Theorem 7.3**.: _Suppose \(u_{t}\) is a \(C^{2}\)-continuous subgeodesic ray on the fiber bundle \(\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}\). Then we have the equality_ \[\mathcal{M}(u_{t})=\mathcal{J}(u_{t})-\mathcal{E}(u_{t})+A, \tag{7.10}\] _for a constant \(A\) and all \(t\in(-\infty,0)\). In particular, the difference \(\mathcal{J}-\mathcal{E}\) is non-decreasing and convex along the ray \(u_{t}\). Moreover, it is affine if and only if \(u_{t}\) is a geodesic ray on this fiber bundle._ Proof.: It is left to prove the statement about the geodesic. For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), it is clear that \(M_{u}(t)=0\), for all \(t\in(-\infty,0)\) if and only if \((dd^{c}u)^{2}=0\) on \(B_{1}^{*}\). Then our result follows. 
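To illustrate Theorem (7.3) concretely, consider the model function \(u(z)=\log|z|\); this worked example is ours and not part of the original argument. Here \(u_{t}\equiv t\) is constant on each fiber, so \(\Delta_{\omega}u_{t}=0\) and \(\mathcal{E}(u_{t})=0\), while \(\dot{u}_{t}=ru_{r}=1\) gives, by the toric formulas of Example (7.2) and the radially symmetric identity (8.8) below,

\[J_{u}(t)=\frac{\pi}{2}\int_{0}^{\pi}(ru_{r})^{2}\sin\theta\,d\theta=\pi,\qquad M_{u}(t)=\pi^{-1}\cdot\operatorname{MA}(u)(B_{e^{t}})=\pi,\]

so the decomposition (7.5) reads \(0+\pi=\pi\). Both \(\mathcal{J}(u_{t})\) and \(\mathcal{M}(u_{t})\) grow linearly with slope \(\pi\), hence \(\mathcal{J}-\mathcal{E}\) is affine along \(u_{t}\). This is consistent with Theorem (7.3), since \((dd^{c}\log|z|)^{2}\) is supported at the origin and therefore vanishes on \(B_{1}^{*}\), i.e. \(u_{t}\) is a geodesic ray on this fiber bundle.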
**Remark 7.4**.: _Due to a result by Blocki [5], a plurisubharmonic function in \(\mathbb{C}^{2}\) with isolated singularity at the origin is in the Sobolev space \(W^{1,2}_{loc}\). Thanks to the slicing theory again, this implies that the \(\mathcal{J}\)-functional is well defined for a general \(u\in\mathcal{F}(B_{1})\)._

Finally, recall that we take a primitive of the \(I_{u}\)-functional as

\[\mathcal{I}(u_{t})=\int_{\mathbb{CP}^{1}}u_{t}\omega.\]

It is also non-decreasing and convex along \(t\), and the slope of the asymptote of \(\pi^{-1}\cdot\mathcal{I}\) as \(t\to-\infty\) is exactly the Lelong number of \(u\) at the origin. Then our main theorem can be rephrased in the energy setting, and it reveals the relation between the asymptotes of the \(\mathcal{I}\) and \(\mathcal{M}\) functionals along a subgeodesic ray.

**Theorem 7.5**.: _Suppose \(u_{t}\) is a \(C^{2}\)-continuous subgeodesic ray on the fiber bundle \(\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}\). Assume that we have_

\[\lim_{t\to-\infty}\frac{d\mathcal{I}(u_{t})}{dt}=0.\]

_Then it follows_

\[\lim_{t\to-\infty}\frac{d\mathcal{M}(u_{t})}{dt}=0.\]

**Remark 7.6**.: _In a similar way, we can say that \(u_{t}\) is a radially regular subgeodesic ray on the fiber bundle \(\mathbb{D}^{*}\hookrightarrow B_{1}^{*}\xrightarrow{p}\mathbb{CP}^{1}\), if \(u\in\mathcal{F}(B_{1})\) is radially regular. Thanks to the slicing theory and Theorem (6.5), the same result of Theorem (7.5) also holds along such a sub-geodesic ray._

## 8. Remarks

### Transition functions

The \(\mathbb{D}^{*}\)-bundle structure \(B_{1}\to\mathbb{CP}^{1}\) can be illustrated under local coordinate charts. Take the following two homeomorphisms as in equations (3.3) and (3.4): define

\[\Psi_{1}:\mathbb{D}^{*}\times\mathbb{C}\to(0,1)\times p^{-1}(\mathbb{C})\]

by sending

\[(z,\zeta)\to\left(\frac{z\zeta}{(1+|\zeta|^{2})^{1/2}},\;\frac{z}{(1+|\zeta|^{2 })^{1/2}}\right); \tag{8.1}\]

and

\[\Psi_{2}:\mathbb{D}^{*}\times(\mathbb{C}_{\infty}-\{0\})\to(0,1)\times p^{-1} (\mathbb{C}_{\infty}-\{0\})\]

by

\[(w,\xi)\to\left(\frac{w}{(1+|\xi|^{2})^{1/2}},\;\frac{w\xi}{(1+|\xi|^{2})^{1/2 }}\right), \tag{8.2}\]

and the transition function \(\Psi_{2}^{-1}\circ\Psi_{1}\) is

\[w=z(\zeta|\zeta|^{-1});\quad\xi=\zeta^{-1}. \tag{8.3}\]

On the first factor, there is some twisting in the \(S^{1}\)-direction, but we have \(|w|=|z|=r\). On the second factor, it is holomorphic. Therefore, the collection \(\Psi_{1}\) and \(\Psi_{2}\) defines a \(C^{\infty}\)-atlas of \(B_{1}^{*}\). In other words, the smooth manifold \(B_{1}^{*}\) can be obtained by glueing two pieces of \(\mathbb{D}^{*}\times\mathbb{C}\) by twisting the circles in \(\mathbb{D}^{*}\) and glueing two pieces of \(\mathbb{C}\) as a \(\mathbb{CP}^{1}\). Furthermore, we can introduce the homeomorphism

\[\Psi_{1}^{\prime}:\mathbb{D}^{*}\times\mathbb{C}_{+}\to(0,1)\times p^{-1}( \mathbb{C}_{+})\]

as

\[(z^{\prime},\zeta)\to\left(\frac{z^{\prime}(\zeta\cdot|\zeta|)^{1/2}}{(1+| \zeta|^{2})^{1/2}},\;\frac{z^{\prime}(|\zeta|/\zeta)^{1/2}}{(1+|\zeta|^{2})^{ 1/2}}\right), \tag{8.4}\]

where \(z^{\prime}=re^{\frac{i\eta}{2}}\) is the coordinate on \(\mathbb{D}^{*}\), and \(\zeta=\tan(\theta/2)e^{i\varphi}\) is the coordinate on \(\mathbb{C}_{\infty}\). Then the above coordinate charts can be glued together, and the transition function \(\Psi_{1}^{-1}\circ\Psi_{1}^{\prime}\) is

\[z=z^{\prime}(|\zeta|/\zeta)^{1/2};\quad\zeta=\zeta.
\tag{8.5}\] We can see a different twisting along the \(S^{1}\)-direction on \(\mathbb{D}^{*}\) during the glueing. Therefore, the complex Hopf-coordinate is not a holomorphic chart of \(B_{1}^{*}\) with respect to the usual complex structure on \(\mathbb{C}^{2}\). On the other hand, we can introduce a real coordinate \((r,\eta^{\prime},\theta,\varphi^{\prime})\) as a point in \(\mathbb{R}^{4}-\{0\}\) under the local trivialization \(\Psi_{1}\). Then it follows \(z=re^{i\eta^{\prime}}\) and \(\zeta=e^{i\varphi^{\prime}}\tan(\theta/2)\), and we have \[z_{1}:=r\sin\left(\frac{\theta}{2}\right)e^{i(\eta^{\prime}+\varphi^{\prime})},\quad z_{2}=r\cos\left(\frac{\theta}{2}\right)e^{i\eta^{\prime}},\] for \(\eta^{\prime}\in[0,2\pi]\) and \(\varphi^{\prime}\in[0,2\pi]\). Comparing with our real Hopf-coordinate, we have the change of variables \[\eta^{\prime}=\frac{1}{2}(\eta-\varphi);\quad\varphi^{\prime}=\varphi,\] and it follows for any \(u\in\mathcal{F}^{\infty}(B_{1})\) \[\frac{\partial u}{\partial\eta}=\frac{1}{2}\frac{\partial u}{\partial\eta^{ \prime}}=0;\quad\frac{\partial u}{\partial\varphi}=\left(\frac{\partial}{ \partial\varphi^{\prime}}-\frac{1}{2}\frac{\partial}{\partial\eta^{\prime}} \right)u=\frac{\partial u}{\partial\varphi^{\prime}},\] since \(\eta^{\prime}\) is also the direction of the \(S^{1}\)-action. Hence the Laplacian decomposition formula (equation (5.11)) still holds under this new coordinate \((r,\eta^{\prime},\theta,\varphi^{\prime})\). Moreover, the 3-form (equation (4.45)) reads \[4\ d^{c}u\wedge dd^{c}u|_{S_{R}}\] \[= ru_{r}^{2}\left\{\frac{\partial_{\varphi^{\prime}}(u_{\varphi^{ \prime}}\cdot u_{r}^{-1})}{\sin\theta}+\sin\theta\cdot\partial_{\theta}(u_{ \theta}\cdot u_{r}^{-1})\right\}d\theta\wedge d\varphi^{\prime}\wedge d\eta^ {\prime}\] \[+ ru_{r}^{2}\left\{\cos\theta\cdot(u_{\theta}\cdot u_{r}^{-1})+ \frac{r}{2}\sin\theta\right\}d\theta\wedge d\varphi^{\prime}\wedge d\eta^{ \prime}.\] After taking the integration on \(S_{R}\), we can still perform the integration by parts as \[4\int_{S_{R}}d^{c}u\wedge dd^{c}u\] \[= 4\pi\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})\left\{\ \frac{u_{,\varphi^{\prime}\varphi^{\prime}}}{\sin\theta}+\partial_{\theta}( \sin\theta u_{\theta})\right\}d\theta d\varphi^{\prime}\] \[+ \pi\int_{0}^{2\pi}\int_{0}^{\pi}(ru_{r})^{2}\sin\theta d\theta d \varphi^{\prime}\] \[= 4\pi\int_{S^{2}}(ru_{r}\cdot\Delta_{\Theta}u)d\sigma_{2}+\pi\int _{S^{2}}(ru_{r})^{2}d\sigma_{2}.\] Again, this is exactly the decomposition formula for the Monge-Ampere mass (cf. equation (4.42)) of \(u\). ### Radial symmetry If \(u\in\mathcal{F}^{\infty}(B_{1})\) is further radially symmetric, then Proposition (2.5) and Theorem (4.4) say \[\frac{1}{\pi^{2}}\int_{B_{r}}(dd^{c}u)^{2}=(\dot{u}_{t})^{2}, \tag{8.8}\] for \(r=e^{t}\). Then the following relation is apparent after taking the limit as \(t\to-\infty\) \[\tau_{u}(0)=[\nu_{u}(0)]^{2}. \tag{8.9}\] In general, if \(u\) is in \(\mathcal{F}(B_{1})\) and radially symmetric, then we claim that equation (8.9) also holds. In fact, the function \(u\) is again a non-decreasing convex function of \(t\). Moreover, we can find a sequence of smooth radially symmetric plurisubharmonic functions \(u_{j}\) decreasing to \(u\) (see Appendix, [71]). For each \(u_{j}\), equation (8.8) holds. 
Then we can infer the following inequalities from equations (2.4) and (2.5)

\[\mathit{MA}(u)(\overline{B}_{r})\geq\pi^{2}\limsup_{j\to+\infty}(\dot{u}_{j,t} )^{2}, \tag{8.10}\]

and

\[\mathit{MA}(u)(B_{r})\leq\pi^{2}\liminf_{j\to+\infty}(\dot{u}_{j,t})^{2}. \tag{8.11}\]

As a convex function of \(t\), \(u_{t}\) has a left derivative \(\partial_{t}^{-}u_{t}\) and a right derivative \(\partial_{t}^{+}u_{t}\) at every \(t\), and they coincide with the standard derivative \(\dot{u}_{t}\) for almost every \(t\in(-\infty,0)\). Moreover, we have

\[\partial_{t}^{-}u_{t}\leq\partial_{t}^{+}u_{t}\leq\partial_{t}^{-}u_{t+ \varepsilon}\leq\partial_{t}^{-}u_{t+1}, \tag{8.12}\]

where \(\varepsilon\in(0,1)\) and \(t+\varepsilon\) is a point where the derivative \(\dot{u}_{t+\varepsilon}\) exists. Therefore, we infer from equations (5.1) and (8.12) that both limits are equal to the Lelong number:

\[\nu_{u}(0)=\lim_{t\to-\infty}\partial_{t}^{-}u_{t}=\lim_{t\to-\infty}\partial_ {t}^{+}u_{t}. \tag{8.13}\]

For the same reason, the function \(u_{j,t}\) is convex and non-decreasing in \(t\), and it decreases to \(u_{t}\) pointwise as \(j\to+\infty\). Then it also follows from the properties of convex functions that

\[\limsup_{j\to+\infty}\dot{u}_{j,t}\geq\partial_{t}^{-}u_{t},\ \ \text{and}\ \ \liminf_{j\to+\infty}\dot{u}_{j,t}\leq\partial_{t}^{+}u_{t}. \tag{8.14}\]

Combining inequalities (8.10), (8.11) and (8.14), we obtain the estimate

\[\mathit{MA}(u)(B_{r})\leq\pi^{2}(\partial_{t}^{+}u_{t})^{2},\quad\text{and} \quad\mathit{MA}(u)(\overline{B}_{r})\geq\pi^{2}(\partial_{t}^{-}u_{t})^{2}. \tag{8.15}\]

However, it is easy to see the control

\[\mathit{MA}(u)(B_{r})\leq\mathit{MA}(u)(\overline{B}_{r})\leq\mathit{MA}(u)(B_{2r}),\]

and we conclude

\[\tau_{u}(0)=\frac{1}{\pi^{2}}\lim_{r\to 0}\mathit{MA}(u)(B_{r})=\frac{1}{ \pi^{2}}\lim_{r\to 0}\mathit{MA}(u)(\overline{B}_{r}). \tag{8.16}\]

Therefore, it follows from equations (8.15) and (8.16) that

\[\lim_{t\to-\infty}(\partial_{t}^{-}u_{t})^{2}\leq\tau_{u}(0)\leq\lim_{t\to- \infty}(\partial_{t}^{+}u_{t})^{2}, \tag{8.17}\]

and our claim follows from equation (8.13). In fact, the above argument works for any radially symmetric plurisubharmonic function \(u\) with isolated singularity in \(\mathbb{C}^{n}\) for all \(n\geq 1\), and then we have proved the equality

\[\tau_{u}(0)=[\nu_{u}(0)]^{n}, \tag{8.18}\]

in any dimension \(n\), cf. Proposition (A.1), [23].
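As a simple illustration of equation (8.18) (our own example, not taken from the text), take \(u(z)=c\log|z|\) in \(\mathbb{C}^{n}\) with \(c>0\). This function is radially symmetric and plurisubharmonic with an isolated singularity at the origin, and \(u_{t}=ct\), so \(\partial_{t}^{-}u_{t}=\partial_{t}^{+}u_{t}=c\) for every \(t\). Hence

\[\nu_{u}(0)=\lim_{t\to-\infty}\dot{u}_{t}=c,\qquad\tau_{u}(0)=c^{n}=[\nu_{u}(0)]^{n},\]

which agrees with the fact that \((dd^{c}\,c\log|z|)^{n}\) is a Dirac mass at the origin of weight proportional to \(c^{n}\).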
2308.03224
Quantifying the evolution of harmony and novelty in western classical music
Music is a complex socio-cultural construct, which fascinates researchers in diverse fields, as well as the general public. Understanding the historical development of music may help us understand perception and cognition, while also yielding insight into the processes of cultural transmission, creativity, and innovation. Here, we present a study of musical features related to harmony, and we document how they evolved over 400 years in western classical music. We developed a variant of the center of effect algorithm to call the most likely key for a given set of notes, to represent a musical piece as a sequence of local keys computed measure by measure. We develop measures to quantify key uncertainty, and diversity and novelty in key transitions. We provide specific examples to demonstrate the features represented by these concepts, and we argue how they are related to harmonic complexity and can be used to study the evolution of harmony. We confirm several observations and trends previously reported by musicologists and scientists, with some discrepancies during the Classical period. We report a decline in innovation in harmonic transitions in the early classical period followed by a steep increase in the late classical; and we give an explanation for this finding that is consistent with accounts by music theorists. Finally, we discuss the limitations of this approach for cross-cultural studies and the need for more expressive but still tractable representations of musical scores, as well as a large and reliable musical corpus, for future study.
Alfredo González-Espinoza, Joshua B. Plotkin
2023-08-06T23:00:34Z
http://arxiv.org/abs/2308.03224v1
# Quantifying the evolution of harmony and novelty in western classical music

###### Abstract

Music is a complex socio-cultural construct, which fascinates researchers in diverse fields, as well as the general public. Understanding the historical development of music may help us understand perception and cognition, while also yielding insight into the processes of cultural transmission, creativity, and innovation. Here, we present a study of musical features related to harmony, and we document how they evolved over 400 years in western classical music. We developed a variant of the center of effect algorithm to call the most likely key for a given set of notes, to represent a musical piece as a sequence of local keys computed measure by measure. We develop measures to quantify key uncertainty, and diversity and novelty in key transitions. We provide specific examples to demonstrate the features represented by these concepts, and we argue how they are related to harmonic complexity and can be used to study the evolution of harmony. We confirm several observations and trends previously reported by musicologists and scientists, with some discrepancies during the Classical period. We report a decline in innovation in harmonic transitions in the early classical period followed by a steep increase in the late classical; and we give an explanation for this finding that is consistent with accounts by music theorists. Finally, we discuss the limitations of this approach for cross-cultural studies and the need for more expressive but still tractable representations of musical scores, as well as a large and reliable musical corpus, for future study.

Data Science Complex Systems Music Information Retrieval Music Evolution

## 1 Introduction

Music represents an important part of our lives, whether for listening or playing, as a hobby or profession. From early societies until today, music has been part of everyday life. For this reason, music itself has been a subject of intense study in fields beyond musicology, including cognitive science, sociology, and even history. A wide swath of academics see the value of exploring music through an interdisciplinary lens [1, 2]. Studies by musicologists on music evolution have provided informative insight and identified trends in musical style. While there is recent work in quantitative and statistical analysis of music, most studies have been qualitative. In recent years, the development of technology and digital formats has made access to music data easier for researchers from different fields, allowing them to address questions ranging from social interactions and cultural evolution [3, 4] to creativity or innovation [5, 6, 7]. However, even with the access to digital formats, there are still several challenges unique to music that set it apart from studies of other recorded cultural traits such as written language, names, or artistic designs. One of the big challenges in quantifying musical change is to accurately represent music with quantities that are both mathematically and computationally tractable, yet still musically salient even to non-experts. Three musical elements are often the focus of analysing musical scores: Melody, Harmony and Rhythm. Although there are many more dimensions to consider, including dynamics and tempo among many others, there is a trade-off between the detail and the dimensionality of the musical representation for purposes of systematic analysis.
For example, considering notes from a melody would not include the contextual information from the harmony in the chords. On the other hand, considering all notes in a score in formal analysis will include both melody and harmony but increase the alphabet size and dimensionality of any representation. While most studies have focused on melodic representations of musical pieces [8, 9, 10, 11, 12, 13, 14, 15, 16], the number of studies considering harmonic properties has increased considerably in recent years [7, 17, 18, 19, 20, 21, 22]. Harmony and tonality are musical concepts that have been exhaustively described by musicians, music theorists [23, 24, 25, 26], psychologists [27] and mathematicians [28, 29]. Multiple approaches to quantifying harmony have been implemented, such as defining a measure of consonance [30] for chords [21, 31] or individual notes played together (such as codewords) [7, 17, 32]. We focus our study on features related to harmony - to the concept of tonality in particular - while trying to preserve information relevant to the melody and rhythm. We use a mathematical and computational model for tonality based on human cognition, built around features of a score that have been previously proposed as tonal models, in an effort to preserve as much musical information as possible based on features that are both scientifically and musically understandable. We provide simple examples of our tonal representations for pieces and composers, which illustrate the concepts we define and quantify in this work, in the hope of making the study accessible to a broad audience.

## 2 Materials and methods

Reducing dimensionality while preserving information from a musical score can be achieved by defining a higher-order feature, other than the series of individual notes in the score itself. One possibility is to consider chords instead of individual notes, as most music theorists do in harmony analysis [21, 33]. This approach requires defining a mapping from sets of notes to chords - e.g. the notes C-E-G map to the C chord. But developing algorithms to re-label sets of notes as chords in a systematic fashion for a large corpus is difficult without a good definition or metric of harmony, and it does not account for note duration or rhythm. One alternative approach is to define a _local key_ given a set of notes in a contiguous region of the score, although this is still an open problem lacking a clear solution. Different algorithms have been developed aiming to approach the problem of local key identification [34, 35, 36]. We implement a variation of the _center of effect_ algorithm previously introduced by E. Chew [37], which has been used for music information retrieval problems [38, 39, 40] and has been shown to outperform other algorithms, compared to gold-standard hand annotations by musicologists [41].

### Key representation in the spiral array

We will represent a musical score as an ordered sequence of elements: \(\xi=\{e_{1},e_{2},...,e_{n}\}\), where each element corresponds to the local key of each measure (or fixed number of measures) in the piece (see figure 1). We use a geometrical representation where notes, chords and keys are represented as points \((x,y,z)\in\mathbb{R}^{3}\) in a helix-type configuration known as the spiral array [41]. Using the spiral representation and the center of effect as a model for tonality allows us not only to analyze a large number of musical scores but also to relate their mathematical features to concepts from information theory.
Derived from the tonnetz network of Riemann's theory [42, 43, 44], the spiral representation preserves the hierarchical structure of tonality, since key representations are generated by combining chords, and chord representations by combining notes, while representing all of these structures in the same space (\(\mathbb{R}^{3}\)). Notes in the spiral array are defined as

\[\vec{P}(k)=\begin{bmatrix}x_{k}\\ y_{k}\\ z_{k}\end{bmatrix}=\begin{bmatrix}r\sin\frac{k\pi}{2}\\ r\cos\frac{k\pi}{2}\\ kh\end{bmatrix}, \tag{1}\]

where \(r\) and \(h\) are parameters (see SI) and \(k\) is a number representing a specific note. The starting note \(k_{0}\) is chosen arbitrarily; for simplicity we define \(k_{0}\) to be the note C. Notes \(k\) and \(k+n\) are separated by \(n\) fifths, preserving the harmonic relationships between notes, chords and keys (e.g. with \(k_{0}=\) C, \(k+1=\) G, \(k+2=\) D and so on). Major and minor chords (\(\vec{C}_{M}\) and \(\vec{C}_{m}\)) are constructed from linear combinations of notes, and major and minor keys (\(\vec{T}_{M}\) and \(\vec{T}_{m}\)) from linear combinations of chords (see SI). The _center of effect_ is based on Von Neumann's center of gravity (mass), where a set of notes \(P=\{\vec{p}_{1},\vec{p}_{2},...,\vec{p}_{N}\}\) has an effective center of mass in the form of a linear combination of its elements

\[\vec{C}_{e}=\sum_{i=1}^{N}\omega_{i}\vec{p}_{i}. \tag{2}\]

Here the coefficients \(\omega_{i}\) are normalized (\(\sum\omega_{i}=1\)) and they represent the _importance_ of each note. These coefficients can be defined in multiple ways (see supplementary S1), but we choose to use the normalized duration of each note - so that our representation captures some aspects of rhythmic structure. The Center of Effect (CoE) key finding algorithm we develop uses the vector \(\vec{c}_{e}\) for the set of notes, and defines the most likely key as:

\[\operatorname*{arg\,min}_{T\in\mathbf{T}}||\vec{c}_{e}-\vec{T}||, \tag{3}\]

which corresponds to the key \(T\) whose Euclidean distance to the measure's center of effect is minimal. Here \(\mathbf{T}\) is the set of possible major and minor keys: \(\mathbf{T}=\{\mathbf{T}_{M}(k)\forall k\}\cup\{\mathbf{T}_{m}(k)\forall k\}\). The CoE method also assigns a likelihood to the most likely local key, based on the Euclidean distance above, as well as a likelihood to all alternative keys based on their distances (Eq. 5). The CoE algorithm has proven to be effective and to perform better than other methods when defining a key from a small amount of information [41], and has proven useful not only for identifying a key but also for other applications like pitch spelling [38] and passage segmentation, even with post-tonal music from composers like Messiaen [39]. By construction, the spiral representation is enharmonically inequivalent, meaning that it distinguishes between sharp and flat notes that equal temperament would consider to be the same (e.g. C# and Db). In most cases this can be an advantage of the model.
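To make the key-calling step concrete, the following minimal Python sketch implements the note spiral of Eq. 1 and the nearest-key search of Eq. 3. It is our own illustration, not the authors' code: the parameters \(r\), \(h\) and the linear-combination weights for chords and keys are given in the paper's SI, so the values below - including the weight triple (0.6, 0.2, 0.2) and the simplified all-minor-chord construction of minor keys - are assumptions made for the example.

```python
import numpy as np

R, H = 1.0, np.sqrt(2.0 / 15.0)   # illustrative values; the paper's parameters are in its SI

def note(k):
    """Position of note k on the spiral array (Eq. 1); k counts perfect fifths from C."""
    return np.array([R * np.sin(k * np.pi / 2), R * np.cos(k * np.pi / 2), k * H])

def major_chord(k, w=(0.6, 0.2, 0.2)):
    """Root k, fifth k+1, major third k+4; the weights are assumptions."""
    return w[0] * note(k) + w[1] * note(k + 1) + w[2] * note(k + 4)

def minor_chord(k, w=(0.6, 0.2, 0.2)):
    """Root k, fifth k+1, minor third k-3."""
    return w[0] * note(k) + w[1] * note(k + 1) + w[2] * note(k - 3)

def major_key(k, w=(0.6, 0.2, 0.2)):
    """Tonic, dominant (k+1) and subdominant (k-1) chords."""
    return w[0] * major_chord(k) + w[1] * major_chord(k + 1) + w[2] * major_chord(k - 1)

def minor_key(k, w=(0.6, 0.2, 0.2)):
    """Simplified minor-key center from minor chords; the paper's exact mix is in its SI."""
    return w[0] * minor_chord(k) + w[1] * minor_chord(k + 1) + w[2] * minor_chord(k - 1)

def center_of_effect(notes):
    """Eq. 2 with weights = normalized durations; `notes` is a list of (k, duration)."""
    total = sum(d for _, d in notes)
    return sum((d / total) * note(k) for k, d in notes)

def most_likely_key(notes, ks=range(-7, 8)):
    """Eq. 3: the candidate key closest (in Euclidean distance) to the center of effect."""
    ce = center_of_effect(notes)
    candidates = [(("major", k), major_key(k)) for k in ks] + \
                 [(("minor", k), minor_key(k)) for k in ks]
    return min(candidates, key=lambda c: np.linalg.norm(ce - c[1]))[0]

# A toy measure containing the notes C, E and G (k = 0, 4, 1 fifths from C):
print(most_likely_key([(0, 2.0), (4, 1.0), (1, 1.0)]))   # -> ('major', 0), i.e. C major
```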
We evaluated the accuracy of our implementation of the CoE (Center of Effect), including the various weights we introduce in its implementation, by comparing its output to a gold standard of local keys annotated by hand, measure by measure, for Beethoven's string quartets [33]. This annotated data is one of the most detailed sets of such information in the literature; it contains the key and chord, in functional harmony, for each measure in all of Beethoven's string quartets. Our CoE implementation matches the hand annotations for more than 68.43% of the measures in the corpus, and cases of mismatches are often musically plausible alternatives (see SI).

### Key diversity and uncertainty

We use two quantities to describe harmonic features in a musical piece: key diversity and key uncertainty. Key diversity represents how diverse the distribution of most-likely keys is, across measures in a score. For each piece \(\xi\) we compute the probability of each of its elements as its normalized frequency (counts) in the sequence:

\[p(e_{i})=\frac{f(e_{i})}{\sum_{e_{j}\in\xi}f(e_{j})}, \tag{4}\]

where the sum is over the elements in the sequence \(\xi\).

Figure 1: Example of determining the “local key” of each measure. The figure shows the first five measures from piano sonata K. 545 in C by W. A. Mozart, along with the most likely key (or two most likely keys) determined by the CoE algorithm.

The key diversity of a piece is related to how many different keys the piece contains, over measures, and how localized or spread out their probability distribution is. Key uncertainty is a quantity that tells us how _ambiguous_ the tonal reference is: the more ambiguous it is, the less certain we are about calling the key in each measure. The center of effect algorithm returns a list of possible keys \(\{T_{1},T_{2},T_{3},...,T_{m}\}\) with their associated distances \(\{d(T_{1}),d(T_{2}),...,d(T_{m})\}\) to the center that represents the notes (\(\vec{C_{e}}\)). We define \(p(T_{i})\) as the probability for a set of notes to be in the key \(T_{i}\), as a function of the distance \(d(T_{i})\):

\[p(T_{i})=\frac{e^{-\lambda d(T_{i})}}{\sum_{j}e^{-\lambda d(T_{j})}}, \tag{5}\]

where the parameter \(\lambda\) is defined by setting \(P(T_{i})>0.98\) for the case when \(d(T_{i})\approx 0\). The closer the key \(T_{i}\) is to \(\vec{C_{e}}\), the more likely it is that the set of notes represented by \(\vec{C_{e}}\) is in that key. If \(\vec{C_{e}}\) is similarly close to more than one key, their probabilities are similar to that of the most likely key, and key uncertainty is high. We use Shannon's definition of entropy, also known as the expected information content of a random variable \(X\), defined as \(H(X)=-\sum_{x\in X}p(x)\log(p(x))\), to compute both key diversity (across measures) and key uncertainty (within each measure) with their respective probability distributions \(p(e)\) and \(p(T)\). In the case of key uncertainty, we perform one computation for each measure of the piece and take the average uncertainty over all measures in the piece:

\[\text{Key Uncertainty}=-\frac{1}{M}\sum_{m=1}^{M}\sum_{T\in\mathbf{T}}p(T)\log(p(T )), \tag{6}\]

where \(M\) is the total number of measures in the piece.
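A minimal sketch of how Eqs. 4-6 can be evaluated is given below, in the same illustrative spirit (this is not the authors' code). The calibration of \(\lambda\) - fixed in the paper by requiring \(P(T_{i})>0.98\) when \(d(T_{i})\approx 0\) - is taken as an input here.

```python
import numpy as np
from collections import Counter

def shannon_entropy(p):
    """H(X) = -sum p log p, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def key_diversity(key_sequence):
    """Entropy of the empirical distribution of local keys over measures (Eq. 4)."""
    counts = Counter(key_sequence)
    total = sum(counts.values())
    return shannon_entropy([c / total for c in counts.values()])

def key_probabilities(distances, lam):
    """Eq. 5: exponential weights on the distances from the center of effect."""
    w = np.exp(-lam * np.asarray(distances, dtype=float))
    return w / w.sum()

def key_uncertainty(per_measure_distances, lam):
    """Eq. 6: average, over measures, of the entropy of the key distribution."""
    return float(np.mean([shannon_entropy(key_probabilities(d, lam))
                          for d in per_measure_distances]))

# A repetitive key sequence has low diversity:
print(key_diversity(["C", "C", "C", "G", "C"]))
# A measure nearly equidistant from two candidate keys has high uncertainty:
print(key_uncertainty([[0.10, 0.11, 0.90]], lam=20.0))
```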
### Innovation in key transitions

Although notes in a musical score are known to have long-range dependencies [19, 45], several studies have used memory-one Markov chains to quantify musical scores [7, 21, 46], where the state \(s_{t+1}\) in a musical sequence depends only on the previous state \(s_{t}\) (or on a set of previous states \(\{s_{t},s_{t-1},...\}\)). Markov chains over higher-order representations such as _local keys_, viewed as harmony transitions, are perhaps more useful than Markov chains on individual notes, which lack contextual information [16]. A piece \(\xi=\{e_{1},e_{2},...,e_{n}\}\) can be seen as the outcome of a Markov chain whose states are the elements (local keys) and whose transitions between elements are given by a stochastic matrix \(\mathbf{P_{\xi}}\) with initial state \(e_{1}\in\xi\). The value of the entry \(P_{\xi ij}\) corresponds to the probability of the transition between the states \(e_{i}\to e_{j}\). Given an empirical score, represented as a series of local keys, the Markov transition matrix is computed via maximum likelihood estimation:

\[P_{\xi ij}=P_{\xi}(e_{i}\to e_{j})=P_{\xi}(e_{j}|e_{i})=\frac{f_{\xi}(e_{i},e _{j})}{\sum_{x\in V_{\xi}}f_{\xi}(e_{i},x)}, \tag{7}\]

where \(f_{\xi}(e_{i},e_{j})\) is the frequency (counts) of the bigram \((e_{i},e_{j})\) in the piece \(\xi\), and the term \(\sum_{x\in V_{\xi}}f_{\xi}(e_{i},x)\) is equivalent to the frequency of the element \(e_{i}\), since the sum runs over all the elements \(x\in V_{\xi}\), where \(V_{\xi}\) is the set of unique elements (vocabulary) in \(\xi\). One of our goals is to evaluate how _innovative_ a given piece \(\xi\) is when compared with a set of previous historical works. This comparison is made by assuming that a model representing all previous works \(\Omega\) can be used to reproduce \(\xi\). We quantify how good the model \(\mathbf{P_{\Omega}}\) is at reproducing \(\mathbf{P_{\xi}}\). We can directly relate this quantity to the Kullback-Leibler divergence, which measures the amount of extra information needed to code a distribution \(P\) from a given distribution \(Q\). For stochastic matrices the \(KL\)-divergence rate is defined by (see SI):

\[D_{KL_{R}}(\mathbf{P_{\xi}}||\mathbf{P_{\Omega}})=\sum_{e_{i}\in V_{\xi}}\sum_{e_{j} \in V_{\xi}}\mu_{i}P_{\xi}(e_{j}|e_{i})\cdot\log\left(\frac{P_{\xi}(e_{j}|e_{ i})}{P_{\Omega}(e_{j}|e_{i})}\right), \tag{8}\]

where \(\mu_{i}\) corresponds to the asymptotic distribution of elements in \(\xi\). Here, \(D_{KL_{R}}(\mathbf{P_{\xi}}||\mathbf{P_{\Omega}})\) is the amount of information per step that the model \(\mathbf{P_{\Omega}}\) needs to reproduce \(\mathbf{P_{\xi}}\). This quantity differs from the information content used in previous works to quantify novelty [7], since it considers the asymptotic distribution of the elements \(\mu_{i}\) (the more repetitive, the less novel) and does not depend on the length of the sequence (see fig. S6). To quantify innovation for a given piece from year \(y_{i}\), we construct a transition matrix by adding all key transitions from pieces in the previous years

\[\mathbf{P_{\Omega_{i}}}=\hat{\mathbf{F}}_{\Omega_{i}}\quad\text{with}\quad\mathbf{F}_{ \Omega_{i}}=\sum_{y_{j}<y_{i}}\mathbf{F}_{\xi_{j}} \tag{9}\]

where \(\hat{\mathbf{F}}_{\Omega_{i}}\) is the normalized frequency matrix, or stochastic matrix, for all the pieces in the previous years. We then compute the Kullback-Leibler divergence as in Eq. 8 and refer to this value as the novelty value of a piece, given all preceding scores.
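The innovation measure of Eqs. 7-9 can be sketched as follows; again this is our own illustrative code, not the authors'. Two practical choices here are assumptions on our part: transitions unseen in the background model are smoothed with a small \(\epsilon\) (otherwise Eq. 8 can diverge), \(\mu\) is taken as the leading left eigenvector of \(\mathbf{P_{\xi}}\) (which presumes a single recurrent class), and the background counts are accumulated piece by piece, whereas the paper pools all pieces from strictly earlier years.

```python
import numpy as np

def transition_counts(seq, states):
    """Bigram counts f(e_i, e_j) over a sequence of local keys (cf. Eq. 7)."""
    idx = {s: i for i, s in enumerate(states)}
    F = np.zeros((len(states), len(states)))
    for a, b in zip(seq[:-1], seq[1:]):
        F[idx[a], idx[b]] += 1.0
    return F

def row_normalize(F):
    """Maximum-likelihood stochastic matrix (Eq. 7); zero rows are left at zero."""
    rows = F.sum(axis=1, keepdims=True)
    return F / np.where(rows == 0, 1.0, rows)

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1 (assumes one recurrent class)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return v / v.sum()

def kl_rate(P_piece, P_bg, mu, eps=1e-12):
    """Eq. 8: per-step extra information the background model needs for the piece."""
    ratio = np.log((P_piece + eps) / (P_bg + eps))
    return float(np.sum(mu[:, None] * P_piece * ratio))

def novelty_values(pieces_by_year, states):
    """Novelty of each piece against the cumulative background of earlier pieces (Eq. 9)."""
    F_bg = np.zeros((len(states), len(states)))
    out = []
    for year, seq in sorted(pieces_by_year, key=lambda p: p[0]):
        F = transition_counts(seq, states)
        P = row_normalize(F)
        if F_bg.sum() > 0:
            out.append((year, kl_rate(P, row_normalize(F_bg), stationary_distribution(P))))
        F_bg += F
    return out
```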
### Corpus

We use a set of MIDI files from the Kunstderfugue [47] website. This dataset has been used in previous studies [7, 10, 19, 32]; it consists of a compilation of \(\sim 18{,}000\) MIDI files from more than 79 western composers, from the years 1200-1950. We retained only those pieces to which we could assign a year of composition (see SI), reducing the dataset to 5,374 MIDI files. We processed the pieces using a Julia script [48] to extract the information needed to compute the center of effect - such as measure numbers, and the pitch and duration of all notes (see supplementary material for details).

## 3 Results

To test the performance of our implementation of the center of effect model, we compared the results to hand annotations of local keys performed by musicologists, obtaining an accuracy of \(>68.43\%\) across all measures in Beethoven's string quartets [33]. Some cases of mismatches against this gold standard are not necessarily incorrect examples of calling the local key (see supplementary S4). Overall, the ability to match hand annotations this often seems to validate our methodology, especially considering that professional musicians and musicologists may disagree about the key of a measure, even within the well-structured Beethoven quartets. It is somewhat useful, in addition, that the CoE algorithm provides several alternatives, with associated likelihoods, for each measure - because the entropy of this distribution provides a measure of key uncertainty, which is itself an interesting feature in music.

### Key diversity and uncertainty

A time series showing key diversity (within each piece) is shown in figure 2. We observe a trend towards increasing diversity of keys within a piece, over time, which is consistent with results from previous studies that show how many elements in music tend to become more diverse over time [32]. In our case, key diversity does not follow a constantly increasing trend. Rather, the change in diversity differs across specific periods of time associated roughly with the different periods of classical music defined by musicologists (Early/Mid Baroque, Late Baroque, Classical, Romantic, and Modern). The increase in key diversity is most evident during the Classical (1750-1820) and Modern (1890-1960) periods. This result for the Classical period agrees with the historical analysis of the style of music [49]. Indeed, the early Classical period saw the development of new instruments that increased the size of orchestras, allowing composers to explore more tonal modulation and musical forms.

Figure 2: **Key diversity.** Time series for key diversity across 5,374 musical scores. The solid line represents the median of the distribution of values within a time bin, while the first and third quartiles are represented by the shaded area. All distributions are computed for values that lie within a sliding window of 20 years. Historical periods are denoted by dashed lines, where the letters stand for: (A) Early/Mid Baroque, (B) Late Baroque, (C) Classical, (D) Romantic and (E) Modern.

A similar trend is seen in the Modern period, where there was more experimentation with modulation and more complex conceptualizations of tonality, or even the rejection of tonality altogether [50]. Figure 3 shows the time series of key uncertainty over the 5,374 scores in our dataset. The overall trend shows an increase in the uncertainty associated with assigning local keys to measures by the CoE algorithm, over the course of 400 years. This increasing trend in key uncertainty may reflect changing concepts of tonality. Key uncertainty can be interpreted as how ambiguous a tonal center (key) is, and so ambiguity may be related to harmonic complexity or immediate tonal tension [51]. Secular changes in key uncertainty are largely concordant with historical features of the Baroque and Classical periods [52].
For instance, in the Baroque period, music was not only polyphonic but also contrapuntal, promoting a higher density of different notes and a greater propensity for dissonance. In the early Classical period the texture of the sound is clearer than in the Baroque period [52], giving more emphasis to order and hierarchy, resulting in a homophonic texture with a clear melody above a chordal accompaniment that yields a less complex representation of tonality. While key diversity and key uncertainty represent distinct aspects of a musical piece, the 400-year trend of overall increase in value holds for both, with the Modern period showing the highest uncertainty and diversity. The relationship between diversity and uncertainty is shown in figure 4, with a Pearson correlation value of \(\rho=0.38\). In the same figure some of the extreme scores are selected and highlighted in circles; these particular pieces are listed in table 1. Simply listening to the pieces highlighted in Figure 4 - which are selected for having extreme values of key diversity and/or key uncertainty - helps provide an intuitive understanding of what these two measures quantify. And so we include audio files for each piece in table 1, representing the low uncertainty - low diversity, low uncertainty - high diversity, high uncertainty - low diversity and high uncertainty - high diversity extremes across the corpus. For example, the pieces that are most uncertain and most diverse in tonality are typically from modern composers (Scriabin and Stravinsky), while the other extreme (least uncertain and least diverse) corresponds to a simple religious hymn (The Great Physician) that was written to be easy to sing and to remember.

### Innovation in key transitions

We compute a novelty value for each piece using the Kullback-Leibler divergence rate from equation 8, described in more detail in the materials and methods section. The novelty value is computed with two alternative key representations: the original key or the transposed key. The original key representation refers to the actual name of the key in each measure (C, E, D, etc.), while the transposed representation corresponds to the key relative to a given reference - in our case the reference is the tonic, or main key, of the piece. We use these two different key representations to control for the fact that the tonic keys of pieces underwent substantial change over the course of the dataset. The transposed novelty measure attempts to control for this secular variation in tonic keys when computing novelty.

Figure 3: **Key uncertainty.** Time series for the key uncertainty value; the solid line represents the median of the distribution of values while the first and third quartiles are represented by the shaded area. Regions between dashed lines are the historical periods presented in figure 2.

For the representation of novelty after transposition, we use roman numerals, as in harmony analysis (see S3 for details), mapping every key sequence to a sequence of roman numerals in which I denotes the global tonic key of the piece. This mapping allows us not only to study sequences of roman numerals (functional harmony), but also to transpose all the pieces and analyze them within the same reference, avoiding transitions that have the same harmonic relation being counted as distinct. Results for innovation values in both original and transposed representations are shown in figure 5. Lower innovation values overall for transposed pieces are expected: e.g.
in the case of the transition E \(\rightarrow\) A in a piece with tonic key E, the transition becomes I \(\rightarrow\) IV, which is the same as in any other major piece where the tonic transitions to the subdominant, making the piece in E less novel than it would be scored in the original key representation. Figure 5 shows a notable decline in the novelty or innovation of harmonic transitions in the Classical period. Although perhaps surprising, this feature has been previously reported, based on a smaller set of pieces and with a representation that treats chords with the same harmonic function as different [7]. Our results, by contrast, show that the decline in innovation for harmonic transitions during the Classical period is even more evident after transposing all pieces to the same key - that is, focusing on harmonic transitions per se, after accounting for any possible changes in the tonic key of the piece. Surprisingly, one of the least innovative composers according to this measure is W. A. Mozart. But it is well known that Mozart's style, like Haydn's, is an archetype of the classical style, where clarity, balance and transparency are the characteristics of his work [53, 54]. It was not until the late period of his active years that he exploited chromatic harmony; one example is his popular String Quartet in C major, also known as the "Dissonance" quartet.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Piece & Year & Name & Composer \\ \hline 1 & 1869 & The Great Physician & William Hunter / Hymn \\ \hline 2 & 1868 & 30 Progressive Etudes - Etude \#1 & Joachim Raff \\ \hline 3 & 1720 & Two part inventions - Invention \#5 & J.S. Bach \\ \hline 4 & 1911 & 3 Etudes Op 65. - Etude \#1 & Scriabin \\ \hline 5 & 1910 & Capture of the Firebird (excerpt) & Igor Stravinsky \\ \hline 6 & 1880 & Waltzes 54 - \#6 in F Major & Antonin Dvorak \\ \hline 7 & 1828 & Sonata 959 Mov 3 & Franz Schubert \\ \hline \end{tabular} \end{table} Table 1: **Compositions List.** List of the seven pieces highlighted in figure 4; the number in the first column corresponds to the label in figure 4.

Figure 4: **Key Diversity vs Key Uncertainty.** Diversity and uncertainty values for individual pieces classified by their historical period. The ellipses are constructed by computing the covariance error with 95% confidence.

Other observations can be made from the results in figure 5, such as the change in novelty between the Classical and Romantic periods, where Beethoven (number 7, indicated in the figure) played an important role in the movement that consolidated with composers including Schubert and Liszt (numbers 8 and 9, respectively). The increasing trend in harmonic-transition innovation after the second half of the Classical period is a tipping point that led to subsequent continued increases in innovation in the Romantic and Modern periods, which are known for their developments in musical form and harmonic representation. We also computed novelty values separately for pieces in major and minor keys (see fig. 6). We found that although the trends in both major and minor tonalities are similar, they differ considerably in the functional harmony (transposed) representation, with a clear separation in the Classical period. This could indicate that the exploration of modulation occurred more often in minor keys.
This result may reflect the fact that minor tonalities have three different scale patterns, compared with major tonalities, which have only one; and these three patterns provide a richer space of harmonic modulation in a minor key compared to a major key. A list of the five compositions with the lowest novelty value for each period of time is presented in table 2. The elements of the key sequences in these pieces are very repetitive and correspond to the most well-known functions in music theory, such as the tonic (I), the dominant (V) and the subdominant (IV).

Figure 5: **Innovation values.** Novelty or innovation values for the original key representation (yellow) and the transposed or functional harmony representation (blue). Values are computed in the same fashion as in figure 2 with an overlapping window of 20 years.

Figure 6: **a) Original keys.** Novelty values for scores represented in their original keys (without transposition), separating minor and major pieces. **b)** Novelty scores after transposition to a common key.
By contrast to most of the time-series methods used in these studies, we have focused our analysis by coarse-graining a musical score, but in a way that hopefully retains specifically musical information. That is, we represent a piece as a series of transitions between local keys, from measure to measure.Using our definition of local key we can attribute information-theoretic concepts to musical pieces, such as diversity and average uncertainty, based on an intrinsically musical representation of the piece. In particular, key uncertainty is a concept we have used to quantify the degree of uncertainty whe n a set of notes is assigned a key; and this uncertainty can also be related to how unpredictable is, a given set of notes, to know what note will be added next. Similar types of unpredictability has been quantified in [19], where it was shown an overall increasing trend for more unpredictability of notes over time, with Shostakovich being one of the most unpredictable composers in short time frames. Those findings agree with our results for secular trends in key uncertainty, which again finds Shostakovich amongst among composers with highest key uncertainty. These results are also consistent with the trend for the development of tonality, which it was not formally defined until 1722 in Jean Phillip Rameau's work _Treatise on harmony_[23]. The decline of uncertainty during the classical \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Year & Name & Composer & Novelty value & First 10 elements \\ \hline 1570 & O Sacrum Convivium & F. Guerrero & 0.2137 & I-IV-I-ii-I-IV-I-IV-I-V-IV \\ \hline 1696 & String Sonata II Mov 4 & D. Buxtehude & 0.1969 & I-I-IV-I-I-I-I-IV-I \\ \hline 1749 & Royal Fireworks Suite Mov 4 & G.F. Handel & 0.1550 & I-I-I-I-V-V-V-V-I-I \\ \hline 1798 & 7 Lander in D & Beethoven & 0.1670 & I-I-I-V-I-I-I-V-I-I \\ \hline 1867 & Les Mois Op. 74 Mov 4 & Alkan & 0.1888 & I-I-I-I-I-I-I-I-I-I-I \\ \hline \end{tabular} \end{table} Table 2: **Compositions with lowest novelty values.** List of compositions with the lowest novelty values on each period of time, and their first 10 elements. Full sequences are included in supplementary material (S6) \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Year & Name & Composer & Novelty value & First 10 elements \\ \hline 1570 & Madrigals Book 4 \#19 & C. Gesualdo & 1.72 & I-IV-I-vi-V-IV-IV-ii-I \\ \hline 1696 & Mass for the parishes Mov 14 & F. Couperin & 1.8597 & I-i-i-i-v-IV-I-i-I-IV \\ \hline 1718 & WTC I Prelude 14 in F\# minor & J.S. Bach & 2.1341 & i-i-\#VI/bVII-i-\#IV/bVII-\#II/bIII-\#II/bIII-v-v-v \\ \hline 1817 & Sonata Op. 147 Mov 1 & F. Schubert & 1.6387 & I-I-I-V-I-I-I-I-V \\ \hline 1861 & Esquisses Op. 63 No. 7 & Alkan & 1.58 & I-I-I-I-v-ii/bii-v-ii/bii-III \\ \hline \end{tabular} \end{table} Table 3: **Compositions with highest novelty values.** List of compositions with the highest novelty values on each period of time and their first 10 elements. Full sequences are included in supplementary material (S6) period reflections a period of time when this convention became adopted by many composers, changing the texture of the polyphonic and contrapunctual forms of the Baroque period to more clear and homophonic forms defined by a melody and subordinate chordal accompaniments. 
The subsequent increase of uncertainty in the Romantic and Modern periods reflects the evolution of the concept of tonality in the late Romantic and early modern periods with composers like Bruckner, Tchaikovsky, Mahler, Scriabin, Wagner and Strauss, with the last two being known for "furthered the musical language of Opera taking tonality itself to breaking point". [56]. Schoenberg described this period of tonality as "fluctuating" or "suspended", implying that it was not decided or it was ambiguous [24]. Finally, in the Modern period harmonic progressions become more unpredictable making tonality even more ambiguous, as Meyer describes "the increased use of the ambiguous chords, the less probable harmonic progressions, and the more unusual melodic and rhythmic inflections... the felt probabilities of the style system had become obscure; at worst, they were approaching a uniformity which provided few guides for either composition or listening" [57]. Key diversity within a piece also has a close relationship with tonality, as it is closely related to harmonic function (functional harmony). In this case the trend from the Classical, Romantic and Modern periods largely coincide with trends in uncertainty. But one important differences between uncertainty and diversity is that the increase in diversity is more evident in the late Classical period, indicating increasing modulations within a piece. This result agrees with the historical fact that in the late Classical period composers started to explore different harmonic transitions and innovations, as the industrial revolution contributed to the expansion and diversification of orchestras giving more material to composers to explore different styles and sounds[58, 59, 60]. In the scientific literature there are few studies are directly related to key uncertainty and diversity whereas more studies explore the distribution of notes, chords or other tokens in analogy to language. There is a substantial body of work establishing empirical laws like Zipf's and Heap's laws in such musical features. One such study considers the concept of vocabulary richness, considering an element (or token) as the set of different notes played during a beat; and the authors find an increasing linear trend in vocabulary richness over time [32]. Our study shows the Classical period as a tipping point for novelty in harmonic transitions. While this does not means that there was no innovation prior to this period, it is a quantitative account of the evolution of harmony discussed in qualitative terms by many musical historians [57, 58, 59, 60]. The evident increase in key diversity or modulation is a component that plays an important role in our computation of novelty. However, key diversity is not the only factor that contributes to novelty, as recurrent transitions in a piece reduce its novelty due under our metric. This means that even if a piece has novel transitions, if the piece is also highly repetitive it will not be score as novel as if it had fewer repetitions. This effect can be seen in the value for the entropy rate for each piece that appears to not have a particular trend over time (see supplementary figure S3), meaning that some music has a functional repetitive component. Indeed, Schoenberg describes music as a perfect balance between repetition and surprise[24]. Our is not the first study trying to quantify novelty in musical scores. 
In [7] the authors address a similar question, although they represent a piece as a sequence of chords without making any attempt to determine the local key of a measure. [7] was also constrained to a considerably smaller corpus (\(<1000\) pieces) and fewer composers (about 20). (And, notably, sometimes all of the pieces by a given composer were assigned the same year in [7].) Although our results share some similarities with [7], there are notable differences. For instance, Clementi is one of the least novel composers according to [7], while Mozart is the least novel in our study. Both approaches show a decline of novelty during the Classical period, although it is not clear whether the explanation is the same, because the lack of transposition and of octave reduction in [7] produces a fundamentally different novelty measure, which is influenced by variation in the tonic of pieces. We believe that transpositions are important for considering features of harmonic relevance: a chord played on a higher octave is not more novel than on a lower one, in our analysis (unlike [7]); and the same piece played in a different key is not considered novel according to our measure, unlike [7].

Although we have been able to quantify novelty in harmonic transitions, and identify historical trends, much remains for a systematic understanding of the underlying process of innovation in music. The mechanisms that allow composers or musicians to create new transitions could involve microscopic details; for example, to define a new local key (center of effect), the linear combination of its elements can be modified by adding another note or simply modifying the coefficients, and this change could end up in a different key. This hypothesis shares some similarities with ideas about the emergence of innovations by correlated novelties, or the "adjacent possible" [61, 62]. In those models, a new state (a local key in our case) is discovered by combining previous ideas that are _semantically_ close enough. Future work to identify actual processes of innovation would be valuable, where the elements are not only local keys but also combinations of notes, rhythmic patterns, melodic and harmonic motifs, and dynamics.

## Acknowledgements

We thank the Plotkin lab for engaging discussion and feedback, and Vladimir Viro for providing fruitful feedback.

### Author's contribution

A.G-E. and J.B.P. designed the study and developed the methodology. A.G.-E. processed and curated the data. A.G.-E. implemented the programming code, performed the calculations and compiled the results. A.G-E. and J.B.P. analyzed and interpreted the results. A.G.-E. and J.B.P. wrote and revised the manuscript.

### Competing interests

The authors have declared no competing interests.
2304.04846
Helix++: A platform for efficiently securing software
The open-source Helix++ project improves the security posture of computing platforms by applying cutting-edge cybersecurity techniques to diversify and harden software automatically. A distinguishing feature of Helix++ is that it does not require source code or build artifacts; it operates directly on software in binary form--even stripped executables and libraries. This feature is key as rebuilding applications from source is a time-consuming and often frustrating process. Diversification breaks the software monoculture and makes attacks harder to execute as information needed for a successful attack will have changed unpredictably. Diversification also forces attackers to customize an attack for each target instead of attackers crafting an exploit that works reliably on all similarly configured targets. Hardening directly targets key attack classes. The combination of diversity and hardening provides defense-in-depth, as well as a moving target defense, to secure the Nation's cyber infrastructure.
Jack W. Davidson, Jason D. Hiser, Anh Nguyen-Tuong
2023-04-10T20:06:49Z
http://arxiv.org/abs/2304.04846v1
# Helix++: A platform for efficiently securing software ###### Abstract The open-source Helix++ project improves the security posture of computing platforms by applying cutting-edge cybersecurity techniques to diversify and harden software automatically. A distinguishing feature of Helix++ is that it does not require source code or build artifacts; it operates directly on software in binary form--even stripped executables and libraries. This feature is key as rebuilding applications from source is a time-consuming and often frustrating process. Diversification breaks the software monoculture and makes attacks harder to execute as information needed for a successful attack will have changed unpredictably. Diversification also forces attackers to customize an attack for each target instead of attackers crafting an exploit that works reliably on all similarly configured targets. Hardening directly targets key attack classes. The combination of diversity and hardening provides defense-in-depth, as well as a moving target defense, to secure the Nation's cyber infrastructure. ## Introduction Cybersecurity in modern software remains a critical problem that must be addressed for the safety of our personal medical data and devices, critical infrastructure, and banking systems. Many advances, including code review techniques, security scanners, and fuzzers, have helped improve the quality of deployed software. (1-10) However, as new bugs inevitably surface, these techniques still require software to be patched and re-deployed. This process can take weeks, months or years! Redeployment of software is complicated by issues such as compatibility problems which can prevent or delay updates. Sometimes security patches are "backported" to previous versions of software to provide security-enhanced software that is more compatible with existing systems. Further, even if security updates are ready to be deployed with an update manager, users may delay this process for months because it requires downtime (possibly in the form of rebooting a machine to install updates). Is your Windows/Linux/MacOS machine right now asking to be rebooted to install updates? End-point detection and prevention techniques also have drawbacks. (11, 12) They often only detect known threats, and once an attacker knows how to evade the technique, they are of little use. Further, they are highly prone to false positives. (13, 14) Ideally, security could be continuously applied to deployed software. This approach avoids the cost of re-deployment by applying security features directly to the deployed software. Also, it avoids the possibility of false positives/negatives during end-point detection, since the security is applied to software vetted by system administrators. Lastly, it allows a pool of randomized software variants to be deployed as a moving-target defense, thwarting common "script kiddie" style attacks, where cyber attacks are automated by exploiting the software monoculture and embedding deployed software details directly into the attack scripts. Docker containers provide an interesting opportunity to enact this kind of security. Software systems are designed to be built and re-built upon existing software layers and can efficiently and quickly be re-launched for a user when they need access to software. Helix++ is our proposed answer to extend Docker containers with retrofitted security.
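As a concrete illustration of the moving-target idea sketched above, a deployment front end can simply draw a random image from a pool of functionally equivalent, pre-hardened variants. The snippet below is a minimal sketch of such a selector; the registry name, tag scheme, and pool size are hypothetical placeholders, not part of the actual Helix++ interface.

```python
import random
import subprocess

# Hypothetical pool of functionally equivalent, diversified images. Each
# tag is the same software rewritten with a different random seed, so any
# one of them can satisfy a user's request.
VARIANT_TAGS = [f"registry.example.org/python-ide:hardened-{i}" for i in range(8)]

def launch_session() -> str:
    """Pick a variant uniformly at random and start a container from it."""
    image = random.choice(VARIANT_TAGS)
    # Equivalent to `docker run -d <image>`; returns the new container id.
    result = subprocess.run(["docker", "run", "-d", image],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("started container", launch_session())
```

Because every request resolves to a randomly chosen variant, an attacker who fingerprints one deployment learns little about the next one.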
Helix++ leverages modern, robust, transparent, and efficient binary rewriting integrated into a software's Docker build process to create Docker images that can be transparently used by an end user. The image deployment model is extended to randomly select from a set of equivalent images to avoid deploying the same software to every user, breaking the software monoculture and increasing the attacker's workload. This paper describes the Helix++ vision, and our progress to date on enabling these possible benefits for wide-scale distribution. Our current preliminary case study with the University of Virginia ACCORD system indicates that key infrastructure can be protected at modest cost in terms of both analysis time and storage space. ## Helix++ Figure 1 shows the overall view of the Helix++ architecture. The key idea is to take a repository full of software ready to deploy (e.g., a Docker Registry), apply security hardening and diversity techniques to the software in that registry, and build a hardened registry full of functionally equivalent software that has been hardened and diversified. Each piece of software is in the hardened registry multiple times with a different random seed used for the pseudo-random diversification transforms. This feature allows each user request for software to get a diverse variant of the software. In theory, every user request could get a piece of software that has never been deployed before. However, in practice the registry may not be large enough or the transformer fast enough to maintain unique software for each request during periods of high demand. Our research goals are to understand request frequency distributions, computational cost for variant generation, and the system costs and benefits. Binary Rewriting.Helix++ is based on binary rewriting. Binary rewriting allows modification of the behavior of an executable file without access to its source code. This feature is particularly useful for security researchers who need to analyze and modify the behavior of a binary file. With binary rewriting, users can modify the code of an executable file by replacing or inserting instructions, changing function parameters, and manipulating data structures. This capability allows users to add security checks or patch vulnerabilities in software applications. Many binary rewriters have an API for building plugins. A plugin can typically transform a program, and plugins can be composed to combine functionality. For example, one plugin might transform the stack layout, while another plugin might rearrange the layout of global variables. Together, both stack and global variable locations can be randomized. Helix++ leverages the Zipr static binary rewriter, which is described further in the Helix++ State of Development section. Hardened Registry.A registry full of variants of security-hardened programs can offer several benefits to users and organizations. Firstly, it provides a centralized location for accessing a range of security-hardened programs, which reduces the time and effort needed to locate, download, and install secure software. This approach can be particularly beneficial for organizations that need to ensure the security of their systems and data, as it allows them to easily access and deploy secure software across their network. Another benefit of a registry full of security-hardened programs is that it can help promote security best practices and standards.
By providing users with access to secure software, it encourages the use of security-conscious practices, such as keeping software up-to-date. Furthermore, having a registry of security-hardened programs can help promote collaboration and knowledge sharing between security professionals, as they can contribute their own security transformations to Zipr, improving the security of the registry. This benefit can help to improve the overall security of the software ecosystem, benefiting all users who rely on these programs to keep their systems and data secure. A key feature of the registry is that variants can expire. With sufficient compute resources, it may be possible to expire a variant after it is deployed just one time. ## Helix++ State of Development Helix++ is being built from several stable industrial software components and the Zipr static binary rewriter. Binary Rewriting with Zipr.Helix++'s binary rewriting is based on Zipr. (15-20) Zipr's core functionality supports block-level instruction randomization (BILR), similar to Zhan et al. (21) Zipr achieves this functionality by doing deep binary analysis and building an IR. Zipr's IR includes every instruction in the program, the static data objects in the program (globals, switch tables, ELF linking tables, etc.), exception handling information, and meta-data about the program (the target architecture bit width, etc.) Zipr's IR is similar to a low-level compiler's back-end IR. After building the IR, Zipr can invoke Zipr plugins built against the Zipr API. The API allows for easy composition of plugins, but of course the plugins have to be robust enough to work together. For example, if one plugin converts indirect branches to direct branches with an if/then/else construct, and a subsequent plugin instruments indirect branch instructions with a control flow integrity technique, the subsequent plugin will not find any indirect branches to instrument and the two plugins would not compose well. However, if one plugin instruments stack operations and another plugin instruments global data operations, the two should compose without any special consideration. The Zipr plugin API allows typical, low-level modifications to the IR - insert, modify or delete any portion of the IR. Further, it allows the user to selectively re-run the deep static analysis techniques used to build the initial IR on the modified IR. This feature can be useful, for example, when instrumentation uses a dead register for performance optimizations, and later optimizations need to know which registers are still dead. Figure 1: The Helix++ architecture. These features make Zipr one of the most stable, efficient and effective static binary rewriters, resulting from millions of dollars of funding. First commits to the project's source code are from the year 2010. Zipr transforms binary programs, stripped or not, and generates a functionally equivalent binary program. It is most robust on x86-64 Linux ELF binaries, but also has support for ARM ELF binaries, MIPS ELF binaries and Windows PE/PE+ files. For all file formats, both 32- and 64-bit architectures are supported. Zipr has been demonstrated to be one of very few robust binary rewriters (22).
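The composition behaviour described above can be pictured as a pipeline of IR-to-IR passes. The sketch below is emphatically not the real Zipr C++ plugin API; it is a simplified Python analogue with invented names, meant only to show why plugins that touch disjoint parts of the IR (here, the stack and the globals) compose cleanly.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class IR:
    """Toy stand-in for a rewriter IR: just two named program regions."""
    stack_layout: List[str] = field(default_factory=lambda: ["buf", "canary", "ret"])
    globals_layout: List[str] = field(default_factory=lambda: ["g1", "g2", "g3"])

Plugin = Callable[[IR], IR]

def randomize_stack(ir: IR) -> IR:
    random.shuffle(ir.stack_layout)    # touches only the stack layout
    return ir

def randomize_globals(ir: IR) -> IR:
    random.shuffle(ir.globals_layout)  # touches only the global layout
    return ir

def compose(plugins: List[Plugin]) -> Plugin:
    """Run plugins in order; each sees the IR produced by the previous one."""
    def pipeline(ir: IR) -> IR:
        for plugin in plugins:
            ir = plugin(ir)
        return ir
    return pipeline

hardened = compose([randomize_stack, randomize_globals])(IR())
print(hardened)
```

Passes that rewrite overlapping constructs, like the indirect-branch example above, would need explicit ordering or re-analysis between stages.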
Zipr commits are regularly tested against 42 real-world software applications compiled with a variety of compilers (gcc, clang, llvm, icx), compiler flags (O0, O1, O2, O3, Ofast, OSize), in both PIE and non-PIE mode, in both stripped and unstripped form, and across 3 different flavors of Ubuntu (16.04 LTS, 18.04 LTS, and 22.04 LTS). The selected programs are comprised of C, C++ and Rust applications. Zipr has been used for a variety of projects, including the DARPA Cyber Grand Challenge (CGC). CGC was an autonomous capture-the-flag contest where cyber reasoning systems played the game. Zipr was part of the TechX team's submission, Xandra. Zipr was used to generate the best security score of all performers and placed 2nd overall. (23, 24) Zipr was also used in DARPA's Cyber Fault-tolerant Attack Recovery (CFAR) Program (25). The program's goal was to generate diverse variants of web servers, and run these programs in parallel. If the variants diverged in behavior, one could assume that a cyber attack was occurring and remediative actions could be taken. Zipr was used to generate variants with varying code, stack, global, and heap layout, provably preventing exploits against several important classes of common memory errors. Because Zipr's technology is agnostic to the source language, it was the only solution in the program that was able to handle the ADA Web Server application (written in Ada). In addition to these projects, Zipr has been used to do antifragility work, and is the basis for effective binary-only fuzzing with tools like Zafl and the binary-only based version of Untracer, called HeXcite. (26, 27, 28, 29, 30) Hardened Registries in UVA ACCORD.As a practical example of how to use hardened registries in a real-world environment, we have partnered with the ACCORD team at the University of Virginia (UVA). ACCORD is "a web-based platform which allows researchers from public universities across the state of Virginia to analyze and store their sensitive data in a central location. ACCORD is appropriate for de-identified PII, FERPA, de-identified HIPAA, business confidential, and other types of de-identified sensitive data. Thanks to funding provided by the National Science Foundation (Award #: 1919667), ACCORD is available at no cost to researchers in the state of Virginia." (31) To copy data to/from the ACCORD secure storage, one needs to use Globus. (32) To examine or compute on this data, one needs to run a "session". The user interacts with a web front end to start these sessions. Current sessions include a C/C++ IDE, a Python interpreter, or R Studio. (33, 34) Each session is deployed in a Kubernetes pod. (35) Each pod contains two Docker containers, one that the user directly interacts with, and a second side-car container that mediates all network traffic to/from the user-controlled container for security. Current security policies allow the user to use pip to install Python packages from a set of "trusted" repositories. While the repositories are generally thought to be secure, no one is certain about the provenance or security of the packages generally found in the Python repositories. Thus, Python is a key vector that the administrators worry about with regard to security. In light of this concern, we have worked with the ACCORD team to apply basic Zipr protections to the Python interpreter. So far, we have limited ourselves to this key piece of software. We have created multiple versions of the ACCORD Docker images.
Creating these images is done by extending the GitHub workflows with new jobs that alter the images created in previous steps. Leveraging the massive parallelism provided by GitHub, only minutes of additional processing time are added to the workflows. We have further made a slight modification to ACCORD's web interface to randomly select from the set of functionally equivalent containers when starting a new session. While these mechanisms are fully functional to the best of our knowledge, we are working with the ACCORD team to gradually deploy these changes to the end users of the ACCORD system. At the current time, we need to discuss the details of this procedure with the ACCORD administrators, but they are excited about the general plan to add security to their system. If successful, these hardened containers will realize an end goal of the Helix++ project. We plan to measure features such as the increased storage space for the registry, the time to transform images, and what replacement rates are manageable by our infrastructure. ## Conclusions Helix++ is a system where a binary rewriter is used to add security hardening and diversity transformations to deployed software. A Docker registry full of hardened variants is created. These variants are used to satisfy user requests for software. An expiration policy ensures that variants are always fresh, such that a software monoculture never occurs. Further, when additional hardening is available for the software, it can be automatically re-applied with the binary rewriter. Leveraging the Zipr binary rewriter and its suite of transforms, we have built upon the UVA ACCORD system to randomly select variants for use by end users. While the project is still ongoing, we have made excellent progress and the ACCORD staff are excited for the opportunity to secure their system further. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 2115130. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2310.06116
OptiMUS: Optimization Modeling Using MIP Solvers and large language models
Optimization problems are pervasive across various sectors, from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques. We introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from their natural language descriptions. OptiMUS is capable of developing mathematical models, writing and debugging solver code, developing tests, and checking the validity of generated solutions. To benchmark our agent, we present NLP4LP, a novel dataset of linear programming (LP) and mixed integer linear programming (MILP) problems. Our experiments demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM prompting strategy. OptiMUS code and NLP4LP dataset are available at \href{https://github.com/teshnizi/OptiMUS}{https://github.com/teshnizi/OptiMUS}
Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell
2023-10-09T19:47:03Z
http://arxiv.org/abs/2310.06116v2
# OptiMUS: Optimization Modeling Using MIP Solvers and Large Language Models ###### Abstract Optimization problems are pervasive across various sectors, from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-the-art solvers, as the expertise required to formulate and solve these problems limits the widespread adoption of optimization tools and techniques. We introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from their natural language descriptions. OptiMUS is capable of developing mathematical models, writing and debugging solver code, developing tests, and checking the validity of generated solutions. To benchmark our agent, we present NLP4LP, a novel dataset of linear programming (LP) and mixed integer linear programming (MILP) problems. Our experiments demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM prompting strategy. OptiMUS code and NLP4LP dataset are available at [https://github.com/teshnizi/OptiMUS](https://github.com/teshnizi/OptiMUS) ## 1 Introduction Optimization problems are common in many fields like operations, economics, engineering, and computer science. Important applications of optimization include reducing energy use in smart grids, improving supply chains, or increasing profits in algorithmic trading (Singh, 2012; Antoniou & Lu, 2007). Major advances in optimization algorithms over the last several decades have led to reliable and efficient optimization methods for a wide variety of structured optimization problems, including linear programming (LP) and mixed-integer linear programming (MILP) among many others. Unfortunately, optimization modeling -- transforming a business problem into a mathematical optimization problem in standard form -- still requires expert knowledge. This expertise gap prevents many organizations from using optimization, even when it could significantly improve their operations. Examples include inventory management in supermarkets, patient operations in hospitals, transportation policies in small municipalities, energy management in local solar farms, and operations in small businesses or NGOs (Saghafian et al., 2015; Aastrup & Kotzab, 2010; Yao et al., 2020; Shakoor et al., 2016). Automating optimization modeling would allow sectors that cannot afford access to optimization experts to improve efficiency using optimization techniques. Large language models (LLMs) offer a promising way to make optimization more accessible. LLMs have demonstrated the capability to understand, generate, and interpret natural language for many tasks. They make it easier to formulate problems and set up solutions, making expert knowledge more widely available. However, the role of LLMs in the optimization landscape is still unexplored, mainly due to their novelty and the absence of comprehensive benchmarks. To explore the capabilities and limitations of LLMs in optimization, this paper makes the following contributions: * We introduce a novel dataset, NLP4LP, comprising human-expert formulations of 52 LP and MILP optimization problems, annotated with their solutions, code to check optimality, and sample formulations of the problem in markdown and in code. To construct this dataset, we introduce a standardized format to represent optimization problems in natural language. * We present OptiMUS, an LLM-based agent to formulate and solve optimization problems. Fig.
1 demonstrates the structure of OptiMUS. * We develop techniques to improve the quality of OptiMUS and demonstrate their effectiveness, including automated data augmentation via problem rephrasing and self-fixing of solutions via automated testing and debugging. Using these techniques, OptiMUS increases the solve rate by 91% compared to direct prompting. By integrating the capabilities of LLMs with optimization techniques, our work aims to democratize access to optimization across application domains, extending the reach and utility of optimization. This paper builds on recent progress in Large Language Models (LLMs) and optimization. A more comprehensive review of this work is deferred to Section 6; we focus here on the ideas most closely related to our topic. In a very recent paper, Chen et al. (2023) develop a chatbot to help users detect and fix infeasible optimization problems. In contrast to our work, their agent takes Pyomo code rather than a natural language description as input, and acts as an AI assistant rather than as a solver. Yang et al. (2023) use LLMs to directly generate solutions to optimization problems, focusing on the problem of modifying prompts to improve performance. In contrast to our work, their method does not rely on solvers, coding, or any other tools. Their model cannot address problems with medium or large input data sizes because 1) the context size of LLMs is very limited compared to the input data size of many optimization problems and 2) LLMs' performance substantially decreases as the input context grows longer, even for explicitly long-context models (Liu et al., 2023). This paper is organized as follows: Section 2 discusses the challenges of using LLMs to solve optimization problems; Section 3 describes the details of our LLM-based optimization agent; Section 4 outlines the dataset creation process and statistics; Section 5 presents the experiments and analysis; Section 6 explores the related work; and Section 7 concludes the paper with future directions and implications. The appendix includes prompts, details on the experiments' setup, and further analysis. Figure 1: An illustration explaining how OptiMUS uses various components to effectively model and solve optimization problems. First, a mathematical formulation is generated from the problem representation. The solver code is then generated based on the formulation. The code is executed to generate and save a solution to file. The solution is then tested on a set of unit tests generated by the LLM and revised by the user. If the code does not run or fails the tests, it is passed back to the LLM along with the relevant error message for revision until it is fixed (dashed lines might be executed multiple times). Otherwise, the output is selected as the final solution. An example template is shown in the bottom left corner. ## 2 Challenges of Optimization Modeling using LLMs Optimization problems are defined mathematically by an objective function and a set of constraints. For example, an MILP can be written as \[\begin{array}{ll}\text{minimize}&\mathbf{c}^{T}\mathbf{x}\\ \text{subject to}&\mathbf{A}\mathbf{x}\leq\mathbf{b}\\ &x_{i}\in\mathbb{Z},\;i\in\mathcal{I}\end{array}\] An optimization workflow consists of 1) formulating an optimization problem in standard form by identifying its objective and constraints, and then 2) solving the problem, generally using code that calls an optimization solver. Formulation is often a challenging task even for experts in optimization.
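As a concrete instance of this standard form, the snippet below solves a tiny hand-made MILP with gurobipy, one of the solver interfaces used later in the paper. The data are our own toy values and a Gurobi installation with a valid license is assumed; this is an example of the kind of solver code the workflow produces, not output generated by OptiMUS itself.

```python
import gurobipy as gp
from gurobipy import GRB

# A tiny hand-made instance of: minimize c^T x  s.t.  A x <= b, x integer.
m = gp.Model("toy_milp")
x = m.addVar(vtype=GRB.INTEGER, name="x")
y = m.addVar(vtype=GRB.INTEGER, name="y")
m.setObjective(3 * x + 2 * y, GRB.MINIMIZE)
m.addConstr(-x - 2 * y <= -4, name="c1")   # i.e. x + 2y >= 4
m.addConstr(-3 * x - y <= -3, name="c2")   # i.e. 3x + y >= 3
m.optimize()

if m.Status == GRB.OPTIMAL:
    print({v.VarName: v.X for v in m.getVars()}, "objective =", m.ObjVal)
```

Even for a toy like this, the hard part is the translation step: recognizing which sentences of a business description become the objective, the constraint matrix, and the integrality requirements.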
Different formulations can lead to significantly different solving times and enable the use of different solvers or solution techniques (Boyd & Vandenberghe, 2004). ## 3 Methodology This section details the design of OptiMUS. See Fig. 1 for an illustration. OptiMUS starts with a structured description of the optimization problem, explained in Section 3.1, and a separate data file. It first transforms the structured description into 1) a mathematical formulation of the problem and 2) tests that check the validity of a purported solution. Afterwards, OptiMUS transforms the mathematical formulation into solver code. It joins the solver code with the problem data to solve the problem. If the code raises an error or fails a test, OptiMUS revises the code and repeats until the problem is solved or maximum iterations are reached. All prompts used by OptiMUS appear in Appendix A. ### Structured Natural language Optimization Problem (SNOP) As mentioned in Section 2, passing all the problem information directly to an LLM is not a scalable solution. To address this issue, we separate the data from the problem description, save the data as a JSON file, and then pass the format of this file to the LLM (an example is illustrated in Fig. 2). Figure 2: Scaling OptiMUS to problems with large numerical data: instead of passing everything to the LLM directly (left), in OptiMUS we separate the numerical data from the problem description and give the metadata to the LLM (right). The LLM then writes code to interact with the data file. We use a standardized structure to organize our dataset instances, which we call a Structured Natural language Optimization Problem (SNOP). OptiMUS takes a SNOP as input, and our benchmark dataset contains SNOPs and their solutions. A SNOP has 6 fields (see Fig. 3): * **Problem type**: The type of the problem (as a string), e.g., LP, MILP, QP, etc. For example, by targeting LP rather than MILP, a user can instruct OptiMUS to relax an integer problem. This field can be set to _ANY_ to let OptiMUS decide how to formulate the problem. * **Problem info**: A list of statements detailing the problem. We use the \param{} symbol to identify problem parameters in the description. The optimization solver will replace these with the problem data after the problem is formulated by the LLM. * **Input format**: A string describing the format of the input data. We use [] to represent lists of values, names in quotes (") to represent JSON keys, and pseudo-for to show indices. * **Output info**: A list of statements describing the desired output. * **Output format**: A string describing the format of the desired output. * **Objective**: A string stating the objective of the optimization problem. * **Solver**: The solver of choice (as a string), e.g., Gurobi, cvxpy, etc. This field can be set to _ANY_ to let OptiMUS decide which solver to use. Each problem in the benchmark has input data saved as a JSON file matching the input format. OptiMUS uses only the SNOP to develop a formulation, and then generates code to call an optimization solver, joining the formulation with the (possibly large) data to solve the problem. Figure 3: a) An example of a real-world optimization problem and a SNOP representation for it. b)
An example markdown formulation of a problem generated by OptiMUS. c) Example rephrasings generated by OptiMUS from a problem info statement in the augmentation process. ### Formulation Given a SNOP, OptiMUS generates a mathematical formulation for the problem in markdown. At this stage, OptiMUS prompts the language model to define the optimization variables, the constraints, and the objective function. ### Code Generation Given the problem formulation, OptiMUS generates Python code to read the input data from a JSON file, call an optimization solver to solve the problem, and save the output to another JSON file. OptiMUS uses Gurobi and cvxpy to solve the problems in our benchmark, and is capable of using advanced solver features. For example, we observe it using gurobi.abs_ to model an \(\ell_{1}\)-norm objective instead of adding auxiliary constraints and variables. We observe that certain models make recurring mistakes when writing code for certain solvers, and therefore our prompt includes solver-specific instructions to improve performance. For example, in cvxpy, the model commonly uses cvxpy.sum with generator objects instead of lists (see Fig. 4). ### Tests and revision Once solver code is generated, OptiMUS executes it with a Python interpreter. Two outcomes are possible: 1) an execution error occurs, or 2) an output is written to file. Given an execution error, the error string is passed to OptiMUS along with the code and formulation, so that OptiMUS can revise the solver code to fix the error. This process is repeated until the code successfully executes or maximum iterations are reached. Given an output, we need to ensure its correctness. OptiMUS generates unit tests using the SNOP to ensure the validity of the output. These tests check for 1) correct JSON formatting (e.g. the output JSON should contain "amount" as a key), 2) constraint satisfaction (e.g. detecting negative order quantities), and 3) consistency between the output values (e.g. the sum of monthly profits should equal the total profit). See Fig. 5 for an example. Optionally, a user of OptiMUS can also revise these generated tests or write additional tests. We call these auto-generated tests, supervised tests, and human tests, respectively. Our benchmark includes supervised tests for every problem. In our experience, developing supervised tests is roughly five times faster than developing equivalent human tests from scratch. Given an output, OptiMUS runs the unit tests. Any tests that fail will generate error messages. OptiMUS uses these error messages to revise the code and fix it. ### Augmentation As an additional strategy, OptiMUS automatically rephrases problems and attempts to solve each rephrased version using the same workflow above. If any of the rephrased versions succeeds, OptiMUS is able to use the solution and hence solve the problem. See Fig. 3c for an example. Figure 4: OptiMUS prompts include instructions to avoid common coding mistakes. For example, ChatGPT commonly uses cvxpy.sum on generator objects instead of lists. Adding the instruction "_- cvxpy.sum takes a list as input, and not a generator_" to the code generation template reduces the incidence of this mistake. Top) generated code before the instruction; Bottom) generated code after adding the instruction.
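The control flow of this execute-test-revise cycle fits in a few lines. The sketch below is our own schematic of the loop described above; llm_revise and run_tests are placeholder stubs standing in for the LLM call and the generated unit tests, and the iteration cap is an assumed setting rather than the value used by OptiMUS.

```python
import subprocess

MAX_ITERS = 5  # assumed cap on revision rounds

def llm_revise(code: str, feedback: str) -> str:
    """Placeholder: prompt the LLM with the failing code plus the error or
    test output, and return a revised version of the code."""
    raise NotImplementedError

def run_tests(output_path: str) -> list:
    """Placeholder: run the generated unit tests against the solver output
    and return a list of error messages (empty means every test passed)."""
    raise NotImplementedError

def solve(code: str, output_path: str = "solution.json") -> bool:
    for _ in range(MAX_ITERS):
        proc = subprocess.run(["python", "-c", code],
                              capture_output=True, text=True)
        if proc.returncode != 0:                       # execution error: debug
            code = llm_revise(code, proc.stderr)
            continue
        failures = run_tests(output_path)              # output produced: test it
        if not failures:
            return True                                # valid solution found
        code = llm_revise(code, "\n".join(failures))   # failing tests: revise
    return False
```

The same structure handles both failure modes: interpreter errors feed the debugger path, while semantic errors surfaced by the unit tests feed the revision path.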
## 4 Dataset To evaluate the performance of language models for solving optimization problems represented in natural language, we create a natural language optimization benchmark, NLP4LP (Natural Language Processing for Linear Programming), consisting of 41 LP and 11 MILP problems (52 instances in total). The problems are drawn from textbooks and lecture notes in optimization (Bertsimas & Tsitsiklis, 1997; Williams, 2013; Nace, 2020). These resources appeared before 2021, and there is a chance that parts of these books have been discussed on the internet and used to train LLMs. However, none of these textbooks include code snippets. The natural language representations used in NLP4LP are further modified from the original problem statements by representation as SNOPs and by abstraction, as we replace numerical values in the original text with parameters. Moreover, it is clear from our results that the LLMs still find it challenging to formulate and solve these problems. The data consists of several views of each problem: * SNOP (string) (see Section 3.1) * validity tests (code) (see Section 3.4) * example data file (JSON) (see Fig. 8) * optimal value (floating point number) * sample optimal output (JSON) (see Fig. 8) ## 5 Experiments and Analysis In this section, we empirically evaluate OptiMUS on NLP4LP and identify its strengths and weaknesses. We use GPT-3.5 and GPT-4 models for our experiments. The task of developing optimization formulations and solver code directly from natural language representations is new, and there are no baselines in the literature. Therefore, we use simple prompting as a baseline and analyze the effect of adding each component. Concretely, we consider these five modes: * Prompt: The problem is described in a few prompts and the LLM is asked to formulate the problem and write code to solve it using a given optimization solver. The code is run and the output is observed. We use this mode as a baseline. * Prompt + Debug: In addition to the above, if the code raises syntax or runtime errors, the errors along with the code and the problem info are passed to the language model, which is prompted to debug the code. This cycle is repeated until the code runs or the maximum iterations are reached. * Prompt + Debug + AutoTests: In addition to the above, when the code successfully runs and produces an output, automatically-generated tests are run to check the validity of the output. If the tests fail, the error messages are included with the problem description and passed back to the LLM for revision until the tests pass or the maximum iterations are reached. * Prompt + Debug + Supervised Tests: Same as the above, except that the automatically-generated tests are all manually revised and fixed by experts if necessary. * Prompt + Debug + Supervised Tests + Augmentation (OptiMUS): In addition to the above, each problem is rephrased using an LLM five times, and then the whole pipeline is applied to each of the rephrased versions independently. The problem is solved if at least one of the rephrased versions is solved. Figure 5: An LLM can be used to generate tests and check the correctness of the output. After feeding the problem to an LLM using the test generation template, the model generates a script that checks the correctness of the output (constraint satisfaction, output format, etc.) and returns an appropriate error message if it finds a problem. The error message is used to automatically fix the code.
We assess the models based on two metrics: success rate (the ratio of outputs satisfying all constraints and finding the optimal solution) and execution rate (the ratio of generated programs that are executable and generate an output). The results are available in Fig. 6. Figure 6: OptiMUS outperforms standard prompting by a large margin. Top: A comparison of the success rate and the execution rate for different modes on the NLP4LP dataset using GPT-3.5 and GPT-4. Bottom left: Performance of OptiMUS vs the length of the SNOP. Bottom right: Distribution of the number of generated characters for solved instances using OptiMUS and GPT-4. Using GPT-4, the success rate of our method improves with each of the additional features and capabilities described above. Basic prompting yields the lowest performance, as anticipated. Debugging improves the model's ability to correct execution errors, while increasing the execution rate. Adding AutoTests improves the success rate as it helps the model identify errors not found during the execution stage. Expert supervision to fix tests enhances performance but lowers the execution rate, as some revisions to fix test errors may inadvertently break the code. Finally, adding augmentations improves both success and execution rates, as OptiMUS can approach and solve problems in different ways, increasing the probability of finding a correct solution. We observe a slightly different trend for GPT-3.5. Debugging improves the execution rate to some degree, but automated tests decrease performance due to the generation of incorrect, misleading tests. Compared to GPT-4, generating incorrect tests is more common when using GPT-3.5 since it is a weaker model. Supervised test-fixing brings performance back to the level of simple prompting, showing that GPT-3.5 is not capable of correcting code based on test error messages. However, augmentations significantly improve the success rate by giving the model multiple attempts to solve the problem with different rephrasings. More augmentations can improve the performance of OptiMUS at the cost of more computation. We observe that adding one or two augmentations increases the performance considerably, but additional augmentations beyond that point barely change OptiMUS's performance (see Fig. 7(a)). The reason is that further rephrasings after that point result in outputs similar to the initial ones. We observe a similar pattern for the maximum number of debugging iterations: one or two extra iterations can significantly improve performance, but increasing the maximum number of iterations beyond that point is not useful (see Fig. 7(b)). If the model is not able to solve the problem in its first few attempts, it usually gets stuck in an incorrect solution space and is often not able to find the right solution. The debugging step is capable of fixing minor errors like missing constraints or syntax/semantic errors, but it is not capable of fixing more fundamental errors like incorrectly modeling the problem. Successful runs of the agent generate \(4117.1\pm 1509.7\) tokens on average. Given the fact that we used API calls to generate tokens, the formulation speed depends on factors like internet speed, server responsiveness, model size, account priority, etc. In our experiments, all of the runs took less than 7 minutes (this covers all formulation and code/test generation steps, and does not include the solver run time). Note that OptiMUS can tackle augmented instances of the same problem in parallel.
## 6 Related Work Many authors have considered the use of LLMs to solve mathematical problems. Most of the work in this domain is focused on training and tuning models on new datasets. Frieder et al. (2023) introduce two datasets of math problems in algebra, counting, proof, calculus, probability, and various other topics to evaluate the performance of ChatGPT (OpenAI, 2023). Yuan et al. (2023) propose an arithmetic dataset, MATH 401, to evaluate the performance of LLMs on arithmetic operations. Lewkowycz et al. (2022) further train PaLM on a dataset of mathematical and scientific papers taken from arXiv. Figure 7: Comparison of the success rate of OptiMUS with GPT-3.5 and GPT-4 for (a) different numbers of augmentations and (b) different numbers of iterations. These papers aim to evaluate and improve the direct performance of LLMs, but do not seek to integrate LLMs with other tools. Other recent studies have explored ways to improve the accuracy and reach of LLMs by improving prompts and connecting LLMs to external tools. Wei et al. (2023) propose prompting the model to consider a series of intermediate reasoning steps to improve the performance of LLMs. They also use a calculator as an external tool to execute mathematical operations for LLMs. Gao et al. (2022) use an LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offload the solution step to a runtime such as a Python interpreter. He-Yueya et al. (2023) improve the performance of LLMs by delegating expression calculations to symbolic solvers. These methods take a step forward in augmenting LLMs with other tools. However, these models are general-purpose and aim at a wide range of tasks. In contrast, our work exploits the particular structure of optimization problems to develop a new workflow for optimization modeling using LLMs together with optimization solvers that improves on simple prompting of LLMs. Many research efforts aim to improve existing optimization solvers and develop new ones, using exact algorithms, heuristics, local and global search, simplex methods, branch and bound, dynamic programming, simulated annealing, and other methods (Adby, 2013; Koziel and Yang, 2011). Deep learning has also been used to enhance optimization solvers (Bengio et al., 2021; Cappart et al., 2021; Mazyavkina et al., 2021). Recent work has explored the possibility of using LLMs directly as solvers (Yang et al., 2023). In addition, ongoing work to enhance large language models (LLMs) (OpenAI, 2023; Touvron et al., 2023; Hoffmann et al., 2022; Vaswani et al., 2017) can support and enhance our results as the underlying LLMs improve. ## 7 Conclusion In summary, we developed OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve optimization problems interpreted from natural language. We constructed NLP4LP, a novel dataset for optimization modeling from natural language, utilizing it to demonstrate the efficacy of the techniques implemented within OptiMUS. Our research serves as a proof of concept, illustrating the potential for automating various stages of the optimization procedure by leveraging LLMs together with traditional solvers. Several avenues remain unexplored and can be further investigated. The quality of prompts can be enhanced through methods such as those proposed in Yang et al. (2023). The NLP4LP dataset can be expanded by adding more instances and including other classes of optimization problems.
It would be interesting to enable OptiMUS to work with unstructured natural language representations of problems instead of SNOPs. Moreover, the performance of OptiMUS could potentially be improved by fine-tuning the LLM specifically for optimization problem modeling and solving.
2308.13037
Multimass modelling of Milky Way globular clusters -- II. present-day black hole populations
Populations of stellar-mass black holes (BHs) in globular clusters (GCs) influence their dynamical evolution and have important implications on one of the main formation channels for gravitational wave sources. Inferring the size of these populations remains difficult, however. In this work, multimass models of 34 Milky Way GCs, first presented in Dickson et al., are used to explore the present-day BH populations. Direct constraints on both the total and visible mass components provided by several observables allow these models to accurately determine the distribution of the dark mass (including BHs) within clusters, as we demonstrate in a proof-of-concept fitting of the models to mock observations extracted from Monte Carlo cluster models. New constraints on the BH population retained to the present-day in each cluster are inferred from our models. We find that BH mass fractions ranging from 0 to 1 per cent of the total mass are typically required to explain the observations, except for Omega Cen, for which we infer a mass fraction above 5 per cent, in agreement with previous works. Relationships between the dark remnant populations and other cluster parameters are examined, demonstrating a clear anti-correlation between the amount of BHs and mass segregation between visible stars, as well as a correlation between remnant mass fractions and the dynamical age of clusters. Our inferred BH populations are in good agreement overall with other recent studies using different methodologies, but with notable discrepancies for individual clusters.
Nolan Dickson, Peter J. Smith, Vincent Hénault-Brunet, Mark Gieles, Holger Baumgardt
2023-08-24T19:03:26Z
http://arxiv.org/abs/2308.13037v2
# Multimass modelling of Milky Way globular clusters - II. present-day black hole populations ###### Abstract Populations of stellar-mass black holes (BHs) in globular clusters (GCs) influence their dynamical evolution and have important implications on one of the main formation channels for gravitational wave sources. Inferring the size of these populations remains difficult, however. In this work, multimass models of 34 Milky Way GCs, first presented in Dickson et al., are used to explore the present-day BH populations of a large sample of clusters. Direct constraints on both the total and visible mass components provided by several observables allow these models to accurately determine the distribution of the dark mass (including BHs) within clusters, as we demonstrate in a proof-of-concept fitting of the models to mock observations extracted from Monte Carlo cluster models. New constraints on the BH population retained to the present-day in each cluster are inferred from our models. We find that BH mass fractions ranging from 0 to 1 per cent of the total mass are typically required to explain the observations, except for \(\omega\) Cen, for which we infer a mass fraction of 5 per cent, in agreement with previous works. Relationships between the dark remnant populations and other cluster parameters are examined, demonstrating a clear anti-correlation between the amount of BHs and mass segregation between visible stars, as well as a correlation between remnant mass fractions and the dynamical age of clusters. Our inferred BH populations are in good agreement overall with other recent studies using different methodologies, but with notable discrepancies for individual clusters. keywords: galaxies: star clusters - globular clusters: general - stars: kinematics and dynamics - stars: black holes ## 1 Introduction It has been suggested that dynamical formation of close stellar-mass black hole (BH) binaries in the dense cores of globular clusters (GCs) could be one of the main formation channels for gravitational wave sources from BH-BH mergers (e.g. Portegies Zwart & McMillan, 2000; Rodriguez et al., 2016; Abbott et al., 2016; Antonini & Gieles, 2020; Antonini et al., 2023). The amount of dynamically formed BH-BH binaries in GCs and subsequent mergers, however, depends on many uncertain physical ingredients, such as the initial mass and number distribution of BHs (which itself is dependent on the stellar initial mass function and stellar evolution; e.g. Spera et al., 2015) and the magnitude of natal kicks that BHs receive at the time of their formation in a supernova explosion, which can eject them from the cluster (Chatterjee et al., 2017), as well as the fraction of the initially retained BHs ejected due to dynamical encounters throughout a cluster's life, which depends on its initial properties and dynamical evolution (e.g. Breen & Heggie, 2013; Hurley et al., 2016; Gieles et al., 2021). The presence of massive BHs in GCs will have a large impact on their dynamical evolution and present-day structure. As the most massive objects in the system, any BHs which have not been ejected from the cluster by natal kicks will rapidly segregate to the cluster centre, forming a concentrated population of BHs in the core. While it was long argued that this dense subsystem would dynamically decouple from the rest of the GC (e.g. 
Spitzer, 1969; Sigurdsson & Hernquist, 1993), and in short order eject all of the BHs through strong dynamical interactions, recent work has shown that these BHs can be retained on much longer timescales (e.g. Breen & Heggie, 2013; Morscher et al., 2013, 2015), and populations of BHs can be expected to survive in most clusters to the present day, depending on the initial cluster central densities. This theoretical work is complemented by recent detections of BH candidates in binaries within GCs, based on radio/X-ray emission (from accretion) or the radial velocity variations of a bright companion (Strader et al., 2012; Miller-Jones et al., 2015; Giesers et al., 2018, 2019). Several studies examining detailed evolutionary dynamical models (such as \(N\)-body or Monte Carlo models) of clusters with an initial population of BHs have used these expected impacts on observable quantities, such as the absence of mass segregation among visible stars (e.g. Peuten et al., 2016; Alessandrini et al., 2016; Weatherford et al., 2018, 2020), an elevated central mass-to-light ratio (e.g. Baumgardt et al., 2019), a large effective radius of the cluster (Torniamenti et al., 2023) and the presence of tidal tails (Gieles et al., 2021), to argue for the presence of retained populations of BHs in certain clusters at the present day. Alternatively, equilibrium modelling approaches, such as Jeans modelling (e.g. Kamann et al., 2014, 2016; Abbate et al., 2019; Vitral and Mamon, 2021; Vitral et al., 2022) and multimass distribution-function (DF) based models (e.g. Sollima et al., 2012, 2016; Zocchi et al., 2019; Hénault-Brunet et al., 2019, 2020), have also been used to probe the dark remnant content of GCs (including BHs) through fitting models to observations of specific clusters while accounting for both visible and dark mass components. Rapid and flexible equilibrium models, such as DF models, allow for a more complete exploration of parameter space compared to more computationally expensive evolutionary models, and, through the application of statistical fitting techniques, the ability to very precisely reproduce the kinematics and structure of real, observed GCs. In Dickson et al. (2023, hereafter Paper I), we presented multimass DF models fit to several observables (proper motions, line-of-sight velocities, stellar densities and mass functions) for 34 Milky Way GCs. By inferring the global stellar mass functions of these clusters and simultaneously constraining their distributions of stellar remnants, the stellar initial mass function and its possible dependence on metallicity were examined, in particular in the high-mass regime (\(\geq 1\) M\({}_{\odot}\)) where stars in old GCs have evolved into stellar remnants by the present day. In this work, these same best-fitting models are now used to examine in detail the remnant populations in this large sample of Milky Way GCs. In particular, the amount and distribution of BHs in each cluster are inferred from our models. The multimass limepy models (Gieles and Zocchi, 2015), mass function evolution algorithm, observational datasets and model fitting procedures used are all restated briefly, from Paper I, in Section 2. In Section 3, we provide a proof of concept and show that our method is able to reliably recover the mass fraction in BHs in GCs by fitting our models to mock data extracted from snapshots of evolutionary models containing different amounts of BHs.
The overall distributions of BHs in our sample of GCs are presented in Section 4, alongside an analysis of their relationships with other cluster parameters. In Section 5 we discuss the implications of these results for the co-evolution of GCs and their BHs, and provide comparisons between our results and the inferred BH populations of other recent studies. Finally, we summarize our results and conclude in Section 6. ## 2 Methods In this section, the methodology used in Paper I to fit multimass dynamical models to a number of observables is restated briefly, with emphasis on the elements most crucial to inferring BH populations. For more detail on all procedures summarized here, we refer the reader to Sections 2 through 4 of Paper I. ### Models To model the mass distribution of the globular clusters we use the limepy multimass distribution-function (DF) based models (Gieles and Zocchi, 2015). DF-based models are equilibrium models built around a distribution function which describes the particle density of stars and satisfies the collisionless Boltzmann equation. This DF is used to self-consistently solve for the system's potential (\(\phi(r)\)) using Poisson's equation, and to derive a variety of useful quantities for describing a globular cluster, such as the projected velocity dispersion, the projected surface density and the total mass. _Multimass_ DF models, defined by a sum of component DFs for individual mass bins, allow for a more accurate description of real globular clusters, which are made up of a spectrum of stellar and remnant masses, and are necessary in order to account for the effects of mass segregation. Our multimass models are defined by 10 free parameters dictating the mass function and physical solution of the limepy DF. The overall structure of these models is controlled by the (dimensionless) central potential parameter \(\hat{\phi}_{0}\), which defines how centrally concentrated the model is. To mimic the effects of the host galaxy's tides, the energy near the truncation radius is reduced, lowering the escape velocity of stars and making it easier for them to escape, with a sharpness of truncation defined by the parameter \(g\) (lower values resulting in a more abrupt truncation). The models can be expressed in physical units by adopting relevant size and mass scales. In order to match observations, we opt to scale the models using the parameters for total cluster mass \(M\) and 3D half-mass radius \(r_{\rm h}\). The limepy models allow for velocity anisotropy through an extra angular momentum term in the DF, which produces an isotropic core followed by a degree of radial velocity anisotropy at a distance from the centre defined by the anisotropy radius parameter \(r_{\rm a}\), before returning to isotropy again near the truncation radius. The multimass version of the limepy DF is defined in part by the mass-dependent velocity scale \(s_{j}\). This scaling captures the trend towards kinetic energy equipartition among stars of different masses and models the effects of mass segregation (Gieles and Zocchi, 2015; Peuten et al., 2017; Hénault-Brunet et al., 2019), and is defined based on the parameter \(\delta\), such that \(s_{j}\propto m_{j}^{-\delta}\). The constituent discrete mass components which approximate the mass spectrum of a GC are represented in the multimass models by the total (\(M_{j}\)) and mean (\(m_{j}\)) masses of each mass bin.
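For readers who want to experiment with such models, the following is a minimal sketch using the publicly available limepy package. The keyword names (M, rh, delta, mj, Mj) follow our reading of the package documentation and should be checked against the installed version; the numerical values are arbitrary toy inputs, not a fit to any real cluster.

```python
# A minimal sketch assuming the limepy package of Gieles & Zocchi (2015);
# keyword names follow our reading of its documentation, and all numbers
# are arbitrary toy values rather than a fit to any real cluster.
import numpy as np
from limepy import limepy

mj = np.array([0.3, 0.8, 1.4, 10.0])  # mean stellar mass per bin [Msun]
Mj = np.array([4e4, 3e4, 5e3, 1e3])   # total mass per bin [Msun]

model = limepy(
    6.0,           # dimensionless central potential, phi0-hat
    1.5,           # truncation sharpness, g
    M=Mj.sum(),    # total cluster mass scale [Msun]
    rh=3.0,        # 3D half-mass radius [pc]
    delta=0.45,    # mass dependence of the velocity scale, s_j ~ m_j^(-delta)
    mj=mj, Mj=Mj,  # the discrete mass components
)
# Mass segregation appears naturally: heavier components end up more
# centrally concentrated, which can be checked by comparing the radial
# density profiles of the lightest and heaviest bins.
```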
As DF-based models, such as limepy, are equilibrium, instantaneous "snapshot" models, and do not directly simulate any temporal astrophysical processes during their computation, we must instead incorporate a separate prescription for stellar evolution from an initial mass function, over the age of the cluster, to the present-day stellar and remnant mass functions. In keeping with the formulation of canonical IMFs (e.g. Kroupa, 2001), we use a 3-component broken power law, with the power-law "slopes" of each component given by the free parameters \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) (with break masses at 0.5 and 1 M\({}_{\odot}\) and bounded between 0.1 and 100 M\({}_{\odot}\)). To evolve the population of stars to the present day we follow the algorithm first described by Balbinot and Gieles (2018) and expanded upon in the ssptools library1 and Paper I. The amount of stars which evolve off the main sequence over the lifetime of the clusters is dictated by a set of equations based on interpolated Dartmouth Stellar Evolution Program models (Dotter et al., 2007, 2008). The types and masses of the stellar remnants formed by these evolved stars are then determined based on their initial mass, metallicity and initial-final mass relation (IFMR). The white dwarf (WD) IFMR, including the maximum initial mass which will form a WD, is interpolated from the MIST 2018 isochrones (Dotter, 2016; Choi et al., 2016). The BH IFMR, as well as the minimum initial mass required to form a BH, is interpolated directly from a grid of stellar evolution library (SSE) models (Banerjee et al., 2020), using the rapid supernova scheme (Fryer et al., 2012). Stars with initial masses between the WD and BH precursor masses are assumed to form neutron stars (NS) with a mass of 1.4 M\({}_{\odot}\). Footnote 1: Available at [https://github.com/SMU-clusters/ssptools](https://github.com/SMU-clusters/ssptools) The algorithm then must account for the loss of BHs through two different channels. Firstly, the ejection of (primarily low-mass) BHs through natal kicks is simulated. The kick velocity is assumed to be drawn from a Maxwellian distribution with a dispersion of \(265\,\mathrm{km\,s^{-1}}\) (Hobbs et al., 2005), scaled down by \(1-f_{\mathrm{fb}}\), where \(f_{\mathrm{fb}}\) is the fallback fraction, which we interpolate from the same grid of SSE models used for the BH IFMR. The fraction of BHs retained, in each mass bin, is then found by integrating the Maxwellian kick velocity distribution from 0 to the system initial escape velocity, which we compute as \(v_{\mathrm{esc}}=2\sqrt{2\phi_{0}}\). BHs are also ejected over time from the core of GCs due to dynamical interactions with one another (e.g. Breen & Heggie, 2013a,b). This is simulated through the direct removal of BHs, beginning with the heaviest mass bins (with larger gravitational interaction cross-sections) through to the lightest (e.g. Morscher et al., 2015; Antonini & Gieles, 2020), until the combination of mass in BHs lost through both ejection channels leads to a final retained mass in BHs equal to the percentage of the initial mass in BHs for the given IMF, as specified by the BH mass retention fraction parameter (\(\mathrm{BH_{ret}}\)). Finally, the heliocentric distance to the GCs, \(d\), is introduced as a free parameter, to allow for the conversion between projected, linear model units and the angular units of observations.
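The natal-kick retention fraction just described reduces to a one-dimensional integral of the Maxwellian speed distribution, which is straightforward to evaluate numerically. The following is a minimal sketch of that calculation, written by us for illustration rather than taken from ssptools; the escape velocity and fallback fraction used at the bottom are arbitrary toy values.

```python
import numpy as np
from scipy.integrate import quad

def maxwellian_pdf(v, sigma):
    """Maxwellian speed distribution with one-dimensional dispersion sigma."""
    return np.sqrt(2.0 / np.pi) * v**2 / sigma**3 * np.exp(-v**2 / (2.0 * sigma**2))

def retained_fraction(v_esc, f_fb, sigma=265.0):
    """Fraction of BHs with kick velocity below v_esc [km/s], where kicks
    are drawn from a Maxwellian with dispersion sigma scaled by (1 - f_fb)."""
    if f_fb >= 1.0:
        return 1.0  # full fallback: no kick, so every BH is retained
    sigma_eff = sigma * (1.0 - f_fb)
    frac, _ = quad(maxwellian_pdf, 0.0, v_esc, args=(sigma_eff,))
    return frac

# Toy example: a 50 km/s escape velocity and 40 per cent fallback
print(retained_fraction(v_esc=50.0, f_fb=0.4))
```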
### Fitting models to observations

In Paper I, best-fitting model parameters for 34 Milky Way GCs were determined through the comparison of the phase-space distribution of stars in the models to observations of GC structure and kinematics. These clusters were selected primarily to address the main hypothesis of Paper I, the possible metallicity dependence of the IMF (based primarily on the quantity and quality of kinematic and mass function data available); however, they also allow us to conduct a census of BH populations in a significant sample of Milky Way GCs (although our sample may be biased against low-mass and low-density clusters, as discussed in Section 5).

A variety of observational datasets were used to fit all chosen GCs, providing direct constraints on the phase-space distribution of visible stars and the overall mass of the cluster, and in turn providing indirect constraints on the amount and distribution of dark mass (in both faint low-mass stars and dark remnants). This is possible because the distributions of the different dark and visible mass components are linked, with limited flexibility, through partial energy equipartition. Though BHs may represent only up to a few per cent of the total mass of a GC, they can actually dominate the mass density in the central regions of a cluster, with significant dynamical effects, and thus can be probed. Details of the datasets used for each cluster can be found in Appendix A of Paper I, but we summarize the main sources below.

Radial profiles of proper motion (PM) and line-of-sight (LOS) velocity dispersions of cluster stars are used to constrain the internal kinematics of each cluster and thus its total (visible and dark) mass. PMs in both the radial and tangential directions also provide constraints on the degree of velocity anisotropy in the cluster, which is important given the degeneracy between anisotropy and central dark mass (e.g. Zocchi et al., 2017). PM dispersion (radial and tangential) profiles were computed in Paper I using _Gaia_ (DR3; Gaia Collaboration et al., 2022) proper motions for all clusters. These data were supplemented with PM dispersion profiles in the cores of most clusters from Hubble Space Telescope data (_HST_; Watkins et al., 2015; Libralato et al., 2022). LOS velocity dispersion profiles are taken from compilations of various ground-based (Baumgardt & Hilker, 2018; Kamann et al., 2018; Dalgleish et al., 2020) and _Gaia_ (Baumgardt et al., 2019) datasets.

Radial profiles of the projected stellar number density in all GCs provide vital constraints on the spatial distribution and concentration of the cluster's stars. The density profiles for all clusters are taken from de Boer et al. (2019), consisting of combined _Gaia_ star counts in the outskirts and _HST_ counts (Miocchi et al., 2013) or ground-based surface-brightness profiles (SBPs; Trager et al., 1995) in the central regions.

Finally, constraints on the global present-day mass function of the clusters, the degree of mass segregation and the total mass in visible stars are provided by _HST_ datasets, from which local stellar mass functions were extracted. The mass function data for each cluster is taken from Baumgardt et al. (2023), consisting of stellar counts in radial annuli and mass bins, based on a large amount of archival _HST_ data.
The compilation of photometry in each cluster is made up of several _HST_ fields at varying distances from the cluster centres, typically covering stars within a mass range of \(\sim 0.16-0.8\,\mathrm{M_{\odot}}\). The models are constrained by these datasets in order to provide best-fitting values and posterior probability distributions of the model parameters that describe each cluster, determined through Bayesian parameter estimation techniques2. The posterior probability distributions of all parameters are sampled using dynamic nested sampling, through the dynesty software package (Speagle, 2020). All fitting is carried out using the software library and fitting pipeline GCfit3. For a discussion of the model fits and parameter posterior distributions, see Paper I.

Footnote 2: Model fits, cumulative mass profiles and black hole distributions for all clusters are available online at [https://github.com/mndickson/GCfit-results](https://github.com/mndickson/GCfit-results)

Footnote 3: Available at [https://github.com/mndickson/GCfit](https://github.com/mndickson/GCfit)

The posterior probability distributions of the model-derived quantities used throughout this work, such as the black hole mass fractions (\(f_{\mathrm{BH}}=M_{\mathrm{BH}}/M_{\mathrm{cluster}}\)), are constructed based on the models representing the set of weighted posterior samples retrieved from the nested sampler.

## 3 Validation of BH population inference

In order to test the reliability of our method in inferring BH populations, we first apply it to simulated observations from Monte Carlo models with known BH populations. In order to explore a number of models with similar properties as real Milky Way clusters, we select snapshots from the existing grid of Cluster Monte Carlo (CMC; Rodriguez et al., 2022) models presented in Kremer et al. (2020)4. We select the snapshots using the same methodology as Rui et al. (2021c), which we briefly summarize here.

Footnote 4: Available at [https://cmc.ciera.northwestern.edu/home](https://cmc.ciera.northwestern.edu/home)

The selections are based on the SBPs of Trager et al. (1995) and the velocity dispersion profiles (VDP) compiled by Baumgardt (2017), Baumgardt & Hilker (2018) and Baumgardt et al. (2023)5. We search for snapshots that are a good match to any clusters from the Harris (1996, 2010 edition) catalogue present in both the VDP and SBP compilations, leaving us with about 100 clusters to match to snapshots. We first use the metallicities from Harris (1996, 2010 edition) and the present-day galactocentric radii from Baumgardt et al. (2019) to select the subset of models which are closest to the true values for each cluster.

Footnote 5: Available at [https://people.smp.uq.edu.au/HolgerBaumgardt/globular](https://people.smp.uq.edu.au/HolgerBaumgardt/globular)

From this subset, we then search every model for snapshots that match suitably well to a given cluster's observed SBP and VDP simultaneously6. In order to select a snapshot, we adopt a threshold of \(s\equiv\max\left(\tilde{\chi}_{\text{SBP}}^{2},\tilde{\chi}_{\text{VDP}}^{2}\right)<10\) for the "fitting heuristic" \(s\) of Rui et al. (2021c), which describes the goodness-of-fit of a snapshot based on the \(\tilde{\chi}^{2}\) statistic between the observations and the interpolated model profiles.
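As an illustration, a minimal Python sketch of this heuristic for a single snapshot follows; the observed profiles and the interpolated model profiles (`model_interp`) are placeholders standing in for the quantities extracted from the CMC snapshots (see footnote 6).

```python
import numpy as np

def reduced_chi2(r, obs, err, model_interp):
    """chi^2 per data point between an observed profile (obs +/- err at
    radii r) and a model profile interpolated at the same radii."""
    resid = (obs - model_interp(r)) / err
    return np.sum(resid ** 2) / len(obs)

def fitting_heuristic(sbp, vdp):
    """s = max of the reduced chi^2 of the SBP and VDP fits.
    Each argument is a (r, obs, err, model_interp) tuple."""
    return max(reduced_chi2(*sbp), reduced_chi2(*vdp))

# A snapshot is kept if fitting_heuristic(sbp_data, vdp_data) < 10.
```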
We have found that a threshold of \(s<10\) provides an acceptable balance between the number and fit quality of the retained snapshots. While a number of snapshots passing this filter have an apparently poor overall fit to one or both of the observational profiles, we opt to still include these snapshots in the sample, as our goal is not to select only snapshots which are perfect matches to specific real clusters but instead to build a sample of snapshots that are qualitatively similar to the Milky Way clusters examined in this work. For each cluster covered by the observational datasets we select the single best-fitting snapshot, where one exists.

Footnote 6: See Rui et al. (2021c) for details on how the model profiles are extracted from the CMC snapshots for comparison with the observations.

### Mock observations

The search described above results in a sample of 41 CMC model snapshots, representative of Milky Way GCs. From these we next extract synthetic observations designed to emulate the real observational data used to constrain the models examined in this work. We place each cluster at its respective heliocentric distance as reported by Baumgardt & Vasiliev (2021) and then use the cmctoolkit library (Rui et al., 2021a,c) to calculate projected positions and velocities as well as simulated photometry for objects in each snapshot.

#### 3.1.1 Number density profiles

We extract projected number density profiles from the models, designed to emulate those of de Boer et al. (2019). We select all stars brighter than _Gaia_ \(G=20\), sort them into 50 radial bins with equal numbers of stars, and calculate the number density in each radial bin. All densities are assigned a Poisson counting error. The de Boer et al. (2019) profiles are combinations of _Gaia_ star counts in the outer regions and _HST_ and archival SBPs in the inner regions, where crowding becomes an issue; however, we find that using the same \(G<20\) cut over the entire radial extent of the cluster results in well-sampled profiles which cover a similar radial extent and have similar uncertainties to the de Boer et al. (2019) profiles.

#### 3.1.2 Proper motion dispersion profiles

We extract two sets of PM dispersion profiles for each snapshot, in order to represent the performance of the two different sources of PM observations used. In the inner regions, we seek to emulate the performance of the _HST_ based PM dispersion profiles of Libralato et al. (2022). We select stars within the central 100'' of the cluster to mimic the footprint of an _HST_ ACS field, and limit our selection to stars with \(15<V<18\). We split the stars into radial bins containing at least 120 stars each, up to a maximum of five bins. This provides sufficient radial coverage of the cluster while still allowing us to construct profiles for distant clusters, where limited numbers of stars pass the magnitude cut. We assume a typical uncertainty of \(0.1\,\mathrm{mas/yr}\) on all stars. Within each bin we compute the mean velocity and velocity dispersion, along with their associated uncertainties, using MCMC, assuming the velocities are drawn from a Gaussian distribution broadened by the measurement errors. This is repeated for both the radial and tangential components of PM. The median and \(1\sigma\) values of the dispersion in each bin are used going forward.
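The per-bin estimate can be sketched as follows, here using the emcee sampler as a stand-in for whichever MCMC implementation the pipeline uses; the flat prior, walker initialization and chain length are illustrative assumptions.

```python
import numpy as np
import emcee

def log_prob(theta, v, e):
    """Velocities v with measurement errors e, assumed drawn from
    N(mu, sigma^2 + e_i^2); flat prior with sigma > 0."""
    mu, sigma = theta
    if sigma <= 0.0:
        return -np.inf
    var = sigma ** 2 + e ** 2
    return -0.5 * np.sum((v - mu) ** 2 / var + np.log(2.0 * np.pi * var))

def bin_dispersion(v, e, nwalkers=32, nsteps=2000, seed=0):
    """Median and 16th/84th percentile dispersion of one radial bin."""
    rng = np.random.default_rng(seed)
    p0 = np.column_stack([np.mean(v) + 1e-3 * rng.standard_normal(nwalkers),
                          np.std(v) * (1 + 1e-3 * rng.standard_normal(nwalkers))])
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob, args=(v, e))
    sampler.run_mcmc(p0, nsteps)
    sigma = sampler.get_chain(discard=nsteps // 2, flat=True)[:, 1]
    return np.percentile(sigma, [16, 50, 84])
```

Including the per-star error \(e_i\) in the variance is what prevents the measurement noise from inflating the recovered intrinsic dispersion.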
In the outer regions, we seek to emulate the _Gaia_ DR3 based profiles of Vasiliev & Baumgardt (2021). We base our magnitude cuts on their profiles, selecting all stars in the \(13<G<19\) range outside of the innermost 100'', to avoid overlapping with the _HST_ profiles. We assign each star an uncertainty in proper motion based on its \(G\) band magnitude using the calibrations provided in Table 4 of Lindegren et al. (2021), allowing us to replicate the performance of the _Gaia_ DR3 catalogue. We again bin the stars using the same conditions as in the inner _HST_ profiles, and calculate the velocity dispersion in each bin using the same method, again for both radial and tangential components.

#### 3.1.3 Line-of-sight velocity dispersion profiles

In addition to the PM dispersion profiles, we also extract LOS velocity dispersion profiles, designed to emulate those presented by Baumgardt (2017), Baumgardt & Hilker (2018) and Baumgardt et al. (2023). As the compilations of velocity dispersions that make up these profiles consist of several different inhomogeneous datasets, with varying precision, as a simplifying assumption we adopt a typical uncertainty of \(1\,\mathrm{km\,s^{-1}}\) on all observed stars. We limit this dataset to only giants brighter than \(V=17\), which is typical of the datasets used in the observed compilations. We again sort the stars into several radial bins, requiring at least 70 stars per bin, up to a maximum of 10 bins, and compute the velocity dispersion for each in the same way as for the PM profiles.

#### 3.1.4 Stellar mass functions

We extract stellar mass function data for each of our snapshots, designed to emulate the datasets presented in Baumgardt et al. (2023). These datasets consist of star counts, binned by stellar mass, extracted from archival _HST_ observations. While the real _HST_ fields are distributed somewhat randomly around each GC according to the various goals of each proposal for which they were observed, in general there is typically at least one exposure centred on the cluster centre and a number of fields placed outside of the central region. For simplicity, we opt to place one field over the centre of the cluster, covering a range of 0' - 1.6' in projected angular separation from the centre, and two outer annuli at radial distances of 2.5' and 5', each sized such that they cover the same area as the central field. The central field is split into 4 annuli, giving us a total of 6 annular fields covering the central, intermediate and outer regions of the cluster. In many of the more well-studied clusters in our sample, such as NGC 104 and \(\omega\,\mathrm{Cen}\), the large number of observed _HST_ fields actually provides much better coverage of the clusters than our fields simulated here, and thus the results of this section may actually be conservative.

In each of these fields we extract stellar counts separately, in bins of stellar mass \(0.1\,\mathrm{M_{\odot}}\) wide. In the real datasets, the lowest stellar mass down to which counts can be extracted with reasonable completeness (\(>90\) per cent) is a function of crowding. To replicate this effect in our synthetic mass functions we construct an empirical relation between the surface number density and the lowest observable mass within a field.
We use NGC 104 as the basis for this calibration because it covers a wide range of number densities from its core to the outskirts and has a large number of _HST_ fields for which the mass function was extracted. We extract star counts in each field, down to a lower mass limit calculated from the above relation and up to the main sequence turn-off, replicating the performance of the observed _HST_ stellar counts. We assign Poisson counting errors to our stellar counts, though to reflect the scatter we typically see in the real data we also inflate these errors by a factor of \(F=3\) and re-sample each point within the errors, resulting in mass functions that are very similar to those of Baumgardt et al. (2023) (see Section 4.1.1.3 of Paper I for a description of this \(F\) parameter and its motivation).

### Validation results

After extracting the synthetic datasets, we then directly apply our model fitting method (as described in Section 2 and Paper I) and compare the resulting inferred BH mass fractions to the known BH population in the CMC models from which the mock data was extracted. As in Paper I, we discard any obviously poor fits. We also discard any snapshots which have much smaller datasets (mostly consisting of snapshots matched to very distant clusters). This leaves us with 30 final snapshots with datasets of similar quality to the Milky Way clusters we study in this work, and with model fits which satisfyingly reproduce the mock observations and recover the various cluster parameters well, such as the total mass, which we generally recover within \(\sim 10\) per cent.

Our inferred values of \(f_{\rm BH}\), compared to the true values for our collection of snapshots, are shown in Figure 1 and Table 1. In general, our fits satisfyingly recover the mass fraction in BHs in all snapshots. However, as expected, the statistical uncertainties derived solely from the fitting procedure seem to slightly underestimate the real uncertainties. Our fitting procedure operates under the assumption that our models are a good representation of the data, and as such may underestimate the true errors. It has been shown that multimass DF models, such as those used here, may underestimate the uncertainties when compared to more flexible models, such as Jeans models (Henault-Brunet et al., 2019), which could be indicative of systematic errors not captured in the statistical uncertainties and limitations in the ability of these models to perfectly reproduce the data.

In an attempt to quantify this underestimation, we search for the factor by which the statistical uncertainties on our inferred values of \(f_{\rm BH}\) need to be inflated to make them fully consistent with a one-to-one relation with the true model values. Based on Figure 1, it seems like the factor needed may be a function of the true \(f_{\rm BH}\). Therefore, we define a nuisance parameter \(\epsilon\equiv af_{\rm BH}+b\) which is added in quadrature to the statistical errors \(\sigma_{f_{\rm BH}}\) to determine the total, inflated error \(\Delta\) on each inferred \(f_{\rm BH}\):

\[\Delta^{2}=\epsilon^{2}+\sigma_{f_{\rm BH}}^{2} \tag{1}\]

We then fit a fixed, one-to-one (i.e. slope of 1, intercept of 0) line through the points in Figure 1, assuming a Gaussian likelihood and allowing the \(a\) and \(b\) parameters to vary freely and inflate the uncertainties on the inferred \(f_{\rm BH}\).
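Schematically, this fit amounts to maximizing a Gaussian likelihood about the one-to-one line with the inflated variance of equation (1). Below is a minimal maximum-likelihood sketch, assuming \(\epsilon\) scales with the true \(f_{\rm BH}\); the input arrays, starting point and optimizer choice are placeholders rather than the exact procedure used.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, f_true, f_inf, sig):
    """Gaussian likelihood of the inferred f_BH about the one-to-one line,
    with total variance Delta^2 = eps^2 + sig^2 and eps = a * f_true + b."""
    a, b = theta
    eps = a * f_true + b
    if np.any(eps < 0.0):
        return np.inf
    var = eps ** 2 + sig ** 2
    return 0.5 * np.sum((f_inf - f_true) ** 2 / var + np.log(2.0 * np.pi * var))

# f_true, f_inf, sig: per-snapshot true f_BH, inferred f_BH and its
# statistical 1-sigma error (all in per cent, placeholder arrays).
# res = minimize(neg_log_like, x0=[0.5, 0.05], args=(f_true, f_inf, sig),
#                method="Nelder-Mead")
# a_fit, b_fit = res.x
```

Because the variance term appears in the normalization as well, the likelihood penalizes excessively large inflation, so \(a\) and \(b\) settle at the smallest values consistent with the scatter about the line.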
This fit results in values of \(a=0.6^{+0.1}_{-0.1}\) and \(b=0.02^{+0.03}_{-0.01}\), which define the inflated uncertainty factor \(\epsilon(f_{\rm BH})\), shown in the lower panel of Figure 1. We also show the inflated errors on the inferred \(f_{\rm BH}\) in the top panel (in orange), demonstrating the additional uncertainty needed to make our inferred values consistent with the true values.

There is one snapshot which is not shown in Figure 1 but is worth discussing briefly. This snapshot is matched to two clusters and has a true \(f_{\rm BH}\) of 2.22 per cent, a much larger BH population than any of the other snapshots we match to. We instead infer \(f_{\rm BH}\) values of \(0.4^{+0.3}_{-0.2}\) and \(0.5^{+0.3}_{-0.2}\) per cent, significant underestimations of the true value. Due to the limitations of the existing CMC grid, this is the _only_ example we have of a matched snapshot with an \(f_{\rm BH}\) above 1 per cent. As a single snapshot is not sufficient for us to confidently quantify our performance in this region, we have excluded it from the fitting of the nuisance parameter \(\epsilon\), though it should be noted that including it has a nearly negligible impact on the values of \(a\) and \(b\). In fact, the inferred \(f_{\rm BH}\) values are consistent (within \(\sim 1.3\sigma\)) with the true values, once the errors are inflated using the above relation. Nonetheless, we refrain from further quantifying our method's performance in this higher-\(f_{\rm BH}\) regime until future work, in which a similar analysis will be performed for a larger selection of snapshots, not limited to those that match Milky Way GCs. As discussed below, all of the clusters examined in this work have typical values of \(f_{\rm BH}\) below \(\sim 1\) per cent (both in our inferred values and those of Weatherford et al., 2020), where we are confident in the validation presented here.

Overall, this comparison with mock observations extracted from dynamical simulations lends confidence in the ability of our methods to correctly recover the mass fraction in BHs in GCs, with the important note that the uncertainties on these inferred values may be underestimated by up to \(\sim 0.5\) per cent at a value of \(\sim 1\) per cent.

Figure 1: The \(f_{\rm BH}\) values inferred based on the mock observations extracted from CMC models, against the true values in those models (\(f_{\rm BH,True}\)). The one-to-one line is shown in grey, representing perfect agreement. The median and \(1\sigma\) values, based solely on the statistical uncertainties from the fit, are shown in blue. The inflated errors, based on the nuisance parameter \(\epsilon\) (\(\Delta^{2}=\epsilon^{2}+\sigma_{f_{\rm BH}}^{2}\)), are shown in orange. In the bottom panel, the value of this parameter is shown as a (linear) function of the BH mass fraction in black. The best fitting (median) values of the slope (\(a\)) and intercept (\(b\)) are shown in the top left of the panel.

## 4 Black hole populations

We will now explore the populations of black holes (and other dark remnants) inferred from our best-fitting models. We will examine the distribution of the total mass, mass fraction and amount of black holes found from these fits, and explore possible correlations present between the remnant populations and other cluster properties. We will also discuss in more detail the case of 'core-collapsed' clusters and how we model them.
\begin{table}
\caption{Inferred BH populations of all clusters in our sample: median and \(1\sigma\) credibility intervals of the mass fraction in BHs, total mass in BHs, total number of BHs, mean individual BH mass and mass fraction in all dark remnants. Core-collapsed clusters are marked with an asterisk.}
\begin{tabular}{l c c c c c}
\hline
Cluster & \(f_{\rm BH}\) [\%] & \(M_{\rm BH}\) [\(\rm M_{\odot}\)] & \(N_{\rm BH}\) & \(\bar{m}_{\rm BH}\) [\(\rm M_{\odot}\)] & \(f_{\rm remnant}\) [\%] \\
\hline
NGC104 & \(0.116^{+0.041}_{-0.009}\) & \(1040^{+120}_{-80}\) & \(135^{+13}_{-9}\) & \(7.7^{+0.1}_{-0.1}\) & \(45.2^{+0.1}_{-0.1}\) \\
NGC288 & \(0.09^{+0.09}_{-0.06}\) & \(80^{+80}_{-60}\) & \(6^{+6}_{-4}\) & \(14.088^{+0.006}_{-0.006}\) & \(55.3^{+1.0}_{-1.0}\) \\
NGC362\({}^{*}\) & \(0.081^{+0.005}_{-0.008}\) & \(230^{+10}_{-20}\) & \(16^{+1}_{-2}\) & \(14.032^{+0.002}_{-0.002}\) & \(45.0^{+0.4}_{-0.7}\) \\
NGC1261 & \(0.10^{+0.08}_{-0.06}\) & \(200^{+100}_{-10}\) & \(13^{+10}_{-8}\) & \(14.054^{+0.008}_{-0.010}\) & \(38^{+2}_{-2}\) \\
NGC1851 & \(0.054^{+0.005}_{-0.005}\) & \(180^{+20}_{-20}\) & \(13^{+2}_{-1}\) & \(13.1^{+0.5}_{-0.2}\) & \(41.0^{+0.3}_{-0.3}\) \\
NGC2808 & \(0.100^{+0.006}_{-0.008}\) & \(980^{+50}_{-80}\) & \(63^{+5}_{-6}\) & \(15.5^{+0.4}_{-0.4}\) & \(35.3^{+0.2}_{-0.3}\) \\
NGC3201 & \(0.010^{+0.023}_{-0.010}\) & \(20^{+40}_{-20}\) & \(1^{+3}_{-1}\) & \(14.084^{+0.004}_{-0.005}\) & \(46^{+1}_{-1}\) \\
NGC5024 & \(0.2^{+0.2}_{-0.2}\) & \(1200^{+800}_{-800}\) & \(90^{+60}_{-60}\) & \(14.06^{+0.01}_{-0.20}\) & \(42^{+1}_{-2}\) \\
NGC5139 & \(5.1^{+0.2}_{-0.1}\) & \(164\,000^{+6000}_{-4000}\) & \(12900^{+800}_{-700}\) & \(12.8^{+0.5}_{-0.5}\) & \(52.6^{+0.6}_{-0.5}\) \\
NGC5272 & \(0.36^{+0.06}_{-0.06}\) & \(1800^{+300}_{-300}\) & \(130^{+20}_{-20}\) & \(13.5^{+0.1}_{-0.1}\) & \(45.8^{+0.5}_{-0.7}\) \\
NGC5904 & \(0.05^{+0.05}_{-0.04}\) & \(200^{+200}_{-100}\) & \(20^{+10}_{-10}\) & \(11^{+1}_{-1}\) & \(55.9^{+0.8}_{-0.8}\) \\
NGC5986 & \(0.010^{+0.015}_{-0.010}\) & \(30^{+40}_{-30}\) & \(2^{+3}_{-2}\) & \(14.062^{+0.005}_{-1.389}\) & \(52.7^{+1.0}_{-0.9}\) \\
NGC6093 & \(0.46^{+0.08}_{-0.08}\) & \(1400^{+200}_{-30}\) & \(100^{+20}_{-20}\) & \(13.37^{+0.05}_{-0.10}\) & \(55.5^{+0.8}_{-0.7}\) \\
NGC6121 & \(0.08^{+0.03}_{-0.03}\) & \(80^{+30}_{-30}\) & \(6^{+2}_{-2}\) & \(13.5^{+0.2}_{-0.3}\) & \(64.3^{+0.8}_{-0.8}\) \\
NGC6171 & \(0.07^{+0.04}_{-0.04}\) & \(40^{+30}_{-30}\) & \(3^{+2}_{-2}\) & \(14.0^{+0.1}_{-0.3}\) & \(66.3^{+0.9}_{-0.9}\) \\
NGC6205 & \(0.5^{+0.2}_{-0.2}\) & \(2200^{+600}_{-700}\) & \(160^{+70}_{-50}\) & \(13.8^{+0.1}_{-0.2}\) & \(55^{+1}_{-1}\) \\
NGC6218 & \(0.143^{+0.008}_{-0.013}\) & \(143^{+9}_{-74}\) & \(10.2^{+0.6}_{-1.0}\) & \(14.044^{+0.003}_{-0.003}\) & \(61.5^{+0.7}_{-0.6}\) \\
NGC6254 & \(0.08^{+0.08}_{-0.06}\) & \(200^{+200}_{-100}\) & \(13^{+41}_{-9}\) & \(13.0^{+0.8}_{-2.0}\) & \(56.9^{+0.9}_{-0.8}\) \\
NGC6266\({}^{*}\) & \(0.137^{+0.009}_{-0.009}\) & \(990^{+60}_{-60}\) & \(102^{+0}_{-9}\) & \(8.3^{+0.2}_{-0.2}\) & \(52^{+2}_{-2}\) \\
NGC6341 & \(0.15^{+0.08}_{-0.08}\) & \(500^{+200}_{-200}\) & \(40^{+20}_{-10}\) & \(11.9^{+0.6}_{-1.3}\) & \(52.2^{+1.1}_{-0.8}\) \\
NGC6352 & \(0.061^{+0.010}_{-0.010}\) & \(59^{+8}_{-9}\) & \(5.6^{+0.8}_{-0.8}\) & \(10.6^{+0.1}_{-0.2}\) & \(58^{+1}_{-2}\) \\
NGC6362 & \(0.03^{+0.04}_{-0.02}\) & \(30^{+20}_{-20}\) & \(2^{+2}_{-2}\) & \(12.4^{+1.9}_{-1.0}\) & \(64.9^{+0.9}_{-1.0}\) \\
NGC6366 & \(0.06^{+0.08}_{-0.04}\) & \(20^{+30}_{-10}\) & \(2^{+2}_{-2}\) & \(13.1^{+0.6}_{-2.0}\) & \(55^{+2}_{-2}\) \\
NGC6397\({}^{*}\) & \(0.013^{+0.001}_{-0.001}\) & \(14^{+1}_{-1}\) & \(2.0^{+0.2}_{-0.2}\) & \(7.051^{+0.001}_{-0.001}\) & \(56.8^{+0.7}_{-0.8}\) \\
NGC6541\({}^{*}\) & \(0.000^{+0.004}_{-0.000}\) & \(0^{+9}_{-0}\) & \(0^{+2}_{-0}\) & \(5.6083^{+0.0004}_{-0.002}\) & \(55.7^{+0.8}_{-0.8}\) \\
NGC6624\({}^{*}\) & \(0.51^{+0.02}_{-0.02}\) & \(450^{+20}_{-20}\) & \(42^{+1}_{-1}\) & \(10.73^{+0.05}_{-0.05}\) & \(66.6^{+0.8}_{-0.9}\) \\
NGC6681\({}^{*}\) & \(0.058^{+0.005}_{-0.008}\) & \(56^{+5}_{-4}\) & \(6\dots\) & \(\dots\) & \(\dots\) \\
\hline
\end{tabular}
\end{table}

Figure 2: Violin plots (in blue) of the posterior probability distribution of the mass fraction in BHs (upper panel), the total mass in BHs (middle panel) and total number of BHs (lower panel) in all clusters in our sample, except for NGC 5139, which has a median total mass in BHs of approximately \(1.7\times 10^{5}\) M\({}_{\odot}\) (\(f_{\rm BH}\sim 5\) per cent), and is excluded in order to highlight the distributions of the other clusters. The median, \(1\sigma\) and \(2\sigma\) values are denoted by the horizontal blue dots on each distribution. These errors include only the statistical uncertainties on our data, and thus are likely underestimated (see Section 3). Clusters are sorted based on the mass fraction in black holes. All clusters classified as core-collapsed in Trager et al. (1995) are denoted by an asterisk. The median and \(1\sigma\) results are also shown for the corresponding quantities in Weatherford et al. (2020) (red), Rui et al. (2021c) (purple) and Askar et al. (2018) (green), for all clusters in common with our sample. Values from Weatherford et al. (2020) are computed using the median clustercentric mass segregation parameter \(\Delta_{r50}\) (Table 1 of Weatherford et al., 2020), and any necessary conversions between total mass and mass fraction are computed using our cluster mass estimates (from Paper I).

Figure 2 shows the posterior probability distributions of the mass fraction in BHs (\(f_{\rm BH}\)), the total mass in BHs (\(M_{\rm BH}\)), and total number of BHs (\(N_{\rm BH}\)) inferred from our best-fitting models of most clusters in our sample. The median and \(1\sigma\) credibility intervals of the distributions of these quantities, as well as the mean individual BH mass (\(\bar{m}_{\rm BH}\)) and the mass fraction in all dark remnants (\(f_{\rm remnant}\)), are also presented in Table 1. NGC 5139 (\(\omega\) Cen) is not included in Figure 2, due to the very large inferred amount of black holes (\(f_{\rm BH}=5.1^{+0.2}_{-0.1}\) per cent), but it is discussed in more detail in Section 5.3.

The posterior probability distributions of the model parameters for NGC 6723, as first noted in Paper I, favour a bimodal distribution, with two completely separate peaks of comparable posterior probability in almost all parameters. These two separate sets of models are entirely disconnected and therefore, in most of the proceeding figures in this paper, in order to demonstrate how each behaves with respect to the relations discussed, we have opted to isolate these two peaks and show their medians and widths separately. This cluster and its bimodal posterior probability distributions are examined in further detail in Section 5.3.

A large number of the clusters are consistent, within \(2\sigma\), with harbouring little to no black holes, while the remainder possess, on average, at most a few thousand M\({}_{\odot}\) of stellar-mass black holes, with constituent individual BH masses ranging between \(\sim 5\) and 15 M\({}_{\odot}\). Other than \(\omega\) Cen, it is clear that none of our models favour a very large population of black holes, with all clusters having a mass fraction in black holes between 0 and 1 per cent at the present day.
It must be noted again that, as examined in Section 3, the errors on all of these quantities represent only the statistical uncertainties, and in reality the uncertainties on a quantity like \(f_{\rm BH}\) may be underestimated by between 0 and 0.5 per cent.

### Core-collapsed clusters

After all BHs are dynamically ejected, the visible core 'collapses' (Breen & Heggie, 2013a,b). Globular clusters having undergone core collapse are typically defined based on the shape of their central density profiles, with core-collapsed clusters showing a power-law density profile increasing all the way to their centres, while non core-collapsed clusters possess larger, isothermal cores with a flat central density profile (e.g. Djorgovski & King, 1986; Trager et al., 1995). Core-collapsed clusters are expected to contain very few, if any, black holes at the present day (Breen & Heggie, 2013a,b; Kremer et al., 2020).

In GCs with a population of stellar-mass BHs, core collapse occurs within the BH subsystem but, due to the efficient heat transfer from BHs to stars, the visible core will actually remain large (relative to \(r_{\rm h}\)). The presence of BHs in a cluster may thus play a large role in explaining the observed population of core-collapsed Milky Way GCs, which, given the ages of most clusters, is smaller than would be expected when considering only stellar binaries as the sole mechanism delaying core collapse. It is not until almost all BHs (and the last BH binary; Hurley, 2007) are ejected that a cluster core will collapse and exhibit the defining power-law central density profile (Chatterjee et al., 2013; Kremer et al., 2020). Almost all GCs have likely reached a state of balanced evolution (Henon, 1961; Gieles et al., 2011) but, due to BHs, only a minority of GCs (\(\sim 20\) per cent) _appear_ to be post-collapse.

The nine clusters in our sample defined as core-collapsed in Trager et al. (1995) (NGC 362, NGC 6266, NGC 6397, NGC 6541, NGC 6624, NGC 6681, NGC 6752, NGC 7078, NGC 7099) are denoted in Figure 2 and Table 1 by an asterisk. Our best-fitting models of some of these clusters clearly favour a non-negligible amount of mass in black holes; however, as discussed below, we suspect the results are unreliable for these few clusters.

Part of this discrepancy between the theoretical expectations and the inferred BH populations for some core-collapsed clusters may arise simply due to the limitations of the limepy models themselves. limepy models, by definition, possess an isothermal core, characterized by a flat inner density profile, which is incompatible with the central cusp of core-collapsed clusters (see also Section 3.1.4 of Gieles & Zocchi, 2015). As such, our models may struggle to accurately capture the inner density profiles of these clusters right up to the centre. Indeed, this divergence can be seen in the profiles of the core-collapsed clusters for which we infer substantial BH populations from our best-fitting models, which tend to underestimate the amount of stars within a very small distance from the centre (typically \(\sim 0.1\) pc in these clusters). While, in most core-collapsed clusters, the models may be able to provide a satisfactory fit to the available density data simply by having a sufficiently small core (below the radial reach of the data), in some clusters the shape of other datasets (such as the mass functions) may require a larger core, and cause the fitting procedure to sacrifice the quality of the fit to the central density profiles.
In order to investigate these systems further, models were also fit to these clusters, in the same fashion as before, but with the amount of retained black holes at the present day now fixed to 0 (by fixing the \(\mathrm{BH_{ret}}\) parameter to 0 per cent). As might be expected, the most immediately noticeable change to the model fits is in the number density profiles. Shown in Figure 3 are examples of the changes between the sets of models for NGC 6624 and NGC 7078, the two core-collapsed clusters in our sample with the highest inferred mass fraction in BHs (\(f_{\rm BH}\sim 0.5\) per cent) and the only two where significant differences can be seen between the models with and without BHs. In the other core-collapsed clusters, which favour smaller or negligible BH populations, the fits do not change noticeably. As suggested above, the cores of these models are likely small enough that, even though the models are not truly core-collapsed, they are still able to represent the observed density profiles. In the case of NGC 6624 and NGC 7078, Figure 3 clearly shows that models without BHs _are_ able to also reproduce the central density profiles, and would provide a much better fit in that regime; however, the models containing BHs have a higher overall likelihood, due to the fits to other datasets, which cannot be reproduced as well by the models without BHs.

Overall, it is clear that caution must be applied when attempting to fit core-collapsed clusters using limepy models. The original models (with BHs) are used throughout this paper and in all discussion of BHs; however, the results for these core-collapsed clusters should be regarded with great caution, especially when a large population of BHs is inferred. All core-collapsed clusters are noted in the figures and tables by an asterisk or a red outline. As the larger inferred BH populations of some of these clusters are most likely artificial, the models introduced here, fixed to 0 BHs, were used in Paper I to avoid any impacts on the inferred mass function slopes (though these were found to be minimal).

### Relationships between BH population and other parameters

We next examine how the population of black holes and other remnants in our cluster models correlate with various related parameters. Figure 4 shows the relationship between the high-mass initial mass function exponent \(\alpha_{3}\) (\(\geq 1\) M\({}_{\odot}\)) from Paper I and both the black hole retention fraction \(\mathrm{BH_{ret}}\) and the BH mass fraction \(f_{\rm BH}\). This serves to demonstrate the role of the \(\mathrm{BH_{ret}}\) parameter, which is not _directly_ proportional to the number of BHs. At high values of \(\alpha_{3}\) (i.e. steeper slopes), only a small number of black holes can be formed initially from the IMF, and a higher retention fraction is required to maintain any amount of black hole mass at the present day. The right panel of Figure 4 shows that no clear correlation is present between \(\alpha_{3}\) and the mass fraction in BHs at the present day. The visible pattern instead relates to the relationship with \(\mathrm{BH_{ret}}\); to end up with similar mass fractions in BHs, clusters with higher \(\alpha_{3}\) values produce fewer BHs initially but retain more, while clusters with low \(\alpha_{3}\) values produce more BHs initially but retain few. This of course does not imply a causal relationship between BH mass fraction and either \(\alpha_{3}\) or \(\mathrm{BH_{ret}}\), but merely helps to explain the distribution of \(\mathrm{BH_{ret}}\).
We do find an interesting relationship between the mass fraction of BHs and the parameter \(\delta\), which sets the mass-dependence of the velocity scale and acts as a proxy of mass segregation, as shown in Figure 5. Clusters with little to no mass in BHs tend to converge near values of \(\delta\sim 0.4\)\(-\)0.5, which is typical of evolved and mass-segregated clusters, whereas the clusters with more substantial populations of black holes congregate closer to the lower bound of 0.3. This is in agreement with the study of Peuten et al. (2017), who find, by comparing limepy models against \(N\)-body models with and without black holes, that the majority of mass-segregated clusters should converge to a value of \(\sim 0.5\), but also show that, in models with a significant population of black holes, the degree of mass segregation as traced by the parameter \(\delta\) may be suppressed.

## 5 Discussion

### The evolution of clusters and their BH populations

The co-evolution of a cluster and its BH population depends directly on the BH mass fraction \(f_{\rm BH}\) and the initial density of the cluster. Very early on in the lifetime of a cluster, \(f_{\rm BH}\) will rapidly increase, as stellar mass is lost due to stellar evolution and BHs are formed. This initial population of BHs, and \(f_{\rm BH}\), depends primarily on the cluster metallicity, the shape of the IMF and the velocity distribution of natal kicks imparted on BHs when they form. The BHs initially retained (after kicks) will rapidly segregate to the cluster core. During the subsequent expansionary phase, BHs will be ejected from the core due to dynamical BH-BH interactions, while the lowest mass stars, on the cluster periphery, will be the most affected by tidal losses, leading to a decrease in \(f_{\rm BH}\) (Breen & Heggie, 2013b; Gieles et al., 2021). The amount that \(f_{\rm BH}\) decreases during this stage is dependent on the initial cluster density, relative to the tidal density. Higher density clusters will eject a higher proportion of the initially formed BHs (Banerjee & Kroupa, 2011). Once the cluster becomes tidally limited, the remaining \(f_{\rm BH}\) determines the subsequent evolution of the cluster mass.

Figure 3: The number density profiles of the best-fitting models of NGC 6624 and NGC 7078, with and without allowing for a population of BHs. The number density data used to constrain the models is shown by the orange circles. Background levels have been subtracted from all data. Inset frames show a zoomed view of the model profiles near the cluster cores, to showcase the differences between the sets of models.

Figure 4: Relation between the high-mass initial mass function exponent (\(\alpha_{3}\)) and both the black hole retention fraction parameter (\(\mathrm{BH_{ret}}\)) and the mass fraction in black holes (\(f_{\rm BH}\)) for all clusters, except for NGC 5139, which has a substantially higher \(\mathrm{BH_{ret}}\) of \(\sim 19\) per cent and a more typical \(\alpha_{3}\) value of \(\sim 2.2\). All core-collapsed clusters, whose inferred black hole populations may not be physical, are highlighted by a red outline. The two possible solutions for NGC 6723 are each shown by a square marker.

Figure 5: Relation between the mass-dependent velocity scale parameter (\(\delta\)) and the total mass in black holes for all clusters, except NGC 5139. All core-collapsed clusters, whose inferred black hole populations may not be physical, are highlighted by a red outline. The two possible solutions for NGC 6723 are each shown by a square marker.
Theory (Breen & Heggie, 2013b) and \(N\)-body modelling (Gieles & Gnedin, 2023) suggest that there exists a critical BH mass fraction \(f_{\rm crit}\), at which the dynamical losses of BHs (relative to the total cluster mass loss) will exactly equal \(f_{\rm BH}\), causing \(f_{\rm BH}\) to remain constant until the dissolution of the cluster. Given an \(f_{\rm BH}\) greater (less) than this critical fraction, tidal stellar mass losses will be greater (less) than the dynamical ejections of BHs from the core. Clusters above the critical fraction will progress towards 100 per cent BHs, while clusters below will eject most of the BHs and begin to evolve similarly to a 0 BH system. This critical fraction was determined to be \(\sim 10\) per cent (Breen & Heggie, 2013b) for idealized single-mass clusters filling their tidal radius; however, recent \(N\)-body modelling placed it closer to 2.5 per cent for clusters with a full mass spectrum (Gieles et al., 2021).

In our sample, while there is a range of values between individual clusters, we find that nearly all clusters have a \(f_{\rm BH}\) below 1 per cent (except for \(\omega\) Cen). Gieles & Gnedin (2023) modelled the Milky Way GC population using a parametrization for the mass-loss rate from GCs with BHs, assuming that the initial GC half-mass density is 30 times higher than Roche filling. Under this assumption, they predicted that the majority of GCs should now have \(f_{\rm BH}\sim 2\) per cent. Our empirical results are lower than this, which could indicate that their assumed initial densities are not high enough. However, the authors do note that the constant density (relative to the tidal density) they have used for all GCs is only meant to approximate the _average_ density of GCs. In reality, a distribution of ratios of initial densities over the tidal densities must exist, which would lead to both a population of GCs that can eject all their BHs (the highest density GCs) and a population of extended GCs (like the Palomar GCs) which fill the Roche volume (the lowest density GCs) (Baumgardt et al., 2010). Lower density GCs would also have a higher mass-loss rate, and therefore there may exist a survival bias in the clusters visible at the present day, biasing our sample. It must also be noted that the initial \(f_{\rm BH}\) is highly dependent on the IMF of the clusters. The IMF of our clusters is likely bottom-light compared to a Kroupa IMF (see Paper I and Baumgardt et al., 2023). This would lead to an increased initial \(f_{\rm BH}\), and thus require an even higher density in order to reach the critical fraction. As the \(f_{\rm BH}\) we see in our clusters is well below the critical mass fraction noted by Gieles et al. (2021), it is clear that the \(f_{\rm BH}\) in these clusters must have been below this critical amount if/when they became tidally limited. The clusters can thus be expected to continue to evolve towards 0 BHs.

Early theoretical expectations of the behaviour of GCs and their compact BH subsystem suggested that the subsystem would become dynamically decoupled from the rest of the stars in the cluster, succumbing to the _Spitzer instability_ (Spitzer, 1969, 1987), and rapidly ejecting nearly all BHs. However, more recent modelling work has shown that clusters need not necessarily become Spitzer unstable (Morscher et al., 2013, 2015).
In our models, a large fraction (\(24/34\)) of the clusters remain Spitzer stable (according to the classification given in Spitzer, 1987), consisting mostly of clusters with \(f_{\rm BH}\lesssim 0.1\) per cent. Breen & Heggie (2013a), examining an idealized, two-component model of clusters, suggested a relationship between the mass fraction in BHs and the relative size of the BH subsystem, proportional to \(f_{\rm BH}^{3/2}\) for Spitzer unstable systems and \(f_{\rm BH}\) for stable systems. While it is difficult to compare directly with this relation, fit to their simple models, they do predict a BH subsystem size of \(r_{\rm h,BH}/r_{\rm h}\sim 0.04\) near \(f_{\rm BH}\sim 0.1\) per cent, which is similar to the bulk of our clusters.

Baumgardt et al. (2023), examining a suite of \(N\)-body models compared to observations, demonstrated that the trend seen between the global low-mass stellar mass function slopes, as derived from observations, and the dynamical age of clusters could be reproduced only by models with a maximum BH retention fraction of 30 per cent (immediately after natal kicks), ruling out a high initial retention rate. Their models with higher initial BH retention fractions (which suppress mass segregation and the preferential loss of low-mass stars) cannot reproduce the trend since they are not able to produce clusters strongly depleted in low-mass stars. After the subsequent BH hardening and dynamical ejections, they estimate an average surviving BH mass fraction of \(f_{\rm BH}\sim 0.03-0.1\) per cent at the present day, in generally good agreement with many of our clusters.

It should be noted that any analysis of our results with respect to the complete population of Milky Way GCs could be biased by the choice of clusters examined. The sample of clusters chosen was limited primarily by the availability of good quality data, especially mass function depth and radial coverage, requiring adequate deep _HST_ photometry. These criteria bias our sample somewhat towards more massive and nearby clusters. In comparison to the overall population of GCs (as given in Harris, 1996, 2010 edition), our chosen clusters tend to be slightly more massive, have a smaller core radius and a lower galactocentric radius. This could indeed bias our sample slightly towards clusters with lower \(f_{\rm BH}\), as it does not contain many low-density or 'fluffy' outer halo GCs, which may have much higher \(f_{\rm BH}\) (Gieles et al., 2021).

Figure 6: Relations between the mass fraction in BHs (top panel) and the mass fraction in all dark remnants (bottom panel) with the remaining mass fraction for all clusters, except for NGC 5139 (\(M_{\rm today}/M_{\rm initial}=0.50\), \(f_{\rm BH}=5.1\) per cent, \(f_{\rm remnant}=52.6\) per cent). All core-collapsed clusters, whose inferred BH populations may be unreliable, are highlighted in red. The two possible solutions for NGC 6723 are each shown by a square marker.

We can also examine how the populations of remnants in our clusters relate to their dynamical age. Figure 6 shows the relationship between the fraction of mass in BHs and the fraction of the cluster mass in all remnants (WD, NS and BHs), against the dynamical age
of the clusters, estimated based on the _remaining mass fraction_, as was described in Paper I:

\[\frac{M_{\rm today}}{M_{\rm initial}}=0.55\times\left(1-\frac{\rm Age}{\tau_{\rm life}}\right), \tag{2}\]

where the factor 0.55 reflects the typical assumption that stellar evolution reduces the cluster to \(\sim 55\) per cent of its initial mass within the first Gyr of its evolution, and the dissolution time \(\tau_{\rm life}\) represents the estimated total lifetime of the cluster and is taken from Baumgardt et al. (2023).

While we can see here that clusters with substantial populations of BHs tend to be less evolved, there is no strong correlation between the BH mass fraction and the dynamical age of the clusters, likely indicative of their different initial conditions and the effects they would have on the evolution of the BH population over time, as discussed above. The evolution of the _remnant_ mass fraction, which includes all types of stellar remnants, shows a stronger relationship with the dynamical age of the clusters, as might be expected and as has been previously reported by Sollima & Baumgardt (2017). As a cluster evolves and loses mass, the mass lost is preferentially in the form of lower-mass stars from the outer parts of the cluster, rather than the heavier remnants, and as such the fraction of mass in remnants should increase as the cluster's low-mass stars are depleted.

Interestingly, some of the most dynamically evolved clusters have around 70 per cent of their mass in dark remnants at the present day, something worth bearing in mind when interpreting the mass-to-light ratios and inferring masses of unresolved GCs in distant galaxies. These relatively high remnant mass fractions are in good agreement with the results of Sollima & Baumgardt (2017), for the clusters in common, although it should be noted that these authors adopt a simpler prescription for the mass function of remnants and a fixed, canonical high-mass IMF, unlike the flexibility allowed in our models for both of these quantities. The \(N\)-body models of Baumgardt & Makino (2003) also showed that evolved clusters can consist of up to nearly 70 per cent WDs, by mass, in line with our results. As detailed in Paper I and Baumgardt et al. (2023), our dynamical masses are also in excellent agreement with other recent works, and therefore the cluster mass-to-light ratios implied by our results are in keeping with the recent literature (e.g. Baumgardt et al., 2020). These mass-to-light ratios are confined to a narrow range near \(M/L_{V}\sim 2\) and have been shown to be consistent with the mass-to-light ratios predicted by stellar population models once the depletion of low-mass stars is taken into account (Baumgardt et al., 2020). Interestingly, the higher remnant mass fractions compensate for the lack of low-mass stars to maintain the mass-to-light ratios within this narrow range, a finding consistent with the results of Sollima et al. (2012).

### Comparison with literature results

Also shown in Figure 2 are comparisons of the distribution of BH mass fraction, total mass in BHs, and/or number of BHs in our models with those of Askar et al. (2018), Weatherford et al. (2020) and Rui et al. (2021c). In order to estimate the BH mass fraction in a number of Milky Way GCs, Weatherford et al. (2020) compared the amount of visible mass segregation in these clusters to the anti-correlation found in Weatherford et al.
(2018) between the degree of mass segregation in a cluster and its BH population in the _Cluster Monte Carlo_ (CMC) catalogue of models. In similar fashion to their analysis, we also scale their computed estimates of \(f_{\rm BH}\) (based on the median clustercentric mass segregation parameter \(\Delta_{r50}\)) by the total cluster mass determined by our models in order to compare the total mass in BHs. Askar et al. (2018) predicted the amount of BHs in a number of Milky Way GCs based on the correlations found in Arca Sedda et al. (2018) between the density of inner BH-subsystems and the central surface brightness of the clusters in the _Monte Carlo Cluster Simulator_ (MOCCA) survey database. A somewhat analogous analysis may be found in Rui et al. (2021c), who matched the surface brightness and velocity dispersion profiles of 26 Milky Way GCs to a grid of CMC models and explored seven in more detail (see Section 3 for more information). The number of BHs reported for the three clusters in common with Rui et al. (2021c) is shown in Figure 2. Further analysis and comparison with other studies of interesting individual clusters is presented below in Section 5.3.

The majority of our clusters agree well, within \(2\sigma\), with the amount of mass in BHs estimated in these studies; however, notable discrepancies can be seen between individual clusters. In Figure 7, the residuals (normalized by the combined \(1\sigma\) uncertainties) between our BH mass fraction results and those of the literature sources discussed above are shown. We can see a clear trend showing that for clusters where we infer small BH populations, we tend to predict fewer BHs than other studies, while for clusters where we infer larger BH populations, we tend to predict more BHs than previously reported in the literature. This is especially pronounced in the comparisons with Askar et al. (2018), where the clusters with smaller BH populations do not even agree within \(2\sigma\). In other words, the distributions of total BH masses between clusters predicted by Askar et al. (2018) and Weatherford et al. (2020) are somewhat flatter (i.e. less variation in BH mass fraction across the sample) than our results. Despite the differences between specific clusters in our samples, we do agree with the overall conclusion that, in general, the mass fraction of BHs retained in clusters at the present day is small, between 0 and 1 per cent.

Figure 7: The (error weighted) residual of the BH mass fraction between literature sources and our models, with respect to the BH mass fraction of our models, for all clusters overlapping with our sample, excepting any core-collapsed clusters. Weatherford et al. (2020) values are computed using the median clustercentric mass segregation parameter \(\Delta_{r50}\) (Table 1 of Weatherford et al., 2020). Conversions between the total BH masses presented by Askar et al. (2018) and BH mass fractions are computed using our total cluster mass estimates. The two possible solutions for NGC 6723 are each shown by a square marker.

Our analysis of the BH populations of individual clusters may be more robust than many of these literature results, which rely on general correlations between models with only a few varied initial parameters, and are fit on only a single observed property (mass segregation between three stellar populations for Weatherford et al. (2020) and the central surface luminosity for Askar et al.
(2018)), whereas we self-consistently include the effect of BHs in our fits of numerous cluster observables with many free parameters. In particular, as noted in Weatherford et al. (2020), the correlations of Askar et al. (2018) rely on a number of chained parametric fits, which may bias the final values, and the models used to construct the correlations in Arca Sedda et al. (2018) exclude those with \(N_{\rm BH}<15\), which may lead to overpredictions in their inferred numbers of BHs. Rui et al. (2021c) matched available CMC model snapshots to profiles of surface brightness and velocity dispersion observations, but this study is based on a limited grid of models with only a few varied parameters. We are thus able to, in most clusters, place tighter constraints on the mass in BHs through our fits. It should, however, be noted again that our uncertainties account solely for the statistical uncertainties on the parameter fits and could thus be underestimated by around 0.5 per cent in the clusters with the largest BH populations (see Section 3). In addition, there are a number of astrophysical processes, such as binaries and cluster rotation, that we do not model and that could possibly have a small effect on the inferred amount of BHs.

Another possible major source of differences between these results is the (initial) mass function formulation, which was identified by Weatherford et al. (2020) and Rui et al. (2021c) as a potential source of unexplored uncertainty in their analyses. The freedom in the shape of our (initial) mass function, on a per-cluster basis, allows us to best explore the population of BHs and other heavy remnants, as well as their relative abundance compared to lower-mass stars. The generally more bottom-light (i.e. depleted in low-mass stars) mass functions found in our fits (see Paper I) and in similar analyses (Baumgardt et al., 2023), compared to the canonical Kroupa IMF assumed by Weatherford et al. (2020) and others, may for example affect the mass segregation correlation found in Weatherford et al. (2018) and thus the amount of mass in BHs found by Weatherford et al. (2020). However, the exact effects of such a bottom-light IMF on cluster evolution, and the evolution of BHs within the clusters, remain to be further explored.

We can also place our BH results in the context of a number of studies that have searched for the presence of an intermediate-mass black hole (IMBH) in various Milky Way GCs, based on a number of indirect dynamical inferences (e.g. Noyola et al., 2008; McNamara et al., 2012; Kiziltan et al., 2017; Perera et al., 2017). All of these reported IMBH candidates have been controversial, and in many cases have been rebutted, either by the introduction of improved data, or by other, more plausible, physical interpretations of the data (e.g. van der Marel & Anderson, 2010; Zocchi et al., 2017, 2019; Gieles et al., 2018; Baumgardt et al., 2019b). Our models are able to reproduce well the observables of all clusters in our sample without the need for any central IMBH. It should, however, be noted that it is currently not possible to self-consistently include an IMBH in our models to compare directly and to explore any partial degeneracy possible between a central IMBH and a central concentration of stellar-mass BHs (e.g. Lutzgendorf et al., 2013). It has also been shown that an IMBH with a mass fraction below \(\sim 2\) per cent of the cluster mass may produce similar dynamical effects as a population of stellar-mass BHs (Aros & Vesperini, 2023).
Some of the clusters previously claimed to host an IMBH are discussed in more detail in Section 5.3 below.

### Comments on individual clusters

We now compare our results with other dynamical studies of BHs in GCs, based on a number of different methods, in certain particularly interesting clusters.

#### 5.3.1 NGC 5139

NGC 5139, or \(\omega\) Cen, is the most massive Milky Way GC, and stands apart from the population of Milky Way GCs due to its mass and stellar populations (e.g. Harris, 1996, 2010 edition; Baumgardt et al., 2019a). It has been suggested that \(\omega\) Cen may not be a classical globular cluster, but rather the possible remnant nuclear star cluster of an accreted and disrupted dwarf galaxy (e.g. Meza et al., 2005). It has also been hypothesized to harbour an elusive IMBH (Noyola et al., 2008; van der Marel & Anderson, 2010). While our models are able to fit the large amount of available data very well, this cluster does not appear in many of the figures above, given its significantly larger inferred mass fraction in BHs. Our fits favour a mass fraction in BHs of \(5.1^{+0.2}_{-0.1}\) per cent (\(164\,000^{+6000}_{-4000}\) M\({}_{\odot}\)), largely concentrated within the central regions of the cluster and driven, not by a top-heavy IMF producing more BHs initially (\(\alpha_{3}=2.16^{+0.04}_{-0.08}\), close to Salpeter; Paper I), but by a very large BH retention fraction (\(20^{+3}_{-5}\) per cent). This amount of BHs is substantially higher than any other cluster in our sample, but is in excellent agreement with the results of other studies (e.g. Zocchi et al., 2019; Baumgardt et al., 2019b). Zocchi et al. (2019) modelled \(\omega\) Cen using two-component (one representing stellar-mass BHs and one capturing all other lower-mass remnants and visible stars) limepy models. Our agreement with their results is interesting, given our inclusion of the full mass spectrum and our fitting of the visible mass function, and reinforces the assertion of Zocchi et al. (2019) that a two-component model is a reasonable approximation when modelling \(\omega\) Cen, given its large amount of BHs, long two-body relaxation time, and young dynamical age.

Many claims have been made for the presence of an IMBH in the centre of \(\omega\) Cen. As in the studies of Zocchi et al. (2019) and Baumgardt et al. (2019b), our models suggest that an IMBH is not needed to match the data; however, we are also limited by the extent of the kinematical data available in the very centre of the cluster. As was also noted in Zocchi et al. (2019, Figure 5), our models would be discrepant with the velocity dispersion profiles of the different IMBH-containing models presented by Noyola et al. (2008), van der Marel & Anderson (2010) and Baumgardt (2017) mostly within the innermost few arcseconds of the cluster, where data is currently lacking. As such we cannot say for certain whether some of the dark mass we find may actually be in the form of a central IMBH, given the degeneracy between the effects produced by such an IMBH and a central concentration of smaller BHs.

There is one caveat to our results: the _Gaia_ proper motion anisotropy profile shows that \(\omega\) Cen transitions, at about 20 arcmin from the centre, from being radially anisotropic to being slightly tangentially anisotropic. Our limepy models are unable to reproduce any amount of tangential anisotropy (Gieles & Zocchi, 2015), and thus cannot match this feature. Instead, when tangentially biased anisotropy is present in our data,
the models will favour a mostly isotropic fit as a compromise between the radial and tangential regimes (Peuten et al., 2017). There is a degeneracy present between the degree of radial anisotropy in a cluster and its mass in black holes (Zocchi et al., 2017); however, the difference in the BH mass fraction between the isotropic and radially anisotropic models of Zocchi et al. (2019) is only on the order of \(\sim 0.7\) per cent. While further exploration of the effects of tangential anisotropy on mass models of \(\omega\) Cen would be interesting, given our excellent fits of all other datasets, this should have a negligible impact on the results presented here. #### 5.3.2 NGC 104 NGC 104, or 47 Tuc, is one of the nearest and most massive Milky Way GCs, and as such has been extensively studied in the past. Recent modelling efforts using both Monte Carlo cluster models (Weatherford et al., 2020; Ye et al., 2022) and multimass limepy models (Hénault-Brunet et al., 2020) have provided predictions on the amount of BHs in the cluster. As shown in Figure 8, our models tend to favour a similar amount of mass in BHs (\(f_{\rm BH}=0.116^{+0.014}_{-0.009}\) per cent) to other studies, and are consistent within \(2\sigma\) with all of them. It is notable that we are not only able to constrain an upper limit on the mass in BHs (such as was presented in Hénault-Brunet et al., 2020), but can also now place clear and tight bounds on it, thanks to the updated treatment of BHs in our multimass models and the updated mass function data we are using. It was postulated by Kiziltan et al. (2017) that 47 Tuc may host an IMBH of around 2300 M\({}_{\odot}\), based on the analysis of the accelerations of millisecond pulsars in the cluster and comparisons with \(N\)-body simulations. However, follow-up studies using equilibrium models fit to various cluster observables (Hénault-Brunet et al., 2020; Mann et al., 2019, although see Mann et al., 2020) determined that there was no need for an IMBH to explain the observations, and that a central concentration of less-massive dark remnants could explain the data. This conclusion is again reinforced by our results, which favour a smaller central concentration of stellar-mass black holes, alongside a population of white dwarfs and neutron stars. #### 5.3.3 NGC 6397 NGC 6397 is a metal-poor, core-collapsed Milky Way GC at a very short heliocentric distance (\(\sim 2.4\) kpc; Harris, 1996, 2010 edition), which has been well studied in the past. Kamann et al. (2016) first showed that models including an IMBH or a very centrally concentrated cluster of stellar-mass BHs of \(\sim 600\) M\({}_{\odot}\) could best reproduce the central kinematics of this cluster. Vitral and Mamon (2021) showed, in turn, that Jeans models with more robust proper motion fitting disfavoured an IMBH, and instead proposed an inner subcluster of unresolved dark remnants measuring \(\sim 1000-2000\) M\({}_{\odot}\), which they suggested is dominated by stellar-mass BHs. However, Rui et al. (2021b,c) demonstrated, through fits of CMC models, that no BHs were required to explain the kinematics of NGC 6397 and that the inner density profile of the cluster argues against the presence of BHs. This is in line with the core-collapsed nature of NGC 6397 and is reinforced by the mass-segregation-based estimates of Weatherford et al. (2020). These results suggest instead that the central dark subcluster could be made up largely of white dwarfs (Kremer et al., 2021). 
A subsequent re-examination of the Jeans modelling of NGC 6397 by Vitral et al. (2022), with updated proper motion datasets, lowered the claimed mass of the central "dark" cluster to \(\sim 800\) M\({}_{\odot}\), and concurred with a subcluster dominated by white dwarfs, instead of stellar-mass BHs. Our best-fitting models of NGC 6397, despite the caveats of modelling core-collapsed clusters discussed in Section 4.1, favour a negligible population of black holes (\(f_{\rm BH}=0.013^{+0.001}_{-0.001}\) per cent), consistent with the results of Weatherford et al. (2020) and Rui et al. (2021b). Our models also favour a large population of white dwarfs and neutron stars dominating the core of the cluster, and it is clear that our results concur with the general consensus that NGC 6397 hosts a massive central concentration of WDs, little to no stellar-mass BHs, and no IMBH. #### 5.3.4 NGC 3201 NGC 3201 is a nearby Milky Way GC which has a notably low and flat core density profile (i.e. far from core-collapsed), and is the host of three stellar-mass black hole candidates detected in detached binaries (Giesers et al., 2018, 2019). CMC models of NGC 3201 (Kremer et al., 2018, 2019) suggested that models with \(\sim 120\) BHs were best able to recreate the velocity dispersion and surface brightness profiles, in general agreement with the results of Askar et al. (2018) and the inner subcluster of dark remnants found by Vitral et al. (2022). Weatherford et al. (2020) in turn favoured a slightly lower, but still consistent, \(\sim 44\) BHs. In contrast, our best-fitting models of NGC 3201 favour a remarkably small amount of BHs, with the distribution peaking at 0 BHs (95 per cent probability of containing fewer than 10 BHs). This is somewhat surprising, given the literature results and the shape of the cluster density profile, but is technically in agreement (within \(2\sigma\)) with the results of Weatherford et al. (2020), and follows the trend in our results of predicting fewer BHs than other studies in this regime (see Figure 7). It should be noted that the fit of our models to the number density profile is not entirely satisfactory, as it slightly overestimates the surface density in the outer parts of the cluster, and slightly underestimates the core density. This is likely due to the irregular shape of the inner surface brightness profile published by Trager et al. (1995), and could have an impact on the amount of BHs recovered in our models. #### 5.3.5 NGC 6723 As first mentioned in Section 4 and in Paper I, the posterior parameter distributions of NGC 6723 are uniquely bimodal, with two clear peaks in most parameters representing models of nearly equally good fits. Though it is difficult to discern what combination of observables may cause this bimodality to appear, given the good fits provided by both modes, the most noticeable difference is in the velocity anisotropy profiles. Models surrounding one peak are entirely isotropic, while the others are significantly radially anisotropic. The two are also clearly divided in their populations of BHs, as shown in Figure 9, with the peak of the distribution of mass in BHs lying at almost exactly 0 (\(f_{\rm BH}=0.01^{+0.02}_{-0.01}\) per cent) in the isotropic models, and the anisotropic models peaking near \(f_{\rm BH}=0.85^{+0.05}_{-0.06}\) per cent, among the highest BH mass fractions in our sample. This important divide may also drive the difference in other related parameters. 
This relationship between velocity anisotropy and the amount of black holes matches the degeneracy explored by Zocchi et al. (2017) in models of \(\omega\) Cen. These two modes can therefore be thought of as two separate and independent sets of suitable models; one (isotropic) without BHs and one (anisotropic) with. Differentiating the two models is difficult, and more data, especially proper motions in the outer regions, which are currently covered by only a few _Gaia_ datapoints in this cluster, would be required to better constrain the velocity anisotropy in NGC 6723. The models with BHs agree remarkably well with the results of other studies (Askar et al., 2018; Weatherford et al., 2020), as can be seen in Figure 9, though it should be noted again that the uncertainties in these studies are large, and within 2\(\sigma\) both would agree with 0-BH models as well. The two models also follow the various expected relationships discussed in Section 4.2 independently, as can be seen in the figures of this section, where the two peaks were split and shown separately. #### 5.3.6 NGC 6121 NGC 6121, or M 4, is the nearest GC (\(d\sim 1.85\) kpc; Baumgardt & Vasiliev 2021) and as such has been extensively observed. Recently, Vitral et al. (2023) utilized Jeans modelling of M 4, followed up by a comparison with Monte Carlo models, to fit _HST_ and _Gaia_ kinematic data, suggesting an excess of dark mass of around \(800\pm 300\) M\({}_{\odot}\), concentrated within the inner 0.016-0.034 pc of the cluster. They also explore the possibility that this mass is concentrated in a single IMBH. Our models have a comparable amount of dark mass within the central regions (dominated almost entirely by white dwarfs) but in a less concentrated mass profile, reaching a similar cumulative mass of around \(800\) M\({}_{\odot}\) near 0.1 pc instead. Our models also suggest a small population of BHs, totalling around \(80^{+30}_{-30}\) M\({}_{\odot}\) (\(f_{\rm BH}=0.08^{+0.03}_{-0.03}\) per cent). This mass in BHs is significantly smaller than the dark mass suggested by the Jeans models of Vitral et al. (2023); notably, however, it is in good agreement with the best-matching CMC model they also presented. One key difference between our analyses lies in the data used. Vitral et al. (2023) introduce new HST proper motion data which reaches deeper into the core of the cluster than the Libralato et al. (2022) data we use, and their velocity dispersion profile seems to increase toward the centre in the inner few arcseconds, albeit with very large uncertainties. However, given these large uncertainties and our very good fits to all other datasets, it is unlikely that our results would change significantly with the inclusion of these new datapoints. ## 6 Conclusions In this work, we have utilized the best-fitting multimass models of 34 GCs, first presented in Paper I, to explore the BH and remnant populations of a large sample of Milky Way clusters, yielding a number of important conclusions:

1. The models allow us to infer best-fitting posterior probability distributions for the total mass, number and mass fraction of BHs in our sample of clusters. These results indicate that a large number of the GCs are consistent with hosting little to no BHs, with the largest BH populations reaching masses in BHs up to a few thousand M\({}_{\odot}\) and mass fractions of around 1 per cent (save for \(\omega\) Cen, for which \(f_{\rm BH}\sim 5\) per cent).
2. We find an anti-correlation between the BH mass fraction and the \(\phi\) parameter, a proxy of mass segregation: clusters with little mass in BHs congregate around \(\phi\sim 0.5\), while \(\phi\) is lower (closer to 0.3) and mass segregation is increasingly suppressed in clusters with more substantial BH populations, in agreement with the findings of Peuten et al. (2017).
3. As the \(f_{\rm BH}\) values we see in the clusters of our sample are well below the critical mass fraction of \(\sim 2.5\) per cent noted by Gieles et al. (2021), these clusters are expected to continue to evolve towards the point at which they will have ejected all of their BHs. The inferred present-day \(f_{\rm BH}\) encode information about the dynamical evolution and initial density of GCs that can be used in future work to infer the initial conditions of the population of Milky Way GCs.
4. A clear correlation is also found between the dynamical age of the clusters and the overall remnant mass fraction, which increases as clusters evolve and lose low-mass stars. Our results show that the most evolved GCs in our sample are made up of around 70 per cent dark remnants, by mass, at the present day.
5. We find typically good agreement overall, within uncertainties, between our results and those of other studies inferring BH populations in GCs in the literature (e.g. Askar et al. 2018; Weatherford et al. 2020), but with notable discrepancies for some individual clusters. Our inferred masses in BHs are, generally, slightly smaller (larger) than these studies in clusters with small (large) BH populations.
6. Closer inspection of a number of interesting clusters with previous claims of hosting an elusive IMBH reveals no need for such an object to explain the large amount of data used in our model fitting.

Figure 8: Posterior probability distributions of the BH mass fraction, total mass and number of BHs in 47 Tuc. The results (median and 1\(\sigma\)) of various recently inferred values from the literature are shown in the bottom panels.

## Acknowledgements

ND is grateful for the support of the Durland Scholarship in Graduate Research. VHB acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2020-05990. MG acknowledges support from the Ministry of Science and Innovation (EUR2020-112157, PID2021-125485NB-C22, CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033) and from AGAUR (SGR-2021-01069). This research was enabled in part by support provided by ACENET (www.ace-net.ca) and the Digital Research Alliance of Canada ([https://alliancecan.ca](https://alliancecan.ca)). This work has also benefited from a variety of Python packages including astropy (Astropy Collaboration et al., 2013, 2018), dynesty (Speagle, 2020), emcee (Foreman-Mackey, 2016), h5py (Collette et al., 2022), cmctoolkit (Rui et al., 2021), JAX (Bradbury et al., 2018), blackjax (Lao & Louf, 2020), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), scipy (Virtanen et al., 2020) and shapely (Gillies et al., 2022).

## Data Availability

The data underlying this article are available at [https://github.com/mmdickson/GCfit-results](https://github.com/mmdickson/GCfit-results).
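All of the per-cluster values quoted above are posterior medians with asymmetric 1\(\sigma\) credible bounds (cf. Figure 8). As a minimal illustration of how such summaries reduce to percentile arithmetic on posterior samples, consider the sketch below; it is not part of the original analysis, and the sample arrays and distribution shapes are placeholders, since the file layout of the repository linked above is not described here.

```python
import numpy as np

# Placeholder posterior samples standing in for the real chains
# (which live in the GCfit-results repository; format not specified here).
rng = np.random.default_rng(42)
f_bh_samples = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=20_000)  # per cent
n_bh_samples = rng.poisson(lam=4, size=20_000)

# Median with asymmetric 1-sigma bounds (15.87th / 84.13th percentiles),
# the convention behind values such as f_BH = 0.116 (+0.014 / -0.009).
lo, med, hi = np.percentile(f_bh_samples, [15.87, 50.0, 84.13])
print(f"f_BH = {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f}) per cent")

# Probability statements are simply fractions of posterior samples, e.g. the
# "95 per cent probability of fewer than 10 BHs" quoted for NGC 3201.
print(f"P(N_BH < 10) = {np.mean(n_bh_samples < 10):.2f}")
```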
2310.09225
The parabolic quaternionic Monge-Ampère type equation on hyperKähler manifolds
We prove the long time existence and uniqueness of solution to a parabolic quaternionic Monge-Amp\`{e}re type equation on a compact hyperK\"{a}hler manifold. We also show that after normalization, the solution converges smoothly to the unique solution of the Monge-Amp\`{e}re equation for $(n-1)$-quaternionic psh functions.
Jixiang Fu, Xin Xu, Dekai Zhang
2023-10-13T16:24:06Z
http://arxiv.org/abs/2310.09225v1
# The parabolic quaternionic Monge-Ampère type equation on hyperKähler manifolds ###### Abstract. We prove the long time existence and uniqueness of the solution to a parabolic quaternionic Monge-Ampère type equation on a compact hyperKähler manifold. We also show that after normalization, the solution converges smoothly to the unique solution of the Monge-Ampère equation for \((n-1)\)-quaternionic psh functions. ## 1. Introduction A hypercomplex manifold is a smooth manifold \(M\) together with a triple \((I,J,K)\) of complex structures satisfying the quaternionic relation \(IJ=-JI=K.\) A hyperhermitian metric on a hypercomplex manifold \((M,I,J,K)\) is a Riemannian metric \(g\) which is hermitian with respect to \(I\), \(J\) and \(K\). On a hyperhermitian manifold \((M,I,J,K,g)\), let \(\Omega=\omega_{J}-i\omega_{K}\) where \(\omega_{J}\) and \(\omega_{K}\) are the fundamental forms corresponding to \(J\) and \(K\) respectively. Then \(g\) is called hyperKähler (HK) if \(d\Omega=0\), and called hyperKähler with torsion (HKT) if \(\partial\Omega=0\). Throughout this paper we use \(\partial\) and \(\bar{\partial}\) to denote the complex partial differential operators with respect to the complex structure \(I\). Analogous to the complex Calabi-Yau equation on Kähler manifolds, which was solved by Yau [30], Alesker and Verbitsky introduced a quaternionic Calabi-Yau equation on hyperhermitian manifolds in [4] \[(\Omega+\partial\partial_{J}u)^{n} =e^{f}\Omega^{n},\] \[\Omega+\partial\partial_{J}u >0, \tag{1.1}\] where \(f\) is a given smooth function on \(M\) and \(\partial_{J}:=J^{-1}\circ\overline{\partial}\circ J\). They conjectured that the equation is solvable on HKT manifolds with holomorphically trivial canonical bundle with respect to \(I\) and further obtained the \(C^{0}\) estimate in this setting [4]. Alesker [1] solved the equation on a flat hyperKähler manifold. In [2] Alesker and Shelukhin proved the \(C^{0}\) estimate without any extra assumptions and the proof was later simplified by Sroka [23]. Recently, Dinew and Sroka [11] solved the equation on a compact HK manifold. Bedulli, Gentili and Vezzoni [6] considered the parabolic method. More partial results can be found in [3, 4, 5, 16, 19, 24, 31] and the conjecture remains open. By adopting the techniques of Dinew and Sroka [11], we considered the quaternionic form-type Calabi-Yau equation in [14] on compact HK manifolds, which is parallel to the complex case where the form-type Calabi-Yau equation was proposed by Fu-Wang-Wu [12, 13] and solved by Tosatti-Weinkove [26] on Kähler manifolds. Specifically, let \((M,I,J,K,g,\Omega)\) be a hyperhermitian manifold of quaternionic dimension \(n\), and \(g_{0}\) another hyperhermitian metric on \(M\) with induced \((2,0)\)-form \(\Omega_{0}\). Given a smooth function \(f\) on \(M\), the quaternionic form-type Calabi-Yau equation is \[\Omega_{u}^{n}=e^{f+b}\Omega^{n} \tag{1.2}\] in which \(b\) is a uniquely determined constant, and \(\Omega_{u}\) is determined by \[\Omega_{u}^{n-1}=\Omega_{0}^{n-1}+\partial\partial_{J}u\wedge\Omega^{n-2} \tag{1.3}\] where \(\Omega_{0}^{n-1}+\partial\partial_{J}u\wedge\Omega^{n-2}\) is strictly positive. In fact, we solved the following Monge-Ampère equation for \((n-1)\)-quaternionic psh functions, which is equivalent to (1.2): 
\[\begin{split}\big{(}\Omega_{h}+&\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}u)\Omega-\partial\partial_{J}u)\big{)}^{n}=e^{f+b}\Omega^{n}\\ &\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}u)\Omega-\partial\partial_{J}u)>0,\end{split} \tag{1.4}\] where \(\Omega_{h}\) is a given strictly positive \((2,0)\)-form with respect to \(I\). In a slightly different context, on locally flat compact HK manifolds, Gentili and Zhang [17] solved a class of fully non-linear elliptic equations including (1.4). They later extended their work to the parabolic setting in [18]. In this article, we consider the parabolic version of (1.4) on a compact hyperKähler manifold \[\frac{\partial}{\partial t}u=\log\frac{\big{(}\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}u)\Omega-\partial\partial_{J}u)\big{)}^{n}}{\Omega^{n}}-f, \tag{1.5}\] with \(u(\cdot,0)=u_{0}\in C^{\infty}(M,\mathbb{R})\) satisfying \[\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}u_{0})\Omega-\partial\partial_{J}u_{0})>0. \tag{1.6}\] Our main result is as follows. **Theorem 1.1**.: _Let \((M,I,J,K,g,\Omega)\) be a compact hyperKähler manifold of quaternionic dimension \(n\), and \(\Omega_{h}\) a strictly positive \((2,0)\)-form with respect to \(I\). Let \(f\) be a smooth function on \(M\). Then there exists a unique solution \(u\) to (1.5) on \(M\times[0,\infty)\) with \(u(\cdot,0)=u_{0}\) satisfying (1.6). And if we normalize \(u\) by_ \[\tilde{u}:=u-\frac{\int_{M}u\,\Omega^{n}\wedge\overline{\Omega}^{n}}{\int_{M}\Omega^{n}\wedge\overline{\Omega}^{n}}, \tag{1.7}\] _then \(\tilde{u}\) converges smoothly to a function \(\tilde{u}_{\infty}\) as \(t\to\infty\), and \(\tilde{u}_{\infty}\) is the unique solution to (1.4) up to a constant \(\tilde{b}\in\mathbb{R}\)._ This gives a parabolic solution to the original equation (1.4). There are plenty of results on parabolic flows on compact complex manifolds, for example, [8, 10, 20, 21, 22, 32]. The article is organized as follows. In Section 2, we introduce some basic notations and useful lemmas. In Section 3, we prove the \(u_{t}\) estimate and the \(C^{0}\) estimate. We derive the \(C^{1}\) estimate in Section 4 and the complex Hessian estimate in Section 5. Theorem 1.1 is proved in Section 6. ## 2. Preliminaries On a hyperhermitian manifold \((M,I,J,K,g)\) of quaternionic dimension \(n\), we denote by \(\Lambda^{p,q}_{I}(M)\) the \((p,q)\)-forms with respect to \(I\). A form \(\alpha\in\Lambda^{2k,0}_{I}(M)\) is called \(J\)-real if \(J\alpha=\overline{\alpha}\), and denoted by \(\alpha\in\Lambda^{2k,0}_{I,\mathbb{R}}(M)\). In particular, \(\Omega=\omega_{J}-i\omega_{K}\) is a \(J\)-real \((2,0)\)-form. **Definition 2.1** ([14], Definition 2.2).: _A \(J\)-real \((2,0)\)-form \(\alpha\) is said to be positive (resp. strictly positive) if \(\alpha(X,\overline{X}J)\geq 0\) (resp. \(\alpha(X,\overline{X}J)>0\)) for any non-zero \((1,0)\)-vector \(X\). We denote by \(\Lambda^{2,0}_{I,\mathbb{R}}(M)_{>0}\) all strictly positive \(J\)-real \((2,0)\)-forms._ Note that \(\Omega\) is determined by \(g\) and is strictly positive. Conversely, any \(\Omega\in\Lambda^{2,0}_{I,\mathbb{R}}(M)_{>0}\) induces a hyperhermitian metric by \(g=\operatorname{Re}(\Omega(\cdot,\cdot J))\). Thus there is a bijection between strictly positive \(J\)-real \((2,0)\)-forms and hyperhermitian metrics. **Definition 2.2**.: _For \(\chi\in\Lambda^{2,0}_{I,\mathbb{R}}(M)\), define_ \[S_{m}(\chi)=\frac{C_{n}^{m}\chi^{m}\wedge\Omega^{n-m}}{\Omega^{n}}\quad\text{ for}\quad 0\leq m\leq n. 
\tag{2.1}\] In particular, for \(u\in C^{\infty}(M,\mathbb{R})\) we have \[S_{1}(\partial\partial_{J}u)=\frac{1}{2}\Delta_{I,g}u. \tag{2.2}\] For convenience we denote \[\widetilde{\Omega}=\Omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u)\Omega-\partial\partial_{J}u). \tag{2.3}\] It is easily checked that \(\widetilde{\Omega}\) is a \(J\)-real \((2,0)\)-form; thus one can define the corresponding hyperhermitian metric and the induced fundamental form by \[g_{u}=\operatorname{Re}(\widetilde{\Omega}(\cdot,\cdot J)),\quad\omega_{u}=g_{u}(\cdot I,\cdot). \tag{2.4}\] **Lemma 2.3**.: (2.5) \[\omega_{u}=\omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u)\omega-\frac{1}{2}(i\partial\bar{\partial}u-iJ\partial\bar{\partial}u)).\] Proof.: It is shown in [24, Proposition 3.2] that \[\operatorname{Re}(\partial\partial_{J}u(\cdot I,\cdot J))=\frac{1}{2}(i\partial\bar{\partial}u-iJ\partial\bar{\partial}u).\] Hence by definition \[\omega_{u} =g_{u}(\cdot I,\cdot)=\operatorname{Re}(\widetilde{\Omega}(\cdot I,\cdot J))\] \[=\operatorname{Re}(\Omega_{h}(\cdot I,\cdot J))+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u)\operatorname{Re}(\Omega(\cdot I,\cdot J))-\operatorname{Re}(\partial\partial_{J}u(\cdot I,\cdot J)))\] \[=\omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u)\omega-\frac{1}{2}(i\partial\bar{\partial}u-iJ\partial\bar{\partial}u)).\qed\] We also need the following lemma. **Lemma 2.4** ([14], Lemma 3.2).: (2.6) \[S_{1}(\partial\partial_{J}u)=S_{1}(\widetilde{\Omega})-S_{1}(\Omega_{h}),\] (2.7) \[\partial\partial_{J}u=(n-1)\Omega_{h}-S_{1}(\Omega_{h})\Omega+S_{1}(\widetilde{\Omega})\Omega-(n-1)\widetilde{\Omega}.\] **Remark 2.5**.: _On a hyperhermitian manifold \((M,I,J,K,g,\Omega)\) of quaternionic dimension \(n\), we can find local \(I\)-holomorphic geodesic coordinates such that \(\Omega\) and another \(J\)-real \((2,0)\)-form \(\widetilde{\Omega}\) are simultaneously diagonalizable at a point \(x\in M\), i.e._ \[\Omega=\sum_{i=0}^{n-1}dz^{2i}\wedge dz^{2i+1},\quad\widetilde{\Omega}=\sum_{i=0}^{n-1}\widetilde{\Omega}_{2i2i+1}dz^{2i}\wedge dz^{2i+1},\] _and the Christoffel symbols of \(\nabla^{O}\) and first derivatives of \(J\) vanish at \(x\), i.e._ \[J^{\bar{l}}_{k,i}=J^{\bar{l}}_{k,\bar{i}}=J^{l}_{\bar{k},i}=J^{l}_{\bar{k},\bar{i}}=0.\] _We call such local coordinates the normal coordinates around \(x\)._ The linearized operator \(\mathcal{P}\) of the flow (1.5) is derived in the following lemma. **Lemma 2.6**.: _The linearized operator \(\mathcal{P}\) has the form:_ \[\mathcal{P}(v)=v_{t}-\frac{A\wedge\partial\partial_{J}(v)}{\widetilde{\Omega}^{n}}, \tag{2.8}\] _where \(A=\frac{n}{n-1}\big{(}S_{n-1}(\widetilde{\Omega})\Omega^{n-1}-\widetilde{\Omega}^{n-1}\big{)}\) and \(v\in C^{2,1}(M\times[0,T))\)._ Proof.: Let \(w(s)\) be a variation of \(u\) and \(v=\left.\frac{d}{ds}\right|_{s=0}w(s)\). It is sufficient to compute the variation of \(\widetilde{\Omega}^{n}=(\Omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u)\Omega-\partial\partial_{J}u))^{n}\). 
We have \[\delta(\widetilde{\Omega}^{n})= \frac{d}{ds}\Big{|}_{s=0}(\Omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}w(s))\Omega-\partial\partial_{J}w(s)))^{n}\] \[= \frac{n}{n-1}\widetilde{\Omega}^{n-1}\wedge(S_{1}(\partial\partial_{J}v)\Omega-\partial\partial_{J}v)\] \[= \frac{n}{n-1}\widetilde{\Omega}^{n-1}\wedge\Omega\cdot\frac{n\Omega^{n-1}\wedge\partial\partial_{J}v}{\Omega^{n}}-\frac{n}{n-1}\widetilde{\Omega}^{n-1}\wedge\partial\partial_{J}v\] \[= \frac{n}{n-1}S_{n-1}(\widetilde{\Omega})\Omega^{n-1}\wedge\partial\partial_{J}v-\frac{n}{n-1}\widetilde{\Omega}^{n-1}\wedge\partial\partial_{J}v\] \[= A\wedge\partial\partial_{J}v.\] Then \(\mathcal{P}(v)=v_{t}-\delta(\log\frac{\widetilde{\Omega}^{n}}{\Omega^{n}})=v_{t}-\frac{A\wedge\partial\partial_{J}(v)}{\widetilde{\Omega}^{n}}\) as claimed. ## 3. \(u_{t}\) estimate and \(C^{0}\) estimate We first prove the uniform estimate of \(u_{t}\). **Lemma 3.1**.: _Let \(u\) be a solution to (1.5) on \(M\times[0,T)\). Then there exists a constant \(C\) depending only on the fixed data \((I,J,K,g,\Omega,\Omega_{h})\) and \(f\) such that_ \[\sup_{M\times[0,T)}\big{|}u_{t}\big{|}\leq C. \tag{3.1}\] Proof.: One can see that \(u_{t}\) satisfies \[\mathcal{P}(u_{t})=\frac{\partial}{\partial t}(u_{t})-\frac{A\wedge\partial\partial_{J}(u_{t})}{\widetilde{\Omega}^{n}}=0. \tag{3.2}\] For any \(T_{0}\in(0,T)\), by the maximum principle, \[\max_{M\times[0,T_{0}]}\big{|}u_{t}\big{|}\leq \max_{M}|u_{t}(x,0)|\] \[\leq \max_{M}\Big{|}\log\frac{\big{(}\Omega_{h}+\frac{1}{n-1}(S_{1}(\partial\partial_{J}u_{0})\Omega-\partial\partial_{J}u_{0})\big{)}^{n}}{\Omega^{n}}\Big{|}+\max_{M}|f|.\] Since \(T_{0}\) is arbitrary, we have the desired estimate. Using the \(C^{0}\) estimate for the elliptic equation, which has been proved by Sroka [24] and Fu-Xu-Zhang [14], we have the following. **Lemma 3.2**.: _Let \(u\) be a solution to (1.5) on \(M\times[0,T)\). Then there exists a uniform constant \(C\) depending only on the fixed data \((I,J,K,g,\Omega,\Omega_{h})\) and \(f\) such that_ \[\sup_{M\times[0,T)}|\tilde{u}|\leq\sup_{t\in[0,T)}\big{(}\sup_{x\in M}u(x,t)-\inf_{x\in M}u(x,t)\big{)}\leq C. \tag{3.3}\] Proof.: The flow is equivalent to the following \[\widetilde{\Omega}^{n}=e^{u_{t}+f}\Omega^{n}. \tag{3.4}\] Since \(u_{t}\) is uniformly bounded, we can apply the \(C^{0}\) estimate for the elliptic equation to get, for any \(t\in(0,T)\), \[|u(x,t)-\sup\limits_{M}u(\cdot,t)|\leq C,\quad\forall x\in M. \tag{3.5}\] Since \(\int\limits_{M}\tilde{u}(\cdot,t)\,\Omega^{n}\wedge\overline{\Omega}^{n}=0\), there exists \(x_{0}\in M\) such that \(\tilde{u}(x_{0},t)=0\). Then we have \[|\tilde{u}(x,t)|= \,|\tilde{u}(x,t)-\tilde{u}(x_{0},t)|=|u(x,t)-u(x_{0},t)|\] \[\leq \,|u(x,t)-\sup\limits_{M}u(\cdot,t)|+|u(x_{0},t)-\sup\limits_{M}u(\cdot,t)|\] \[\leq \,2C,\quad\forall x\in M.\] Hence the \(C^{0}\) estimate follows. ## 4. \(C^{1}\) Estimate Although the gradient estimate is unnecessary for the proof of the main result, we provide it since the gradient estimate for fully nonlinear equations is of independent interest. **Theorem 4.1**.: _Let \(u\) be a solution to (1.5) on \(M\times[0,T)\). Then there exists a constant \(C\) depending only on the fixed data \((I,J,K,g,\Omega,\Omega_{h})\) and \(f\) such that_ \[\sup\limits_{M\times[0,T)}|du|_{g}\leq C. 
\tag{4.1}\] Proof.: A simple computation in local coordinates shows that \[n\partial u\wedge\partial_{J}u\wedge\Omega^{n-1}=\frac{1}{4}|du|_{g}^{2}\Omega^{n}.\] Define \[\beta\coloneqq\frac{1}{4}|du|_{g}^{2}.\] Following [7], we consider \[G=\log\beta-\varphi(\tilde{u}),\] where \(\varphi\) is a function to be determined and \(\tilde{u}\) is the normalization of \(u\). For any \(T_{0}\in(0,T)\), suppose \(\max\limits_{M\times[0,T_{0}]}G=G(p_{0},t_{0})\) with \((p_{0},t_{0})\in M\times[0,T_{0})\). We want to show that \(\beta(p_{0},t_{0})\) is uniformly bounded. If \(t_{0}=0\), we have the estimate. In the following, we assume \(t_{0}>0\). We choose the normal coordinates around \(p_{0}\) (see Remark 2.5), and all calculations are carried out at \((p_{0},t_{0})\). \[0\leq\partial_{t}G =\frac{\beta_{t}}{\beta}-\varphi^{\prime}\tilde{u}_{t};\] \[\partial G =\frac{\partial\beta}{\beta}-\varphi^{\prime}\partial u=0;\] \[\partial_{J}G =\frac{\partial_{J}\beta}{\beta}-\varphi^{\prime}\partial_{J}u=0;\] \[\partial\partial_{J}G =\frac{\partial\partial_{J}\beta}{\beta}-\frac{\partial\beta\wedge\partial_{J}\beta}{\beta^{2}}-\varphi^{\prime\prime}\partial u\wedge\partial_{J}u-\varphi^{\prime}\partial\partial_{J}u\] \[=\frac{\partial\partial_{J}\beta}{\beta}-((\varphi^{\prime})^{2}+\varphi^{\prime\prime})\partial u\wedge\partial_{J}u-\varphi^{\prime}\partial\partial_{J}u.\] Then we have \[0\leq \mathcal{P}(G)=G_{t}-\frac{\partial\partial_{J}G\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}\] \[=\frac{\beta_{t}}{\beta}-\varphi^{\prime}\tilde{u}_{t}-\frac{\partial\partial_{J}\beta\wedge A\wedge\overline{\Omega}^{n}}{\beta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}+((\varphi^{\prime})^{2}+\varphi^{\prime\prime})\frac{\partial u\wedge\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}+\varphi^{\prime}\frac{\partial\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}. \tag{4.2}\] We first deal with \(\partial_{t}\beta\). By taking \(\partial_{t}\) on both sides of \(\beta\Omega^{n}=n\partial u\wedge\partial_{J}u\wedge\Omega^{n-1}\), we get \[\beta_{t}=\sum_{j=0}^{2n-1}(u_{t,j}u_{\overline{j}}+u_{j}u_{t,\overline{j}}). \tag{4.3}\] We next compute \(\partial\partial_{J}\beta\). 
Taking \(\partial_{J}\) on both sides of \(\beta\overline{\Omega}^{n}=n\overline{\partial}u\wedge\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\) and noticing \(\partial_{J}\overline{\Omega}=0\) (since \(\Omega\) is hyperKähler), we have \[\partial_{J}\beta\wedge\overline{\Omega}^{n}=n\partial_{J}\overline{\partial}u\wedge\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}-n\overline{\partial}u\wedge\partial_{J}\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}.\] Then taking \(\partial\) on both sides, we obtain \[\partial\partial_{J}\beta\wedge\overline{\Omega}^{n}= n\partial\partial_{J}\overline{\partial}u\wedge\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}+n\partial_{J}\overline{\partial}u\wedge\partial\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\] \[-n\partial\overline{\partial}u\wedge\partial_{J}\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}+n\overline{\partial}u\wedge\partial\partial_{J}\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}.\] From the equation \[\widetilde{\Omega}^{n}=e^{u_{t}+f}\Omega^{n}, \tag{4.4}\] by taking \(\overline{\partial}\) on both sides we get \[n(\bar{\partial}S_{1}(\partial\partial_{J}u)\wedge\Omega-\bar{\partial}\partial\partial_{J}u)\wedge\widetilde{\Omega}^{n-1}=(n-1)(\bar{\partial}e^{u_{t}+f}\wedge\Omega^{n}-n\bar{\partial}\Omega_{h}\wedge\widetilde{\Omega}^{n-1}).\] The left-hand side can be calculated as follows. \[n(\bar{\partial}S_{1}(\partial\partial_{J}u)\wedge\Omega-\bar{\partial}\partial\partial_{J}u)\wedge\widetilde{\Omega}^{n-1}\] \[= \,n(\bar{\partial}S_{1}(\partial\partial_{J}u)\wedge\Omega^{n}\cdot\frac{\Omega\wedge\widetilde{\Omega}^{n-1}}{\Omega^{n}}-\bar{\partial}\partial\partial_{J}u\wedge\widetilde{\Omega}^{n-1})\] \[= \,n(\bar{\partial}\big{(}\frac{\partial\partial_{J}u\wedge\Omega^{n-1}}{\Omega^{n}}\cdot\Omega^{n}\big{)}\cdot S_{n-1}(\widetilde{\Omega})-\bar{\partial}\partial\partial_{J}u\wedge\widetilde{\Omega}^{n-1})\] \[= \,(S_{n-1}(\widetilde{\Omega})\Omega^{n-1}-\widetilde{\Omega}^{n-1})\wedge n\bar{\partial}\partial\partial_{J}u\] \[= \,(n-1)A\wedge\bar{\partial}\partial\partial_{J}u.\] Hence we obtain \[A\wedge n\overline{\partial}\partial\partial_{J}u=-n^{2}\widetilde{\Omega}^{n-1}\wedge\overline{\partial}\Omega_{h}+n\overline{\partial}e^{u_{t}+f}\wedge\Omega^{n}.\] By taking \(\overline{\partial_{J}}\) on both sides of (4.4), we obtain \[A\wedge n\overline{\partial_{J}}\partial\partial_{J}u=-n^{2}\widetilde{\Omega}^{n-1}\wedge\overline{\partial_{J}}\Omega_{h}+n\overline{\partial_{J}}e^{u_{t}+f}\wedge\Omega^{n}.\] Thus for the third term of (4.2), we have \[\partial\partial_{J}\beta\wedge A\wedge\overline{\Omega}^{n}=I_{1}+I_{2}+n\partial_{J}\overline{\partial}u\wedge\partial\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\wedge A-n\partial\overline{\partial}u\wedge\partial_{J}\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\wedge A \tag{4.5}\] where \[I_{1} =(-n^{2}\widetilde{\Omega}^{n-1}\wedge\overline{\partial}\Omega_{h}+n\overline{\partial}e^{u_{t}+f}\wedge\Omega^{n})\wedge\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1},\] \[I_{2} =(n^{2}\widetilde{\Omega}^{n-1}\wedge\overline{\partial_{J}}\Omega_{h}-n\overline{\partial_{J}}e^{u_{t}+f}\wedge\Omega^{n})\wedge\overline{\partial}u\wedge\overline{\Omega}^{n-1}.\] By direct computation, \[\partial_{J}\overline{\partial}u =\sum u_{\overline{j}i}J^{-1}d\overline{z^{i}}\wedge d\overline{z^{j}},\] \[\partial\overline{\partial_{J}}u =\sum u_{ij}dz^{j}\wedge J^{-1}dz^{i},\] 
\[\partial\overline{\partial}u =\sum u_{i\overline{j}}dz^{i}\wedge d\overline{z^{j}},\] \[\partial_{J}\overline{\partial_{J}}u =\sum u_{i\overline{j}}J^{-1}d\overline{z^{j}}\wedge J^{-1}dz^{i},\] the third term of (4.5) becomes \[n\partial_{J}\overline{\partial}u\wedge\partial\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\wedge A=\frac{1}{n-1}\sum_{k=0}^{n-1}\sum_{j=0}^{2n-1}(\sum_{i\neq k}\frac{1}{\widetilde{\Omega}_{2i2i+1}})(|u_{2kj}|^{2}+|u_{2k+1j}|^{2})\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}; \tag{4.6}\] and the fourth term \[-n\partial\overline{\partial}u\wedge\partial_{J}\overline{\partial_{J}}u\wedge\overline{\Omega}^{n-1}\wedge A=\frac{1}{n-1}\sum_{k=0}^{n-1}\sum_{j=0}^{2n-1}(\sum_{i\neq k}\frac{1}{\widetilde{\Omega}_{2i2i+1}})(|u_{2k\overline{j}}|^{2}+|u_{2k+1\overline{j}}|^{2})\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}. \tag{4.7}\] For \(I_{1}\) and \(I_{2}\) we have \[\begin{split} I_{1}&=-n^{2}\widetilde{\Omega}^{n-1}\wedge\overline{\partial}\Omega_{h}\wedge\overline{\partial}_{J}u\wedge\overline{\Omega}^{n-1}-n\overline{\partial_{J}}u\wedge\overline{\partial}e^{u_{t}+f}\wedge\Omega^{n}\wedge\overline{\Omega}^{n-1}\\ &=-\sum_{i=0}^{n-1}\sum_{j=0}^{2n-1}\frac{(\Omega_{h})_{2i2i+1,\overline{j}}u_{j}}{\widetilde{\Omega}_{2i2i+1}}\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}+\sum_{j=0}^{2n-1}u_{j}(u_{t}+f)_{\overline{j}}\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}\end{split} \tag{4.8}\] and \[\begin{split} I_{2}&=n\widetilde{\Omega}^{n-1}\wedge\overline{\partial_{J}}\Omega_{h}\wedge\overline{\partial}u\wedge\overline{\Omega}^{n-1}+\overline{\partial}u\wedge\overline{\partial_{J}}e^{u_{t}+f}\wedge\Omega^{n}\wedge\overline{\Omega}^{n-1}\\ &=-\sum_{i=0}^{n-1}\sum_{j=0}^{2n-1}\frac{(\overline{\Omega}_{h})_{2i2i+1,j}u_{\overline{j}}}{\widetilde{\Omega}_{2i2i+1}}\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}+\sum_{j=0}^{2n-1}u_{\overline{j}}(u_{t}+f)_{j}\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}.\end{split} \tag{4.9}\] Combining (4.6), (4.7), (4.8), (4.9), we obtain the estimate of (4.5) \[\begin{split}\frac{\partial\partial_{J}\beta\wedge A\wedge\overline{\Omega}^{n}}{\beta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}&=-\frac{1}{\beta}\sum_{i=0}^{n-1}\sum_{j=0}^{2n-1}\frac{(\Omega_{h})_{2i2i+1,\overline{j}}u_{j}+(\overline{\Omega}_{h})_{2i2i+1,j}u_{\overline{j}}}{\widetilde{\Omega}_{2i2i+1}}\\ &+\frac{1}{\beta}\sum_{j=0}^{2n-1}\left(u_{j}(u_{t}+f)_{\overline{j}}+u_{\overline{j}}(u_{t}+f)_{j}\right)\\ &+\frac{1}{(n-1)\beta}\sum_{k=0}^{n-1}\sum_{j=0}^{2n-1}\sum_{i\neq k}\frac{|u_{2kj}|^{2}+|u_{2k+1j}|^{2}+|u_{2k\overline{j}}|^{2}+|u_{2k+1\overline{j}}|^{2}}{\widetilde{\Omega}_{2i2i+1}}.\end{split} \tag{4.10}\] Again by direct computation, the fourth term of (4.2) is \[\partial u\wedge\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}=\frac{1}{n-1}\sum_{i=0}^{n-1}(\sum_{k\neq i}\frac{1}{\widetilde{\Omega}_{2k2k+1}})(|u_{2i}|^{2}+|u_{2i+1}|^{2})\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}. \tag{4.11}\] For the fifth term of (4.2), we compute \[\begin{split}\partial\partial_{J}u\wedge A&=\frac{n}{n-1}\partial\partial_{J}u\wedge(\frac{n\widetilde{\Omega}^{n-1}\wedge\Omega}{\Omega^{n}}\Omega^{n-1}-\widetilde{\Omega}^{n-1})\\ &=\frac{n}{n-1}(S_{1}(\partial\partial_{J}u)\Omega-\partial\partial_{J}u)\wedge\widetilde{\Omega}^{n-1}\\ &=n(\widetilde{\Omega}^{n}-\Omega_{h}\wedge\widetilde{\Omega}^{n-1}).\end{split} \tag{4.12}\] By compactness of \(M\), there exists \(\epsilon>0\) such that \(\Omega_{h}\geq\epsilon\Omega\). 
Hence we obtain \[\begin{split}\varphi^{\prime}\frac{\partial\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}&=n\varphi^{\prime}-n\varphi^{\prime}\frac{\Omega_{h}\wedge\widetilde{\Omega}^{n-1}\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}\\ &\leq n\varphi^{\prime}-\epsilon\varphi^{\prime}\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}.\end{split} \tag{4.13}\] We assume \(\beta\gg 1\), otherwise we are finished. By (4.3), (4.10), (4.11) and (4.13), the inequality (4.2) becomes \[\begin{split} 0\leq&-\frac{1}{\beta}\sum_{i=0}^{2n-1}(u_{i}(f)_{\overline{i}}+u_{\overline{i}}(f)_{i})\\ &+\frac{(\varphi^{\prime})^{2}+\varphi^{\prime\prime}}{n-1}\sum_{i=0}^{n-1}(\sum_{k\neq i}\frac{1}{\widetilde{\Omega}_{2k2k+1}})(|u_{2i}|^{2}+|u_{2i+1}|^{2})\\ &+n\varphi^{\prime}-(\epsilon\varphi^{\prime}-C_{1}\frac{\sum|u_{j}|}{\beta}-C_{2}\frac{\sum|u_{\overline{j}}|}{\beta})\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}-\varphi^{\prime}\tilde{u}_{t}.\end{split} \tag{4.14}\] The first term is bounded from above. Now we take \[\varphi(s)=\frac{\log(2s+C_{0})}{2}, \tag{4.15}\] where \(C_{0}\) is determined by the \(C^{0}\) estimate. Then (4.14) becomes \[C_{3}\geq C_{4}\sum_{i=0}^{n-1}(\sum_{k\neq i}\frac{1}{\widetilde{\Omega}_{2k2k+1}})(|u_{2i}|^{2}+|u_{2i+1}|^{2})+C_{5}\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}. \tag{4.16}\] Thus for any fixed \(i\) \[\widetilde{\Omega}_{2i2i+1}\geq\frac{C_{5}}{C_{3}}\geq C.\] By equation (4.4) we also have \[\frac{1}{\widetilde{\Omega}_{2i2i+1}}=e^{-u_{t}-f}\prod_{j\neq i}\widetilde{\Omega}_{2j2j+1}\geq\frac{C^{n-1}}{\sup_{M}e^{u_{t}+f}},\ 0\leq i\leq n-1.\] Then by (4.16) we obtain that \(\beta\) is uniformly bounded. ## 5. Bound on \(\partial\partial_{J}u\) **Theorem 5.1**.: _Let \(u\) be a solution to (1.5) on \(M\times[0,T)\). Then there exists a constant \(C\) depending only on the fixed data \((I,J,K,g,\Omega,\Omega_{h})\) and \(f\) such that_ \[\sup_{M\times[0,T)}|\partial\partial_{J}u|_{g}\leq C. \tag{5.1}\] Proof.: For simplicity denote \[\eta=S_{1}(\partial\partial_{J}u).\] Consider the function \[G=\log\eta-\varphi(\tilde{u})\] where \(\varphi\) is the same as before. For any \(T_{0}\in(0,T)\), suppose \(\max\limits_{M\times[0,T_{0}]}G=G(p_{0},t_{0})\) with \((p_{0},t_{0})\in M\times[0,T_{0})\). We want to show that \(\eta(p_{0},t_{0})\) is uniformly bounded. We choose the normal coordinates around \(p_{0}\). All the calculations are carried out at \((p_{0},t_{0})\). 
We have \[0\leq\partial_{t}G =\frac{\eta_{t}}{\eta}-\varphi^{\prime}\tilde{u}_{t},\] \[\partial G =\frac{\partial\eta}{\eta}-\varphi^{\prime}\partial u=0,\] \[\partial_{J}G =\frac{\partial_{J}\eta}{\eta}-\varphi^{\prime}\partial_{J}u=0,\] \[\partial\partial_{J}G =\frac{\partial\partial_{J}\eta}{\eta}-((\varphi^{\prime})^{2}+\varphi^{\prime\prime})\partial u\wedge\partial_{J}u-\varphi^{\prime}\partial\partial_{J}u.\] We further have \[0 \leq\mathcal{P}(G)=G_{t}-\frac{\partial\partial_{J}G\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}\] \[=\frac{\eta_{t}}{\eta}-\varphi^{\prime}\tilde{u}_{t}-\frac{\partial\partial_{J}\eta\wedge A\wedge\overline{\Omega}^{n}}{\eta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}+((\varphi^{\prime})^{2}+\varphi^{\prime\prime})\frac{\partial u\wedge\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}+\varphi^{\prime}\frac{\partial\partial_{J}u\wedge A\wedge\overline{\Omega}^{n}}{\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}. \tag{5.2}\] The last two terms were dealt with in the previous section. Since \[\eta\Omega^{n}=n\partial\partial_{J}u\wedge\Omega^{n-1},\] by taking \(\partial_{t}\) on both sides we have for \(\eta_{t}\) in the first term \[\eta_{t}=u_{t,p\overline{p}}. \tag{5.3}\] We now focus on \(\partial\partial_{J}\eta\) in the third term of (5.2). By definition \(\eta\) is real, and \[\eta\overline{\Omega}^{n}=n\bar{\partial}\bar{\partial}_{J}u\wedge\overline{\Omega}^{n-1}.\] Under the hyperKähler condition \(\mathrm{d}\Omega=0\), differentiating the above equation twice gives \[\partial\partial_{J}\eta\wedge\overline{\Omega}^{n}=n\partial\partial_{J}\bar{\partial}\bar{\partial}_{J}u\wedge\overline{\Omega}^{n-1}=n\bar{\partial}\bar{\partial}_{J}\partial\partial_{J}u\wedge\overline{\Omega}^{n-1}. \tag{5.4}\] We know that (see (2.7)) \[\partial\partial_{J}u=(n-1)\Omega_{h}-S_{1}(\Omega_{h})\Omega+S_{1}(\widetilde{\Omega})\Omega-(n-1)\widetilde{\Omega}.\] Thus \[\bar{\partial}\bar{\partial}_{J}\partial\partial_{J}u=(n-1)\bar{\partial}\bar{\partial}_{J}\Omega_{h}-\bar{\partial}\bar{\partial}_{J}S_{1}(\Omega_{h})\wedge\Omega+\bar{\partial}\bar{\partial}_{J}S_{1}(\widetilde{\Omega})\wedge\Omega-(n-1)\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}, \tag{5.5}\] where we used the hyperKähler condition on \(\Omega\). 
Now we have \[\partial\partial_{J}\eta\wedge A\wedge\overline{\Omega}^{n} =nA\wedge\bar{\partial}\bar{\partial}_{J}\partial\partial_{J}u\wedge\overline{\Omega}^{n-1}\] \[=n(n-1)A\wedge\bar{\partial}\bar{\partial}_{J}\Omega_{h}\wedge\overline{\Omega}^{n-1}-n\bar{\partial}\bar{\partial}_{J}S_{1}(\Omega_{h})\wedge A\wedge\Omega\wedge\overline{\Omega}^{n-1}\] \[\quad+n\bar{\partial}\bar{\partial}_{J}S_{1}(\widetilde{\Omega})\wedge A\wedge\Omega\wedge\overline{\Omega}^{n-1}-n(n-1)A\wedge\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\overline{\Omega}^{n-1} \tag{5.6}\] Note that \[A\wedge\Omega=\frac{n}{n-1}\big{(}S_{n-1}(\widetilde{\Omega})\Omega^{n-1}-\widetilde{\Omega}^{n-1}\big{)}\wedge\Omega=S_{n-1}(\widetilde{\Omega})\Omega^{n}\] and \[\bar{\partial}\bar{\partial}_{J}S_{1}(\widetilde{\Omega})\wedge\Omega^{n}=n\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\Omega^{n-1}.\] The third term of (5.6) becomes \[n\bar{\partial}\bar{\partial}_{J}S_{1}(\widetilde{\Omega})\wedge A\wedge\Omega\wedge\overline{\Omega}^{n-1} =n\bar{\partial}\bar{\partial}_{J}S_{1}(\widetilde{\Omega})\wedge(\Omega^{n}\cdot S_{n-1}(\widetilde{\Omega}))\wedge\overline{\Omega}^{n-1}\] \[=n^{2}S_{n-1}(\widetilde{\Omega})\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\Omega^{n-1}\wedge\overline{\Omega}^{n-1}.\] The fourth term is \[n(n-1)A\wedge\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\overline{\Omega}^{n-1}=n^{2}S_{n-1}(\widetilde{\Omega})\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\Omega^{n-1}\wedge\overline{\Omega}^{n-1}-n^{2}\widetilde{\Omega}^{n-1}\wedge\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\overline{\Omega}^{n-1}.\] The first two terms of (5.6) are similar and we get \[\partial\partial_{J}\eta\wedge A\wedge\overline{\Omega}^{n} =n^{2}\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\widetilde{\Omega}^{n-1}\wedge\overline{\Omega}^{n-1}-n^{2}\bar{\partial}\bar{\partial}_{J}\Omega_{h}\wedge\widetilde{\Omega}^{n-1}\wedge\overline{\Omega}^{n-1}\] and \[\frac{\partial\partial_{J}\eta\wedge A\wedge\overline{\Omega}^{n}}{\eta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}} =n^{2}\frac{\bar{\partial}\bar{\partial}_{J}\widetilde{\Omega}\wedge\widetilde{\Omega}^{n-1}\wedge\overline{\Omega}^{n-1}}{\eta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}-n^{2}\frac{\bar{\partial}\bar{\partial}_{J}\Omega_{h}\wedge\widetilde{\Omega}^{n-1}\wedge\overline{\Omega}^{n-1}}{\eta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}\] \[=\frac{1}{\eta}\sum_{i=0}^{n-1}\sum_{p=0}^{2n-1}\frac{\widetilde{\Omega}_{2i2i+1,p\bar{p}}}{\widetilde{\Omega}_{2i2i+1}}-\frac{1}{\eta}\sum_{i=0}^{n-1}\sum_{p=0}^{2n-1}\frac{(\Omega_{h})_{2i2i+1,p\bar{p}}}{\widetilde{\Omega}_{2i2i+1}}\] \[\geq\frac{1}{\eta}\sum_{i=0}^{n-1}\sum_{p=0}^{2n-1}\frac{\widetilde{\Omega}_{2i2i+1,p\bar{p}}}{\widetilde{\Omega}_{2i2i+1}}-\frac{C_{1}}{\eta}\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}. \tag{5.7}\] We now rewrite the right-hand side of (5.7) using the equation \[\mathrm{Pf}(\widetilde{\Omega}_{ij})=e^{u_{t}+f}\mathrm{Pf}(\Omega_{ij}) \tag{5.8}\] where \(\Omega^{n}=n!\mathrm{Pf}(\Omega_{ij})dz^{0}\wedge\cdots\wedge dz^{2n-1}\). Taking the logarithm of both sides, \[\log\mathrm{Pf}(\widetilde{\Omega}_{ij})=u_{t}+f+\log\mathrm{Pf}(\Omega_{ij}). \tag{5.9}\] Since \(\bar{\partial}\Omega=0\), we have \(\bar{\partial}\mathrm{Pf}(\Omega)=0\). 
By taking \(\bar{\partial}\) of (5.9) and using \(\mathrm{Pf}(\widetilde{\Omega}_{ij})^{2}=\det(\widetilde{\Omega}_{ij})\), we get \[\frac{1}{2}\sum\widetilde{\Omega}^{ij}\widetilde{\Omega}_{ji,\bar{p}}=u_{t,\overline{p}}+f_{\bar{p}}. \tag{5.10}\] By taking \(\partial\) of both sides we obtain \[\frac{1}{2}\sum\widetilde{\Omega}^{ij}\widetilde{\Omega}_{ji,\bar{p}p}=\frac{1}{2}\sum\widetilde{\Omega}^{ik}\widetilde{\Omega}_{kl,p}\widetilde{\Omega}^{lj}\widetilde{\Omega}_{ji,\bar{p}}+f_{p\bar{p}}+u_{t,p\overline{p}}. \tag{5.11}\] In local coordinates, the left-hand side of (5.11) is \[\frac{1}{2}\sum\widetilde{\Omega}^{2i2i+1}\widetilde{\Omega}_{2i+12i,p\bar{p}}+\frac{1}{2}\sum\widetilde{\Omega}^{2i+12i}\widetilde{\Omega}_{2i2i+1,p\bar{p}}=\sum\frac{\widetilde{\Omega}_{2i2i+1,p\bar{p}}}{\widetilde{\Omega}_{2i2i+1}}. \tag{5.12}\] It was proved in [14] that the first term of the right-hand side of (5.11) is nonnegative, i.e. \[\sum\widetilde{\Omega}^{ik}\widetilde{\Omega}_{kl,p}\widetilde{\Omega}^{lj}\widetilde{\Omega}_{ji,\bar{p}}\geq 0. \tag{5.13}\] Hence we obtain \[\frac{\partial\partial_{J}\eta\wedge A\wedge\overline{\Omega}^{n}}{\eta\widetilde{\Omega}^{n}\wedge\overline{\Omega}^{n}}\geq\frac{1}{2\eta}\Delta_{I,g}f-\frac{C_{1}}{\eta}\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}+\frac{1}{\eta}u_{t,p\overline{p}}. \tag{5.14}\] Inserting (5.3), (5.14), (4.11) and (4.13) into (5.2), we have \[\begin{split} 0\leq-\frac{1}{2\eta}\Delta_{I,g}f+\frac{(\varphi^{\prime})^{2}+\varphi^{\prime\prime}}{n-1}\sum_{i=0}^{n-1}(\sum_{k\neq i}\frac{1}{\widetilde{\Omega}_{2k2k+1}})(|u_{2i}|^{2}+|u_{2i+1}|^{2})\\ \qquad\qquad+n\varphi^{\prime}-\left(\epsilon\varphi^{\prime}-\frac{C_{1}}{\eta}\right)\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}-\varphi^{\prime}\tilde{u}_{t}.\end{split} \tag{5.15}\] Assuming \(\eta\gg 1\), we obtain from (5.15) \[C_{2}\geq C_{3}\sum_{i=0}^{n-1}\frac{1}{\widetilde{\Omega}_{2i2i+1}}. \tag{5.16}\] Hence all \(\widetilde{\Omega}_{2i2i+1}\) are uniformly bounded. Since \(\eta=S_{1}(\partial\partial_{J}u)=S_{1}(\widetilde{\Omega})-S_{1}(\Omega_{h})\), we can therefore obtain a uniform bound on \(\eta\). ## 6. Proof of Theorem 1.1 In [25], Tosatti-Wang-Weinkove-Yang derived \(C^{2,\alpha}\) estimates for solutions of some nonlinear elliptic equations based on a bound on the Laplacian of the solution, which were improved and extended to parabolic equations by Chu [9]. Bedulli-Gentili-Vezzoni [6] proved the \(C^{2,\alpha}\) estimate for the quaternionic complex Monge-Ampère equation. In this section we apply their techniques to derive the \(C^{2,\alpha}\) estimates in our setting. Then the longtime existence and convergence follow. We first need to rewrite equation (1.5) in terms of real \((1,1)\)-forms, which can be done by using the following relation \[\frac{\Omega^{n}\wedge\overline{\Omega}^{n}}{(n!)^{2}}=\frac{\omega^{2n}}{(2n)!}.\] The equation is then reformulated as \[\omega_{u}^{2n}=e^{2(u_{t}+f)}\omega^{2n}, \tag{6.1}\] where \(\omega\) and \(\omega_{u}\) are induced by \(\Omega\) and \(\widetilde{\Omega}\) respectively. **Lemma 6.1**.: _Let \(u\) be a solution to (1.5) on \(M\times[0,T)\) and \(\epsilon\in(0,T)\), then we have_ \[||\nabla^{2}u||_{C^{\alpha}(M\times[\epsilon,T))}\leq C_{\epsilon,\alpha}, \tag{6.2}\] _where the constant \(C_{\epsilon,\alpha}>0\) depends only on \((I,J,K,g,\Omega,\Omega_{h})\), \(f\), \(\epsilon\) and \(\alpha\)._ Proof.: The proof here follows from [25], [9] and [10]. 
For any point \(p\in M\), choose a local chart around \(p\) that corresponds to the unit ball \(B_{1}\) in \(\mathbb{C}^{2n}\) with \(I\)-holomorphic coordinates \((z^{0},\ldots,z^{2n-1})\). We have \(\omega=\sqrt{-1}g_{i\bar{j}}dz^{i}\wedge d\bar{z}^{j}\) where \((g_{i\bar{j}}(x))\) is a positive definite \(2n\times 2n\) hermitian matrix given by the metric at any point \(x\in B_{1}\). We introduce the real coordinates by \(z^{i}=x^{i}+\sqrt{-1}x^{2n+i}\) for \(i=0,\ldots,2n-1\). The complex structure \(I\) corresponds to an endomorphism of the real tangent space which we still denote by \(I\), written in matrix form \[I=\begin{pmatrix}0&-I_{2n}\\ I_{2n}&0\end{pmatrix},\] where \(I_{2n}\) denotes the identity matrix. For any \(2n\times 2n\) hermitian matrix \(H=A+\sqrt{-1}B\), the standard way to identify \(H\) with a real symmetric matrix \(\iota(H)\in\operatorname{Sym}(4n)\) is defined as \[\iota(H)=\begin{pmatrix}A&B\\ -B&A\end{pmatrix}.\] Let \(Q_{(x,t)}(r)\) denote the domain \(B_{x}(r)\times(t-r^{2},t]\). We want to check that equation (6.1) is of the following form, as in [9, p. 14]: \[u_{t}(x,t)-F(S(x,t)+T(D_{\mathbb{R}}^{2}u,x,t),x,t)=h(x,t) \tag{6.3}\] where \(u\) is defined in \(Q_{(0,0)}(1)\) up to scaling and translation, \(D_{\mathbb{R}}^{2}u\) is the real Hessian and the functions \(F\), \(S\) and \(T\) are defined as the following. \[F:\operatorname{Sym}(4n)\times Q_{(0,0)}(1)\to\mathbb{R},\quad F(N,x,t):=\frac{1}{2}\log\det(N);\] \[S:Q_{(0,0)}(1)\to\operatorname{Sym}(4n),\quad S(x,t):=\iota(g_{i\bar{j}}(x));\] \[T:\operatorname{Sym}(4n)\times Q_{(0,0)}(1)\to\operatorname{Sym}(4n),\] \[T(N,x,t):=\frac{1}{n-1}\Big{(}\frac{1}{8}\operatorname{tr}\big{(}\iota(g_{i\bar{j}}(x))^{-1}p(N)\big{)}\iota(g_{i\bar{j}}(x))-G(N,x)\Big{)},\] where \[p(N):=\frac{1}{2}(N+{}^{t}INI),\] \[G(N,x):=\frac{1}{4}\big{(}p(N)+\iota({}^{t}J(x))p(N)\iota(J(x))\big{)}.\] Here we are using \(J(x)\) as the matrix representation of the complex structure \(J\). Observing that \(p(D_{\mathbb{R}}^{2}u)=2\iota(D_{\mathbb{C}}^{2}u)\), we have \[G(D_{\mathbb{R}}^{2}u,x)=\frac{1}{2}\Big{(}\iota(u_{i\bar{j}})+\iota(J)_{i}^{\bar{k}}\iota(D_{\mathbb{C}}^{2}u)_{l\bar{k}}\iota(J)_{\bar{j}}^{l}\Big{)}(x)=\frac{1}{2}\iota\big{(}\operatorname{Re}(\partial\partial_{J}u(\cdot I,\cdot J))_{i\bar{j}}\big{)}(x).\] Moreover, one can verify that \[\operatorname{tr}\big{(}\iota(g_{i\bar{j}}(x))^{-1}p(D_{\mathbb{R}}^{2}u)\big{)}=4\operatorname{tr}(g_{i\bar{j}}^{-1}(x)D_{\mathbb{C}}^{2}u)=4\Delta_{I,g}u.\] Notice that for a hermitian matrix \(H\), \(\det(\iota(H))=\det(H)^{2}\); hence we get \[u_{t}(x,t)-F(S(x,t)+T(D_{\mathbb{R}}^{2}u,x,t),x,t)\] \[= \,\frac{1}{2}\log\det\Big{(}\iota(g_{i\bar{j}}(x))+\frac{1}{n-1}\big{(}(\frac{1}{2}\Delta_{I,g}u)\iota(g_{i\bar{j}}(x))-\frac{1}{2}\iota\big{(}\operatorname{Re}(\partial\partial_{J}u(\cdot I,\cdot J))_{i\bar{j}}\big{)}(x)\big{)}\Big{)}\] \[= \,\log\det\Big{(}g_{i\bar{j}}(x)+\frac{1}{n-1}\big{(}S_{1}(\partial\partial_{J}u)g_{i\bar{j}}(x)-\frac{1}{2}\iota\big{(}\operatorname{Re}(\partial\partial_{J}u(\cdot I,\cdot J))_{i\bar{j}}\big{)}(x)\big{)}\Big{)}\] \[= \,-2f(x)-\log\det(g_{i\bar{j}}(x)).\] Thus equation (6.1) is indeed of the form (6.3). It remains to verify that the functions \(F\), \(S\) and \(T\) defined above satisfy all the assumptions **H1** to **H3** in [9, p. 14]. 
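(As an aside, the two linear-algebra facts about \(\iota\) invoked above, namely that \(\iota(H)\) is real symmetric and that \(\det(\iota(H))=\det(H)^{2}\), are easy to sanity-check numerically. The short script below is purely illustrative and is not part of the argument; it also checks that each eigenvalue of \(H\) appears twice in \(\iota(H)\), which is what makes the doubled eigenvalue list \(\lambda_{1},\lambda_{1},\ldots\) appear in the verification of **H2**.(3) below.)

```python
import numpy as np

def iota(H):
    """Embed a complex hermitian matrix H = A + iB as the real
    symmetric matrix [[A, B], [-B, A]] used in the proof above."""
    A, B = H.real, H.imag
    return np.block([[A, B], [-B, A]])

rng = np.random.default_rng(0)
m = 4  # any size works for these identities; in the text m = 2n
X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
H = (X + X.conj().T) / 2  # a random hermitian matrix

# det(iota(H)) = det(H)^2, the identity behind F(N) = (1/2) log det(N)
assert np.isclose(np.linalg.det(iota(H)), np.linalg.det(H).real ** 2)

# iota(H) is real symmetric, and each eigenvalue of H is doubled
assert np.allclose(iota(H), iota(H).T)
ev = np.linalg.eigvalsh(H)
assert np.allclose(np.linalg.eigvalsh(iota(H)),
                   np.sort(np.concatenate([ev, ev])))
```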
From Theorem 5.1 we have \(\operatorname{tr}_{g}g_{u}\leq C\), thus we get \[C_{0}^{-1}I_{4n}\leq S(x,t)+T(D_{\mathbb{R}}^{2}u,x,t)\leq C_{0}I_{4n}.\] Take the convex set \(\mathcal{E}\) to be the set of matrices \(N\in\operatorname{Sym}(4n)\) with \[C_{0}^{-1}I_{4n}\leq N\leq C_{0}I_{4n}.\] It is straightforward to check that **H1**, **H3**, **H2**.(1) and **H2**.(2) hold. For **H2**.(3), we choose local coordinates such that \(g(x)=Id\) and \(J\) is block diagonal with only \(J^{\overline{2i}}_{2i+1}\) and \(J^{2i+1}_{\overline{2i}}\) non-zero, while \(p(P)\) is diagonal with eigenvalues \(\lambda_{1},\lambda_{1},\ldots,\lambda_{2n},\lambda_{2n}\geq 0\). Then one computes that the eigenvalues of \(T(P,x,t)\) are \(\frac{1}{2}\sum_{i\neq j}\lambda_{i}\geq 0\). Thus for \(P\geq 0\) we have \(T(P,x,t)\geq 0\); letting \(K=2(n-1)\), we then have \(K^{-1}||P||\leq||T(P,x,t)||\leq K||P||\). Finally, to apply [9, Theorem 5.1], we need to overcome the lack of a \(C^{0}\) bound on \(u\), using the same argument as in [10, Lemma 6.1]. Specifically, we split into two cases, \(T<1\) and \(T\geq 1\). If \(T<1\) then we have a \(C^{0}\) bound on \(u\), since by Lemma 3.1, \(\sup_{M\times[0,T)}|u_{t}|\leq C\). Hence Theorem 5.1 in [9] applies directly in this case. If \(T\geq 1\), for any \(b\in(0,T-1)\), we consider \[u_{b}(x,t)=u(x,t+b)-\inf_{M\times[b,b+1)}u(x,t)\] for all \(t\in[0,1)\). By Lemma 3.2, we have \(\sup_{M\times[0,1)}|u_{b}(x,t)|\leq C\). Moreover, it is obvious that \(u_{b}\) also satisfies the equation, thus we have a Laplacian bound on \(u_{b}\). Applying Theorem 5.1 in [9] to \(u_{b}\), for any \(\epsilon\in(0,\frac{1}{2})\), we have \[||\nabla^{2}u||_{C^{\alpha}(M\times[b+\epsilon,b+1))}=||\nabla^{2}u_{b}||_{C^{\alpha}(M\times[\epsilon,1))}\leq C_{\epsilon,\alpha},\] where \(C_{\epsilon,\alpha}\) is a uniform constant depending only on the fixed data \((I,J,K,g,\Omega,\Omega_{h})\), \(f\), \(\epsilon\) and \(\alpha\). Since \(b\in(0,T-1)\) is arbitrary, we obtain the estimate. Proof of Theorem 1.1.: Once we have the \(C^{2,\alpha}\) estimates, we obtain the longtime existence and the exponential convergence of \(\tilde{u}\) similarly to the argument in [20]. Let \(\tilde{u}_{\infty}=\lim_{t\to\infty}\tilde{u}(\cdot,t)\); then \(\tilde{u}_{\infty}\) satisfies \[\big{(}\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}\tilde{u}_{\infty})\Omega-\partial\partial_{J}\tilde{u}_{\infty})\big{)}^{n}=e^{f+\tilde{b}}\Omega^{n}\] \[\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}\tilde{u}_{\infty})\Omega-\partial\partial_{J}\tilde{u}_{\infty})>0,\] where \[\tilde{b}=\Big{(}\int_{M}\Omega^{n}\wedge\overline{\Omega}^{n}\Big{)}^{-1}\int_{M}\Big{(}\log\frac{\big{(}\Omega_{h}+\frac{1}{n-1}((\frac{1}{2}\Delta_{I,g}\tilde{u}_{\infty})\Omega-\partial\partial_{J}\tilde{u}_{\infty})\big{)}^{n}}{\Omega^{n}}-f\Big{)}\Omega^{n}\wedge\overline{\Omega}^{n}.\]
2303.07330
Optical activity and transport in twisted bilayer graphene: the essence of spatial dispersion effects
This study investigates optical activity and quantum transport in twisted bilayer graphene (TBG) systems, demonstrating that the former results from spatial dispersion effects. The transfer matrix method is used to solve the propagation of electromagnetic waves through two graphene layers that act as the coupling surfaces of a dielectric slab. The resulting optical conductivity tensor is decomposed into a local and a drag part, with the drag transverse conductivity $\sigma_{xy}^{(drag)}$ governing the TBG system's optical property. An effective continuum model is employed to analyze electron state formation and calculate relevant parts of the optical conductivity tensor. Correlation of electron motions leads to incomplete cancellation and a finite $\sigma_{xy}^{(drag)}$ in the chiral TBG lattice. The study also calculates DC conductivity, showing TBG supports quantum conductivity proportional to $e^2/h$ at the intrinsic Fermi energy.
S. Ta Ho, V. Nam Do
2023-03-13T17:51:42Z
http://arxiv.org/abs/2303.07330v1
# Optical activity and transport in twisted bilayer graphene: the essence of spatial dispersion effects ###### Abstract This study investigates optical activity and quantum transport in twisted bilayer graphene (TBG) systems, demonstrating that the former results from spatial dispersion effects. The transfer matrix method is used to solve the propagation of electromagnetic waves through two graphene layers that act as the coupling surfaces of a dielectric slab. The resulting optical conductivity tensor is decomposed into a local and a drag part, with the drag transverse conductivity \(\sigma_{xy}^{(drag)}\) governing the TBG system's optical activity. An effective continuum model is employed to analyze electron state formation and calculate relevant parts of the optical conductivity tensor. Correlation of electron motions leads to incomplete cancellation and a finite \(\sigma_{xy}^{(drag)}\) in the chiral TBG lattice. The study also calculates DC conductivity, showing TBG supports quantum conductivity proportional to \(e^{2}/h\) at the intrinsic Fermi energy. ## I Introduction The study of two-dimensional (2D) materials has received significant attention in recent years due to their potential applications in electronics, optoelectronics, spintronics, and valleytronics.[1; 2; 3; 4] The use of multiple-layer materials with van der Waals (vdW) interlayer coupling offers various means to tune the electronic structure, such as altering the interlayer distance, changing the lattice alignment through twisting or sliding the layers, and applying external fields. Physically, these material engineering solutions often result in breaking the spatial and temporal symmetries that govern the dynamics of electrons within the material layers. For instance, a magnetic field breaks the time-reversal symmetry, leading to exciting phenomena such as the quantum Hall effect, magneto-optical effects, and the Kerr and Faraday rotations of the polarization plane of linearly polarized light, as well as circular dichroism.[5; 6; 7; 8] These intriguing optical phenomena have been utilized in the development of devices for a variety of applications.[9; 10; 11] However, the use of external fields has limitations for small-scale device design. As a result, it is crucial to search for materials with intrinsic properties and solutions that break the spatial symmetries of an electronic system to meet technological demands. Twisted-bilayer graphene (TBG) is a 2D van der Waals (2D vdW) material that has garnered significant attention from the research community due to its interesting electronic properties.[12; 13; 14] The TBG system is formed by stacking two graphene layers on top of each other with a rotation, or twist, angle between the two hexagonal lattices. The TBG system has been shown to be optically active, meaning that it has the ability to rotate the polarization plane of linearly polarized light by reflection and transmission, and to absorb left- and right-handed light differently.[15] This, combined with the ability to mechanically tune the twist angle,[16; 17] makes TBG a potential candidate for advanced optical applications. 
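To fix ideas about the transfer matrix method named in the abstract, the following is a minimal sketch (not taken from this paper) for normal incidence and a single polarization: each graphene sheet enters through the standard boundary condition that the tangential electric field is continuous while the magnetic field jumps by the sheet current. The full problem treated here couples the \(x\) and \(y\) polarizations through \(\sigma_{xy}^{(drag)}\) and therefore requires \(4\times 4\) matrices; the interlayer spacing (0.335 nm) and the frequency-independent universal sheet conductivity \(e^{2}/4\hbar\) used below are illustrative inputs, not values from this work.

```python
import numpy as np

Z0 = 376.730       # vacuum impedance, ohm
SIGMA0 = 6.085e-5  # universal interband conductivity of graphene, e^2/(4 hbar), S

def interface(n1, n2, sigma_s):
    """Transfer matrix for (E+, E-) amplitudes across an interface n1 -> n2
    carrying a conducting sheet: E continuous, H jumps by the sheet current."""
    zs = Z0 * sigma_s
    return np.array([[n1 + n2 + zs, n1 - n2 + zs],
                     [n1 - n2 - zs, n1 + n2 - zs]], dtype=complex) / (2 * n1)

def propagation(n, d, k0):
    """Free propagation of the amplitudes across a slab of index n, width d."""
    phase = n * k0 * d
    return np.diag([np.exp(-1j * phase), np.exp(1j * phase)])

def transmittance(wavelength, d=0.335e-9, n_slab=1.0):
    """Normal-incidence power transmission through two graphene sheets
    separated by the (assumed) interlayer spacing d, vacuum on both sides."""
    k0 = 2 * np.pi / wavelength
    M = interface(1.0, n_slab, SIGMA0) @ propagation(n_slab, d, k0) \
        @ interface(n_slab, 1.0, SIGMA0)
    t = 1.0 / M[0, 0]
    return abs(t) ** 2

print(transmittance(500e-9))  # ~0.955: each layer absorbs ~pi*alpha = 2.3%
```

In this scalar version the two sheets simply absorb independently; the point of the analysis below is that the drag conductivity, which this sketch omits, is what rotates the polarization and produces circular dichroism.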
The optical activity of a dielectric medium is typically attributed to the anisotropy of its atomic lattice,[18] or to the magneto-electric effect.[19] External electric, magnetic, and mechanical forces can also induce anisotropy in a medium.[15; 20; 7] It has been shown that strained monolayer graphene is optically active due to the presence of the off-diagonal element \(\sigma_{xy}(\omega)\) of the optical conductivity tensor.[21] The AA- and AB-stacked configurations of the TBG system are isotropic and characterized by the point groups \(D_{6h}\) and \(D_{3d}\), respectively.[22; 23] The breaking of the lattice symmetry from the \(D_{6h}\) and \(D_{3d}\) groups to the \(D_{6}\) and \(D_{3}\) groups, respectively, has been suggested as the mechanism causing the optical activity in TBG.[24] However, even with this reduced symmetry, the TBG system is characterized by a diagonal total optical conductivity tensor, i.e., with vanishing off-diagonal components \(\sigma_{xy}(\omega)=0\). The mechanism supporting the optical activity of TBG therefore remains to be further explored. In this paper, we analyze the optical activity of twisted bilayer graphene by treating simultaneously two essential aspects of the field-matter interaction problem, namely the propagation of the electromagnetic waves through a material layer and the response of a material layer to the action of an external field. We show that the optical activity of the TBG system is a result of spatial dispersion effects. Our theory is based on the following analysis. Working with 2D van der Waals systems, the physical quantities that describe them, such as the dielectric tensor and the conductivity tensor, are often considered as if they were ideal 2D systems, i.e., with zero thickness, even though the interlayer spacing is taken into account in the description of interlayer coupling.[25; 26; 27] However, in reality, even for graphene with only one layer of carbon atoms, an effective thickness can be defined based on the width of the quantum well potential caused by the carbon atoms.[28] As a result, 2D vdW systems cannot be considered ideal 2D systems; they are rather quasi-2D (Q2D) systems. This means that when describing the interaction between an electromagnetic field and a Q2D vdW material system at a macroscopic level, special considerations must be made. Typically, field equations are determined by averaging a set of microscopic field equations over infinitesimal volume elements.[18] However, this averaging procedure is only valid when the wavelength of the fields is much smaller than the dimensions of the physical system. For Q2D materials, averaging cannot be performed along the \(Oz\) direction perpendicular to the \(Oxy\) plane of the atomic lattice. As a result, we need to use a hybrid basis of reciprocal-real spaces to represent physical quantities and field equations.
For example, the equation \[J_{\alpha}(z,\mathbf{q}_{\parallel},\omega)=\int dz^{\prime}\sigma_{\alpha \beta}(z,z^{\prime},\mathbf{q}_{\parallel},\omega)E_{\beta}(z^{\prime}, \mathbf{q}_{\parallel},\omega) \tag{1}\] describes the relationship between the component \(J_{\alpha}(z,\mathbf{q}_{\parallel},\omega)\) of an electrical current density induced in the material by an electric field \(E_{\beta}(z,\mathbf{q}_{\parallel},\omega)\), where \(\alpha,\beta=x,y\) are the indices of the components of vector fields, \(\mathbf{q}_{\parallel}\) is the wave vector in the material plane, \(\omega\) is the frequency, and \(\sigma_{\alpha\beta}(z,z^{\prime},\mathbf{q}_{\parallel},\omega)\) is a component of the electrical conductivity tensor of the material layer. In the optical limit, where \(\|\mathbf{q}_{\parallel}\|a\ll 1\) and \(a\) is the lattice constant, we can approximate all quantities at \(\mathbf{q}_{\parallel}=0\), i.e., effectively ignoring the spatial effects in the material plane but not along the thickness.[29; 30; 31] This description highlights the importance of taking into account the spatial dispersion along the \(Oz\) direction, which is typically the transmission direction of electromagnetic fields used to study the material's properties. Theoretical studies of optical activity in TBG have been reported in several references, including Refs. [15; 20; 24]. These studies have revealed the role of the spacing \(d\) between the two graphene layers and explained optical activity as a magneto-electric effect. In this approach, a magnetization \(\mathbf{m}_{\parallel}\) and a polarization \(\mathbf{p}_{\parallel}\) emerge in the atomic plane due to the difference in current densities in the two graphene layers. To establish a relationship between these two quantities and the electric and magnetic fields, a symmetry analysis and some assumptions were made,[20] such as the linear change of the fields in the space between the two graphene layers and the zero-frequency limit,[32] or the dephasing of the current operators in the two graphene layers.[24] The magnetization and polarization affect the direction and magnitude of the electric and magnetic field vectors within the material layer. However, the problem is that the transverse component \(\sigma_{xy}^{(drag)}\) of the conductivity tensor, which plays a role in creating the effect of one layer on the other when an electromagnetic field is transmitted through the system, is treated as the total Hall conductivity of the system in this wave propagation problem. This may be misleading as the system does not support a finite Hall conductivity due to time-reversal symmetry. Therefore, it is essential to consider in detail the electromagnetic wave propagation through the system, where it is understood that the electromagnetic wave must pass through one layer before reaching the second layer, and the coupling between the two layers must play a crucial role. In the next section, we will use Eq. (1) without relying on symmetry analysis or the mentioned assumptions to obtain the most important theoretical results of Ref. [20], see Eq. (2) therein, and to clarify the role of \(\sigma_{xy}^{(drag)}\) in the electromagnetic wave propagation through the TBG system. Our study encompasses both macroscopic and microscopic perspectives.
At the macroscopic level, we use the transfer matrix method to analyze the light propagation through a 2D van der Waals system composed of multiple material layers, each treated as a boundary between two vacuum layers of finite width.[33; 34] The material system's conductivity tensor is decomposed into local and drag terms, with the local term defining the local relationship between current density and electric field in each layer, and the drag term defining the non-local relationship between current density in one layer and the electric field in other layers. Technically, the finite thickness of the material system makes the assumption of spatial homogeneity along the thickness invalid. It therefore requires evaluating expressions that relate physical quantities through integrals over \(z\) and \(z^{\prime}\). This results in increased computational complexity and the need for a microscopic description of the material surfaces, such as the potential caused by atoms. To simplify the analysis, we assume that each layer of atoms has zero thickness. This allows us to express relevant quantities as sums of delta functions: \[J_{\alpha}(z,\omega)=\sum_{\ell}J_{\alpha}^{(\ell)}\delta(z-z_{\ell}), \tag{2}\] where \(z_{\ell}\) is the position of the \(\ell\)th material layer, and \(J_{\alpha}^{(\ell)}\) is the current density on that layer. At the microscopic level, we use the Kubo formula to calculate each term of the conductivity tensor required for the macroscopic level of description. We apply an effective model to determine the low-energy states of electrons in the TBG material lattice, and show that the electron eigen-states can be expressed as a hybridization of single-layer states in the two graphene layers. If the atomic lattice has mirror symmetry (the AA-stacked configuration) or glide symmetry (the AB-stacked configuration) in the lattice plane, the drag transverse conductivity vanishes due to the cancelling contributions of the electron states. However, generic twisted bilayer graphene lattices have a chiral structure and do not possess these symmetries, leading to a non-zero drag transverse conductivity that determines the optical activity of the system. In addition to determining the components of the optical conductivity tensor, the reflection, transmission, and absorption coefficients, and the circular dichroism spectrum, we also calculate the DC conductivity of the TBG system. We find that the hybridized states support a quantum value of \(2\sigma_{0}^{DC}\), where \(\sigma_{0}^{DC}=\frac{4e^{2}}{\pi h}\), at the intrinsic Fermi energy. This holds true even for the TBG configuration with a magic twist angle, which has a high density of states there due to the presence of a flat band. The structure of this paper is divided into three sections. After the "Introduction" section, the next section (Sec. II) provides a solution to the problem of wave transmission through the twisted bilayer graphene (TBG) system in Sec. II.1. The solution is based on the transfer matrix method, which illustrates the contribution of the conductivity tensor components to the transfer matrix elements such as the reflection and transmission coefficients and the absorption coefficient. In Sec. II.2, the local and drag components of the optical conductivity tensor are calculated through the use of the Kubo formula, while the DC conductivity of the TBG system is calculated using the Kubo-Greenwood formula. In Sec. II.3, an effective model for the electrons in the TBG lattices is presented. In Sec.
III, numerical results for the electronic band structure of various TBG configurations, the analysis of the optical conductivity tensor elements, and the spectra of circular dichroism as a function of photon frequency are presented. The conclusion is presented in the final section, Sec. IV. ## II Theory and calculation methods ### Optical chiral responses In this section, we investigate the optical response of TBG systems by formulating a theory for the propagation of electromagnetic waves through a generic TBG sheet. Unlike typical dielectric slabs with two separate surfaces, the electric current densities in the two graphene layers of the TBG sheet are interdependent. The current density in one graphene layer is affected by both the electric field within that layer and the electric field in the other layer. This relationship is described by Eq. (1), which we use to solve the problem at hand. We set up the system, following Ref. [34], with two graphene layers separating the entire space into three regions, each characterized by parameters \((\epsilon_{i},\mu_{i})\), where \(i=1,2,3\) labels the media. Our goal is to determine the transfer matrix of the system and to develop expressions for the transmission and reflection matrices of the bilayer graphene system. As the inhomogeneity of the space only occurs in the \(z\) direction perpendicular to the TBG surface, the Maxwell equations support a plane wave solution of the form: \[\mathbf{F}_{i}(x,y,z)=\left[\mathbf{F}_{i}^{+}e^{ik_{iz}z}+\mathbf{F}_{i}^{-}e^{-ik_{iz}z}\right]e^{ik_{ix}x}e^{ik_{iy}y}, \tag{3}\] where \(\mathbf{F}\) represents \(\mathbf{E}\) (the electric field), \(\mathbf{D}\) (the electric induction), \(\mathbf{H}\) (the magnetic field), and \(\mathbf{B}\) (the magnetic induction). The vector coefficients \(\mathbf{F}_{i}^{\pm}\) depend on the wave vector \(\mathbf{k}_{i}\) in general. The conditions for the continuity of the vector fields at the interface between two media are given by: \[\mathbf{n}\times[\mathbf{E}_{2}(1)-\mathbf{E}_{1}(1)]=0, \tag{4a}\] \[\mathbf{n}\times[\mathbf{H}_{2}(1)-\mathbf{H}_{1}(1)]=\mathbf{J}(1), \tag{4b}\] for the first graphene layer surface, and \[\mathbf{n}\times[\mathbf{E}_{3}(2)-\mathbf{E}_{2}(2)]=0, \tag{5a}\] \[\mathbf{n}\times[\mathbf{H}_{3}(2)-\mathbf{H}_{2}(2)]=\mathbf{J}(2), \tag{5b}\] for the second graphene layer surface. In Eqs. (4b) and (5b), the distribution of current densities on each graphene layer is linearly related to the electric field by the conductivity tensor given via Eq. (17). For completeness of the setup, we rewrite this equation in explicit form: \[\mathbf{J}(1) =\mathbf{\sigma}^{(1)}\mathbf{E}_{1}(1)+\mathbf{\sigma}^{(drag)}\mathbf{E}_{2}(2), \tag{6a}\] \[\mathbf{J}(2) =\mathbf{\sigma}^{(drag)\dagger}\mathbf{E}_{1}(1)+\mathbf{\sigma}^{(2)}\mathbf{E}_{2}(2)=\mathbf{\sigma}^{(drag)\dagger}\mathbf{E}_{2}(1)+\mathbf{\sigma}^{(2)}\mathbf{E}_{3}(2). \tag{6b}\] The continuity conditions of the fields at interfaces allow us to determine the relationship between the vector fields \((\mathbf{F}_{i}^{+},\mathbf{F}_{i}^{-})\) in two different media at their interface. In particular, the magnetic field \(\mathbf{H}_{i}\) is related to the electric field \(\mathbf{E}_{i}\) through the Maxwell equation \(\text{curl}(\mathbf{E})=-\partial_{t}\mathbf{B}\), where \(\mathbf{B}=\mu\mathbf{H}\).
Utilizing the plane wave solution (3), we obtain the expression for \(\mathbf{H}_{i}^{\pm}\): \[\mathbf{H}_{i}^{\pm}=\pm\sqrt{\frac{\epsilon_{i}}{\mu_{i}}}\,\frac{\mathbf{k}_{i}\times\mathbf{E}_{i}^{\pm}}{|\mathbf{k}_{i}|}, \tag{7}\] where \(\epsilon_{i}\) and \(\mu_{i}\) represent the absolute permittivity and absolute permeability of medium \(i\), respectively. This equation makes use of the dispersion relation \(\omega=c|\mathbf{k}_{i}|/n_{i}\), where \(c\) is the speed of light in vacuum and \(n_{i}\) is the refractive index of medium \(i\). It is worth noting that \(\sqrt{\epsilon_{i}/\mu_{i}}\) can be expressed through \(n_{i}\) as \(\sqrt{\epsilon_{i}/\mu_{i}}\approx n_{i}\sigma_{0}/(2\alpha)\), where \(\sigma_{0}=e^{2}/h\) is the unit of quantum conductivity and \(\alpha=e^{2}/(4\pi\epsilon_{0}\hbar c)\) is the fine-structure constant. In this study, the problem is solved for light transmission along the normal direction to the TBG layer, so \(\mathbf{k}_{i}=k_{i}\mathbf{n}\). The transfer matrix \(\mathbf{M}_{31}\) (of size \(4\times 4\)) is then derived, relating the components of the electric fields in the first and third media as follows: \[\left(\begin{array}{c}\mathbf{E}_{3}^{+}(2)\\ \mathbf{E}_{3}^{-}(2)\end{array}\right)=\mathbf{M}_{31}\left(\begin{array}{c}\mathbf{E}_{1}^{+}(1)\\ \mathbf{E}_{1}^{-}(1)\end{array}\right). \tag{8}\] Here, \(\mathbf{M}_{31}=\mathbf{M}_{32}\mathbf{M}_{2}^{f}\mathbf{M}_{21}\), where: \[\mathbf{M}_{32} =\left(\begin{array}{cc}\mathbf{1}&\mathbf{1}\\ -\sqrt{\frac{\varepsilon_{3}}{\mu_{3}}}\mathbf{1}-\mathbf{\sigma}^{(2)}&\sqrt{\frac{\varepsilon_{3}}{\mu_{3}}}\mathbf{1}-\mathbf{\sigma}^{(2)}\end{array}\right)^{-1}\left(\begin{array}{cc}\mathbf{1}&\mathbf{1}\\ -\sqrt{\frac{\varepsilon_{2}}{\mu_{2}}}\mathbf{1}+\mathbf{\sigma}^{(drag)\dagger}e^{-ik_{2z}d}&\sqrt{\frac{\varepsilon_{2}}{\mu_{2}}}\mathbf{1}+\mathbf{\sigma}^{(drag)\dagger}e^{ik_{2z}d}\end{array}\right), \tag{9a}\] \[\mathbf{M}_{21} =\left(\begin{array}{cc}\mathbf{1}&\mathbf{1}\\ -\sqrt{\frac{\varepsilon_{2}}{\mu_{2}}}\mathbf{1}-\mathbf{\sigma}^{(drag)}e^{ik_{2z}d}&\sqrt{\frac{\varepsilon_{2}}{\mu_{2}}}\mathbf{1}-\mathbf{\sigma}^{(drag)}e^{-ik_{2z}d}\end{array}\right)^{-1}\left(\begin{array}{cc}\mathbf{1}&\mathbf{1}\\ -\sqrt{\frac{\varepsilon_{1}}{\mu_{1}}}\mathbf{1}+\mathbf{\sigma}^{(1)}&\sqrt{\frac{\varepsilon_{1}}{\mu_{1}}}\mathbf{1}+\mathbf{\sigma}^{(1)}\end{array}\right), \tag{9b}\] \[\mathbf{M}_{2}^{f} =\left(\begin{array}{cc}e^{ik_{2z}d}\mathbf{1}&0\\ 0&e^{-ik_{2z}d}\mathbf{1}\end{array}\right), \tag{9c}\] with \(\mathbf{1}\) the \(2\times 2\) identity matrix. Figure 1: The energy band structures of electrons in three TBG configurations with twist angles \(\theta=3.890^{\circ}\) (a, d), \(\theta=1.890^{\circ}\) (b, e), and \(\theta=1.050^{\circ}\) (c, f) are analyzed. The blue curves in (a), (b), and (c) represent the dispersion and density of states of graphene calculated without considering the interlayer coupling. Panels (d), (e), and (f) show the dispersion curves for the two valleys \(\xi=+1\) and \(\xi=-1\) in blue and red, respectively. Let \(\mathbf{r}\) and \(\mathbf{t}\) be the \(2\times 2\) matrices that relate the components of the electric field vector of the incident light to those of the reflected and transmitted light, i.e., \(\mathbf{E}_{1}^{-}(1)=\mathbf{r}\cdot\mathbf{E}_{1}^{+}\) and \(\mathbf{E}_{3}^{+}=\mathbf{t}\cdot\mathbf{E}_{1}^{+}\). These matrices are referred to as the reflection and transmission matrices, respectively. From Eq.
(8), we can deduce the following: \[\mathbf{r} =-[\mathbf{M}_{31}^{22}]^{-1}\mathbf{M}_{31}^{21}, \tag{10a}\] \[\mathbf{t} =\mathbf{M}_{31}^{11}-\mathbf{M}_{31}^{12}[\mathbf{M}_{31}^{22}]^{-1}\mathbf{M}_{31}^{21}=\big{(}[\mathbf{M}_{31}^{-1}]^{11}\big{)}^{-1}. \tag{10b}\] From these matrices, the reflectance (\(R\)) and transmittance (\(T\)) are determined as the ratio of energy fluxes, specifically: \[R =\left|\frac{(\mathbf{r}\cdot\mathbf{E}_{1}^{+})^{\dagger}\cdot(\mathbf{r}\cdot\mathbf{E}_{1}^{+})}{\mathbf{E}_{1}^{+\dagger}\cdot\mathbf{E}_{1}^{+}}\right|, \tag{11a}\] \[T =\left|\frac{n_{3}}{n_{1}}\frac{(\mathbf{t}\cdot\mathbf{E}_{1}^{+})^{\dagger}\cdot(\mathbf{t}\cdot\mathbf{E}_{1}^{+})}{\mathbf{E}_{1}^{+\dagger}\cdot\mathbf{E}_{1}^{+}}\right|. \tag{11b}\] On the basis of the conservation of energy flux, the absorptance is then determined by \(A=1-(R+T)\). The values of \(R\), \(T\), and \(A\) are dependent not only on the energy, but also on the polarization states of light. It is important to note that the left- and right-handed polarization states of light are characterized by electric field vectors, \(\mathbf{E}_{L,R}^{+}\propto(1,\pm i)^{T}/\sqrt{2}\) (the superscript \(T\) here denotes the transpose operation). To measure the dependence of light absorption on these polarization states, we calculate the spectrum of circular dichroism (CD), defined as: \[\text{CD}=\frac{A_{\text{L}}-A_{\text{R}}}{A_{\text{L}}+A_{\text{R}}}. \tag{12}\] In the following section, we present the method for calculating the conductivity tensors \(\mathbf{\sigma}^{(\ell)}\) and \(\mathbf{\sigma}^{(drag)}\), and we will use the results obtained to calculate the transmittance, reflectance, absorptance and circular dichroism via Eqs. (11a), (11b) and (12). ### AC and DC conductivities The optical properties of bilayer graphene systems are notable, as evidenced by previous research.[25; 35; 36] When taking into account the layer thickness, the effects of spatial dispersion become evident. To describe these effects, we decompose the current response to an external electric field caused by an incident monochromatic light beam perpendicular to the material plane. To start, we rewrite Equation (1) in the real space representation as follows: \[J_{\alpha}(\mathbf{x},z,\omega)=\int dz^{\prime}\sigma_{\alpha\beta}(\mathbf{x},z;\mathbf{x},z^{\prime},\omega)E_{\beta}(\mathbf{x},z^{\prime},\omega), \tag{13}\] where \(\mathbf{x}=(x,y)\) represents the position vector in the material plane with \(x\) and \(y\) as its coordinates. On the assumption of spatial homogeneity in the material plane, we can write \(\sigma_{\alpha\beta}(\mathbf{x},z;\mathbf{x},z^{\prime},\omega)=\sigma_{\alpha\beta}(z,z^{\prime},\omega)\), and: \[\sigma_{\alpha\beta}(z,z^{\prime},\omega)=\sum_{\ell=1}^{2}\sigma_{\alpha\beta}(z,z_{\ell},\omega)\delta(z^{\prime}-z_{\ell}). \tag{14}\] Figure 2: The longitudinal DC conductivities \(\sigma_{xx}^{DC}\) (represented by solid curves) and \(\sigma_{yy}^{DC}\) (represented by dashed curves) for three TBG configurations with twist angles of \(\theta=3.890^{\circ}\) (represented by blue curves), \(\theta=1.890^{\circ}\) (represented by green curves), and \(\theta=1.050^{\circ}\) (represented by red curves). Eq. (13) therefore becomes: \[J_{\alpha}(\mathbf{x},z,\omega)=\sum_{\ell=1}^{2}\sigma_{\alpha\beta}(z,z_{\ell},\omega)E_{\beta}(\mathbf{x},z_{\ell},\omega).
\tag{15}\] More specifically, we resolve the current density induced in each graphene layer as follows: \[J_{\alpha}^{(\ell)}(\mathbf{x},\omega)=\sigma_{\alpha\beta}^{(\ell)}(\omega)E_{\beta}^{(\ell)}(\mathbf{x},\omega)+\sigma_{\alpha\beta}^{(drag)}(\omega)E_{\beta}^{(\ell^{\prime})}(\mathbf{x},\omega), \tag{16}\] where the involved quantities are defined by: \(J_{\alpha}^{(\ell)}(\mathbf{x},\omega)=J_{\alpha}(\mathbf{x},z_{\ell},\omega)\); \(E_{\beta}^{(\ell)}(\mathbf{x},\omega)=E_{\beta}(\mathbf{x},z_{\ell},\omega)\); \(\sigma_{\alpha\beta}^{(\ell)}(\omega)=\sigma_{\alpha\beta}(z_{\ell},z_{\ell},\omega)\); and \(\sigma_{\alpha\beta}^{(drag)}(\omega)=\sigma_{\alpha\beta}(z_{\ell},z_{\ell^{\prime}},\omega)\) with \(\ell\neq\ell^{\prime}\). Eqs. (16) can be cast in the matrix form: \[\left(\begin{array}{c}\mathbf{J}^{(1)}\\ \mathbf{J}^{(2)}\end{array}\right)_{\omega}=\left(\begin{array}{cc}\mathbf{\sigma}^{(1)}&\mathbf{\sigma}^{(drag)}\\ \mathbf{\sigma}^{(drag)\dagger}&\mathbf{\sigma}^{(2)}\end{array}\right)_{\omega}\left(\begin{array}{c}\mathbf{E}^{(1)}\\ \mathbf{E}^{(2)}\end{array}\right)_{\omega}, \tag{17}\] where \(\mathbf{\sigma}^{(\ell)}(\omega)=\sigma^{(\ell)}(\omega)\mathbf{\tau}_{0}\) is called the local part, and \(\mathbf{\sigma}^{(drag)}(\omega)=\sigma^{(drag)}(\omega)\mathbf{\tau}_{0}-i\sigma_{xy}^{(drag)}(\omega)\mathbf{\tau}_{y}\) is called the drag part. Here \(\mathbf{\tau}_{0}\) is the \(2\times 2\) identity matrix and \(\mathbf{\tau}_{y}\) is the conventional second Pauli matrix. The compact form (17) is identical to results reported in Refs. [20; 37]. Importantly, it guarantees the time-reversal symmetry, the rotation symmetry, and the layer-interchange symmetry. The conductivities \(\sigma^{(\ell)}(\omega)\), \(\sigma^{(drag)}(\omega)\) and \(\sigma_{xy}^{(drag)}(\omega)\) can be determined from the Kubo formula. There are a number of versions of the Kubo formula for the electrical conductivity, suitable for different situations. However, the most important ingredient we must specify is the velocity operators \(\hat{v}_{\alpha}\). In general, these operators are determined from the position operator \(\hat{x}_{\alpha}\) and the Hamiltonian \(\hat{H}\) via the Heisenberg equation: \[\hat{v}_{\alpha}=\frac{1}{i\hbar}[\hat{x}_{\alpha},\hat{H}]\to v_{\alpha}(\mathbf{k})=\frac{1}{\hbar}\frac{\partial}{\partial k_{\alpha}}H(\mathbf{k}). \tag{18}\] According to linear response theory, the conductivity has a paramagnetic part only.
Using the eigenvectors of the single-particle Hamiltonian (in the absence of the external vector potential \(\mathbf{A}(\mathbf{x},t)\)) as the representation basis, \(\hat{H}|n\rangle=E_{n}|n\rangle\), the elements of the optical conductivity tensor are given by the formula:[38] \[\sigma_{\alpha\beta}^{(c)}(\omega)=\frac{ie^{2}}{S}\frac{1}{\omega+i\eta}\sum_{m,n}(f_{m}-f_{n})\frac{O_{\alpha\beta}^{(c)mn}}{E_{m}-E_{n}+\hbar(\omega+i\eta)}, \tag{19}\] where \(S\) is the area of the material sample; \(c=\{\ell,drag\}\); \(f_{n}=f(E_{n}-\mu,k_{B}T)\) is the occupation weight of the energy level \(E_{n}\) determined by the Fermi-Dirac function \(f\) with \(\mu,k_{B}T\) the chemical potential and thermal energy; \(\eta\) is a positive infinitesimal number, and \(O_{\alpha\beta}^{(c)mn}\) denotes the product of velocity matrix elements: \[O_{\alpha\beta}^{(\ell)mn} = \langle m|\hat{v}_{\alpha}^{(\ell)}|n\rangle\langle n|\hat{v}_{\beta}^{(\ell)}|m\rangle, \tag{20a}\] \[O_{\alpha\beta}^{(drag)mn} = \langle m|\hat{v}_{\alpha}^{(1)}|n\rangle\langle n|\hat{v}_{\beta}^{(2)}|m\rangle. \tag{20b}\] Regarding the DC conductivity, it might be tempting to think that it can simply be obtained from the expression of the optical conductivity in the limit of zero frequency. However, this is not the case, at least in the numerical calculation. This is because for any finite frequency \(\omega\), there is always a finite length scale governing the behavior of electrons. This length scale is \(L_{\omega}=2\pi v_{F}/\omega\). Meanwhile, there is no such length scale for the DC transport. Furthermore, static transport has a diffusive nature due to the similarity between the Schrödinger equation and the diffusion equation. The DC conductivity can be obtained from the linear response theory. The vector potential is chosen in the form \(\mathbf{A}(t)=(-Et,0,0)\), where \(E\) is the intensity of a static electric field. The DC conductivity is calculated using the Kubo-Greenwood formula:[39; 40] \[\sigma_{\alpha\alpha}(\mu,kT)=-\int_{-\infty}^{+\infty}dE\frac{\partial f(E-\mu,kT)}{\partial E}\sigma_{\alpha\alpha}(E), \tag{21}\] where \[\sigma_{\alpha\alpha}(E)=\frac{2\pi e^{2}\hbar}{S}\sum_{m,n}|\langle m|\hat{v}_{\alpha}|n\rangle|^{2}\delta(E-E_{n})\delta(E-E_{m}). \tag{22}\] This quantity is seen as the DC conductivity at zero temperature. When evaluating the above formula, we use the Gaussian function to approximate the \(\delta\)-Dirac function: \[\delta(E-E_{n})\approx\frac{1}{\eta\sqrt{\pi}}\exp\left(-\frac{(E-E_{n})^{2}}{\eta^{2}}\right), \tag{23}\] where \(\eta>0\) is a small number that is appropriately chosen to smear the energy levels for the numerical calculation. ### Effective continuum model Effective continuum models for the low-energy states of electrons in TBG systems have been developed since 2007 by Lopes et al.[41] However, the model proposed by Bistritzer and MacDonald in 2011 is better known and widely used. In this paper, we present our solution to the Bistritzer-MacDonald model.[42] We implement this model for TBG lattices in which the first layer is clockwise rotated by a half-twist angle, \(\theta_{1}=-\theta/2\), and the second layer is counterclockwise rotated by \(\theta_{2}=+\theta/2\).
The unrotated layers are defined by unit vectors \(\mathbf{a}_{1}=a\cdot\mathbf{e}_{x}\) and \(\mathbf{a}_{2}=a\cos\pi/3\cdot\mathbf{e}_{x}+a\sin\pi/3\cdot\mathbf{e}_{y}\), where \(a\) is the lattice constant of graphene, and \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) are the unit vectors defining the \(x\) and \(y\) directions, respectively. The two associated vectors defining the reciprocal lattice are thus \(\mathbf{a}_{1}^{\star}=(4\pi/a)(\cos\pi/6\cdot\mathbf{e}_{x}+\sin\pi/6\cdot\mathbf{e}_{y})\) and \(\mathbf{a}_{2}^{\star}=(4\pi/a)\cdot\mathbf{e}_{y}\). Under the twisting, the vectors defining the atomic lattice of the two graphene layers as well as their reciprocal lattice are determined by applying the rotation matrix \(R_{z}(\theta_{\ell})\), specifically \(\mathbf{a}_{i}^{(\ell)}=R_{z}(\theta_{\ell})\cdot\mathbf{a}_{i}\) and \(\mathbf{a}_{i}^{\star(\ell)}=R_{z}(\theta_{\ell})\cdot\mathbf{a}_{i}^{\star}\). The first Brillouin zone BZ\({}^{(\ell)}\) of each graphene layer is defined by a set of six corner points \(\mathbf{K}_{i}^{(\ell)}\). For instance, the points \(\mathbf{K}_{\xi}^{(\ell)}\) with \(\xi=\pm 1\) are given by \(\mathbf{K}_{\xi}^{(\ell)}=-\xi(2\mathbf{a}_{1}^{*(\ell)}+\mathbf{a}_{2}^{*(\ell)})/3\). For commensurate twisting, the twist angle \(\theta\) is determined by two integer numbers \(m\) and \(n\) via the formula: \[\theta=\arctan\left(\frac{|n^{2}-m^{2}|\sin(\pi/3)}{(n^{2}+m^{2})\cos(\pi/3)+2mn}\right). \tag{24}\] When the absolute value of the difference between \(m\) and \(n\) is equal to 1, two unit vectors \(\mathbf{A}_{i}^{*}=\mathbf{a}_{i}^{*(1)}-\mathbf{a}_{i}^{*(2)}\) define the moire reciprocal lattice. The mini-Brillouin zone comprises six \(\mathbf{K}_{i}^{M}\) points; two of these are \(\mathbf{K}_{1}^{M}=(-\mathbf{A}_{1}^{*}+\mathbf{A}_{2}^{*})/3\) and \(\mathbf{K}_{6}^{M}=(\mathbf{A}_{1}^{*}+2\mathbf{A}_{2}^{*})/3\). The remaining points can be obtained by shifting these points using the reciprocal unit vectors. We begin by representing the Hamiltonian \(H\) for electrons using the layer-resolution vector basis set \(\{|\ell\rangle\,|\,\ell=1,2\}\). Using the identity \(1=\sum_{\ell=1}^{2}|\ell\rangle\langle\ell|\), we can express \(H\) as follows: \[H=\sum_{\ell,\ell^{\prime}=1}^{2}|\ell\rangle H_{\ell,\ell^{\prime}}\langle\ell^{\prime}|, \tag{25}\] where \(H_{\ell,\ell^{\prime}}=\langle\ell|H|\ell^{\prime}\rangle\). Next, we can further specify \(H_{\ell,\ell^{\prime}}\) by using the lattice-resolution vector basis set \(\{|\mathbf{k},m\rangle=|\mathbf{k}+\mathbf{G}_{m}\rangle\,|\,\mathbf{k}\in\text{BZ},m\in\mathbb{Z}\}\). We can then use the identity \(1=\sum_{\mathbf{k}\in\text{BZ}}\sum_{m}|\mathbf{k},m\rangle\langle\mathbf{k},m|\) to obtain: \[H=\sum_{\mathbf{k}\in BZ}\sum_{\ell,m}\sum_{\ell^{\prime},n}|\ell,\mathbf{k},m\rangle H_{\ell,m;\ell^{\prime},n}(\mathbf{k})\langle\ell^{\prime},\mathbf{k},n|, \tag{26}\] where \(|\ell,\mathbf{k},m\rangle=|\ell\rangle|\mathbf{k},m\rangle\) and we use the relation \(\langle\ell,\mathbf{k},m|H|\ell^{\prime},\mathbf{k}^{\prime},n\rangle=H_{\ell,m;\ell^{\prime},n}(\mathbf{k})\delta_{\mathbf{k},\mathbf{k}^{\prime}}\). In the case of twisted bilayer graphene (TBG) configurations with tiny twist angles, the low energy states of electrons are distinguished by a quantum index \(\xi=\pm 1\), which corresponds to the two nonequivalent Dirac valleys \(\mathbf{K}_{\xi}^{(1,2)}\) of the graphene monolayers.
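As a concrete illustration of the geometry just described, the sketch below evaluates the commensurate twist angle of Eq. (24) and the moire reciprocal vectors \(\mathbf{A}_{i}^{*}\) in Python. It is a minimal sketch with an illustrative lattice constant; the function names are ours, not the authors'.

```python
import numpy as np

a = 0.246  # graphene lattice constant in nm (illustrative value)

def twist_angle(m, n):
    """Commensurate twist angle from the integers (m, n), Eq. (24)."""
    return np.arctan2(abs(n**2 - m**2) * np.sin(np.pi / 3),
                      (n**2 + m**2) * np.cos(np.pi / 3) + 2 * m * n)

def Rz(t):
    """In-plane rotation matrix R_z(t)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# unrotated reciprocal unit vectors a1*, a2* of monolayer graphene
a1s = (4 * np.pi / a) * np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])
a2s = (4 * np.pi / a) * np.array([0.0, 1.0])

def moire_reciprocal(theta):
    """Moire vectors A_i* = a_i*(1) - a_i*(2) for layers rotated
    by -theta/2 and +theta/2, as in the text."""
    A1 = Rz(-theta / 2) @ a1s - Rz(+theta / 2) @ a1s
    A2 = Rz(-theta / 2) @ a2s - Rz(+theta / 2) @ a2s
    return A1, A2

theta = twist_angle(31, 32)        # -> about 1.05 degrees
A1s, A2s = moire_reciprocal(theta)
dk2, dk3 = A1s, A1s + A2s          # the shifts entering Eq. (29)
print(np.degrees(theta))
```

For \((m,n)=(31,32)\), Eq. (24) yields \(\theta\approx 1.05^{\circ}\), the near-magic-angle configuration studied below.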
As the twist angle becomes smaller, the positions of the two points \(\mathbf{K}_{\xi}^{(1)}\) and \(\mathbf{K}_{\xi}^{(2)}\) become closer to each other. The Bistritzer-MacDonald model is represented by a Hamiltonian in real-space representation as follows:[42] \[\hat{H}^{\xi}=\begin{pmatrix}H_{1}^{\xi}(\hat{\mathbf{p}})&T^{\xi}(\hat{\mathbf{r}})\\ T^{\xi\dagger}(\hat{\mathbf{r}})&H_{2}^{\xi}(\hat{\mathbf{p}})\end{pmatrix}. \tag{27}\] Here, \(H_{1}^{\xi}(\hat{\mathbf{p}})\) and \(H_{2}^{\xi}(\hat{\mathbf{p}})\) are the blocks for the Hamiltonians of the two monolayers, and \(T^{\xi}(\hat{\mathbf{r}})\) is the interlayer coupling block, which are determined via the momentum and position operators, \(\hat{\mathbf{p}}\) and \(\hat{\mathbf{r}}\), respectively. For the range of energy around the intrinsic Fermi level, the two Hamiltonian blocks \(H_{1}^{\xi}(\hat{\mathbf{p}})\) and \(H_{2}^{\xi}(\hat{\mathbf{p}})\) are well approximated by the 2D Dirac Hamiltonian: \[H_{\ell}^{\xi}(\hat{\mathbf{p}})=-v_{F}(\xi\sigma_{x},\sigma_{y})\cdot\left[R^{z}\left(-\theta_{\ell}\right)\cdot(\hat{\mathbf{p}}-\hbar\mathbf{K}_{\xi}^{\ell})\right], \tag{28}\] where \(\sigma_{x},\sigma_{y}\) are two conventional Pauli matrices, \(R^{z}\left(-\theta_{\ell}\right)\) is a matrix that rotates the relevant vectors around the \(Oz\) axis to maintain the canonical form of the Dirac Hamiltonian, \(\mathbf{K}_{\xi}^{\ell}\) is the corner point of type (valley) \(\xi\) in the first Brillouin zone of layer \(\ell\), and \(v_{F}\) is the Fermi velocity. The minus sign in the equation is due to the negative value of the hopping parameter \(V_{pp\pi}=-2.7\) eV. The block term for interlayer coupling is expressed as follows:[43] \[T^{\xi}(\hat{\mathbf{r}})= \left(\begin{array}{cc}u&u^{\prime}\\ u^{\prime}&u\end{array}\right)+\left(\begin{array}{cc}u&u^{\prime}\omega^{-\xi}\\ u^{\prime}\omega^{\xi}&u\end{array}\right)e^{i\xi\delta\mathbf{k}_{2}\cdot\hat{\mathbf{r}}}+\left(\begin{array}{cc}u&u^{\prime}\omega^{\xi}\\ u^{\prime}\omega^{-\xi}&u\end{array}\right)e^{i\xi\delta\mathbf{k}_{3}\cdot\hat{\mathbf{r}}}. \tag{29}\] The values of \(\delta\mathbf{k}_{2}\) and \(\delta\mathbf{k}_{3}\) are defined as \(\mathbf{A}_{1}^{*}\) and \(\mathbf{A}_{1}^{*}+\mathbf{A}_{2}^{*}\), respectively, where \(\mathbf{A}_{1}^{*}\) and \(\mathbf{A}_{2}^{*}\) are reciprocal lattice vectors and \(\omega=\exp(i2\pi/3)\). To account for the effects of lattice reconstruction, the parameters \(u\) and \(u^{\prime}\), which are set to 0.0797 eV and 0.0975 eV, respectively, are chosen based on the description in Ref. [43]. The velocity operator corresponding to the Hamiltonian in Eq. (28) can be expressed as follows: \[\hat{v}_{\alpha}=-v_{F}(\xi\sigma_{x},\sigma_{y})\cdot R_{\alpha}^{z}(-\theta_{\ell}), \tag{30}\] Here, \(R_{\alpha}^{z}(-\theta_{\ell})\) denotes the first or second column of the rotation matrix \(R^{z}(-\theta_{\ell})\) depending on whether \(\alpha=x\) or \(\alpha=y\). It is noteworthy that, because of the relativistic nature of \(H_{\ell}^{\xi}(\hat{\mathbf{p}})\) in Eq. (28), the current operator \(\hat{j}_{\alpha}\) only comprises a drift current component, \(\hat{j}_{\alpha}=e\hat{v}_{\alpha}\), with no diffusion or diamagnetic terms.
The spectrum of the Hamiltonian \(\hat{H}\) is obtained by solving the secular equation given by: \[\left(\begin{array}{cc}H_{1}^{\xi}(\hat{\mathbf{p}})&T^{\xi}(\hat{\mathbf{r}})\\ T^{\xi\dagger}(\hat{\mathbf{r}})&H_{2}^{\xi}(\hat{\mathbf{p}})\end{array}\right)|\psi_{\xi,\mathbf{k}}\rangle=E|\psi_{\xi,\mathbf{k}}\rangle, \tag{31}\] where the electron states in the twisted bilayer graphene (TBG) lattice are modeled as Bloch state vectors \(|\psi_{\xi,\mathbf{k}}\rangle\). Because of the approximate periodicity of the moire lattice in TBG, the state vectors \(|\psi_{\xi,\mathbf{k}}\rangle\) can be expanded in terms of the plane-wave vectors \(|\mathbf{k}+\mathbf{G}_{m}\rangle\), which are the eigenvectors of the momentum operator \(\hat{\mathbf{p}}\), i.e., \(\hat{\mathbf{p}}|\mathbf{k}+\mathbf{G}_{m}\rangle=\hbar(\mathbf{k}+\mathbf{G}_{m})|\mathbf{k}+\mathbf{G}_{m}\rangle\) and \(\langle\mathbf{r}|\mathbf{k}+\mathbf{G}_{m}\rangle=e^{i(\mathbf{k}+\mathbf{G}_{m})\cdot\mathbf{r}}\). Specifically, we have: \[|\psi_{\xi,\mathbf{k}}\rangle=\sum_{m}\left(\begin{array}{c}C_{1,\xi,\mathbf{k}}(\mathbf{G}_{m})\\ C_{2,\xi,\mathbf{k}}(\mathbf{G}_{m})\end{array}\right)|\mathbf{k}+\mathbf{G}_{m}\rangle, \tag{32}\] where \(C_{1,\xi,\mathbf{k}}(\mathbf{G}_{m})\) and \(C_{2,\xi,\mathbf{k}}(\mathbf{G}_{m})\) are 2D vectors of combination coefficients that need to be found. \(\{\mathbf{G}_{n}\,|\,n=1,2,3,\ldots,N_{\mathbf{G}}\}\) is a set of \(N_{\mathbf{G}}\) vectors of the moire reciprocal lattice. Substituting the expression (32) into Eq. (31) then left-multiplying both sides with \(\langle\mathbf{k}+\mathbf{G}_{n}|\) we obtain a set of linear equations in the form: \[\sum_{m}\left[\left(\begin{array}{cc}H_{1}^{\xi}(\mathbf{k}+\mathbf{G}_{n})&T_{1}^{\xi}\\ T_{1}^{\xi\dagger}&H_{2}^{\xi}(\mathbf{k}+\mathbf{G}_{n})\end{array}\right)\delta_{\mathbf{G}_{n},\mathbf{G}_{m}}+\sum_{j=2}^{3}\left(\begin{array}{cc}0&T_{j}^{\xi}\\ 0&0\end{array}\right)\delta_{\mathbf{G}_{n},\mathbf{G}_{m}+\delta\mathbf{k}_{j}}+\sum_{j=2}^{3}\left(\begin{array}{cc}0&0\\ T_{j}^{\xi\dagger}&0\end{array}\right)\delta_{\mathbf{G}_{n},\mathbf{G}_{m}-\delta\mathbf{k}_{j}}\right]\left(\begin{array}{c}C_{1,\xi,\mathbf{k}}(\mathbf{G}_{m})\\ C_{2,\xi,\mathbf{k}}(\mathbf{G}_{m})\end{array}\right)=E\left(\begin{array}{c}C_{1,\xi,\mathbf{k}}(\mathbf{G}_{n})\\ C_{2,\xi,\mathbf{k}}(\mathbf{G}_{n})\end{array}\right), \tag{33}\] wherein we denote: \[\langle\mathbf{k}+\mathbf{G}_{n}|H_{\ell}^{\xi}|\mathbf{k}+\mathbf{G}_{m}\rangle =H_{\ell}^{\xi}(\mathbf{k}+\mathbf{G}_{n})\delta_{\mathbf{G}_{n},\mathbf{G}_{m}}, \tag{34a}\] \[\langle\mathbf{k}+\mathbf{G}_{n}|T^{\xi}(\hat{\mathbf{r}})|\mathbf{k}+\mathbf{G}_{m}\rangle =\sum_{j=1}^{3}T_{j}^{\xi}\delta_{\mathbf{G}_{n},\mathbf{G}_{m}+\delta\mathbf{k}_{j}}, \tag{34b}\] \[\langle\mathbf{k}+\mathbf{G}_{n}|T^{\xi\dagger}(\hat{\mathbf{r}})|\mathbf{k}+\mathbf{G}_{m}\rangle =\sum_{j=1}^{3}T_{j}^{\xi\dagger}\delta_{\mathbf{G}_{n},\mathbf{G}_{m}-\delta\mathbf{k}_{j}}. \tag{34c}\] To obtain a numerical solution, we define a \(4N_{\mathbf{G}}\times 4N_{\mathbf{G}}\) Hermitian matrix \(H_{\mathbf{k}}^{\xi}\) in block form for each value of \(\xi\) and \(\mathbf{k}\in\text{MBZ}\): \[[H_{\mathbf{k}}^{\xi}]_{n,n}=\left(\begin{array}{cc}H_{1}^{\xi}(\mathbf{k}+\mathbf{G}_{n})&T_{1}^{\xi}\\ T_{1}^{\xi\dagger}&H_{2}^{\xi}(\mathbf{k}+\mathbf{G}_{n})\end{array}\right), \tag{35a}\] \[[H_{\mathbf{k}}^{\xi}]_{n,m_{j}}=\left(\begin{array}{cc}0&T_{j}^{\xi}\\ 0&0\end{array}\right),\] if
\[\mathbf{G}_{m_{j}}=\mathbf{G}_{n}-\delta\mathbf{k}_{j},\] (35b) \[[H_{\mathbf{k}}^{\xi}]_{n,m_{j}^{\prime}}=\left(\begin{array}{cc}0&0\\ T_{j}^{\xi\dagger}&0\end{array}\right),\] if \[\mathbf{G}_{m_{j}^{\prime}}=\mathbf{G}_{n}+\delta\mathbf{k}_{j}. \tag{35c}\] The eigenvalues \(E_{n}^{\xi}(\mathbf{k})\) and corresponding eigenvectors, obtained by diagonalizing the matrix \(H_{\mathbf{k}}^{\xi}\), approximate the states of low-energy electrons in the TBG lattice. The accuracy of the effective description is numerically dependent on the number of reciprocal lattice vectors \(\{\mathbf{G}_{n}\,|\,n=1,2,3,\dots,N_{\mathbf{G}}\}\) considered in the calculation. To ensure validity in the low-energy range near the intrinsic Fermi level, where the energy surfaces of the monolayer graphene take the form of cones, we control the value of \(N_{\mathbf{G}}\) by using a cutoff energy \(E_{c}\). Specifically, we count only the \(\mathbf{G}_{n}\) vectors such that \(|\mathbf{G}_{n}|\leq E_{c}/\hbar v_{F}\). ## III Results and Discussions ### Electronic band structure of TBGs We used numerical methods to solve Eq. (33). Specifically, for each value of \(\xi=\pm 1\), we constructed a Hermitian matrix \(H_{\mathbf{k}}^{\xi}\) of size \(4N_{\mathbf{G}}\times 4N_{\mathbf{G}}\) and diagonalized it for each value of \(\mathbf{k}\) in the mini-Brillouin zone of the TBG system. The sets of eigenvalues \(E_{m}(\mathbf{k})\) and eigenvectors \(\{\mathbf{C}_{\xi}^{m}(\mathbf{k})=[C_{1,\xi,\mathbf{k}}^{m}(\mathbf{G}_{1}),C_{2,\xi,\mathbf{k }}^{m}(\mathbf{G}_{1}),\dots,C_{1,\xi,\mathbf{k}}^{m}(\mathbf{G}_{N_{\mathbf{G}}}),C_{2,\xi,\bm {k}}^{m}(\mathbf{G}_{N_{\mathbf{G}}})]^{T}\}\) were calculated. The results for three TBG configurations with twist angles of \(\theta=1.05^{\circ},1.89^{\circ}\) and \(3.89^{\circ}\) are shown in Fig. 1. The blue curves represent the results obtained when the electronic interlayer coupling is artificially turned off. Both sets of data are plotted on the same figure for comparison, to highlight the impact of the interlayer coupling. The presence of multiple dispersion curves in the figure results from the folding of the energy band structure of two graphene layers due to the enlargement of the unit cell of the TBG lattice and the corresponding shrinkage of the Brillouin zone into the mini-Brillouin zone. However, the electronic band structure of the TBG system is not simply formed in this manner. At the points where the bands of monolayer graphene cross, the interlayer coupling leads to the hybridization of Bloch states and the lifting of energy degeneracy to form new bands. Our results agree with other available data in the literature [43; 44; 25; 45]. However, in this study, we wish to emphasize the formation of new electronic states, which are special to TBG systems and play a crucial role in determining its electronic, optical and transport properties. Fig. 1 can be considered a visual representation of the evolution of the band structure with respect to the twist angle. We observe that for the configuration with \(\theta=3.89^{\circ}\), the dispersion curves in the low energy range around the intrinsic Fermi energy (\(E_{F}=0\)) have a similar form to the blue curves, although they are not identical. This implies that the states created in the bilayer system share similarities with those in monolayer graphene. 
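To make the numerical procedure just described concrete, the following is a minimal sketch (our own schematic, not the authors' code) of how the matrix \(H_{\mathbf{k}}^{\xi}\) of Eqs. (33)-(35) can be assembled and diagonalized. The set of \(\mathbf{G}_{n}\) vectors is assumed to have been restricted by the cutoff \(|\mathbf{G}_{n}|\leq E_{c}/\hbar v_{F}\) beforehand, the value of \(\hbar v_{F}\) is illustrative, and the Dirac-point offsets are folded into the momentum argument for brevity.

```python
import numpy as np

u, up = 0.0797, 0.0975             # interlayer parameters (eV), Ref. [43]
w = np.exp(2j * np.pi / 3)
hbar_vF = 0.597                    # hbar v_F in eV nm (illustrative)

def T_blocks(xi):
    """The three interlayer coupling blocks T_j^xi of Eq. (29)."""
    return [np.array([[u, up], [up, u]]),
            np.array([[u, up * w**(-xi)], [up * w**xi, u]]),
            np.array([[u, up * w**xi], [up * w**(-xi), u]])]

def dirac_block(q, xi, theta_l):
    """H_l^xi of Eq. (28) for momentum q (nm^-1) measured from the
    Dirac point of layer l (the K-point offset is assumed absorbed
    into q here, a simplification of the full model)."""
    c, s = np.cos(-theta_l), np.sin(-theta_l)
    qx, qy = c * q[0] - s * q[1], s * q[0] + c * q[1]
    return -hbar_vF * np.array([[0, xi * qx - 1j * qy],
                                [xi * qx + 1j * qy, 0]])

def H_bm(k, G, xi, theta, dk):
    """Assemble the 4N_G x 4N_G matrix of Eqs. (33)-(35); G is an
    (N_G, 2) array of moire reciprocal vectors, dk = (dk1, dk2, dk3)
    with dk1 = 0."""
    N = len(G)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    Tj = T_blocks(xi)
    for n in range(N):
        # diagonal blocks: the two rotated Dirac cones of valley xi
        H[4*n:4*n+2, 4*n:4*n+2] = dirac_block(k + G[n], xi, -theta / 2)
        H[4*n+2:4*n+4, 4*n+2:4*n+4] = dirac_block(k + G[n], xi, +theta / 2)
        for m in range(N):
            for j in range(3):
                if np.allclose(G[n], G[m] + xi * dk[j], atol=1e-6):
                    H[4*n:4*n+2, 4*m+2:4*m+4] += Tj[j]
                    H[4*m+2:4*m+4, 4*n:4*n+2] += Tj[j].conj().T
    return H

# E, C = np.linalg.eigh(H_bm(k_point, G, +1, theta, (np.zeros(2), dk2, dk3)))
```

Each diagonalization yields the band energies \(E_{m}^{\xi}(\mathbf{k})\) and the coefficient vectors entering Eq. (32).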
For smaller twist angles, particularly \(\theta=1.05^{\circ}\), a band with a very narrow bandwidth forms around the Fermi energy, separated from the lower and upper bands by a narrow gap. These dispersion curves are completely different from the blue curves, as seen in Fig. 1(e). We conclude that the electronic interlayer coupling creates special states in TBG lattices with small twist angles. Although the TBG systems are two-dimensional, it can be challenging to analyze their geometrical features of energy surfaces in wave-vector space. However, the density of states (DOS) generally exhibits key features, such as van Hove singularity behavior, as shown in the right panels of Figs. 1 (a, b, c). The blue curves in these figures were obtained numerically without considering the effects of electronic interlayer coupling, which is equivalent to the DOS of two independent graphene layers. The red curves show significant peaks due to van Hove singularities, indicating the presence of extremal and saddle points of the energy surfaces. It's worth noting that the energy bands with index \(\xi=+1\) (resulting from the hybridization of graphene Bloch states in the \(K_{1}\) valley of the first layer and \(K_{2}\) valley of the second layer) differ from those with index \(\xi=-1\) (resulting from the hybridization of Bloch states in the \(K_{1}^{\prime}\) valley and \(K_{2}^{\prime}\)). This difference is evident along the \(\Gamma M\) direction as shown in Figs. 1 (d, e, f). However, it does not appear in the DOS picture. We computed the DOS for both \(\xi=+1\) and \(\xi=-1\) separately and obtained the same results. This implies that the energy surfaces with \(\xi=+1\) are simply rotations of the corresponding ones with \(\xi=-1\) by an angle of \(\pi/3\). Figure 3: The real and imaginary components of the overall longitudinal conductivity for TBG configurations with \(\theta=3.890^{\circ}\) (a), \(1.890^{\circ}\) (b), and \(1.050^{\circ}\) (c). There are no total transverse conductivities due to the presence of time-reversal symmetry. ### DC conductivity The calculation of the electronic structures of TBG configurations with twist angles greater than 2 degrees shows the presence of two energy bands near the Fermi energy (\(E_{F}=0\)), which have dispersion curves similar to those of monolayer graphene. However, the Fermi velocity of the linear dispersion law in TBG (\(v_{F}^{TBG}\)) is smaller than that of monolayer graphene (\(v_{F}^{MLG}\)). As the twist angle decreases, these two bands become closer and eventually form an isolated band with a narrow bandwidth. This "flat band" near the Fermi energy is expected to have limited impact on the transport properties of TBG sheets because of its small Fermi velocity. In other words, electrons occupying these bands are expected to be spatially confined in the atomic lattice. Although these bands are not completely dispersionless, the study of their transport properties is still of great interest. To this end, within the framework of the non-interacting electron approximation, the DC conductivity was calculated using the Kubo-Greenwood formula as given by Eqs. (21) and (22). The results of the calculation for the longitudinal components (\(\sigma_{xx}^{DC}\) and \(\sigma_{yy}^{DC}\)) of three TBG configurations at low temperatures are presented in Fig. 2. 
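The Kubo-Greenwood evaluation of Eqs. (21)-(23) invoked here reduces to a short numerical routine. The sketch below is our own schematic, assuming eigenvalues and velocity matrix elements from the diagonalization above; the unit handling and prefactors are symbolic (in units of \(e^{2}\)) rather than a faithful reproduction of the authors' implementation.

```python
import numpy as np

def gaussian(x, eta):
    """Gaussian approximation of the delta function, Eq. (23)."""
    return np.exp(-(x / eta) ** 2) / (eta * np.sqrt(np.pi))

def sigma_xx_E(E, E_n, v, S, eta=3e-3):
    """Zero-temperature sigma_aa(E) of Eq. (22). E_n: eigenvalues (eV),
    v: velocity matrix elements <m|v_a|n>, S: sample area; eta below
    5 meV, as chosen in the text."""
    hbar = 6.582e-16                     # eV s
    w = gaussian(E - E_n, eta)           # broadened delta(E - E_m)
    return 2 * np.pi * hbar / S * np.einsum('m,mn,n->', w, np.abs(v)**2, w)

def sigma_xx_DC(mu, kT, E_n, v, S, eta=3e-3):
    """Finite-temperature DC conductivity, Eq. (21): thermal average
    of sigma_aa(E) with the derivative of the Fermi function."""
    E = np.linspace(E_n.min(), E_n.max(), 2000)
    f = 1.0 / (1.0 + np.exp((E - mu) / kT))
    sig = np.array([sigma_xx_E(x, E_n, v, S, eta) for x in E])
    return np.trapz(-np.gradient(f, E) * sig, E)
```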
The data indicate that the electrical conductivity of TBG systems is anisotropic, which is reflected in the two different longitudinal elements of the conductivity tensor, \(\sigma_{xx}^{DC}\) (represented by solid curves) and \(\sigma_{yy}^{DC}\) (represented by dashed curves). The conductivity should reflect the electronic structure with respect to the Fermi energy, and our calculations for the two cases of \(\xi=\pm 1\) yielded the same result. This is similar to the picture of the density of states, but the difference between the two longitudinal conductivities highlights the anisotropic nature of the energy surfaces. For the TBG configuration with a twist angle of \(\theta=3.890^{\circ}\), the electronic structure is similar to that of monolayer graphene (except for the value of the Fermi velocity), and the conductivity curves have a characteristic V-shape (or an M-shape when considered over a wider energy range, as shown by the blue curves). For TBG configurations with smaller twist angles, the electronic structure and thus the conductivity picture becomes much more complex. However, our calculations show that the conductivity at the intrinsic Fermi level \(E_{F}=0\) is always finite and non-zero. Interestingly, the calculated DC conductivity for TBG configurations with twist angles that are not magic values is about two times the quantum value of the minimal conductivity of monolayer graphene, which is \(\sigma_{0}^{DC}=4e^{2}/\pi h\). In the case of the TBG configuration with the magic twist angle \(\theta=1.050^{\circ}\), the conductivity curve shows a small hump at the Fermi energy \(E_{F}=0\) with \(\sigma_{\alpha\alpha}^{DC}(0)\approx\sigma_{0}^{DC}\). This quantum value of the conductivity of monolayer graphene has been intensively discussed in the past and is rooted in the fact that the valence and conduction bands touch each other at the \(K\) points, resulting in always-available free carriers due to fluctuations in the electrostatic profile around the Fermi energy level. [46; 47; 48; 49; 50; 51] In our calculation, we approximated the delta-Dirac functions in the Kubo-Greenwood formula with a Gaussian function with a parameter \(\eta\) that defines the finite width of the peak. The value of \(\eta\) was taken to be \(\eta<5\) meV to account for both numerical calculation needs and the broadening of energy levels and finite lifetime of the electric charge-carrying quasi-particles. Although the observed DC conductivity value in TBG systems is supported by the characteristic hybridized states in the TBG lattice, we suppose that it has the same underlying physical explanation as in monolayer graphene. ### AC conductivity and optical Hall drag conductivity In Fig. 3, we present the results of our calculations for the total optical conductivity of three different twisted bilayer graphene (TBG) configurations. The calculation was performed using the Kubo formula for the conductivity tensor, where the velocity operator \(\hat{v}_{\alpha}\) was determined from the Hamiltonian (27) using Eq. (18). The time reversal symmetry and the Onsager reciprocal relations eliminate the off-diagonal elements \(\sigma_{xy}\) and \(\sigma_{yx}\) of the conductivity tensor, which we numerically confirmed behave as noise with an amplitude of \(10^{-3}\). We also verified all of the symmetry properties of the conductivity tensor according to Eq. (17), demonstrating the accuracy of our numerical data. 
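For reference, the evaluation of the Kubo formula (19) with the drag matrix elements of Eq. (20b) can be sketched as follows. The function and argument names are ours, and the unit handling is schematic (result in units of \(e^{2}\)); this is not the authors' implementation.

```python
import numpy as np

def sigma_drag(omega, E_n, v1, v2, f, S, eta=1e-3):
    """Eq. (19) with the drag elements of Eq. (20b). E_n and eta in eV,
    omega in rad/s; v1[m, n] = <m|v_a^(1)|n>, v2[m, n] = <m|v_b^(2)|n>."""
    hbar = 6.582e-16                     # eV s
    O = v1 * v2.T                        # <m|v_a^(1)|n><n|v_b^(2)|m>
    dF = f[:, None] - f[None, :]         # f_m - f_n
    dE = E_n[:, None] - E_n[None, :]     # E_m - E_n (eV)
    return (1j / S) / (omega + 1j * eta / hbar) * \
           np.sum(dF * O / (dE + hbar * omega + 1j * eta))
```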
Figure 3 displays the longitudinal conductivity curves \(\sigma_{xx}(\omega)\) with significant peaks, which are attributed to dominant interband transition processes. This correlation is supported by the relationship to the electronic band structures of the TBG configurations, shown in Fig. 1. For instance, in Fig. 1(b), at an energy of 0.5681 eV, there is an optical absorption peak that can be assigned to a transition from the highest valence band to the lowest conduction band around the \(M\) points in the mini-Brillouin zone. As the twist angle decreases, the electronic structure becomes more complex and the optical conductivity exhibits more optical absorption peaks. For TBG configurations with twist angles greater than 2 degrees, there is a distinctive feature in the low-energy range of the conductivity curve: the real part of the conductivity is independent of the photon energy \(\hbar\omega\) and takes a value of \(2\sigma_{0}\), where \(\sigma_{0}=\pi e^{2}/2h\). This behavior is similar to that of monolayer graphene and results from the linear dispersion law of electron states. In the low-energy range, the energy band structure of TBG systems is qualitatively similar to that of monolayer graphene, but with a smaller Fermi velocity, as previously analyzed. The value of \(\sigma_{xx}(\omega)\) is equal to \(2\sigma_{0}\), as found in previous studies.[23; 25; 35] For the case of the "magic angle" \(\theta=1.050^{\circ}\), our calculation shows a clear dominant contribution from optical transitions from the quasi-flat bands around the Fermi level to the lowest conduction band around the \(\Gamma\) point, as shown in Fig. 3(c) and Figs. 1(c) and 1(f). Thus far, we would like to emphasize that the use of the total optical conductivity tensor is not relevant when studying the wave transmission problem in TBG. As we have presented in Secs. II.1 and II.2, the required quantities to be utilized are \(\mathbf{\sigma}^{(\ell)}(\omega)\) and \(\mathbf{\sigma}^{(drag)}(\omega)\). In Figs. 4 and 5, we present the calculation results for the elements of the conductivity tensor blocks \(\mathbf{\sigma}^{(\ell)}(\omega)\) and \(\mathbf{\sigma}^{(drag)}(\omega)\) as analyzed in Sec. II.2. Our numerical calculation reveals that \(\mathbf{\sigma}^{(1)}(\omega)=\mathbf{\sigma}^{(2)}(\omega)=\sigma_{xx}^{(\ell)}(\omega)\mathbf{\tau}_{0}\), verifying the equivalent role of the two graphene layers in the TBG lattice. Figs. 4(a), (b), and (c) display the real (red) and imaginary (blue) parts of \(\sigma_{xx}^{(\ell)}(\omega)\) for \(\ell=1,2\). It is worth noting that decreasing the twist angle increases the magnitude of \(\sigma_{xx}^{(\ell)}\), particularly when the imaginary part becomes dominant and varies in accordance with \(1/\hbar\omega\), as seen in Fig. 4(c). Fig. 5 shows the real and imaginary parts of the elements of the drag block \(\mathbf{\sigma}^{(drag)}(\omega)\) of the conductivity tensor. For large twist angles in the TBG configuration, there is a notable difference between \(\sigma_{xx}^{(\ell)}(\omega)\) and \(\sigma_{xx}^{(drag)}(\omega)\), see Figs. 4(a) and 5(a). However, for small twist angles (\(<2^{\circ}\)), these curves are quite similar but have opposite signs. As a result, although there is significant variation in the magnitude of \(\sigma_{xx}^{(\ell)}(\omega)\) and \(\sigma_{xx}^{(drag)}(\omega)\), they compensate each other when summed to give the total value \(\sigma_{xx}(\omega)=2[\sigma_{xx}^{(\ell)}(\omega)+\sigma_{xx}^{(drag)}(\omega)]\), as seen in Fig. 3.
Figures 5(b), (d), and (f) present the calculated results for the transverse conductivity component \(\sigma_{xy}^{(drag)}(\omega)\). The behavior of \(\sigma_{xy}^{(drag)}(\omega)\) is primarily characterized by close-to-zero values across the energy range, with some sharp peaks and dips appearing at energy positions with high optical absorption. This finite value of \(\sigma_{xy}^{(drag)}(\omega)\) can be attributed to both the correlation between the current densities in the two graphene layers and the chiral structure of the atomic lattices. The former is considered a necessary condition, representing the result of the coupling between the two graphene layers, which leads to the formation of composite states like the superposition of Bloch states between the two layers. This means that when electrons move in one graphene layer along the \(Ox\) direction, they induce a motion in the second layer along the transverse \(Oy\) direction due to interlayer coupling. The sufficient condition, however, is the contribution of the electronic states in the system to the resulting conductivity. If the system had mirror symmetry, these correlations would cancel each other out completely. However, the TBG lattices lack mirror symmetry, unlike the hexagonal lattice of graphene, which results in a finite drag transverse conductivity, \(\sigma_{xy}^{(drag)}(\omega)\neq 0\). To emphasize this point, the hexagonal mini-Brillouin zone of the TBG lattice was divided into three equal parts, as depicted in the inset of Fig. 6. This division was made with respect to the mirror plane \(M_{yz}\): domain (1) is symmetric under this reflection, while domains (2) and (3) are mirror images of each other. Using the Kubo formula, the value of \(\sigma_{xy}^{(drag)}(\omega)\) was calculated by summing over all \(\mathbf{k}\) points in each of the three domains. The results, shown in Fig. 6, indicate that the results in domains (2) and (3) are similar, opposite in sign, and much larger than the result in domain (1). Upon summing these results, a strong but incomplete cancellation takes place, leading to the final value of \(\sigma_{xy}^{(drag)}(\omega)\) displayed in Figs. 5(b), (d), and (f). For the AA- and AB-stacked configurations with mirror symmetries, the cancellation was found to be complete, resulting in \(\sigma_{xy}^{(drag)}(\omega)=0\). The symmetry analyses in Refs. [44] and [23] support this theory. ### Optical characteristics On the basis of the theory established in Sec. II and the conductivity tensor data already computed, we performed the calculation for the transfer matrix, and then the reflection, transmission, and absorption spectra of several TBG configurations. The obtained results for transmittance and CD are presented in Figs. 7 and 8, respectively. Since the reflectance is negligible, it is not shown here. It should be noted that while the optical properties of TBGs have been extensively studied, they are usually analyzed through the behavior of the real and imaginary parts of the diagonal components of the total optical conductivity tensor,[25; 36; 52] rather than through the transmittance and reflectance, which are quantities that can be directly measured in experiments. From the transmission spectra shown, we see that the transmittance of all TBG configurations reaches about 98%, with an absorptance of about 2% on average, over a large energy range.
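For concreteness, the pipeline that produces these spectra — assembling \(\mathbf{M}_{31}\) from Eqs. (9a)-(9c) and extracting \(R\), \(T\), \(A\), and CD via Eqs. (10)-(12) — can be sketched numerically as follows. This is a minimal sketch with placeholder conductivity values and vacuum on both sides; it is not the authors' implementation.

```python
import numpy as np

I2 = np.eye(2)
tau_y = np.array([[0, -1j], [1j, 0]])

def M_31(sig_l, sig_d, sig_xy_d, Y, k2, d):
    """Transfer matrix M_31 = M_32 M_2^f M_21 of Eqs. (8)-(9) at normal
    incidence; Y = (Y1, Y2, Y3) are the admittances sqrt(eps_i/mu_i),
    sig_l the scalar local conductivity of each layer, (sig_d,
    sig_xy_d) the drag block of Eq. (17)."""
    s = sig_l * I2
    sd = sig_d * I2 - 1j * sig_xy_d * tau_y
    Y1, Y2, Y3 = Y
    ep, em = np.exp(1j * k2 * d), np.exp(-1j * k2 * d)
    B = lambda a, b, c, e: np.block([[a, b], [c, e]])
    M32 = np.linalg.solve(B(I2, I2, -Y3 * I2 - s, Y3 * I2 - s),
                          B(I2, I2, -Y2 * I2 + sd.conj().T * em,
                                     Y2 * I2 + sd.conj().T * ep))
    M21 = np.linalg.solve(B(I2, I2, -Y2 * I2 - sd * ep, Y2 * I2 - sd * em),
                          B(I2, I2, -Y1 * I2 + s, Y1 * I2 + s))
    M2f = B(ep * I2, 0 * I2, 0 * I2, em * I2)
    return M32 @ M2f @ M21

def spectra(M, n1=1.0, n3=1.0):
    """r, t (Eq. 10) and R, T, A (Eq. 11) for a given polarization."""
    r = -np.linalg.solve(M[2:, 2:], M[2:, :2])
    t = M[:2, :2] + M[:2, 2:] @ r
    def RTA(E):
        Er, Et = r @ E, t @ E
        norm = (E.conj() @ E).real
        R = abs(Er.conj() @ Er) / norm
        T = abs(n3 / n1 * (Et.conj() @ Et)) / norm
        return R, T, 1.0 - R - T
    return RTA

E_L = np.array([1, 1j]) / np.sqrt(2)       # left-handed polarization
E_R = np.array([1, -1j]) / np.sqrt(2)      # right-handed polarization

Y0 = 1 / 376.73                            # vacuum admittance (S)
M = M_31(6.1e-5, -5.0e-5, 1.0e-7, (Y0, Y0, Y0),
         k2=2 * np.pi / 500.0, d=0.335)    # k2 in nm^-1, d in nm
RTA = spectra(M)
A_L, A_R = RTA(E_L)[2], RTA(E_R)[2]
CD = (A_L - A_R) / (A_L + A_R)             # Eq. (12)
```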
Figure 6: Contribution of Bloch states to \(\sigma_{xy}^{(drag)}(\omega)\): The red and blue curves represent the contribution of Bloch states with \(\mathbf{k}\) in the first third of the Brillouin zone, while the solid purple and dashed green curves represent the contribution of Bloch states with \(\mathbf{k}\) in the second and third thirds of the Brillouin zone, respectively. Figure 7: The transmission spectra for three TBG configurations are shown in (a), (b), and (c) respectively, with twist angles of \(\theta=3.890^{\circ}\), \(\theta=1.890^{\circ}\), and \(\theta=1.050^{\circ}\). The spectral curves clearly exhibit peaks and dips consistent with the picture of the longitudinal conductivities. The differences between the transmission and absorption spectra for left- and right-handed light are too small to be distinguished directly. However, the CD spectra manifest these differences clearly. From Fig. 8, we see that for the TBG configuration with \(\theta=3.890^{\circ}\), the CD curve exhibits essential features of the experimental data reported by Kim et al.,[15] for instance, broad peaks and dips, apart from several sharp peaks at the energy positions of the absorption peaks. In order to verify the role of the drag transverse conductivity \(\sigma_{xy}^{(drag)}\), we repeated the calculation with this term excluded. As expected, this results in zero CD over the entire energy range, while leaving the transmission spectrum mostly unchanged. We therefore conclude that this drag transverse conductivity plays the decisive role in governing the chiral optical behaviors of the TBG systems. Last but not least, it is important to note that the CD spectrum shown in Fig. 8 is extracted from the theory presented in Sec. II.1, which may differ from the formula used in Refs. [24] and [32], where CD is simply proportional to the real part of \(\sigma_{xy}\). ## IV Conclusion The field of engineering two-dimensional quantum materials for electronic and optoelectronic applications has been the subject of extensive research. In this work, we demonstrate that the optical activity observed in the TBG system results from the spatial dispersion effects of a typical vdW heterostructure of two graphene layers lacking mirror and glide symmetries. Our analysis addresses two fundamental aspects of the field-matter interaction problem: the propagation of electromagnetic waves through a material layer and the material layer's response to an external field, treated using the macroscopic and microscopic descriptions, respectively. Technically, we utilize an effective continuum model to demonstrate how electron states in each graphene layer can hybridize, forming electron states in the bilayer system that allow for the correlation of transverse motions between the two layers. In terms of electromagnetic wave propagation, we clarify how the coupling between the two layers can transform the magnitude and direction of the fields within the material layer. We provide a detailed solution to the electromagnetic wave propagation problem through a TBG sheet by considering each graphene layer as a conducting interface between two dielectric media. The two interfaces are not independent; they carry mutually influenced current densities. Our solution to the wave propagation problem clarifies how the roles of the conductivity components characterized by local and nonlocal spatial effects manifest themselves in the optical behavior of the material system.
On the basis of explicitly considering the spatial dispersion effects, our analysis naturally decomposes the conductivity tensor into the local part and the drag part without relying on symmetry analysis or assumptions about the phase shift between current operators in the two layers. We calculate all elements of these parts using the Kubo formula, and formulate and calculate the transfer matrix numerically. We demonstrate that the drag transverse conductivity \(\sigma_{xy}^{(drag)}\) plays a critical role in defining the chiral optical response of the TBG system. This conductivity arises from the chiral structure of the atomic lattice, which leads to hybridized states that lack mirror symmetry and support the correlation of transverse motions between the two graphene layers. We use the results of our calculations of the components of the optical conductivity tensor to calculate the transmission and circular dichroism spectra. We show that the overall transparency of twisted bilayer graphene (TBG) samples is, on average, about 98%, while they can absorb up to 2% of the incident light. Significantly, we establish that the drag transverse conductivity is a decisive factor in determining the circular dichroism, thereby affirming that the optical activity of the TBG system is a manifestation of spatial dispersion effects. Furthermore, we found that the DC conductivity of the TBG system exhibits a quantum value proportional to \(e^{2}/h\) at the intrinsic Fermi energy, which is determined by the hybridized states in the bilayer system. This quantum value is obtained from a single-particle approximation. However, a rigorous discussion of the intrinsic transport properties of TBG configurations with magic twist angles should be based on a strongly correlated picture of the low-energy excited states as Dirac fermions. In summary, our study emphasizes the importance of considering the finite thickness of twisted bilayer graphene when analyzing its optical response and transport properties, owing to the significance of spatial dispersion effects. This theoretical approach can also be extended to other van der Waals material systems with multiple layers. Figure 8: The circular dichroism (CD) spectra of three TBG configurations with twist angles of \(\theta=3.890^{\circ}\) (a), \(\theta=1.890^{\circ}\) (b), and \(\theta=1.050^{\circ}\) (c) are displayed. ## Acknowledgements One of the authors, S.T.H., acknowledges the financial support of Hanoi University of Civil Engineering (HUCE) for his work under grant number 28-2021/KHXD-TD.
2308.14980
Scope and limitations of ad hoc neural network reconstructions of solar wind parameters
Solar wind properties are determined by the conditions of their solar source region and transport history. Solar wind parameters, such as proton speed, proton density, proton temperature, magnetic field strength, and the charge state composition of oxygen, are used as proxies to investigate the solar source region of the solar wind. The transport and conditions in the solar source region affect several solar wind parameters simultaneously. The observed redundancy could be caused by a set of hidden variables. We test this assumption by determining how well a function of four of the selected solar wind parameters can model the fifth solar wind parameter. If such a function provided a perfect model, then this solar wind parameter would be uniquely determined from hidden variables of the other four parameters. We used a neural network as a function approximator to model unknown relations between the considered solar wind parameters. This approach is applied to solar wind data from the Advanced Composition Explorer (ACE). The neural network reconstructions are evaluated in comparison to observations. Within the limits defined by the measurement uncertainties, the proton density and proton temperature can be reconstructed well. We also found that the reconstruction is most difficult for solar wind streams preceding and following stream interfaces. For all considered solar wind parameters, but in particular the proton density, temperature, and the oxygen charge-state ratio, parameter reconstruction is hindered by measurement uncertainties. The reconstruction accuracy of sector reversal plasma is noticeably lower than that of streamer belt or coronal hole plasma. The fact that the oxygen charge-state ratio, a non-transport-affected property, is difficult to reconstruct may imply that recovering source-specific information from the transport-affected proton plasma properties is challenging.
Maximilian Hecht, Verena Heidrich-Meisner, Lars Berger, Robert F. Wimmer-Schweingruber
2023-08-29T02:14:08Z
http://arxiv.org/abs/2308.14980v1
# Scope and limitations of ad hoc neural network reconstructions of solar wind parameters ###### Abstract Context: Solar wind properties are determined by the conditions of their solar source region and transport history. Solar wind parameters, such as proton speed, proton density, proton temperature, magnetic field strength, and the charge state composition of oxygen, are used as proxies to investigate the solar source region of the solar wind. The solar source region of the solar wind is relevant both to the interaction of the latter with the Earth's magnetosphere and to our understanding of the underlying plasma processes, but the effect of the transport history of the wind is also important. The transport and conditions in the solar source region affect several solar wind parameters simultaneously. Therefore, the typically considered solar wind properties (e.g. proton density and oxygen charge-state composition) carry redundant information. Here, we are interested in exploring this redundancy. Aims: The observed redundancy could be caused by a set of hidden variables that determine the solar wind properties. We test this assumption by determining how well an (arbitrary, non-linear) function of four of the selected solar wind parameters can model the fifth solar wind parameter. If such a function provided a perfect model, then this solar wind parameter would be uniquely determined from hidden variables of the other four parameters and would therefore be redundant. If no reconstruction were possible, this parameter would be likely to contain information unique among the parameters evaluated here. In addition, isolating redundant or unique information contained in these properties guides requirements for in situ measurements and development of computer models. Sufficiently accurate measurements are necessary to understand the solar wind and its origin, to meaningfully classify solar wind types, and to predict space weather effects. Methods: We employed a neural network as a function approximator to model unknown, arbitrary, non-linear relations between the considered solar wind parameters. This approach is not designed to reconstruct the temporal structure of the observations. Instead, a time-stable model is assumed and each point of measurement is treated separately. This approach is applied to solar wind data from the Advanced Composition Explorer (ACE). The neural network reconstructions are evaluated in comparison to observations, and the resulting reconstruction accuracies for each reconstructed solar wind parameter are compared while differentiating between different solar wind conditions (i.e. different solar wind types) and between different phases in the solar activity cycle. Therein, solar wind types are identified according to two solar-wind classification schemes based on proton plasma properties. Results: Within the limits defined by the measurement uncertainties, the proton density and proton temperature can be reconstructed well. Each parameter was evaluated with multiple criteria. Overall, the proton speed was the parameter with the most accurate reconstruction, while the oxygen charge-state ratio and magnetic field strength were most difficult to recover. We also analysed the results for different solar wind types separately and found that the reconstruction is most difficult for solar wind streams preceding and following stream interfaces.
Conclusions: For all considered solar wind parameters, but in particular the proton density, proton temperature, and the oxygen charge-state ratio, parameter reconstruction is hindered by measurement uncertainties. The proton speed, while being one of the easiest to measure, also seems to carry the highest degree of redundancy with the combination of the four other solar wind parameters. Nevertheless, the reconstruction accuracy for the proton speed is limited by the large measurement uncertainties on the respective input parameters. The reconstruction accuracy of sector reversal plasma is noticeably lower than that of streamer belt or coronal hole plasma. We suspect that this is a result of the effect of stream interaction regions, which strongly influence the proton plasma properties and are typically assigned to sector reversal plasma. The fact that the oxygen charge-state ratio --a non-transport-affected property-- is difficult to reconstruct may imply that recovering source-specific information from the transport-affected proton plasma properties is challenging. This underlines the importance of measuring the heavy ion charge-state composition. ## 1 Introduction The properties of the solar wind mostly depend on two factors, the conditions in the solar source region and the transport history of the solar wind until it is measured at a spacecraft. The charge states observed in the solar wind are determined by the electron temperature in the solar source region and are, to a good approximation, frozen in after the solar wind leaves the hot corona (Geiss et al., 1995; Aellig et al., 1997). The initial properties (bulk proton speed, proton density, and proton temperature) also vary with the solar source region of the solar wind, but are, in addition, affected by transport effects. In the context of our study, we consider the following aspects as transport effects: expansion, wave-particle interactions, collisions, and compression regions as found in stream interaction regions (SIRs). Except for expansion, each of them affects the reconstruction of solar wind properties in our study. As a result of expansion, the magnetic field, the proton density, and proton temperature all decrease with increasing solar distance (Marsch et al., 1982; Perrone et al., 2019). While expansion affects all the solar wind at 1 astronomical unit (AU) in the same way, other transport effects impact different types of solar wind differently. Depending on the solar source region, at least two types of solar wind are typically distinguished (von Steiger et al., 2000; Zhao et al., 2009; Zhao and Fisk, 2010; Xu and Borovsky, 2015; Camporeale et al., 2017). Coronal holes have been identified as the source of the (typically) faster component of the solar wind (Hundhausen et al., 1968; Tu et al., 2005; Schwadron et al., 2005), which is associated with low oxygen charge states, low proton densities, high proton temperatures, and high magnetic field strength. Coronal hole wind is strongly affected by wave-particle interactions, in particular the (real and apparent) heating of the proton bulk, which explains the high observed proton temperatures in this solar wind type. Wave-particle interactions are also assumed to be the cause of the differential streaming observed in coronal hole wind (Berger et al., 2011; Kasper et al., 2008, 2012; Janitzek et al., 2016).
An example of the redundant information contained in the different solar wind parameters is provided by the fact that the high observed proton temperature in coronal hole wind is an effect of the presence of Alfven waves in the solar wind (Marsch et al., 1982). This is illustrated by Heidrich-Meisner et al. (2020), who show that explicit information about the magnetic field is not necessary to identify the same solar wind types as in Xu and Borovsky (2015). Our study investigates such redundancies among solar wind parameters. The properties of the slow solar wind systematically differ from those of the coronal hole wind. Slow solar wind is typically associated with high proton densities, low proton temperatures, low magnetic field, and high (oxygen) charge states (von Steiger et al., 2000; Zhao et al., 2009; Zhao and Fisk, 2010; Xu and Borovsky, 2015). These properties correspond to the properties of closed-field-line regions on the Sun. However, the exact source regions of slow solar wind and the corresponding release mechanisms are still a matter of debate (Schwadron et al., 2005; Sakao et al., 2007; Rouillard et al., 2010; Antiochos et al., 2011; Stakhiv et al., 2015; D'Amicis and Bruno, 2015). At 1 AU, the slow solar wind has experienced just enough collisions that their impact begins to thermalise the velocity distribution function (Kasper et al., 2012; Janitzek et al., 2016) and Alfvenic wave activity is low. Solar wind originating in equatorial coronal holes can also be observed with comparatively low solar wind proton speeds. Such slow coronal hole wind is also called Alfvenic slow solar wind (D'Amicis and Bruno, 2015; Panasenco et al., 2020; Louarn et al., 2021). Observing both coronal hole wind and slow solar wind in the same proton speed range also illustrates that the solar wind proton speed alone is not well suited to characterising solar wind. Other solar wind properties are better tracers of solar wind type. As observations from Helios (Marsch et al., 1982), Parker Solar Probe (Verniero et al., 2020; Zhao et al., 2021), and Solar Orbiter (Jannet et al., 2021; Carbone et al., 2021) show that waves occur frequently in all types of solar wind close to the Sun, the presence or absence of waves can also be considered as a transport effect that is more important close to the Sun than at greater solar distances. Another important transport effect that is increasingly influential as the solar wind travels further from the Sun is linked to the compression regions in SIRs that develop at the boundary of solar wind streams with different speeds (Smith and Wolfe, 1976; Richardson, 2018). If a faster solar wind stream interacts with a preceding slower solar wind stream, an SIR forms that is characterised by (hot and dense) compression regions in both the slow and the fast participating solar wind stream and a high magnetic field strength at the stream interface. As modelled in Hofmeister et al. (2022), SIRs evolve with radial distance and a decreasing amount of unperturbed fast solar wind is observed with increasing distance. Therefore, in this study, we consider SIRs as a transport effect on the solar wind. Since SIRs are often associated with a change in magnetic field polarity, in the Xu and Borovsky (2015) categorisation, compressed slow solar wind tends to be identified as so-called sector reversal plasma (Heidrich-Meisner et al., 2020).
Although the properties of slow solar wind can be highly variable and coronal hole wind also shows variability (Zhao and Landi, 2014; Heidrich-Meisner et al., 2016) on multiple scales, the respective average properties are systematically correlated with each other (Lepri et al., 2013; McComas et al., 2000; von Steiger et al., 2000). This redundancy hints at a common underlying cause that determines these properties. Under the assumption that all observed solar wind parameters are determined by the same set of hidden variables in the solar corona, it would be possible to reproduce each solar wind parameter from the redundant measurements of the other solar wind parameters. In this study, we test this assumption with the help of a general function approximator to model the (partly) unknown dependencies of the respective solar wind properties. After the solar wind leaves the solar corona, such a relation can be modified by transport effects. Therefore, we investigate the resulting reconstruction separately for different solar wind types with their different respective transport histories. In this way, our study evaluates the degree to which the relationship between solar wind parameters is modified by different transport effects. To this end, we employ feed-forward neural networks as general function approximators (Hornik et al., 1989) and apply our method to solar wind observed at L1. In recent years, the application of machine learning to solar physics questions has become increasingly popular. For example, unsupervised clustering methods are very well suited to solar wind classification (Heidrich-Meisner and Wimmer-Schweingruber, 2018; Amaya et al., 2020). Camporeale et al. (2017) provide a generalisation of the Xu and Borovsky (2015) method, with a supervised learning approach based on Gaussian processes. Ambitious projects aim to predict the solar wind speed directly from remote sensing observations of the solar corona with deep neural network architectures (Upendran et al., 2020; Raju and Das, 2021). Simple neural networks have been successfully applied as general function approximators in many different research areas (e.g. Kuschewski et al. (1993); An and Moon (1993); Smits et al. (1994); Heidrich-Meisner and Igel (2009); Tahmasebi and Hezarkhani (2011)) and are therefore well suited to our purposes. The main goal of our study is to investigate how the relationship between the considered solar wind parameters depends on transport effects. To this end, we compare how accurately each solar wind parameter can be reconstructed from the others under different solar wind conditions, with different dominant transport effects. The relationship between different solar wind properties depends on the solar source region. All effects that further modify this relationship after the solar wind leaves the Sun are considered as transport effects in this study. This includes an increase in the proton temperature due to wave-particle interactions, a systematic increase in the proton speed \(v_{\rm p}\) and the proton temperature \(T_{\rm p}\) derived from moments of proton velocity distributions that contain a beam, and increased proton density, proton temperature, and magnetic field strength in compression regions in SIRs. 
The importance of these transport effects is different for different solar wind types: wave-particle interactions are most important in coronal hole wind; collisions become more relevant as the solar wind slows and becomes more dense (and therefore affect slow solar wind); and compression regions are typically found in sector reversal plasma associated with SIRs. In addition, by investigating the impact of measurement uncertainties on our results, our approach provides guidelines as to which solar wind parameters need to be measured with high accuracy. There are several semi-empirical models of the solar wind (Arge & Pizzo, 2000; Cranmer & Van Ballegooijen, 2005; Cranmer et al., 2007; van der Holst et al., 2010; Pizzo, 2011; Schultz, 2011; van der Holst et al., 2014; Pomoell & Poedts, 2018) that derive the solar wind properties at arbitrary positions in the heliosphere through magneto-hydrodynamic (MHD) simulations based on observations of the solar photosphere or the source surface. This is a challenging task, particularly because the release mechanisms of slow solar wind are still unknown and it is not obvious whether the observations that provide the boundary conditions for these simulations contain all the underlying relevant properties of the solar corona at sufficient resolution. Nevertheless, these models manage to derive the properties of pure slow and coronal hole wind streams with reasonable accuracy. However, SIRs tend to be modelled less accurately. Our approach serves as a minimal sanity check for these kinds of models in two respects: First of all, we can determine whether or not all of the considered solar wind properties are determined by the same set of (unknown) properties in the solar corona. Second, we attempt to determine the degree to which transport effects obscure a potentially underlying relationship between different solar wind parameters. In addition, our approach can also be applied to alleviate the problem of data gaps in solar wind data sets in cases where only some but not all quantities are available. As the solar wind properties of interest are determined by different instruments, such situations occur repeatedly because the corresponding data gaps usually do not line up. Of particular interest is the question of whether charge-state ratios, such as the oxygen \(O^{7+}\) to \(O^{6+}\) ratio, can be reproduced from the measurements of proton plasma properties and the magnetic field strength alone. On the one hand, this would imply that a property that is not affected by transport effects but is solely determined by the solar origin can be recovered from the plasma properties that are (strongly) affected by transport effects. On the other hand, this could help with situations where information on the charge state of heavy ions is not available. Measuring the charge-state composition of the solar wind is a challenging task and the resulting instruments have repeatedly suffered from difficulties. Therefore, for many points in time and space within the heliosphere, only observations of the proton plasma properties are available but no charge-state measurements. If charge-state information could be recovered (even with low accuracy), this could be employed to augment existing data sets. Our neural network approach to reconstructing solar wind parameters is described in detail in Sect. 2. This includes the preprocessing applied to the solar wind data from the Advanced Composition Explorer (ACE). In Sect. 3 we present and analyse the results of this reconstruction.
Our results are discussed in Sect. 4. ## 2 Data and methods We use solar wind data from the Advanced Composition Explorer (ACE) measured by the Solar Wind Electron Proton And Alpha Monitor (SWEPAM, McComas et al. (1998)), the magnetometer (MAG, Smith et al. (1998)), and the Solar Wind Ion Composition Spectrometer (SWICS, Gloeckler et al. (1998)) from 2001-2010. All data products are binned to the native 12 minute time resolution of SWICS and the only data points considered are those that contain valid entries for proton speed \(v_{\rm p}\), proton density \(n_{\rm p}\), proton temperature \(T_{\rm p}\) (from SWEPAM), the magnetic field strength \(B\) (from MAG), and the oxygen charge-state ratio, \(n_{O^{7+}}/n_{O^{6+}}\), with \(n_{O^{6+}}\), \(n_{O^{7+}}\) as the \(O^{6+}\) and \(O^{7+}\) densities measured by SWICS. Each 12 minute bin is treated as its own isolated data point. Thus, our method does not exploit or model the temporal structure of the solar wind. We assume a time-independent relationship. We test the limits of this assumption by analysing the time dependence of the results of our approach in Sect. 3.3. The data set used in this study is available at Berger et al. (2023). We chose the 12 min SWICS time resolution as a compromise: it is sufficiently short that we are able to catch short-term variations, while being as long as is necessary to be able to include charge-state composition data. \(O^{6+}\) is the most abundant ion (heavier than He) that is measured in SWICS. Although \(O^{7+}\) is less abundant, \(n_{O^{7+}}/n_{O^{6+}}\) is among the best determined quantities from SWICS (together with Fe, which is instrumentally well separated from other similarly abundant ions). This choice was influenced by the observation that the majority of the 12 minute \(n_{O^{7+}}/n_{O^{6+}}\) data points are within reasonable error margins. This can be seen in Figure 5, where the median of the \(\chi^{2}_{\rm red}\) error is just below 1. The Monte Carlo simulations --which estimate the effect of the measurement uncertainty on the neural network reconstruction-- take the counting statistics of O into account and show that the neural network reconstruction is stable against the sometimes large uncertainties in the oxygen charge-state composition. In the following, we select four of the five aforementioned solar wind parameters as input parameters for a general purpose function approximator and use this function approximator to reconstruct the remaining fifth parameter. As a general function approximator, we employ a simple feed-forward neural network, namely a multi-layer perceptron (MLP). This type of neural network is described in more detail in Sect. 2.3. Our objective is formulated as a supervised regression task, that is, the neural network is used to model a functional relationship between input and output data and is provided correct output data examples during training. Our experimental setup is described in the remainder of this section and is summarised in Fig. 1, and the source code is available at Hecht et al. (2023). ### Preprocessing: data selection Before the data are presented to the neural network, we apply the following preprocessing to the ACE data set. We apply a decade logarithm to \(n_{O^{7+}}/n_{O^{6+}}\). An output variable \(\mathbf{y}_{\rm rec}\in\{v_{\rm p},n_{\rm p},T_{\rm p},B,\log n_{O^{7+}}/n_{O^{6+}}\}\) is then selected for each training scenario.
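As a concrete illustration of this selection step, the following is a minimal sketch in Python; the column names (`v_p`, `o7_o6`, etc.) are hypothetical placeholders rather than the labels of the published data set, and the sketch only mirrors the preprocessing described above (valid-entry filtering, decade logarithm, choice of the output variable).

```python
import numpy as np
import pandas as pd

PARAMS = ["v_p", "n_p", "T_p", "B", "log_o7_o6"]

def build_xy(df: pd.DataFrame, target: str):
    """Select one solar wind parameter as y_rec and use the other four as X."""
    # Keep only 12-minute bins with valid entries for all five quantities.
    data = df.dropna(subset=["v_p", "n_p", "T_p", "B", "o7_o6"]).copy()
    # Decade logarithm of the oxygen charge-state ratio n_O7+/n_O6+.
    data["log_o7_o6"] = np.log10(data["o7_o6"])
    features = [p for p in PARAMS if p != target]
    return data[features].to_numpy(), data[target].to_numpy()

# Example: reconstruct the proton speed from the remaining four parameters.
# X, y = build_xy(ace_12min, target="v_p")
```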
Depending on the chosen output variable \(\mathbf{y}_{\rm rec}\), we construct an input vector \(\mathbf{X}\) from the remaining four solar wind parameters. The output variable \(\mathbf{y}_{\rm rec}\) is the data product that is going to be reconstructed, while the input vector \(\mathbf{X}\) contains the measurements provided for the reconstruction. To categorise solar wind types --and thereby implicitly select solar wind observations with different transport histories-- we employ the scheme presented in Xu & Borovsky (2015) or order the data according to proton-proton collisional age, which allows (as shown in Heidrich-Meisner et al. (2020)) a very similar solar wind classification. The proton-proton collisional age is calculated by \[a_{\rm col,p-p}=\frac{6.4\cdot 10^{8}\,{\rm K}^{3/2}}{{\rm cm}^{-3}}\,\frac{n_{\rm p}}{v_{\rm p}\,T_{\rm p}^{3/2}}\ . \tag{1}\] The Xu & Borovsky (2015) solar wind classification scheme distinguishes between coronal hole wind, two types of slow solar wind, and ejecta. The two types of slow solar wind were defined to distinguish between helmet-streamer and pseudo-streamer plasma. Sector reversal (or helmet-streamer) plasma includes a change in the magnetic field polarity and therefore consists of slow and dense slow solar wind in the vicinity of stream interaction regions. Streamer belt (or pseudo-streamer) plasma contains the remaining slow solar wind plasma. The fourth category from the Xu & Borovsky (2015) scheme, namely the ejecta category, which is designed to detect interplanetary coronal mass ejections (ICMEs), is disregarded here because it tends to misidentify particularly cold and dense slow solar wind (Sanchez-Diaz et al., 2016) as ejecta. As ICMEs undergo a (most likely) very different release mechanism from the ubiquitous solar wind, we cannot expect the same relations that hold between properties in the solar wind to also hold between properties in ICMEs. Therefore, we do not consider ICMEs in the following analysis. Instead, ICMEs are identified based on the ICME list from Cane & Richardson (2003); Richardson & Cane (2010) and Jian et al. (2006, 2011) and are subsequently removed from the data set. As the start and end times in both ICME lists are not necessarily well defined, we extended each ICME time interval by six hours at the beginning and the end of each ICME.
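Eq. (1) and the ICME exclusion translate directly into code. The sketch below assumes \(n_{\rm p}\) in cm\(^{-3}\), \(T_{\rm p}\) in K, and \(v_{\rm p}\) in km/s (the unit convention of the prefactor is an assumption here and should be checked against Heidrich-Meisner et al. (2020)); `icme_list` is a hypothetical list of (start, end) timestamps taken from the cited ICME catalogues.

```python
import numpy as np
import pandas as pd

def collisional_age(n_p, v_p, T_p):
    """Proton-proton collisional age, Eq. (1).
    Assumed units: n_p [cm^-3], T_p [K], v_p [km/s]."""
    return 6.4e8 * np.asarray(n_p) / (np.asarray(v_p) * np.asarray(T_p) ** 1.5)

def icme_mask(times: pd.DatetimeIndex, icme_list) -> np.ndarray:
    """True for 12-min bins inside an ICME interval, with each interval
    padded by six hours on both sides as described above."""
    pad = pd.Timedelta(hours=6)
    mask = np.zeros(len(times), dtype=bool)
    for start, end in icme_list:
        mask |= (times >= start - pad) & (times <= end + pad)
    return mask
```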
### Test, training, and validation data sets To apply and evaluate a supervised learning method, we need to separate the available data set into three different subsets: training, validation, and test data. The training data are used in the training of the neural network, and the validation data are used to estimate the generalisation error of the trained model and for the selection of optimal hyperparameters of the model (see Section 2.5). The previously unseen test data set is only used to evaluate the final performance of the model and is the only data set suitable for a comparison between different models. Therefore, we partition the ACE data set into batches with the approximate length of one Carrington rotation, which is 27.24 d. Now, these batches are split into test, training, and validation data sets. The selection of training, validation, and test data sets is randomised, but to ensure that each data set is well distributed over time, we apply the following to the five two-year time frames in our data set. Herein, for each two-year time frame, we randomly select four data batches of Carrington rotation length as test data. From the remaining ten data batches of each two-year time frame, we select training and validation data sets for a five-fold cross-validation (Allen, 1974; Stone, 1974, 1977), that is, two batches become part of the validation data set and the remaining eight go into the training data set. Five-fold cross-validation helps to improve the generalisation capabilities of supervised learning methods by permuting the role of each fold as a validation data set (while always training on the remaining data). This reduces the dependency of the result on a particular choice of the training or validation data set. Next, the scikit-learn (Pedregosa et al., 2011) StandardScaler is applied to the input vector \(\mathbf{X}\) to ensure that the learning is not inhibited by ill-conditioned data points. This standardises the numerical value of each input dimension by removing the mean and scaling to unit variance. Afterwards, the training data are shuffled. As we consider each 12 minute bin independently, we neglect the temporal information. Randomising the order of data points is beneficial for the learning speed of a neural network. Then, after training, each trained neural network is applied to the corresponding validation data set from the five-fold cross-validation. The resulting validation score (see Sect. 2.4) is averaged over the five folds. The average validation score is then used for model selection (see Sect. 2.5). The complete experimental framework is illustrated in Fig. 1.

Figure 1: Workflow of the solar wind parameter reconstruction algorithm. The top shows the steps done during a single test with specific hyperparameters. The data preprocessing steps are indicated with an orange background. The bottom part shows the model selection phase. Here, different hyperparameters are compared and the best hyperparameter combination is chosen to reconstruct the measurements.

Figure 2: Schematic overview of our neural network architecture. Four solar wind parameters form the input vector \(\mathbf{X}\) (in this example \(B\), \(\log n_{O^{7+}}/n_{O^{6+}}\), \(n_{\rm p}\), and \(T_{\rm p}\)), the hidden layer contains a variable number of neurons \(n_{k}\), and the output \(\mathbf{y}_{\rm rec}\) is the remaining solar wind parameter (in this example \(v_{\rm p}\)). Each layer is fully connected by weights that are represented by arrows.

### Multilayer perceptron We employed a multilayer perceptron (MLP) with one hidden layer as a general purpose function approximator (Hornik et al., 1989) to predict a data product \(\boldsymbol{y}_{\text{rec}}\). The input vector \(\boldsymbol{X}\) consists of the remaining four data products. The specific implementation used is the MLPRegressor from the Python package scikit-learn (Pedregosa et al., 2011) version 1.1.2. Figure 2 shows a schematic overview of our neural network structure. In addition to the input vector \(\boldsymbol{X}\) and the output, which is the reconstructed value \(\boldsymbol{y}_{\text{rec}}\), the setup includes a hidden layer consisting of \(n_{k}\) neurons. Each layer is fully connected to the next layer by weights \(w_{i,j}\in\mathbb{R}\) (with \(i,j\) indices of neurons from two consecutive layers, e.g. input to hidden layer or hidden layer to output). These form a weight matrix \(\mathbf{W}\). The general principle of neural network training can be summarised in three steps: (1) The forward pass.
For each layer, the product of the input vector (or neuron vector) with the weight matrix is computed and the activation function \(f\) is applied to obtain the input vector for the next layer \(\boldsymbol{h}\) or the result \(\boldsymbol{y}_{\text{rec}}\): \(\boldsymbol{h}=f(\mathbf{W}\boldsymbol{X})\). (2) Computation of the difference between known output and current output. The difference between the calculated value \(\boldsymbol{y}_{\text{rec}}\) and the measured value \(\boldsymbol{y}_{\text{meas}}\) describes the current training progress and is calculated using the mean squared error. (3) Back-propagation. The aforementioned difference, as the estimated training progress, is minimised by propagating the error information from the output layer backwards through the neural network. This changes the values in the weight matrices and minimises the training error. Here, we employ the efficient Adam solver; see Kingma & Ba (2014) for a full description. These three steps are repeated \(n_{\text{iter}}\) times. As the chosen back-propagation variant, Adam, has stochastic elements, we repeated the training for 100 independent trials. Each trial is initialised with a different random seed for the initial weights and the training data is shuffled for each trial. Training is stopped after 200 iterations. As shown in Fig. 3, the performance appears to converge after less than 200 iterations for the majority of the hyperparameter combinations. While there is no guarantee that more iterations will not yield further improvements, some tests with 2000 iterations showed no indications of this. ### Error estimation and stability In this subsection, we describe the different performance and error estimates that are of interest for our study. First, to evaluate the validation error, which is the basis for selecting optimal hyperparameter settings, we employ the \(R^{2}\) score. For each reconstructed solar wind parameter, the reconstruction \(\boldsymbol{y}_{\text{rec}}\) is compared to the measurements \(\boldsymbol{y}_{\text{meas}}\). The score \(R^{2}\) is calculated by \[R^{2}=1-\frac{r}{s}\ , \tag{2}\] \[r=\sum_{i=1}^{m}(y_{\text{meas},i}-y_{\text{rec},i})^{2}\ , \tag{3}\] \[s=\sum_{i=1}^{m}(y_{\text{meas},i}-\text{mean}(\boldsymbol{y}_{\text{meas}}))^{2}\ . \tag{4}\] An \(R^{2}\) score of 1 indicates a perfect reconstruction. The \(R^{2}\) score is used in the model selection to choose the specific hyperparameters for each reconstruction. The \(R^{2}\) score is well suited to comparing different hyperparameter configurations for the same reconstructed parameter and on the same data set. For a comparison of each of the reconstructed solar wind parameters, different neural networks have to be assessed in relation to each other. For this, the \(R^{2}\) score, which depends on the estimated variance of the data set, is less well suited.

\begin{table} \begin{tabular}{c|l l l l l l} hyperparameter & tested hyperparameters & \(B\) & \(n_{O^{7+}}/n_{O^{6+}}\) & \(n_{\rm p}\) & \(T_{\rm p}\) & \(v_{\rm p}\) \\ \hline \(n_{k}\) & [10, 20, 50, 100] & 10 & 10 & 10 & 10 & 10 \\ iterations & 200 & 200 & 200 & 200 & 200 & 200 \\ activation function & relu & relu & relu & relu & relu & relu \\ logarithmic \(n_{O^{7+}}/n_{O^{6+}}\) & True & True & True & True & True \\ solver & adam & adam & adam & adam & adam \\ \(\lambda\) & [0.01, 0.001, 0.0001] & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ \(\beta_{1}\) & [0.75, 0.8, 0.85, 0.9, 0.95, 0.99] & 0.99 & 0.9 & 0.75 & 0.9 & 0.95 \\ \(\beta_{2}\) & [0.8, 0.85, 0.9, 0.95, 0.99, 0.999] & 0.95 & 0.999 & 0.9 & 0.999 & 0.999 \\ \(\epsilon\) & [\(10^{-6}\), \(10^{-7}\), \(10^{-8}\), \(10^{-9}\), \(10^{-10}\)] & 1e-09 & 1e-09 & 1e-08 & 1e-06 & 1e-06 \\ \(\alpha\) & [0.001, 0.0001, 0.00001] & 0.0001 & 1e-05 & 0.001 & 0.0001 & 0.001 \\ \end{tabular} \end{table} Table 1: Investigated (column 2) and best-performing (columns 3-7) hyperparameters from the model selection for each solar wind parameter. The best hyperparameters are used in the final model as well as in the Monte Carlo error simulations.

\begin{table} \begin{tabular}{c|l l l l l} parameter & \(v_{\rm p}\) & \(n_{\rm p}\) & \(T_{\rm p}\) & \(B\) & \(n_{O^{7+}}/n_{O^{6+}}\) \\ \hline \(\Delta\) & 1.5\% & 15\% & 20\% & 0.1 nT & from counts \\ \end{tabular} \end{table} Table 2: Relative measurement errors \(\Delta\) of the solar wind parameters according to Smith et al. (1998); Skoug et al. (2004); Berger (2008).

Figure 3: Validation scores for all considered hyperparameter configurations with ten neurons and all 10 trials for \(\boldsymbol{y}_{\text{rec}}=v_{\text{p}}\). Each individual trial is shown with a thin black line. The hyperparameter combination with the highest final median validation score is plotted in blue. In addition, the median and three overlapping confidence intervals (15.9th - 84.1st percentile, 2.5th - 97.5th percentiles, and 0th - 100th percentiles) are shown in overlapping blue shaded regions.

The results of our study are affected by different sources of uncertainty. Therefore, in the following, three different types of error or uncertainty measure are considered. The first type of uncertainty is the measurement error \(\Delta\) from the measurements of the solar wind parameters. For \(\Delta v_{\rm p}\), \(\Delta n_{\rm p}\), and \(\Delta T_{\rm p}\), relative errors are taken from the literature (Skoug et al., 2004). For \(\Delta B\), an absolute value of 0.1 nT is given by Smith et al. (1998). For \(\Delta\log n_{O^{7+}}/n_{O^{6+}}\), the error is derived from the actual counting statistics of SWICS based on Poisson statistics. As \(O^{7+}\) is rare and can be at the limit of the detection capabilities of SWICS in very dilute solar wind, the resulting error can be enormous. We decided against excluding the data points with particularly high oxygen charge-state measurement errors because these occur systematically in very dilute coronal hole wind mainly during the solar activity minimum. Therefore, excluding these data points from our analysis would introduce a systematic bias in the data set. These errors and the references they were taken from can be seen in Table 2. The second type of error is defined by the comparison of the original measured data to the reconstructed values. These errors are only calculated on the test data set, which encompasses a sample size of 63432 points. For this purpose, we consider linear and quadratic measures. A first straightforward approach is to calculate the relative reconstruction error \(y_{\rm diff}\) between the observed and reconstructed quantity: \[y_{\rm diff}=\frac{y_{\rm meas}-y_{\rm rec}}{y_{\rm meas}}\ . \tag{5}\] Further insights are provided by the mean absolute percentage error (MAPE) and the reduced \(\chi^{2}_{\rm red}\) score.
These errors are used to evaluate the accuracy of the reconstruction or as a goodness-of-fit parameter. The MAPE is calculated as \[\mathrm{MAPE}=\frac{1}{m}\sum_{i=1}^{m}\left|\frac{y_{\rm rec,i}-y_{\rm meas,i}}{y_{\rm meas,i}}\right|\ , \tag{6}\] with \(m\) being the number of samples and \(i\) the index of each sample. An approximation of the reduced chi-square statistic is calculated on the test data set: \[\chi^{2}=\sum_{i}\frac{(y_{\rm meas,i}-y_{\rm rec,i})^{2}}{(\Delta y_{\rm meas,i})^{2}}\ , \tag{7}\] \[\chi^{2}_{\rm red}=\frac{\chi^{2}}{\nu}\ , \tag{8}\] with the degrees of freedom \(\nu\), which is here given as the sample size of the test set (63432) minus the number of parameters fixed by the model, that is, the number of connections in the neural network (here 61). Typically, the reduced \(\chi^{2}_{\rm red}\) score is used in the context of fitting and is computed for all summation indices \(i\) in Equation 7 in the data set to which the model was fitted. Here, we apply this concept to evaluate the reconstruction error with respect to the measurement uncertainty. Therefore, in our case, the reduced \(\chi^{2}_{\rm red}\) is calculated from the test data set (and not the training data). The \(\chi^{2}\) score is still considered as a measure of goodness of fit and augments the comparison between the five reconstructions based on the MAPE score. In the context of fitting, a \(\chi^{2}_{\rm red}\) of one indicates a good fit consistent with the measurement errors. Lower values of the reduced \(\chi^{2}_{\rm red}\) can indicate overfitting due to large measurement uncertainties. The third type of error in our study is an estimate of the impact of the measurement uncertainties on the reconstruction error. This estimate is derived with a Monte Carlo simulation and the resulting error is therefore also called the Monte Carlo error. As the potential accuracy of the reconstruction by the MLP regressor is limited by the underlying measurement uncertainty of the five considered solar wind parameters, a basic Monte Carlo approach is used to estimate the effect of this measurement uncertainty. To this end, Gaussian noise is added to each data point. The respective standard deviations of these Gaussian noise distributions depend on the data products to be reconstructed and their respective measurement errors \(\Delta\), and are listed in Table 2. For each noisy data set generated in this way, we apply the same procedure as described in the previous subsections. We repeat the Monte Carlo simulation 100 times. The distribution of the resulting reconstructions based on the noisy data sets provides a measure of the susceptibility of the MLP regressor to the measurement uncertainty. To ensure that the Monte Carlo simulation is not biased by the occasionally very poor statistics of \(O^{7+}\), we limit the Monte Carlo noise of \(\log n_{O^{7+}}/n_{O^{6+}}\) to 0.41 and use this value for the 14.9% of the oxygen data that exhibit a larger relative measurement uncertainty. The variability of the Monte Carlo results is indicated by confidence intervals corresponding to a 1\(\sigma\) environment defined by the 15.9th and 84.1st percentiles. We refer to these as 1\(\sigma\) equivalent percentile confidence intervals in the following.
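For concreteness, the following minimal Python sketch implements the three evaluation ingredients just described: the MAPE of Eq. (6), the reduced \(\chi^{2}\) of Eqs. (7)-(8), and one Monte Carlo perturbation of the inputs. It is an illustrative sketch, not the released analysis code (which is available at Hecht et al. (2023)).

```python
import numpy as np

def mape(y_meas, y_rec):
    """Mean absolute percentage error, Eq. (6)."""
    return np.mean(np.abs((y_rec - y_meas) / y_meas))

def chi2_red(y_meas, y_rec, dy_meas, n_connections=61):
    """Reduced chi^2, Eqs. (7)-(8): degrees of freedom nu are the test
    sample size minus the number of network connections (here 61)."""
    chi2 = np.sum(((y_meas - y_rec) / dy_meas) ** 2)
    return chi2 / (len(y_meas) - n_connections)

def monte_carlo_noise(X, sigma, rng):
    """One Monte Carlo realisation: Gaussian noise with per-column
    standard deviations sigma derived from the errors of Table 2."""
    return X + rng.normal(0.0, 1.0, size=X.shape) * sigma

# Example: spread of the MAPE over 100 noisy realisations of the inputs.
# rng = np.random.default_rng(0)
# scores = [mape(y_meas, model.predict(monte_carlo_noise(X, sigma, rng)))
#           for _ in range(100)]
```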
### Model selection The performance of a machine learning method depends (often sensitively) on the choice of hyperparameters of the method. Therefore, unbiased evaluations and comparisons of machine learning methods are only feasible if optimal hyperparameters are used. The process of selecting hyperparameters is called model selection. Here, we employ a simple grid search to select optimal hyperparameters. Table 1 summarises the hyperparameters of the MLP regressor in scikit-learn and our overall method (see also Fig. 1). For each combination of hyperparameters given in Table 1, we trained our neural network and computed the validation error. The considered hyperparameters include the number of neurons \(n_{k}\), the initial learning rate \(\lambda\), the L2 penalty \(\alpha\), the exponential decay rates for the estimates of the first moment vector \(\beta_{1}\) and the second moment vector \(\beta_{2}\), and the value for numerical stability \(\epsilon\). We also performed tests with different activation functions, finding that replacing relu with the logistic function or a tanh does not have a significant impact on the results of our study. We base the model selection not on the complete learning history as shown in Fig. 3 but on the final validation scores after 200 iterations. Due to the high number of hyperparameter combinations tested, we initially conducted only ten trials for each hyperparameter configuration. The resulting variability is illustrated in Fig. 3, where the performance of each individual trial for all combinations of the hyperparameters in Table 1 is shown for \(\mathbf{y}_{\rm rec}=v_{\rm p}\). Figure 3 illustrates that many (most) hyperparameter combinations lead to very similar final validation scores. The variability of the hyperparameter combination with the highest final median validation score is indicated with the blue shaded area. This shows that the uncertainty from the individual trials is larger than the differences in the median performance of different hyperparameter combinations. For each \(\mathbf{y}_{\rm rec}\), the hyperparameter combination with the highest final median validation score is considered as optimal. The optimal hyperparameters chosen in this way depend on which solar wind parameter is chosen as the reconstructed output vector \(\mathbf{y}_{\rm rec}\). These optimal combinations are used in the remainder of this study to reconstruct the solar wind parameter on the test set. Table 1 shows the final hyperparameters.
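Schematically, this grid search can be condensed into a few lines built around scikit-learn's MLPRegressor, whose score method returns the \(R^{2}\) of Eq. (2). The sketch below is an assumption-laden abridgement: it covers only three of the hyperparameters of Table 1 and presumes that the fold tuples are built as in Sect. 2.2.

```python
from itertools import product
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Abridged grid from Table 1: hidden layer size n_k, initial learning
# rate lambda (learning_rate_init), and L2 penalty alpha.
GRID = {
    "hidden_layer_sizes": [(10,), (20,), (50,), (100,)],
    "learning_rate_init": [0.01, 0.001, 0.0001],
    "alpha": [1e-3, 1e-4, 1e-5],
}

def validation_r2(X_tr, y_tr, X_val, y_val, seed=0, **params):
    """Train one MLP for 200 iterations and return the validation R^2."""
    scaler = StandardScaler().fit(X_tr)
    mlp = MLPRegressor(activation="relu", solver="adam", max_iter=200,
                       random_state=seed, **params)
    mlp.fit(scaler.transform(X_tr), y_tr)
    return mlp.score(scaler.transform(X_val), y_val)

def grid_search(folds):
    """folds: list of (X_tr, y_tr, X_val, y_val) tuples from the 5-fold CV."""
    scores = {}
    for values in product(*GRID.values()):
        params = dict(zip(GRID.keys(), values))
        scores[values] = np.mean([validation_r2(*fold, **params)
                                  for fold in folds])
    return dict(zip(GRID.keys(), max(scores, key=scores.get)))
```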
## 3 Reconstruction of solar wind parameters We apply our method to the ACE data set as described in the previous section to obtain a model (realised by a neural network) for each of the five considered solar wind parameters. This model can then be applied to previously unseen solar wind data, namely the test data, to evaluate and analyse the performance of our neural network function approximators and to address our research questions. Figure 4 shows 3 of the 22 test data time periods of 27.24 days. For each reconstructed parameter, the respective observation is shown in the same panel. As an inset, the MAPE score of the reconstruction is given in red in each panel together with confidence intervals derived from the Monte Carlo simulations. These confidence intervals reflect the effect of the measurement uncertainty estimated by Monte Carlo simulations and are defined by the 15.9th and 84.1st percentiles of the Monte Carlo simulation results. As these values are calculated from noisy data, the score of the original data is in some cases significantly different (see Fig. 4 (c)). Overall, the reconstruction captures the major fluctuations in all solar wind parameters reasonably well. However, for all reconstructed solar wind parameters, particularly low and high values in the observations are frequently over- or underestimated by the neural network. These observations are supported by the calculated MAPE scores. In particular, \(n_{O^{7+}}/n_{O^{6+}}\) is consistently underestimated in the second time period (Fig. 4 (n)) and for most of 2003 August 26 in the first time period (Fig. 4 (m)). Further, the reconstruction quality varies for different reconstructed solar wind parameters and in different test data time periods. In particular, the reconstruction of the proton speed appears more accurate than the other reconstructions. This topic is investigated in more detail in the following subsections.

Figure 4: Time series of reconstructed and measured solar wind parameters for three selected test data time periods with the average length of a Carrington rotation. The time periods were chosen, based on the MAPE score, to be representative examples of 'good', 'intermediate', and 'poor' performance, respectively. The time period to the left represents one of the most accurate reconstructions, the middle time period represents a reconstruction of average accuracy, and the right time period is one of the poorest reconstructions. For each Carrington rotation, the observed data are plotted in blue and the reconstructed data are plotted in purple. The uncertainty on the reconstruction is estimated with 100 Monte Carlo simulations. The 15.9th to the 84.1st percentiles of the Monte Carlo runs are plotted as purple-shaded areas. Each row depicts one solar wind parameter, from top to bottom: \(v_{\rm p}\) (a), (b), and (c) in blue, \(n_{\rm p}\) (d), (e), and (f) in red, \(T_{\rm p}\) (g), (h), and (i) in green, \(B\) (j), (k), and (l) in light grey, and \(n_{O^{7+}}/n_{O^{6+}}\) (m), (n), and (o) in dark grey. The MAPE score for each part of the test data set is shown in red as an inset.

### Reconstruction error The upper row of Fig. 5 shows histograms of relative reconstruction errors for the complete test data set and all reconstructed solar wind parameters. Each panel also gives the MAPE score for each reconstruction. A score of zero indicates a perfect reconstruction. For the relative reconstruction error \(y_{\rm diff}\) of Eq. (5), the sign differentiates between overestimation (negative sign) and underestimation (positive sign): an overestimation of 50% would result in \(y_{\rm diff}=-0.5\), an overestimation by 100%, or double the measured value, would lead to \(y_{\rm diff}=-1.0\), and an underestimation by 50%, or half the measured value, results in \(y_{\rm diff}=+0.5\). The confidence intervals of the MAPE score were calculated for the 100 Monte Carlo runs and are given as \(1\sigma\) equivalent percentile confidence intervals. Each histogram in Fig. 5 is augmented by a black outline that summarises the results of the Monte Carlo simulations. We now first focus on the MAPE scores (included in the legend in each panel). The reconstruction of the proton speed results in a MAPE score of \(0.084\in[0.085,0.108]\). In comparison, the other reconstruction errors are \(n_{\rm p}:0.400\in[0.369,0.373]\), \(T_{\rm p}:0.327\in[0.320,0.324]\), \(B:0.277\in[0.280,0.302]\), and \(n_{O^{7+}}/n_{O^{6+}}:0.307\in[0.304,0.305]\). Therefore, as already illustrated in Fig. 4, the reconstruction of the proton speed appears more accurate than the other reconstructions. This is also apparent in the shape of the histograms.
The histograms of reconstruction errors for the four other solar wind parameters are asymmetric and feature heavier tails biased towards negative values (Fig. 5 (b), (c), (d), and (e)). This means that the reconstructions for these solar wind parameters tend to overestimate the observations more frequently and more strongly than they underestimate them. The \(1\sigma\) equivalent percentile confidence intervals of the distribution of the individual reconstruction errors (indicated with grey hatching) also underline this. The lower percentile bound is further away from zero than the upper percentile bound. In addition, while the maxima of the histograms for the proton speed and the magnetic field strength are located at zero, the maxima are shifted to the right for the three other solar wind parameters. Thus, while extreme values are frequently overestimated by the neural network for \(n_{\rm p}\), \(T_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\), most values are slightly underestimated. This indicates that the model attempts to smooth the solar wind observations more than desired. While this could be a consequence of an excessively small hidden layer in our neural network, our model selection does not indicate an improvement of the validation error for larger hidden layer sizes. Therefore, we favour different explanations: either (1) the reconstruction might be inhibited by the measurement accuracy, or (2), given that our model is time-independent, within the limitations of the measurement uncertainties, short-term variations that affect some but not all of the considered solar wind parameters cannot be captured by the neural network reconstruction.

Figure 5: One-dimensional histograms of reconstruction errors \(y_{\rm diff}\) and \(\chi^{2}\) scores for each reconstructed solar wind parameter. In each of the top panels, the x-axis is constrained to contain 100 bins from -2 to 1. Each of the top panels depicts the reconstruction accuracy of one of the five reconstructed solar wind parameters (a-e) based on the MAPE score, and the bottom panels (f-j) show normalised densities of the \(\chi^{2}\) score for each determined parameter. Each column refers to a different reconstructed parameter, from left to right: \(v_{\rm p}\) (a) and (f) in blue, \(n_{\rm p}\) (b) and (g) in red, \(T_{\rm p}\) (c) and (h) in green, \(B\) (d) and (i) in light grey, and \(n_{O^{7+}}/n_{O^{6+}}\) (e) and (j) in dark grey. In each histogram, an area is marked with grey hatches that contains all data from the 15.9th to the 84.1st percentiles. The respective MAPE score and \(\chi^{2}_{\rm red}\) are indicated in insets in the top and bottom rows. For both, the respective confidence intervals derived from the Monte Carlo runs are included. An additional black histogram outline is included in each panel, which represents the variability of the Monte Carlo simulations based on randomised input data. In each of the bottom panels, the x-axis contains 30 logarithmic bins. In the bottom row only, the y-axis is also logarithmic with a lower bound of \(10^{-8}\), i.e. \(10^{-6}\)% of the \(\chi^{2}\) score density, and the histogram is normalised to the sum of the distribution. The inset also depicts the \(\chi^{2}_{\rm red}\) score. In addition, the median of the individual \(\chi^{2}\) scores is noted to show the spread of the distribution.

The lower row of Fig. 5 shows histograms of the individual \(\chi^{2}\) values for each reconstructed parameter. The x-axis shows a logarithmic bin distribution of the \(\chi^{2}\) scores with 30 bins. The y-axis shows the density distribution of each \(\chi^{2}\) bin. The density distribution is normalised to 1. The red inset provides the reduced \(\chi^{2}_{\rm red}\) score of the whole data set. The confidence intervals in the second row are calculated by taking the \(\chi^{2}_{\rm red}\) scores of the Monte Carlo runs and computing the 15.9th and 84.1st percentiles. The third line of the inset provides the median value of the individual \(\chi^{2}\) on the test data set (not of the Monte Carlo runs). The \(\chi^{2}_{\rm red}\) scores show that the reconstruction that is closest in line with the measurement errors is the proton temperature \(T_{\rm p}\) reconstruction with a score of \(13.3\in[13.6,16.1]\), followed by the proton density \(n_{\rm p}\), the proton speed \(v_{\rm p}\), the oxygen charge-state ratio \(n_{O^{7+}}/n_{O^{6+}}\), and finally the magnetic field strength \(B\). We interpret the comparatively very poor \(\chi^{2}_{\rm red}\) score of the proton speed, despite the apparently good MAPE scores of the reconstruction, as being the result of the small measurement uncertainty on the proton speed (in particular in comparison to the measurement uncertainties and scores of the proton density and proton temperature; see Table 2). A similar effect probably impacts the magnetic field strength \(\chi^{2}_{\rm red}\) score. For values between 1 and 10 nT, a measurement error of 0.1 nT would correspond to a relative measurement error of 10% to 1%. This is a lower relative measurement error than for the proton density \(n_{\rm p}\) or the proton temperature \(T_{\rm p}\). Therefore, the \(\chi^{2}\) score is poorer despite the fact that the reconstruction accuracy estimated by the MAPE score is similar to that of \(n_{\rm p}\) and \(T_{\rm p}\). However, in the case of the magnetic field strength, the reconstruction is also less accurate than that of the proton speed. The charge-state ratio of oxygen is associated with the largest measurement errors (by far) and this is reflected in the poorest \(\chi^{2}_{\rm red}\) score. The median values of the individual \(\chi^{2}\) scores for the proton temperature (8) and the oxygen charge-state ratio (10) indicate that the majority of the reconstructed data points are consistent with the measurement errors even though their average, the reduced \(\chi^{2}_{\rm red}\), is strongly affected by outliers with a very poor \(\chi^{2}\) score.

Figure 6: Two-dimensional histograms of the relative reconstruction error (a, b, c, d, e) and MAPE score (f) for each reconstructed solar wind parameter over the proton-proton collisional age. From top to bottom: \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\). For the first five subplots, the y-axis shows the normalised differences between the reconstruction and the measured data. The x-axis shows the proton-proton collisional age. In all panels, data are sorted into 50 bins between -3.5 and 2.2 on the x-axis, and in panels (a-e) the data are sorted into 50 bins between -2.0 and 1.0 on the y-axis. The vertical black lines give approximate thresholds separating sector reversal from streamer belt plasma (right black line) and streamer belt plasma from coronal hole wind (left black line). The red line highlights the maximum of the distribution in each vertical slice with sufficient statistics (at least 5000 data points over all Monte Carlo runs per column). The bottom-most subplot shows the MAPE scores (on the y-axis) computed separately for all test data points falling into the respective proton-proton collisional age bin (on the x-axis) for each reconstructed solar wind parameter. The MAPE score is calculated according to Equation 6. Confidence intervals in panel (f) are given as three overlapping areas per curve (15.9th - 84.1st percentile, 2.5th - 97.5th percentiles, and 0th - 100th percentiles).

Figure 7: MAPE scores for each time period from the test data set for each reconstructed solar wind parameter over ten years and sorted by solar wind type. The top panel shows the respective MAPE scores on all (non-ICME) solar wind data from each test data time period. The three panels below show the MAPE scores separated by their Xu & Borovsky (2015) solar wind type (from top to bottom: sector reversal wind, streamer belt plasma, and coronal hole wind). The bottom panel gives the number of valid data points per test data time period and solar wind type. In addition, on the right y-axis, the bottom panel includes the monthly sunspot number as a reference (SILSO World Data Center 2001-2010). In the four top panels, simple linear fits to the scores of each reconstructed parameter (from 2002-2008) are shown as thin coloured lines. To the right of each of the four upper plots, plus and minus symbols in the colour of the respective reconstructed parameter indicate whether the slope of the corresponding line is positive (+) or negative (–).

### Dependence on solar wind type Next, we investigate how the reconstruction errors relate to the solar wind type. Separating the data into the solar wind types as described in Xu & Borovsky (2015) provides clues as to which solar wind type is most difficult to reconstruct. The MAPE scores and the \(\chi^{2}_{\rm red}\) scores for each parameter and solar wind type are recorded in Table 3. Additionally, the median value for the underlying distribution of each score is provided. Except for the solar wind proton speed \(v_{\rm p}\), the MAPE scores of the sector reversal solar wind type are consistently worse than those of the coronal hole and streamer belt type. Since the three solar wind types considered here are also affected by different transport effects, the comparison of the reconstruction error in different solar wind types also provides hints as to the influence of transport effects on the reconstruction. Most \(\chi^{2}_{\rm red}\) scores follow the pattern that they are higher for higher MAPE scores. Therefore, a poor reconstruction results in comparatively high MAPE scores and \(\chi^{2}_{\rm red}\) scores. Nevertheless, there are some exceptions. The proton density \(n_{\rm p}\) for sector reversal plasma shows a low \(\chi^{2}_{\rm red}\) score compared to coronal hole or streamer belt plasma. We suspect that the difficulty of reconstructing the proton density in coronal hole wind is an indirect result of wave activity. Waves, which at 1 AU are mainly observed in coronal hole wind, increase the variability in the proton speed, proton temperature, and the magnetic field strength, while not affecting the proton density. This creates an ambiguity for the proton density, because for the same constant proton density, variable combinations of proton speed, proton temperature, and magnetic field are observed.
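This per-type comparison is straightforward to reproduce once the collisional age is available. The sketch below groups test-set reconstruction errors by the approximate collisional-age thresholds quoted in Sect. 3.2; the column names are hypothetical placeholders, and the metric definitions mirror Eqs. (6)-(8).

```python
import numpy as np
import pandas as pd

def approx_wind_type(log_col_age: pd.Series) -> pd.Series:
    """Approximate Xu & Borovsky (2015) type from the log10 proton-proton
    collisional age, using the thresholds -0.9 and 0.1 of Sect. 3.2."""
    return pd.cut(log_col_age, bins=[-np.inf, -0.9, 0.1, np.inf],
                  labels=["coronal hole", "streamer belt", "sector reversal"])

def per_type_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns y_meas, y_rec, dy_meas, log_col_age (hypothetical)."""
    rel = (df["y_rec"] - df["y_meas"]) / df["y_meas"]
    chi2 = ((df["y_meas"] - df["y_rec"]) / df["dy_meas"]) ** 2
    grouped = (df.assign(abs_rel=rel.abs(), chi2=chi2)
                 .groupby(approx_wind_type(df["log_col_age"]), observed=True))
    # MAPE per type plus the median individual chi^2, as in Table 3.
    return grouped.agg(MAPE=("abs_rel", "mean"), median_chi2=("chi2", "median"))
```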
Figure 6 shows the reconstruction errors for each solar wind parameter ordered by the proton-proton collisional age \(a_{\rm col,p-p}\). The proton-proton collisional age (also referred to as the Coulomb number; see Kasper et al. (2012)) is well suited to ordering the solar wind observations directly by their collisional history (Kasper et al., 2012, 2019; Tracy et al., 2016); it also serves as a proxy for the Xu & Borovsky (2015) solar wind types, as discussed in Heidrich-Meisner et al. (2020). Low proton-proton collisional age (\(\log a_{\rm col,p-p}\lesssim-0.9\)) approximately corresponds to coronal hole wind, intermediate proton-proton collisional age (\(-0.9\lesssim\log a_{\rm col,p-p}\lesssim 0.1\)) to streamer belt plasma, and high proton-proton collisional age (\(0.1\lesssim\log a_{\rm col,p-p}\)) to sector reversal plasma. Therefore, the three solar wind plasma types can be approximately separated by the proton-proton collisional age. The locations of the column-wise maxima in Fig. 6 and the shape of the distributions in each panel are different for each reconstructed solar wind parameter. For the magnetic field strength, the cores of the coronal hole wind population (low proton-proton collisional age) and the streamer belt population are shifted towards negative relative errors, that is, towards overestimation. For the proton density, the peak positions exhibit a similar systematic behaviour to the magnetic field strength. For the proton temperature, the core of the streamer belt population is shifted slightly in the positive direction, that is, towards underestimation, in the intermediate proton-proton collisional age range, and more strongly in the positive direction in the low proton-proton collisional age range, which again indicates underestimation. Under the assumption that waves are predominantly observed in coronal hole wind, which is here identified by a low proton-proton collisional age, this systematic change of the core positions could be interpreted as the effect of the presence or absence of waves on the reconstruction. The (apparent or real, see Verscharen & Marsch (2011)) heating of the proton core distribution by waves increases the observed proton temperature in coronal hole wind. This is likely not covered well by the neural network model, because in the corresponding proton-proton collisional age range, the distribution of the reconstruction errors for the proton temperature moves to the right, which indicates underestimation of the observed proton temperatures by the neural network model. In addition, for the sector reversal plasma, the higher the proton-proton collisional age, the less accurate the reconstruction for all reconstructed parameters except for the proton speed. This effect is most pronounced in the proton density and in the proton temperature. With compressed slow solar wind from SIRs, the sector reversal plasma contains strongly transported solar wind plasma. In particular, the compression regions affect the proton density, proton temperature, and magnetic field strength, but the proton speed is not affected to the same extent, and the oxygen charge-state ratio is not affected at all. The systematic shifts away from zero, which are visible in the reconstruction errors for \(n_{\rm p}\), \(T_{\rm p}\), and \(B\) at the corresponding high proton-proton collisional age in Fig. 6, show that this case is not well represented in the static neural network model.
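The construction of the red ridge line in Fig. 6 can be sketched as follows (synthetic data; the bin ranges and the 5000-point threshold are taken from the figure caption):

```python
import numpy as np

def ridge_line(age, relerr, n_min=5000):
    # 50 x 50 binning over the axis ranges quoted in the Fig. 6 caption.
    x_edges = np.linspace(-3.5, 2.2, 51)
    y_edges = np.linspace(-2.0, 1.0, 51)
    hist, _, _ = np.histogram2d(age, relerr, bins=[x_edges, y_edges])
    y_mid = 0.5 * (y_edges[:-1] + y_edges[1:])
    ridge = np.full(hist.shape[0], np.nan)
    for i, column in enumerate(hist):
        if column.sum() >= n_min:          # only columns with enough counts
            ridge[i] = y_mid[np.argmax(column)]
    return 0.5 * (x_edges[:-1] + x_edges[1:]), ridge

rng = np.random.default_rng(0)
age = rng.uniform(-3.5, 2.2, 10**6)        # synthetic collisional-age values
relerr = rng.normal(0.1 * age, 0.3)        # peak position drifting with age
x_mid, ridge = ridge_line(age, relerr)
print(np.nanmin(ridge), np.nanmax(ridge))  # the ridge follows the drift
```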
We suspect that the same transport effect is indirectly responsible for the shift of the reconstruction error of the oxygen charge-state ratio in the same high proton-proton collisional age region. Here, the input solar wind parameters, in particular \(n_{\rm p}\) and \(T_{\rm p}\), are systematically changed by the SIR, which makes it more difficult for the static model to recover the transport-unaffected charge-state ratio from these strongly transport-affected solar wind parameters. This effect is probably reinforced by the comparatively poorer statistics of sector reversal plasma compared to the other two solar wind types. The distribution of the proton speed reconstruction differences is visibly narrower, and the MAPE score for the reconstruction of the proton speed is low over almost the complete range of proton-proton collisional age bins (see the bottom-most panel in Fig. 6), which again supports the conclusion that the proton speed reconstruction is the most accurate based on the MAPE score --although the accuracy is low compared to the small measurement uncertainty of the proton speed; cf. Sect. 3.1.

### 3.3 Limits of the time-independent model

The underlying ACE data cover 10 years, which is almost one solar activity cycle. It is well known that the properties of the solar wind all change systematically over time (McComas et al., 2000; Kasper et al., 2012; Shearer et al., 2014). As our neural network model is stationary, this time-dependent effect cannot be captured by our model. Therefore, in this section, we investigate how strong this effect is and how the reconstruction accuracy varies over time. As the Sun and the solar wind properties are less variable during the solar activity minimum, we expected the reconstruction accuracy to worsen with increasing solar activity: the best reconstruction would be expected during the solar activity minimum and the worst during the solar activity maximum. Figure 7 shows the MAPE score for each reconstructed solar wind parameter and different Xu & Borovsky (2015) solar wind types for each individual time period from the test data set. As discussed in Sect. 2, the training, validation, and test data sets are all similarly distributed over time. In each of the four top panels of Fig. 7, the MAPE score for each reconstructed parameter is shown for easy comparison. In each panel, we again see that the reconstruction of the proton speed achieves the smallest MAPE scores. The proton density reconstruction shows the largest variation over time, in particular in sector reversal plasma. Linear fits (restricted to 2002-2008), using the mean deviation of the upper and lower bounds of the Monte Carlo confidence intervals from each data point as an estimate for the standard deviation of each value, illustrate that, independently of solar wind type, the reconstruction accuracy is similar for all solar wind parameters during all times of the solar cycle. In all cases, the slope of the lines is small, but --as indicated by the plus and minus symbols to the right of each subplot-- the slopes are significantly different from zero (based on a Wald test for statistical significance).
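The significance check just described can be sketched as follows (our illustration with synthetic inputs, not the exact fitting procedure of the paper): a weighted least-squares line fit with the confidence-interval half-widths as standard deviations, followed by a Wald test of the slope against zero.

```python
import numpy as np
from scipy import stats

def slope_wald(t, y, sigma):
    # Weighted least squares y = b0 + b1*t with known per-point sigma,
    # then Wald statistic z = b1 / stderr(b1) and its two-sided p-value.
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(t), t]).T
    cov = np.linalg.inv(A.T @ (w[:, None] * A))   # parameter covariance
    beta = cov @ (A.T @ (w * y))                  # [intercept, slope]
    z = beta[1] / np.sqrt(cov[1, 1])
    return beta[1], 2.0 * stats.norm.sf(abs(z))

t = np.arange(2002.0, 2009.0)                     # one point per year
rng = np.random.default_rng(1)
y = 0.30 + 0.004 * (t - 2002.0) + rng.normal(0.0, 0.01, t.size)
slope, p = slope_wald(t, y, np.full_like(t, 0.01))
print(f"slope = {slope:+.4f} ({'+' if slope > 0 else '-'}), p = {p:.3g}")
```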
A possible explanation for the rising slope (+ symbol) of \(B\) and \(n_{O^{7+}}/n_{O^{6+}}\) could lie in their respective measurement errors. A rising slope means that the reconstruction accuracy is lower during the solar activity minimum. The measurement error of \(B\) is given as an absolute error (0.1 nT). Its impact on the MAPE score is greatest for smaller values of \(B\), which are more likely during the solar activity minimum. Similarly, for \(n_{O^{7+}}/n_{O^{6+}}\), fewer ions are measured during the solar activity minimum; the counting statistics therefore yield smaller values, which results in greater measurement uncertainties. As discussed in Sect. 3.2 and shown in Table 3, the sector reversal plasma reconstruction is the least accurate. In Fig. 7, sector reversal plasma also shows the greatest variability between the different data points (each data point corresponds to approximately one Carrington rotation), and the outliers in sector reversal plasma are the most extreme. One explanation could be that the Xu & Borovsky (2015) scheme employed here misidentifies some coronal hole plasma as sector reversal plasma (and vice versa). As plasma from stream interaction regions is strongly affected by transport effects, this regime is more difficult to reconstruct and therefore has a negative effect on the reconstruction accuracy of whichever group it is sorted into; in this case, sector reversal plasma. However, in contrast to our expectations, for all reconstructed parameters, the influence of the solar activity cycle is small compared to the uncertainty arising from the measurement uncertainty. With the large underlying measurement uncertainties on the proton temperature, the proton density, and the oxygen charge-state ratio, the reconstruction is therefore not accurate enough to be considerably better during the solar activity minimum. As the measurement uncertainties on these three parameters are also large compared to their systematic variations with the solar activity cycle, this is not surprising.

## 4 Discussion

The properties of the solar wind are determined by a combination of the conditions in the solar source region and the transport history experienced by the solar wind. As a result, the proton plasma properties, the magnetic field strength, and the charge-state composition are correlated with each other (Lepri et al., 2013; McComas et al., 2000; von Steiger et al., 2000). Here, we investigate whether a combination of four of these solar wind parameters is sufficient to reconstruct the remaining fifth solar wind parameter. If the considered solar wind parameters contained all the necessary information, a perfect reconstruction would be possible. We therefore consider the obtained reconstruction accuracy as a surrogate that quantifies the degree to which other (unknown) hidden parameters play a role in the correlations between the considered solar wind parameters. By analysing how this changes under different solar wind conditions with different respective transport histories, we can extend this argument to the impact of the varying influence of transport processes on the properties of the solar wind. To this end, we investigate interdependencies between different solar wind parameters, namely \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\).
\begin{table} \begin{tabular}{c|c c|c c|c c|c c} & \multicolumn{2}{c|}{all data} & \multicolumn{2}{c|}{coronal hole} & \multicolumn{2}{c|}{sector reversal} & \multicolumn{2}{c}{streamer belt} \\ & MAPE & median & MAPE & median & MAPE & median & MAPE & median \\ \hline \(v_{\rm p}\) & 0.084 & -0.001 & 0.090 & 0.015 & 0.085 & 0.002 & 0.079 & -0.012 \\ \(n_{\rm p}\) & 0.400 & -0.108 & 0.400 & -0.169 & 0.545 & 0.016 & 0.312 & -0.126 \\ \(T_{\rm p}\) & 0.327 & -0.078 & 0.291 & 0.016 & 0.382 & -0.347 & 0.316 & -0.033 \\ \(B\) & 0.277 & -0.053 & 0.222 & -0.023 & 0.352 & -0.107 & 0.265 & -0.052 \\ \(n_{O^{7+}}/n_{O^{6+}}\) & 0.307 & -0.031 & 0.261 & -0.014 & 0.379 & -0.036 & 0.290 & -0.040 \\ \hline & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median \\ \hline \(v_{\rm p}\) & 50.6 & 22.4 & 59.7 & 24.4 & 47.0 & 24.9 & 47.6 & 19.8 \\ \(n_{\rm p}\) & 37.4 & 3.9 & 80.1 & 3.0 & 19.5 & 6.0 & 22.6 & 3.4 \\ \(T_{\rm p}\) & 13.3 & 1.8 & 4.1 & 1.1 & 32.0 & 4.1 & 7.4 & 1.6 \\ \(B\) & 350.8 & 101.9 & 273.2 & 72.8 & 430.2 & 141.3 & 350.7 & 104.5 \\ \(n_{O^{7+}}/n_{O^{6+}}\) & 269.0 & 1.0 & 34.0 & 0.3 & 540.8 & 1.8 & 246.2 & 1.6 \\ \end{tabular} \end{table} Table 3: MAPE and \(\chi^{2}_{\rm red}\) scores for the five reconstructed solar wind parameters \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\). Additionally, the median value for each score is provided. The first column block shows the scores for the complete test data set. After applying the scheme from Xu & Borovsky (2015), the scores of the resulting subsets are recorded in column blocks two to four.

All five considered solar wind parameters depend on the respective solar source of the observed solar wind. However, they are affected differently (or not at all, as in the case of \(n_{O^{7+}}/n_{O^{6+}}\)) by different transport effects, which obscures the original source-driven interrelationships. We use a neural network as a general function approximator to model the interdependencies of these solar wind parameters. The lowest mean absolute reconstruction error is achieved for the proton speed \(v_{\rm p}(T_{\rm p},n_{\rm p},B,n_{O^{7+}}/n_{O^{6+}})\). One possible interpretation is that the information carried by the proton speed can be extracted from the other four parameters. However, the proton speed is also associated with a small measurement uncertainty, and compared to this uncertainty the accuracy of the proton speed reconstruction is lower than for the proton density and proton temperature. Nevertheless, our results indicate that the proton speed can be substituted by other measurements, and it therefore appears to be the least important parameter to measure.
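As a minimal illustration of this reconstruction setup (the authors' network architecture and preprocessing are not specified in this excerpt, so the sketch below uses a small feed-forward network from scikit-learn and synthetic stand-in data), each parameter is predicted from the remaining four:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_all = rng.lognormal(size=(5000, 5))   # stand-ins for v_p, n_p, T_p, B, O ratio
params = ["v_p", "n_p", "T_p", "B", "O7+/O6+"]

for k, name in enumerate(params):
    X = np.delete(np.log(X_all), k, axis=1)   # the four input parameters
    y = np.log(X_all[:, k])                   # the reconstructed parameter
    Xs = StandardScaler().fit_transform(X)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                         random_state=0).fit(Xs[:4000], y[:4000])
    pred = np.exp(model.predict(Xs[4000:]))   # back to linear scale
    mape = np.mean(np.abs((pred - np.exp(y[4000:])) / np.exp(y[4000:])))
    print(f"{name}: test MAPE = {mape:.3f}")
```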
Reconstruction of any of the other four parameters, \(T_{\rm p}\), \(n_{\rm p}\), \(B\), or \(n_{O^{7+}}/n_{O^{6+}}\), has proven to be more difficult. Here, the (absolute) reconstruction errors remain high, at \(\approx 30\%\). The accuracy of the proton density, proton temperature, and the oxygen charge-state ratio is strongly limited by the underlying measurement uncertainty. On the one hand, the measurement uncertainty determines the accuracy of the reconstructed parameter itself. This is the case for, for example, the proton temperature, which shows a high (and therefore poor) MAPE score but reaches the lowest (best) \(\chi^{2}_{\rm red}\) score. On the other hand, the large measurement uncertainties of, for example, the proton temperature and the oxygen charge-state ratio also limit the reconstruction of all other parameters in our setup, because inaccurate input parameters also affect the output parameter. We suspect this effect is the reason for the low reconstruction accuracy, relative to the measurement uncertainty, obtained for the proton speed and the magnetic field strength. For the oxygen charge-state ratio, the average reconstruction accuracy compared to the measurement uncertainty is very low, but the reconstruction accuracy is of the order of the measurement uncertainty for the majority of the individual data points. Therefore, measuring these quantities with higher accuracy is important in order to understand the interdependencies of the solar wind parameters. This is of particular interest in the case of the oxygen charge-state ratio \(n_{O^{7+}}/n_{O^{6+}}\). As the only parameter not affected by transport effects, it contains unique information that cannot be substituted with high absolute accuracy by a non-linear relationship between the proton plasma parameters, and the reconstruction of all considered solar wind parameters is likely inhibited by the large measurement uncertainties on \(n_{O^{7+}}/n_{O^{6+}}\). This emphasises the need for heavy ion instruments, such as ACE/SWICS or the Heavy Ion Sensor (HIS), which is part of the Solar Wind Analyser (SWA) (Owen et al., 2020) on board Solar Orbiter. These instruments require a stable high-voltage supply, which is challenging to design, and analysis of the data they provide is a complex undertaking. However, our results illustrate that the effort to build instruments of this type is necessary. From the point of view of the solar source region, our reconstruction approach implies that the proton speed carries less detailed information about the source conditions than the other four solar wind parameters. Studying the signatures in \(T_{\rm p}\), \(n_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\) may therefore provide a better chance of capturing relevant details of a specific solar source region and of the solar wind release mechanisms. Given the large measurement uncertainties on \(T_{\rm p}\), \(n_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\), these four solar wind parameters cannot be completely reconstructed from each other; investigating which other properties --not included in this study-- determine their variability may therefore help us to identify the underlying mechanisms behind the release of (slow) solar wind. The reconstruction accuracy differs depending on solar wind type. Based on the absolute reconstruction accuracy (estimated with the MAPE score) and the reconstruction accuracy relative to the measurement uncertainty (estimated with the \(\chi^{2}_{\rm red}\) and the median of the individual \(\chi^{2}\) scores), the reconstructions of \(B\), \(T_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\) are best in coronal hole wind, the reconstruction of \(v_{\rm p}\) is best in sector reversal plasma, and the reconstruction of \(n_{\rm p}\) is best in streamer belt plasma. This illustrates that the reconstructions face different challenges in different solar wind types, which differ both in the properties of the solar source region and in the transport history experienced during the solar wind travel time.
To further investigate our results from the point of view of the transport history of solar wind, we make use of the proton-proton collisional age, which --although not designed for this purpose-- can serve as a proxy to differentiate between solar wind types with different transport histories (Heidrich-Meisner et al., 2020). Coronal hole wind is often influenced by wave activity. Waves have several effects on the solar wind plasma: the core of the proton population is (or apparently is) heated, probably preferentially perpendicular to the magnetic field; waves are speculated to play a role in the formation of the beam (Marsch et al., 1982; Verniero et al., 2020; D'Amicis & Bruno, 2015; Panasenco et al., 2020; Louarn et al., 2021); and wave-particle interaction likely plays a role in differential streaming (Kasper et al., 2012; Janitzek et al., 2016; Marsch et al., 1982a). Here, we cannot resolve these different effects, but argue that they are all mainly confined to coronal hole wind (or Alfvenic slow solar wind), which is typically associated with low proton density, high proton temperature, and high solar wind speed, all of which lead to a low proton-proton collisional age. Therefore, we assume that the effects of waves as a transport process are relevant in solar wind with a low proton-proton collisional age. In this regime, we find that our neural network reconstruction tends to underestimate the proton temperature and (to a lesser degree) the magnetic field strength. Among the solar wind parameters considered here, these two, \(T_{\rm p}\) and \(B\), are exactly the parameters that are expected to be most influenced by Alfven waves. Therefore, our neural network reconstruction appears to focus on the underlying (source-driven) relationship between the solar wind parameters and not on the effect of waves on the solar wind plasma. Indirectly, this is supported by the observation that the oxygen charge-state ratio does not show a preferential over- or underestimation in the coronal hole wind regime. This is in agreement with the expectation that the oxygen charge-state ratio is not affected by transport effects. If we focus on solar wind with a particularly high proton-proton collisional age, this selects solar wind with high proton densities, high proton temperatures, and low proton speeds. These conditions are best realised in compressed slow solar wind in SIRs and in the preceding slow solar wind. Therefore, as argued in Heidrich-Meisner et al. (2020), we consider solar wind with a high proton-proton collisional age as a proxy for SIRs, which tend to be included in the sector reversal plasma category of the Xu & Borovsky (2015) categorisation. The sector reversal plasma also includes a very slow, cool, and dense solar wind type (Sanchez-Diaz et al., 2016), which also falls in the high proton-proton collisional age regime. In this regime, we observe a systematic increase in the overestimation of the proton temperature and a systematic increase in the underestimation of the proton density. The underestimation of the proton densities again implies that the neural network model is probably tailored to 'normal' conditions that are unaffected by transport, and is therefore ill-equipped to adapt to the different relationship between proton density and proton temperature in compressed solar wind.
For the proton temperature, the observed underestimation is the result of a compromise between attempting to model the higher proton temperatures in compression regions and those in the very slow solar wind identified in Sanchez-Diaz et al. (2016). This bolsters the argument that the information contained in the solar wind parameters considered here (and in other studies) is likely not sufficient to completely characterise the plasma, its solar source, or the experienced transport history. Our results also support the idea that any solar wind classification using only one or a few of the solar wind parameters considered here contains an inherent bias towards greater accuracy during certain conditions that are more or less affected by transport processes. Our neural network models a static, time-independent relationship between the considered solar wind parameters. However, all considered solar wind parameters systematically change with the phase of the solar activity cycle. Therefore, in principle, our neural network model cannot be expected to perform equally well in all phases of the solar activity cycle. However, investigating how the reconstruction accuracy changes over (almost) one solar cycle shows that the influence of the high measurement uncertainties on the underlying parameters is stronger than a potential solar-activity-cycle-dependent effect. That the model achieves better results for the reconstruction of the oxygen charge-state ratio during the solar activity maximum is probably also an effect of the measurement uncertainty on \(n_{O^{7+}}/n_{O^{6+}}\). During the solar activity minimum phase, observations of very dilute plasma are more likely. This condition can lead to very low count rates in ACE/SWICS and therefore to a very high measurement uncertainty. Although the oxygen charge-state ratio is the only solar wind parameter considered here that is not affected by any transport effects that complicate the relationship between the different considered solar wind parameters, the neural network reconstruction of the oxygen charge-state ratio does not prove to be easier than that of the other, transport-affected solar wind parameters. This can be caused by at least three mechanisms: (1) the oxygen charge-state ratio at the source depends (also) on a property that is not included in our analysis; (2) recovering sufficiently detailed information on the solar source region (which determines the oxygen charge-state ratio) from the proton plasma properties and the magnetic field strength is hindered by the influence of the transport history, which strongly affects the proton temperature and the proton density; and (3) the high measurement uncertainties are not conducive to a good reconstruction.

## 5 Conclusion

We investigated non-linear relationships between different solar wind parameters, namely proton speed, proton density, proton temperature, magnetic field strength, and the oxygen charge-state ratio. Our findings suggest that only the proton speed can be substituted with other measurements with reasonable absolute and relative accuracy. This implies that the proton speed carries less unique information about the solar source region and transport effects than the other considered solar wind parameters. The precision of the reconstructions of the proton density, proton temperature, and the oxygen charge-state ratio is constrained by their respective measurement uncertainties.
While the average reconstruction accuracy of the oxygen charge-state ratio compared to the measurement uncertainty is generally low, most individual data points exhibit a reconstruction accuracy in line with the measurement uncertainty. While the magnetic field strength can be measured with high accuracy, its reconstruction in our study is similarly inhibited by the comparatively high uncertainties on the proton density, proton temperature, and the oxygen charge-state ratio. Therefore, to further our understanding of the relationships between different solar wind parameters and the processes they originate from, it is crucial to further enhance the measurement accuracy for these quantities. Our neural network reconstruction appears to focus on the underlying relationship driven by the sources of the solar wind, rather than disentangling the impact of transport effects such as wave-particle interactions, collisions, or compression regions on the solar wind plasma. Nevertheless, the reconstruction accuracy clearly differs depending on the solar wind type. We note that different transport effects are dominant in different respective solar wind types, and therefore transport effects, such as wave-particle interactions in coronal hole wind, collisions in slow solar wind, and compression regions in SIRs, limit the potential accuracy of identifying the source region of the solar wind purely based on observations of the proton speed, proton density, proton temperature, and magnetic field strength. Our results therefore underline the importance of measuring the charge states of the solar wind directly and with high accuracy. For complex models based on magnetohydrodynamics and solar corona magnetic field models (Arge & Pizzo 2000; Cranmer & Van Ballegooijen 2005; Cranmer et al. 2007; van der Holst et al. 2010; Pizzo 2011; Schultz 2011; van der Holst et al. 2014; Pomoell & Poedts 2018), capturing the properties of SIRs tends to be comparatively difficult. The fact that our simple approach of ad hoc neural network reconstruction is also least accurate for the solar wind type that contains the most SIRs suggests that the additional effect of compression --which dominates the plasma properties in SIRs-- needs to be considered in the design of highly accurate models. Consequently, incorporating comprehensive details regarding transport effects, compression regions, and the progressive impingement of faster solar wind into SIRs (Hofmeister et al. 2022) into consistent MHD-driven solar wind models holds promise for enhancing their accuracy.

###### Acknowledgements.

This work was supported by the Deutsches Zentrum fur Luft- und Raumfahrt (DLR) as SOHO/CELIAS 50 OC 2104. We further thank the science teams of ACE/SWEPAM, ACE/MAG, and ACE/SWICS for providing the respective level 2 and level 1 data products. The sunspot data are taken from the World Data Center SILSO, Royal Observatory of Belgium, Brussels.
2302.07019
Critical time-step size analysis and mass scaling by ghost-penalty for immersogeometric explicit dynamics
In this article, we study the effect of small-cut elements on the critical time-step size in an immersogeometric context. We analyze different formulations for second-order (membrane) and fourth-order (shell-type) equations, and derive scaling relations between the critical time-step size and the cut-element size for various types of cuts. In particular, we focus on different approaches for the weak imposition of Dirichlet conditions: by penalty enforcement and with Nitsche's method. The stability requirement for Nitsche's method necessitates either a cut-size-dependent penalty parameter or an additional ghost-penalty stabilization term. Our findings show that both techniques suffer from cut-size-dependent critical time-step sizes, but the addition of a ghost-penalty term to the mass matrix serves to mitigate this issue. We confirm that this form of `mass-scaling' does not adversely affect error and convergence characteristics for a transient membrane example, and has the potential to increase the critical time-step size by orders of magnitude. Finally, for a prototypical simulation of a Kirchhoff-Love shell, our stabilized Nitsche formulation reduces the solution error by well over an order of magnitude compared to a penalty formulation at equal time-step size.
Stein K. F. Stoter, Sai C. Divi, E. Harald van Brummelen, Mats G. Larson, Frits de Prenter, Clemens V. Verhoosel
2023-02-14T13:02:10Z
http://arxiv.org/abs/2302.07019v1
# Critical time-step size analysis and mass scaling by ghost-penalty for immersogeometric explicit dynamics

###### Abstract

In this article, we study the effect of small-cut elements on the critical time-step size in an immersogeometric context. We analyze different formulations for second-order (membrane) and fourth-order (shell-type) equations, and derive scaling relations between the critical time-step size and the cut-element size for various types of cuts. In particular, we focus on different approaches for the weak imposition of Dirichlet conditions: by penalty enforcement and with Nitsche's method. The stability requirement for Nitsche's method necessitates either a cut-size-dependent penalty parameter or an additional ghost-penalty stabilization term. Our findings show that both techniques suffer from cut-size-dependent critical time-step sizes, but the addition of a ghost-penalty term to the mass matrix serves to mitigate this issue. We confirm that this form of 'mass-scaling' does not adversely affect error and convergence characteristics for a transient membrane example, and has the potential to increase the critical time-step size by orders of magnitude. Finally, for a prototypical simulation of a Kirchhoff-Love shell, our stabilized Nitsche formulation reduces the solution error by well over an order of magnitude compared to a penalty formulation at equal time-step size.

keywords: Immersogeometric analysis, Explicit dynamics, Critical time step, Finite cell method, Ghost penalty, Mass scaling

###### Contents

* 1 Introduction
* 2 Immersogeometric methods
* 2.1 Non-boundary fitted B-spline basis functions
* 2.2 Integration of cut elements
* 3 Critical time-step analysis for a second-order problem
* 3.1 Semi-discrete formulation
* 3.1.1 Neumann formulation
* 3.1.2 Row-sum mass lumping
* 3.1.3 Dirichlet condition enforcement by penalization
* 3.1.4 Dirichlet condition enforcement by Nitsche's method
* 3.1.5 Ghost-penalty stabilization
* 3.1.6 Ghost mass
* 3.2 Analysis of the critical time-step size
* 4 Critical time-step analysis for a fourth-order problem
* 4.1 Semi-discrete formulation
* 4.2 Analysis of the critical time-step size
* 5 Numerical experiments
* 5.1 Verification of time-step size scaling
* 5.1.1 Neumann boundaries
* 5.1.2 Penalty formulations
* 5.1.3 Nitsche formulations
* 5.2 Convergence of a linear pre-stressed membrane
* 5.3 Transient response of a linear Kirchhoff-Love shell
* 6 Conclusion and outlook

## 1 Introduction

Explicit analysis forms the backbone of impact and crash-test simulation software. In these highly non-linear, short time-span simulations, the structure that is impacted is often composed of shell-type components [1; 2; 3]. _Isogeometric analysis_ has emerged as a framework that streamlines the design-to-analysis pipeline for these types of simulations [4; 5; 6]. In isogeometric analysis, the spline-based geometry representation from CAD drawings is used directly in the analysis software [4]. The higher-order continuity of the spline basis functions makes them particularly well-suited for handling the higher-order partial differential equations that arise in shell-type formulations [6; 7; 8]. Additionally, the increased order of continuity of the basis functions leads to a higher permissible time-step magnitude compared to standard (\(\mathcal{C}^{0}\)-continuous) finite element basis functions for a fixed number of degrees of freedom [9; 10].
The spline geometry representation from CAD drawings often involves trimmed edges, where the physical domain boundary cuts through the computational background mesh. These "immersed boundary" methods, referred to as immersogeometric methods in the context of isogeometric analysis, have been extensively studied in recent years [11; 12; 13; 14; 15; 16]. An example of a one-dimensional immersogeometric discretization is shown in Fig. 1a, where the B-spline basis functions are not confined to the physical domain. The size of the cut is characterized by the element size fraction \(\chi\). The presence of basis functions with small support within the physical domain, i.e., with small \(\chi\), negatively impacts the stability characteristics of the numerical scheme. In implicit analysis, these issues manifest themselves as poor conditioning of the resulting system of equations [17]. For explicit analysis, a primary concern is the effect of small cut elements on the maximum eigenvalue, \(\lambda_{\text{max}}^{h}\), of the mass-to-stiffness generalized eigenvalue problem. Indeed, for a one-dimensional rod example with the basis functions from Fig. 1a, the cut size heavily affects the maximum eigenfrequency, as can be observed from the diverging open markers in Fig. 1b.

Figure 1: Effect of the small-cut elements (characterized by the element size fraction \(\chi\) that is illustrated in Fig. 1a) on the eigenvalues of a rod discretized with the six basis functions depicted in Fig. 1a, for consistent and row-sum lumped mass matrices.

For explicit time-stepping methods, an increase in the maximum eigenfrequency leads to a tighter bound on the magnitude of the permissible time step. The specific relation between \(\lambda_{\text{max}}^{h}\) and this so-called "critical time-step size", \(\Delta t_{\text{crit}}\), depends on the explicit time-integration technique that is employed. A typical example for undamped structural dynamics is \[\Delta t_{\text{crit}}=\frac{2}{\sqrt{\lambda_{\text{max}}^{h}}}\,, \tag{1}\] which holds for the Newmark-type central difference method [18, p. 493]. The diverging results of the open markers in Figure 1b are obtained for a "full" or "consistent" mass matrix (in the sense of [19]) in the formulation of the eigenvalue problem. In explicit analysis, the practice of mass diagonalization through some form of mass lumping is the gold standard [18], as it is essential to attaining a fast solver. As can be observed from the filled markers in Fig. 1b, the use of such a (row-sum) lumped mass matrix completely mitigates the detrimental impact of small cuts. This was first observed by Leidinger et al. [20], where this behavior is explored analytically in one dimension and numerically in two dimensions. The non-boundary-fitted nature of immersed methods also requires specialized techniques for enforcing essential boundary conditions. Often, penalty-based methods are employed. In the context of explicit dynamics, not only the relation between the penalty parameter and the solution quality needs to be understood, but also its impact on the maximum eigenvalue. This is explored in the immersogeometric framework in [5; 20], where the authors report that sufficient solution accuracy can be attained with a penalty parameter that is small enough to avoid affecting the largest eigenvalue. When the penalty parameter exceeds a certain limit, the maximum eigenvalue scales linearly with the penalty parameter. The same linear scaling is observed for Nitsche's method in [21].
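Returning to the eigenvalue problem underlying Eq. (1), the following self-contained sketch (our illustration, not code from the article; all parameter values are arbitrary) assembles the consistent mass and stiffness matrices of a one-dimensional immersed rod discretized with uniform quadratic B-splines, where the last background element is cut at the fraction \(\chi\), and evaluates Eq. (1) for a consistent and a row-sum lumped mass matrix. The consistent-mass critical time-step size collapses with the cut size, whereas the lumped result is insensitive to it, mirroring Fig. 1b.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

p, n_el, h, rho, kappa = 2, 8, 1.0, 1.0, 1.0      # quadratic C^1 B-splines
knots = np.concatenate([np.zeros(p), np.arange(n_el + 1) * h,
                        np.full(p, n_el * h)])
n_dofs = len(knots) - p - 1
funcs = [BSpline(knots, np.eye(n_dofs)[i], p) for i in range(n_dofs)]
xg, wg = np.polynomial.legendre.leggauss(p + 1)   # exact per-element rule

def assemble(chi):
    # Physical domain: n_el - 1 full elements plus one element cut at chi.
    M, K = np.zeros((n_dofs, n_dofs)), np.zeros((n_dofs, n_dofs))
    cells = [(k * h, (k + 1) * h) for k in range(n_el - 1)]
    cells.append(((n_el - 1) * h, (n_el - 1 + chi) * h))
    for a, b in cells:
        x = 0.5 * (b - a) * xg + 0.5 * (a + b)    # mapped Gauss points
        w = 0.5 * (b - a) * wg
        N = np.array([f(x) for f in funcs])
        dN = np.array([f.derivative()(x) for f in funcs])
        M += rho * (N * w) @ N.T
        K += kappa * (dN * w) @ dN.T
    return M, K

for chi in [1.0, 1e-1, 1e-2]:
    M, K = assemble(chi)
    lam_cons = eigh(K, M, eigvals_only=True)[-1]
    lam_lump = eigh(K, np.diag(M.sum(axis=1)), eigvals_only=True)[-1]
    print(f"chi = {chi:.0e}:  dt_crit = {2 / np.sqrt(lam_cons):.3e} (consistent),"
          f" {2 / np.sqrt(lam_lump):.3e} (lumped)")
```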
The current work aims to study the impact of small cuts on the critical time-step size in a wider range of scenarios, by investigating second- and fourth-order partial differential equations, regardless of the spatial dimension and polynomial order of the discretization, and considering various penalty- and Nitsche-based formulations for enforcing Dirichlet conditions. We also explore the use of ghost-penalty-based stabilization of the stiffness [22; 23] and/or mass matrix [24; 25]. Our goal is to identify the appropriate formulations in the explicit immersogeometric setting, and to evaluate the potential benefits of the alternative formulations in typical explicit dynamics computations. The remainder of this article is structured as follows. In Section 2, we present the theory and nomenclature relevant to the topic of immersogeometric methods. We then focus on a linear wave equation in Section 3, where we systematically collect the contributions to the mass and stiffness matrices for various immersed boundary formulations. Once the general formulation is derived, we analytically derive minimal scaling orders of the maximum eigenvalue with respect to the cut-element size. This analysis is repeated for a linear fourth-order equation in Section 4. The results obtained are experimentally verified in Section 5, where we also perform convergence studies for the linear wave equation, and transient simulations of a linear Kirchhoff-Love shell model. Finally, in Section 6, we present our concluding remarks.

## 2 Immersogeometric methods

Consider a physical domain \(\Omega\subset\mathbb{R}^{d}\) (\(d\in\{2,3\}\)) and its boundary \(\partial\Omega\), immersed in an ambient domain \(\mathcal{A}\supset\Omega\), as shown in Fig. 2(a). The ambient mesh \(\mathcal{T}_{\mathcal{A}}^{h}\) is a structured mesh covering the ambient domain \(\mathcal{A}\). The "background mesh" is then defined as the collection of elements from the ambient mesh that intersect the physical domain: \[\mathcal{T}^{h}:=\left\{K\in\mathcal{T}_{\mathcal{A}}^{h}:K\cap\Omega\neq\emptyset\right\}. \tag{2}\] Both the ambient mesh and the background mesh are illustrated in Fig. 2(b). We also define a set of "ghost faces" as the collection of faces of elements from the background mesh that are intersected by the domain boundary: \[\mathcal{F}_{\text{ghost}}=\left\{\partial K\cap\partial K^{\prime}\,:\,K\in\mathcal{G},K^{\prime}\in\mathcal{T}^{h},K\neq K^{\prime}\right\}, \tag{3}\] where \(\mathcal{G}:=\{K\in\mathcal{T}^{h}\mid K\cap\partial\Omega\neq\emptyset\}\). The ghost faces are illustrated in Fig. 2(c).

### 2.1 Non-boundary fitted B-spline basis functions

The structured nature of the ambient mesh allows for the straightforward construction of multi-variate B-spline basis functions as tensor products between uni-variate B-spline functions. The left column in Fig. 3 depicts various orders of uni-variate B-spline functions of maximum regularity, while the middle column shows \(C^{0}\)-continuous uni-variate B-spline functions. An important property of these B-spline basis functions is their non-negativity, meaning that the integral value of each of these functions on any arbitrary subdomain is naturally non-negative as well. The same is not true for the conventional \(\mathcal{C}^{0}\)-continuous Lagrange basis functions, as shown in the right column of Fig. 3.
This non-negativity property is particularly important in the context of explicit dynamics, as it guarantees positivity of the diagonal components of the row-sum lumped mass matrix, which will be addressed in Section 3.1.2.

Figure 2: Definitions of the different meshes and domains in the immersed setting.

In this article, we focus on maximum regularity B-splines and address lower-order regularity cases only in remarks. For maximum regularity B-splines, each multi-variate basis function \(B_{i}(\mathbf{x})\) (\(i=1,\cdots,J\)) on the background mesh is a member of \(\mathcal{C}^{p-1}(\mathcal{A})\), where \(p\) is the polynomial order of the B-spline. The precise construction of the tensor-product B-spline functions can be found, for example, in [4]. To obtain the approximation space only on the physical domain, we first identify a subset of the maximum regularity B-splines that have non-zero support in the interior of the domain: \[S=\big{\{}N\in\{B_{i}(\mathbf{x})\}_{i=1}^{J}:\mathrm{supp}_{\Omega}(N)\neq 0\big{\}}\,. \tag{4}\] We then number the remaining functions in this set from \(i=1\) to \(N_{\text{dofs}}\), and define the \(N_{\text{dofs}}\)-dimensional approximation space \(V^{h}\) as their span: \[V^{h}=\mathrm{span}\,(S)=\mathrm{span}\big{\{}N_{i}(\mathbf{x})\big{\}}_{i=1}^{N_{\text{dofs}}}. \tag{5}\] Under the assumption that the approximation functions evolve smoothly from time \(t=0\) to time \(t=T\), the time-dependent functions belong to the following semi-discrete function space: \[V^{h}_{T}=V^{h}\otimes\mathcal{C}^{\infty}(0,T)\,. \tag{6}\]

Figure 3: Immersed \(\mathcal{C}^{p-1}\)-continuous B-splines (left), \(\mathcal{C}^{0}\)-continuous B-splines (middle) and standard \(\mathcal{C}^{0}\)-continuous Lagrange basis functions (right).

### 2.2 Integration of cut elements

For immersed finite element methods, integration procedures are crucial for capturing the physical geometry in the discrete formulation. The octree subdivision integration strategy, described in [26], is a widely used approach due to its simplicity and robustness. However, the resulting large number of integration points may cause a significant computational cost increase during operation. A myriad of techniques to enhance octree subdivision has been developed [27]. A few prominent techniques include error-estimate-based adaptive octree subdivision [28], moment fitting [29], equivalent polynomial methods [30], and the merged sub-cell technique [31]. In this article, we make use of the octree subdivision algorithm augmented with a tessellation step [32]. This approach consists of the following steps, as illustrated in Fig. 4: elements in the background mesh that intersect the boundary of the computational domain are bisected into \(2^{d}\) sub-cells. If a sub-cell lies entirely within the domain, it is retained in the partitioning of the cut cell, whereas it is discarded if it lies entirely outside the domain. This bisectioning procedure is recursively applied to the sub-cells that intersect the boundary, until \(\varrho_{\max}\)-times bisected sub-cells are obtained. At the lowest bisectioning level, a boundary tessellation procedure is applied to construct a \(\mathcal{O}(h^{2}/2^{2\varrho_{\max}})\)-accurate parametrization of the interior volume [32]. This tessellation procedure also provides a parametrization for the trimmed surface. The employed tessellation procedure is further detailed in [27]. The octree procedure results in integration rules on each of the sub-cells and each of the surface triangles. Cut-element volumetric integration is then performed by agglomerating all sub-cell quadrature points, and cut-element surface integration by collecting all triangulated surface quadrature points. The accuracy of the cut-element integration scheme can be controlled through the selection of the octree depth \(\varrho_{\max}\) and the quadrature rules on the sub-cells. A two-dimensional illustration of the cut-element integration scheme is shown in Fig. 4, with the volumetric integration points represented by dark-green squares and the surface integration points depicted as orange circles.

Figure 4: Volumetric (squares) and surface (circles) quadrature rules obtained by the octree integration procedure with tessellation at the lowest bisectioning level.
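The recursive bisectioning can be illustrated with a minimal two-dimensional sketch (our simplification: a one-point midpoint rule at the lowest level stands in for the tessellation step of [32], and the corner-based inside/outside test is heuristic):

```python
import numpy as np

R = 0.8
inside = lambda x, y: x**2 + y**2 <= R**2         # the physical domain

def cell_integral(f, x0, y0, hx, hy, depth):
    corners = [inside(x0 + i * hx, y0 + j * hy) for i in (0, 1) for j in (0, 1)]
    if all(corners):                              # sub-cell inside: retain
        return f(x0 + hx / 2, y0 + hy / 2) * hx * hy
    if not any(corners):                          # (heuristically) outside
        return 0.0
    if depth == 0:                                # lowest level: midpoint
        xm, ym = x0 + hx / 2, y0 + hy / 2         # rule in lieu of the
        return f(xm, ym) * hx * hy if inside(xm, ym) else 0.0   # tessellation
    return sum(cell_integral(f, x0 + i * hx / 2, y0 + j * hy / 2,
                             hx / 2, hy / 2, depth - 1)
               for i in (0, 1) for j in (0, 1))   # bisect into 2^d sub-cells

area = cell_integral(lambda x, y: 1.0, 0.0, 0.0, 1.0, 1.0, depth=6)
print(area, np.pi * R**2 / 4)                     # quarter-disk area in cell
```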
## 3 Critical time-step analysis for a second-order problem

We center the exposition of our critical time-step study for second-order initial/boundary-value problems around the linear wave equation: \[\rho\frac{\partial^{2}}{\partial t^{2}}\phi-\nabla\cdot(\kappa \nabla\phi) =f \text{in }\Omega\times(0,T)\,, \tag{7a}\] \[-\kappa\nabla\phi\cdot\mathbf{n} =g \text{on }\partial\Omega_{N}\times(0,T)\,,\] (7b) \[\phi =\phi_{D} \text{on }\partial\Omega_{D}\times(0,T)\,,\] (7c) \[\phi =\phi_{0} \text{on }\Omega\times\{0\}\,,\] (7d) \[\frac{\partial}{\partial t}\phi =\dot{\phi}_{0} \text{on }\Omega\times\{0\}\,, \tag{7e}\] where \(\phi\) is the unknown field, \(\rho\) and \(\kappa\) are parameters of the propagating medium, \(f\), \(g\), \(\phi_{D}\), \(\phi_{0}\) and \(\dot{\phi}_{0}\) are the prescribed body force, Neumann data, Dirichlet data and initial state, respectively, \(\Omega\subset\mathbb{R}^{d}\) and \(T>0\) are the spatial domain and final time, and \(\partial\Omega_{N}\) and \(\partial\Omega_{D}=\partial\Omega\setminus\partial\Omega_{N}\) are the Neumann and Dirichlet segments of the domain boundary. This equation describes, for example, the out-of-plane displacement of a pre-stressed vibrating string or membrane in one and two dimensions, respectively, and the propagation of pressure waves in one, two or three dimensions. A weak formulation of this initial-boundary value problem reads [33]: \[\text{For a.e. }t\in(0,T)\text{, find }\phi\in H^{1}_{\phi_{D}}(\Omega)\text{ and }\frac{\partial^{2}}{\partial t^{2}}\phi\in H^{-1}(\Omega)\text{ s.t. }\forall\,v\in H^{1}_{0}(\Omega):\] \[\begin{cases}\big{\langle}\frac{\partial^{2}}{\partial t^{2}}\phi,\rho\,v\big{\rangle}+\int\limits_{\Omega}\kappa\nabla\phi\cdot\nabla v\,\mathrm{d}\Omega=\int\limits_{\Omega}f\,v\,\mathrm{d}\Omega-\int\limits_{\partial\Omega_{N}}g\,v\,\mathrm{d}S\,,\\ \phi\big{|}_{t=0}=\phi_{0}\,,\\ \frac{\partial}{\partial t}\phi\big{|}_{t=0}=\dot{\phi}_{0}\,,\end{cases} \tag{8}\] where \(H^{1}_{\phi_{D}}(\Omega)=\{\phi\in H^{1}(\Omega):\phi\big{|}_{\partial\Omega_{D}}=\phi_{D}\}\), \(H^{-1}(\Omega)\) is the corresponding dual space, and \(\big{\langle}\cdot,\cdot\big{\rangle}\) denotes the pairing between them. If the second time derivative of \(\phi\) is a member of \(L^{2}(\Omega)\), then \(\big{\langle}\frac{\partial^{2}}{\partial t^{2}}\phi,\rho\,v\big{\rangle}=\int\limits_{\Omega}\rho\frac{\partial^{2}}{\partial t^{2}}\phi\,v\,\mathrm{d}\Omega\).

### 3.1 Semi-discrete formulation

With the aim of constructing a stable explicit finite element approximation for the weak formulation of Eq. (8) on a non-boundary-fitted isogeometric mesh, we consider the (semi-)discrete spaces \(V^{h}\) and \(V^{h}_{T}\), from Eqs. (5) and (6), as our test and trial spaces, respectively.
\(V^{h}\) is defined as the span of basis functions, such that any test function in \(V^{h}\), for example \(v^{h}\in V^{h}\), can be represented as a linear combination of the basis functions: \[v^{h}(\mathbf{x})=\sum_{i=1}^{N_{\text{dofs}}}\hat{\mathrm{v}}_{i}N_{i}(\mathbf{x})=\hat{\underline{\mathrm{v}}}^{\text{T}}\underline{\mathrm{N}}(\mathbf{x})\,. \tag{9}\] Similarly, any trial function in \(V_{T}^{h}\), for example \(\phi^{h}\in V_{T}^{h}\), can be written as a linear combination of the basis functions with time-dependent weighting coefficients: \[\phi^{h}(t,\mathbf{x})=\sum_{i=1}^{N_{\text{dofs}}}\hat{\phi}_{i}(t)\,N_{i}(\mathbf{x})=\hat{\underline{\phi}}^{\text{T}}(t)\,\underline{\mathrm{N}}(\mathbf{x})\quad\text{with }\hat{\phi}_{i}(t)\in\mathcal{C}^{\infty}(0,T)\,. \tag{10}\] In the following subsections, we provide a comprehensive overview of the individual components that comprise the mass and stiffness matrices of the semi-discrete formulation.

#### 3.1.1 Neumann formulation

First, we consider a pure Neumann problem, i.e., \(\partial\Omega=\partial\Omega_{N}\) and \(\partial\Omega_{D}=\emptyset\). Then, for any \(t\in(0,T)\), a finite element approximation produces the following semi-discrete formulation: \[\text{Find }\phi^{h}\in V_{T}^{h}\text{ s.t. }\forall\,v^{h}\in V^{h}:\] \[\int\limits_{\Omega}\rho\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h}v^{h}\,\mathrm{d}\Omega+\int\limits_{\Omega}\kappa\nabla\phi^{h}\cdot\nabla v^{h}\,\mathrm{d}\Omega=\int\limits_{\Omega}f\,v^{h}\,\mathrm{d}\Omega-\int\limits_{\partial\Omega_{N}}g\,v^{h}\,\mathrm{d}S\,. \tag{11}\] Substitution of the representation of \(\phi^{h}\) and \(v^{h}\) from Eqs. (9) and (10) into Eq. (11) results in the following system of ordinary differential equations \[\underline{\mathrm{M}}\,\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\hat{\underline{\phi}}+\underline{\mathrm{K}}\,\hat{\underline{\phi}}=\underline{\mathrm{F}}\,, \tag{12}\] with: \[\big{[}\,\underline{\mathrm{M}}\,\big{]}_{ij} =M\big{(}N_{i}(\mathbf{x}),N_{j}(\mathbf{x})\big{)}=\int\limits_{\Omega}\rho N_{i}(\mathbf{x})N_{j}(\mathbf{x})\,\mathrm{d}\Omega\,, \tag{13a}\] \[\big{[}\,\underline{\mathrm{K}}\,\big{]}_{ij} =K\big{(}N_{i}(\mathbf{x}),N_{j}(\mathbf{x})\big{)}=\int\limits_{\Omega}\kappa\nabla N_{i}(\mathbf{x})\cdot\nabla N_{j}(\mathbf{x})\,\mathrm{d}\Omega\,,\] (13b) \[\big{[}\,\underline{\mathrm{F}}\,\big{]}_{i} =F\big{(}N_{i}(\mathbf{x})\big{)}=\int\limits_{\Omega}f(\mathbf{x})\,N_{i}(\mathbf{x})\,\mathrm{d}\Omega-\int\limits_{\partial\Omega_{N}}g(\mathbf{x})\,N_{i}(\mathbf{x})\,\mathrm{d}S\,. \tag{13c}\] For this Neumann problem, the immersogeometric framework primarily impacts the selection of the discrete approximation space \(V^{h}\) as the span of non-boundary fitted B-spline basis functions, and the procedure for carrying out the integrals in Eq. (13).

#### 3.1.2 Row-sum mass lumping

A fully-discrete formulation of Eq. (12) typically follows from a finite-difference type approximation to the time-derivative. When an explicit time-stepping scheme is adopted, such as a second-order Newmark-type central difference method or a higher-order explicit Runge-Kutta scheme, then the time-marching does not necessitate an inverse of the stiffness matrix. It does, however, require an inverse of the mass matrix. To avoid significant computational expense (both in terms of storage and operation count), the mass matrix is often manipulated to attain a diagonal matrix, which can be inverted trivially.
This manipulation process is referred to as "mass lumping", and various methods exist, such as diagonal scaling [34], manifold-based methods [35] and lumping by nodal quadrature [36; 37]. In this article, we focus on row-sum lumping, where the diagonal value is set to the sum total of the row. This sum total corresponds to the multiplication of the row by a vector of ones. For a partition-of-unity basis, such as the employed B-spline basis, the vector of ones represents a field of unit value. Consequently, the lumped mass matrix can be written as: \[\big{[}\,\underline{\mathrm{M}}_{D}\big{]}_{ij}=\begin{cases}0&\text{if }i\neq j\\ M\big{(}1,N_{i}(\boldsymbol{x})\big{)}=\int\limits_{\Omega}\rho N_{i}\,\mathrm{d}\Omega&\text{if }i=j\end{cases}\,. \tag{14}\] Ensuring that the diagonal entries of \(\underline{\mathrm{M}}_{D}\) are positive is of crucial importance: negative components would cause negative eigenvalues, inducing exponential growth of the corresponding eigenmode. As pointed out in Section 2.1, the non-negativity property of the B-spline basis functions guarantees positivity of \(\int_{\Omega}N_{i}(x)\,\mathrm{d}\Omega\), and thus of the diagonal entries of \(\underline{\mathrm{M}}_{D}\), irrespective of how elements are cut. The same cannot be guaranteed for, for example, the more classical \(\mathcal{C}^{0}\)-continuous Lagrange basis functions.

**Remark 1**.: In the analysis in Sections 3.2 and 4.2 we require evaluations of the vector-matrix-vector product \(\hat{\mathrm{v}}^{\mathrm{T}}\underline{\mathrm{M}}_{D}\hat{\mathrm{v}}\) for various \(\hat{\mathrm{v}}\). Unlike the consistent mass matrix (for which \(\hat{\mathrm{v}}^{\mathrm{T}}\underline{\mathrm{M}}\hat{\mathrm{v}}=\int_{\Omega}\rho\,v^{h}v^{h}\,\mathrm{d}\Omega\)), the ad-hoc nature of the mass-lumping procedure implies that the operation \(\hat{\mathrm{v}}^{\mathrm{T}}\underline{\mathrm{M}}_{D}\hat{\mathrm{v}}\) does not permit an integral-based bilinear form. However, for those particular functions whose vector of coefficients (\(\hat{\mathrm{v}}\) in Eq. (9)) exclusively consists of ones and zeros (\([\hat{\mathrm{v}}]_{i}\in\{0,1\}\)), the following integral evaluation of the vector-matrix-vector product is valid: \[\hat{\mathrm{v}}^{\mathrm{T}}\underline{\mathrm{M}}_{D}\hat{\mathrm{v}}=M\big{(}1,v^{h}(\boldsymbol{x})\big{)}=\int\limits_{\Omega}\rho v^{h}(\boldsymbol{x})\,\mathrm{d}\Omega=:M_{D}\big{(}v^{h}(\boldsymbol{x})\big{)}\,. \tag{15}\]
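A minimal numerical illustration of Eq. (14), with an arbitrary toy matrix rather than one assembled from an actual discretization:

```python
import numpy as np

# Toy consistent mass matrix of three overlapping, non-negative basis
# functions; values are illustrative only.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]]) / 6.0
M_D = np.diag(M @ np.ones(3))   # row-sum lumping: diagonal entry = row sum
print(np.diag(M_D))             # positive, since all entries of M are >= 0
```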
#### 3.1.3 Dirichlet condition enforcement by penalization

The remaining challenge in solving Eq. (8) in an immersed setting is the imposition of Dirichlet boundary conditions. In Eq. (8), these are essential conditions, and they are imposed on the function space itself. For boundary-fitted methods, this can be mimicked by strongly prescribing nodal values. For immersed methods, however, the nodes are no longer placed on the domain boundary and the conventional procedure is not feasible. However, we can still integrate along this immersed boundary, as described in Section 2.2. Hence, boundary conditions can be incorporated weakly, through the addition of integral terms in the weak formulation targeted at enforcing the constraints. The most common approach for weak imposition of Dirichlet conditions in explicit analysis is penalty enforcement [5]: \[\text{Find }\phi^{h}\in V_{T}^{h}\text{ s.t. }\forall\,v^{h}\in V^{h}:\] \[M(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h},v^{h})+K(\phi^{h},v^{h})+\underbrace{\int_{\partial\Omega_{D}}\kappa\beta\,\phi^{h}\,v^{h}\,\mathrm{d}S}_{K_{\beta}\big{(}\phi^{h},v^{h}\big{)}}=F(v^{h})+\underbrace{\int_{\partial\Omega_{D}}\kappa\beta\,\phi_{D}\,v^{h}\,\mathrm{d}S}_{F_{\beta}\big{(}v^{h}\big{)}}\,, \tag{16}\] where \(\beta>0\). The additional contributions to the stiffness matrix and force vector read: \[\big{[}\,\underline{\underline{\mathrm{K}}}_{\beta}\,\big{]}_{ij} =K_{\beta}\big{(}N_{i}(\mathbf{x}),N_{j}(\mathbf{x})\big{)}=\int\limits_{\partial\Omega_{D}}\kappa\beta\,N_{i}(\mathbf{x})N_{j}(\mathbf{x})\,\mathrm{d}S \tag{17a}\] \[\big{[}\,\underline{\underline{\mathrm{F}}}_{\beta}\,\big{]}_{i} =F_{\beta}\big{(}N_{i}(\mathbf{x})\big{)}=\int\limits_{\partial\Omega_{D}}\kappa\beta\,\phi_{D}(\mathbf{x})N_{i}(\mathbf{x})\,\mathrm{d}S\,. \tag{17b}\] To be dimensionally consistent, \(\beta^{-1}\) needs to have the dimension of length. Suitable scaling of the eigenvalues associated with the penalty term is achieved when the penalty is chosen to scale inversely with the size of the elements of the background mesh \(h_{K}\) [5; 38; 39; 40]: \[\beta\big{|}_{K}=\bar{\beta}\,h_{K}^{-1}\,, \tag{18}\] where \(\bar{\beta}\) is a dimensionless global constant. The penalty method offers a number of advantages, such as the absence of stringent restrictions on the penalty parameter to ensure positivity of eigenvalues (owing to the positive semi-definiteness of the contribution to the stiffness matrix), and its ease of implementation. A significant drawback is its variationally inconsistent nature (in the sense of [36]), which leads to a loss of the optimal convergence rate and may result in error increases of orders of magnitude. The optimal convergence rate may be retrieved by choosing a penalty scaling stronger than \(h^{-1}\) [41]. This is, however, not a viable option in the context of explicit analysis, as the larger penalty value would soon increase the largest eigenvalue, and hence negatively affect the critical time-step size. Moreover, the solution quality is sensitive to the choice of penalty parameter, for which rigorous estimates are not available. Also in this regard, one must exercise caution not to choose the penalty values too large, to avoid impacting the critical time-step size [5].

#### 3.1.4 Dirichlet condition enforcement by Nitsche's method

The penalty method can be augmented by terms to make it variationally consistent, mitigating many of the aforementioned drawbacks. When this is done in a symmetric manner, the result is called Nitsche's method [42]: \[\text{Find }\phi^{h}\in V_{T}^{h}\text{ s.t. }\forall\,v^{h}\in V^{h}:\] \[M(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h},v^{h})+K(\phi^{h},v^{h})+K_{\beta}(\phi^{h},v^{h})\underbrace{-\int_{\partial\Omega_{D}}\!\!\kappa\nabla\phi^{h}\cdot\boldsymbol{n}\,v^{h}\,\mathrm{d}S-\int_{\partial\Omega_{D}}\!\!\kappa\nabla v^{h}\cdot\boldsymbol{n}\,\phi^{h}\,\mathrm{d}S}_{K_{\mathrm{cs}}\big{(}\phi^{h},v^{h}\big{)}}\] \[=F(v^{h})+F_{\beta}(v^{h})\underbrace{-\int_{\partial\Omega_{D}}\!\!\kappa\phi_{D}\nabla v^{h}\cdot\boldsymbol{n}\,\mathrm{d}S}_{F_{\mathrm{s}}\big{(}v^{h}\big{)}}\,. \tag{19}\]
The new matrix and vector contributions originating from the consistency and symmetry terms may be identified as: \[\big{[}\,\underline{\underline{\mathrm{K}}}_{\mathrm{cs}}\,\big{]}_{ij} =K_{\mathrm{cs}}\big{(}N_{i}(\boldsymbol{x}),N_{j}(\boldsymbol{x})\big{)}=-\int\limits_{\partial\Omega_{D}}\kappa\nabla N_{i}\cdot\boldsymbol{n}\,N_{j}\,\mathrm{d}S-\int\limits_{\partial\Omega_{D}}\kappa\nabla N_{j}\cdot\boldsymbol{n}\,N_{i}\,\mathrm{d}S\,, \tag{20a}\] \[\big{[}\,\underline{\underline{\mathrm{F}}}_{\mathrm{s}}\,\big{]}_{i} =F_{\mathrm{s}}\big{(}N_{i}(\boldsymbol{x})\big{)}=-\int\limits_{\partial\Omega_{D}}\kappa\phi_{D}\,\nabla N_{i}\cdot\boldsymbol{n}\,\mathrm{d}S\,. \tag{20b}\] Due to the added terms, the stiffness matrix is no longer unconditionally positive definite. When \(\beta\) is chosen too small, the stiffness matrix may include negative eigenvalues. Negativity of eigenvalues carries over to the mass-to-stiffness generalized eigenvalue problem, again leading to detrimental exponential growth of the corresponding eigenmodes in time. The restriction on \(\beta\) to ensure positive definiteness has been studied extensively [43; 44; 45], and follows in each element \(K\) from a local inverse estimate. We make use of the following choice: \[\beta\big{|}_{K}=2\sup_{v^{h}\in V^{h}}\frac{\|\nabla v^{h}\cdot\boldsymbol{n}\|_{\partial\Omega_{D}\cap K}^{2}}{\|\nabla v^{h}\|_{\Omega\cap K}^{2}}\propto\frac{1}{h_{c}}\,\Big{|}_{K}\,, \tag{21}\] where \(h_{c}\) is a length scale associated with the cut element. The element size for cut elements is not unambiguously defined. The inverse estimate in Eq. (21) can be bounded by a size-independent factor multiplied by the ratio of the area of the Dirichlet surface to the volume of the cut element [46; 47]. This length parameter provides a suitable size measure for small-cut elements; however, it does not limit to \(h_{K}\) for (nearly) uncut cases. In what follows, we make use of the following definition of \(h_{c}\), which is consistent for both small and large cuts: \[h_{c}\big{|}_{K}=\min\Big{\{}\ \frac{\int_{\Omega\cap K}\,\mathrm{d}\Omega}{\int_{\partial\Omega_{D}\cap K}\,\mathrm{d}S}\,,\ \left[\int_{\Omega\cap K}\,\mathrm{d}\Omega\right]^{\frac{1}{d}}\ \Big{\}}=:\chi h_{K}\,. \tag{22}\] The element size fraction \(\chi\) that implicitly follows from this definition as \(\chi=h_{c}/h_{K}\) represents the central quantity in our study on the cut-sensitivity of the largest eigenvalues later on. For sliver cuts, \(\chi\) equates to the cut-element volume fraction \(\eta=\int_{\Omega\cap K}\mathrm{d}\Omega/\int_{K}\mathrm{d}\Omega\) (used in earlier analysis [17]), and for shape-regular cut elements they relate as \(\chi\propto\eta^{\frac{1}{d}}\). We define the constant of proportionality in Eq. (21) as \(\bar{\beta}\big{|}_{K}\), such that, by definition, we can make use of the following expression for \(\beta\big{|}_{K}\): \[\beta\big{|}_{K}=\bar{\beta}\big{|}_{K}\,(\chi\,h_{K})^{-1}\,. \tag{23}\]
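The following small sketch (with an illustrative sliver-cut geometry and an assumed pre-factor \(\bar{\beta}\)) evaluates the cut-size measure of Eq. (22) and the resulting penalty of Eq. (23) for progressively thinner cuts of a square background element, showing the divergence of \(\beta\) as \(\chi\to 0\):

```python
import numpy as np

h_K, beta_bar, d = 1.0, 4.0, 2            # beta_bar is an assumed value
for s in [1.0, 0.1, 0.01, 0.001]:         # sliver cut of thickness s * h_K
    vol = s * h_K**2                      # cut-element "volume" (area, d=2)
    surf = h_K                            # Dirichlet surface inside K
    h_c = min(vol / surf, vol ** (1 / d)) # Eq. (22)
    chi = h_c / h_K                       # equals the volume fraction eta
    print(f"s = {s:6.3f}:  chi = {chi:.3f},  beta = {beta_bar / (chi * h_K):.0f}")
```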
Doing so strongly, for example by adopting weighted extended basis B-splines (WEB-splines) [48], or by performing cell aggregation [49], requires manipulation of the basis functions. If this is undesirable, then a potential alternative is the addition of "ghost-penalty" stabilization [22; 23]: \[\begin{split} M(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h},v^{h})+K(\phi^{h},v^{h})+K_{\beta}(\phi^{h},v^{h})+K_{\mathrm{cs}}(\phi^{h},v^{h})+\underbrace{\int_{\Gamma_{g}}\kappa\gamma_{K}\,[\![\partial_{n}^{k+1}\phi^{h}]\!][\![\partial_{n}^{k+1}v^{h}]\!]\,\mathrm{d}S}_{K_{\gamma}\big{(}\phi^{h},v^{h}\big{)}}\\ =F(v^{h})+F_{\beta}(v^{h})+F_{\mathrm{s}}(v^{h})\,,\end{split} \tag{24}\] where \(\Gamma_{g}=\bigcup\mathcal{F}_{\mathrm{ghost}}\) is the union of the ghost faces from Eq. (3) and Fig. 2c, \([\![\cdot]\!]\) is the jump operator, and \(\partial_{n}^{k+1}\) is the normal gradient of order \(k+1\), with \(k\) the order of continuity of the B-spline basis functions. In a more general sense, \(K_{\gamma}\big{(}\cdot,\cdot\big{)}\) should include jump terms of the normal derivatives of orders \(1\) up to the polynomial order \(p\). Since we only consider splines of maximum order of continuity, for which \(k=p-1\), all but the highest normal derivatives vanish on element interfaces. The contribution to the stiffness matrix is: \[\big{[}\,\underline{\underline{\mathrm{K}}}_{\gamma}\,\big{]}_{ij}=K_{\gamma}\big{(}N_{i}(\mathbf{x}),N_{j}(\mathbf{x})\big{)}=\int\limits_{\Gamma_{g}}\kappa\gamma_{K}\,[\![\partial_{n}^{k+1}N_{i}]\!][\![\partial_{n}^{k+1}N_{j}]\!]\,\mathrm{d}S\,. \tag{25}\] The new penalty between the domain interior elements and the cut elements effectively adds a stiffness to the deflection of weakly supported degrees of freedom. More technically, the ghost-penalty term extends coercivity from the domain interior to the background mesh [50]. When the parameter \(\gamma_{K}\) is large enough and scales with \(h_{K}^{2p-1}\), \(\beta\) is permitted to scale with the background-element size \(h_{K}\) rather than with the cut-element size \(h_{c}=\chi\,h_{K}\): \[\gamma_{K}\big{|}_{K} =\bar{\gamma}_{K}\,h_{K}^{2p-1}\,, \tag{26a}\] \[\beta\big{|}_{K} =\bar{\beta}\,h_{K}^{-1}\,. \tag{26b}\] The minimal permitted values of the pre-factors \(\bar{\gamma}_{K}\) and \(\bar{\beta}\) are still an open research question [50; 51]. In the current work, we choose a sufficiently large \(\bar{\gamma}_{K}\) to enable the use of a small \(\bar{\beta}\), and experimentally verify that the ensuing stiffness matrix is positive definite. #### 3.1.6 Ghost mass The final ingredient that we choose to add to our explicit immersogeometric formulation is a "ghost-mass" term. We propose this term as a type of consistent mass scaling, with the intent of reducing the maximum eigenvalues that are caused by small-cut elements. The following additional component is introduced to the mass matrix [52; 24; 25]: \[M_{\gamma}(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h},v^{h})=\int\limits_{\Gamma_{g}}\rho\gamma_{M}\,[\![\partial_{n}^{k+1}\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\phi^{h}]\!][\![\partial_{n}^{k+1}v^{h}]\!]\,\mathrm{d}S\,, \tag{27}\] \[\big{[}\,\underline{\underline{\mathrm{M}}}_{\gamma}\big{]}_{ij}=M_{\gamma}\big{(}N_{i}(\mathbf{x}),N_{j}(\mathbf{x})\big{)}\,. \tag{28}\] If lower-order regularity B-splines are used, \(M_{\gamma}(\cdot,\cdot)\) should include penalties on the jumps of all lower-order normal derivatives as well. These jumps vanish for the \(\mathcal{C}^{p-1}\)-continuous B-splines considered herein.
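To make the jump quantities in Eqs. (25) and (27) concrete, the following minimal sketch (our illustration, not the paper's implementation) evaluates ghost-penalty entries for a one-dimensional reduction; the parameter values and the uniform knot vector are assumptions, and in 1-D a ghost "face" is a single knot shared by a cut element and an interior element, so the face integral reduces to a point evaluation.

```python
# Minimal 1-D sketch (ours) of the ghost-penalty entries of Eq. (25).
import numpy as np
from scipy.interpolate import BSpline

p = 2                           # polynomial order; continuity k = p - 1 = 1
kappa, gamma_K = 1.0, 0.1       # assumed material and penalty parameters
knots = np.arange(-p, 8 + p + 1, dtype=float)  # uniform knot vector
nbasis = len(knots) - p - 1

def jump_d2(i, x_g, eps=1e-8):
    """Jump of the (k+1) = 2nd derivative of basis function i across knot x_g."""
    d2 = BSpline.basis_element(knots[i:i + p + 2], extrapolate=False).derivative(2)
    left = d2(x_g - eps) if knots[i] < x_g - eps < knots[i + p + 1] else 0.0
    right = d2(x_g + eps) if knots[i] < x_g + eps < knots[i + p + 1] else 0.0
    return float(right) - float(left)

x_g = 3.0                       # the ghost face between a cut and an interior element
K_gamma = np.zeros((nbasis, nbasis))
for i in range(nbasis):
    for j in range(nbasis):
        K_gamma[i, j] = kappa * gamma_K * jump_d2(i, x_g) * jump_d2(j, x_g)

# Only basis functions whose support touches the ghost face contribute:
print(np.nonzero(np.abs(K_gamma).sum(axis=1))[0])
```

The ghost-mass matrix of Eq. (28) has the same sparsity pattern, with \(\rho\gamma_{M}\) in place of \(\kappa\gamma_{K}\).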
The ghost-mass term adds inertia to the acceleration of the weakly supported degrees of freedom. As \(\underline{\underline{\mathrm{M}}}_{\gamma}\) represents a positive semi-definite contribution to the mass matrix, it serves to reduce the eigenvalues of modes that excite the \(M_{\gamma}(\cdot,\cdot)\) term, i.e., those with derivative changes across boundaries of elements with small interior support. At the same time, the term is consistent for sufficiently smooth solutions, such that it does not introduce a modeling error. The required scaling of the penalty parameter \(\gamma_{M}\) is different from that of \(\gamma_{K}\) in the stiffness matrix, as already follows from a dimensional consistency argument. We make use of the following scaling: \[\gamma_{M}=\bar{\gamma}_{M}\,h_{K}^{2p+1}\,, \tag{29}\] where the appropriate choice of \(\bar{\gamma}_{M}\) is investigated in Section 3.2. **Remark 2**: _The matrix that follows from the ghost-mass term is not amenable to standard row-sum lumping. Recall from Section 3.1.2 that the row-sum represents the action of the corresponding bilinear form on a field of unit value, and, as can be observed from Eq. (27), \(M_{\gamma}(1,v^{h})=0\). When the immersed boundary elements make up a relatively small portion of the mesh, the computational expense involved in inverting the mass matrix may not cause a bottleneck (especially when the LU-factorization is stored and reused). Nevertheless, various strategies could be considered to efficiently approximate an inverse of the scaled mass matrix. One could reduce the number of ghost faces in such a way that \(\underline{\underline{\mathrm{M}}}_{\gamma}\) becomes block-diagonal [53], or approximate the inverse in a (block) Jacobi sense, or attempt row-sum lumping of the absolute values of the matrix components, or consider different classes of discrete extension operators altogether [54]. However, since the primary focus of this work is the analysis of the explicit immersogeometric formulations, we consider the development of such optimized implementation strategies beyond the scope of this article, and we exclusively consider \(\underline{\underline{\mathrm{M}}}_{\gamma}\) in its full form._ ### Analysis of the critical time-step size The various stiffness terms proposed in the previous section can be collected in the total stiffness bilinear form \(\tilde{K}(\cdot,\cdot)\), to produce the total stiffness matrix \(\underline{\underline{\tilde{K}}}\). Similarly, the total inertial bilinear form \(\tilde{M}(\cdot,\cdot)\) and mass matrix \(\underline{\underline{\tilde{M}}}\) follow from the chosen combination of a consistent or lumped mass matrix, and potentially a ghost-mass contribution. The general semi-discrete form then reads: \[\begin{split}\text{Find }\phi^{h}\in V_{T}^{h}\text{ s.t. }\forall\,v^{h}\in V^{h}:\\ \tilde{M}(\frac{\text{d}^{2}}{\text{d}t^{2}}\phi^{h},v^{h})+\tilde{K}(\phi^{h},v^{h})=\tilde{F}(v^{h})\,,\qquad\Leftrightarrow\qquad\underline{\underline{\tilde{M}}}\,\frac{\text{d}^{2}}{\text{d}t^{2}}\hat{\underline{\phi}}+\underline{\underline{\tilde{K}}}\,\hat{\underline{\phi}}=\underline{\tilde{F}}\,.\end{split} \tag{30}\] As addressed in the introduction, the critical time-step size of an explicit time-stepping treatment of Eq.
(30) is inversely related to the maximum eigenvalue of the generalized eigenvalue problem: \[\tilde{K}(\xi^{h},v^{h})=\lambda\tilde{M}(\xi^{h},v^{h})\quad\forall\,v^{h}\in V^{h}\qquad\Leftrightarrow\qquad\underline{\underline{\tilde{K}}}\,\hat{\underline{\xi}}=\lambda\underline{\underline{\tilde{M}}}\,\hat{\underline{\xi}}\,. \tag{31}\] For symmetric matrices, the maximum eigenvalue is the largest value of the generalized Rayleigh quotient: \[\lambda_{\text{max}}\geq\mathcal{R}(\underline{\underline{\tilde{K}}},\underline{\underline{\tilde{M}}},\hat{\underline{\xi}})=\frac{\hat{\underline{\xi}}^{\text{T}}\,\underline{\underline{\tilde{K}}}\,\hat{\underline{\xi}}}{\hat{\underline{\xi}}^{\text{T}}\,\underline{\underline{\tilde{M}}}\,\hat{\underline{\xi}}}=\frac{\tilde{K}(\xi^{h},\xi^{h})}{\tilde{M}(\xi^{h},\xi^{h})}\qquad\forall\,\hat{\underline{\xi}}\in\mathbb{R}^{N}\,,\ \xi^{h}=\sum_{n=1}^{N}\hat{\underline{\xi}}_{n}N_{n}(\mathbf{x})\,, \tag{32}\] where the equality holds for \(\hat{\underline{\xi}}=\hat{\underline{\xi}}_{\text{max}}\). Due to the bilinearity of the forms, the various components separate into individual contributions: \[\mathcal{R}(\underline{\underline{\tilde{K}}},\underline{\underline{\tilde{M}}},\hat{\underline{\xi}})=\frac{K(\xi,\xi)+K_{\text{cs}}(\xi,\xi)+K_{\beta}(\xi,\xi)+K_{\gamma}(\xi,\xi)}{M(\xi,\xi)+M_{\gamma}(\xi,\xi)}\,. \tag{33}\] We now wish to analyse whether \(\lambda_{\text{max}}\), and by extension the critical time-step size, is sensitive to the size and shape of the cut elements, as characterized by the parameter \(\chi\) in Eq. (22). For \(\chi\to 0\), the generalized Rayleigh quotient approaches the ratio of the lowest-order scaling of the components in the numerator to the lowest-order scaling of the components in the denominator. For some \(\hat{\underline{\xi}}\), we can then characterize the scaling of the generalized Rayleigh quotient as: \[\mathcal{R}(\underline{\underline{\tilde{K}}},\underline{\underline{\tilde{M}}},\hat{\underline{\xi}})=\mathcal{O}\big{(}\chi^{q}\big{)}\,. \tag{34}\] If there exists any \(\hat{\underline{\xi}}\) such that \(q<0\), then, by Eq. (32), the maximum eigenvalue can become arbitrarily large for arbitrarily small cuts, resulting in an unfeasibly small critical time-step size. Those \(\hat{\underline{\xi}}\) for which \(q=0\) cause cut-_shape_ dependent eigenvalues that may turn out to be dominant. Those corresponding to \(q>0\) are suppressed as the cut becomes small. Our study on the impact of the cut elements on the time-step estimate thus revolves around the examination of the generalized Rayleigh quotient for selected examples of problematic cut cases. In this regard, we consider two distinct types of cuts: corner cuts and sliver cuts. These may arise even when the background mesh is sufficiently refined to resolve small-scale geometric features. On these cut elements, we consider two different functions: one predominantly supported in the domain interior and one predominantly supported in the domain exterior. All four functions are depicted for the example of a two-dimensional domain in Figs. 5a to 5d. In one dimension, the four cases reduce to the two functions plotted in Figs. 5e and 5f. For all functions considered, the coefficient vector consists exclusively of ones and zeros, whereby the condition of Remark 1 is satisfied, and \(\hat{\underline{\xi}}^{\mathrm{T}}\,\underline{\underline{\mathrm{M}}}_{D}\,\hat{\underline{\xi}}\) may be evaluated as \(M_{D}(\xi)\).
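As a brief numerical aside, the following sketch (ours, with small synthetic SPD stand-ins for the total stiffness and lumped mass matrices) illustrates how the generalized Rayleigh quotient of Eq. (32) bounds the largest eigenvalue of Eq. (31), and how that eigenvalue in turn bounds the stable central-difference time step.

```python
# Illustrative sketch (ours): Rayleigh quotient, lambda_max and dt_crit.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 20
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)               # stand-in total stiffness (SPD)
M = np.diag(rng.uniform(0.5, 2.0, N))     # stand-in lumped mass (diagonal)

def rayleigh(K, M, xi):
    return (xi @ K @ xi) / (xi @ M @ xi)

lam_max = eigh(K, M, eigvals_only=True)[-1]   # generalized eigenproblem, Eq. (31)
xi = rng.standard_normal(N)                   # any coefficient vector ...
assert rayleigh(K, M, xi) <= lam_max + 1e-9   # ... bounds lambda_max from below

# The largest eigenvalue in turn bounds the stable central-difference step:
print(f"lambda_max = {lam_max:.3e},  dt_crit = {2.0 / np.sqrt(lam_max):.3e}")
```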
We determine the generalized Rayleigh quotient for different collections of components to the mass and stiffness matrix, i.e., for different finite element formulations. For the stiffness matrix, we consider a pure Neumann boundary, a pure penalty method with \(\beta=\bar{\beta}h_{K}^{-1}\), Nitsche's method with \(\beta=\bar{\beta}h_{c}^{-1}=\bar{\beta}(\chi\,h_{K})^{-1}\), and Nitsche's method combined with a Figure 5: Functions under consideration for cut-size scaling. Green represents the domain interior and red the domain exterior. ghost-penalty term and the choices \(\beta=\bar{\beta}h_{K}^{-1}\) and \(\gamma_{K}=\bar{\gamma}_{K}h_{K}^{2p-1}\). For the mass matrix, we consider the consistent mass matrix of Eq. (13a), a consistent mass matrix with ghost mass with \(\gamma_{M}=\bar{\gamma}_{M}h_{K}^{2p+1}\), a lumped mass matrix per Eq. (14), and a lumped mass matrix with (non-lumped) ghost mass. Carrying out all the generalized Rayleigh quotients computations for the corner-cut functions of Figs. 5a and 5b results in the scalings detailed in Tables 1 and 2, respectively, and for the sliver functions depicted in Figs. 5c and 5d results in Tables 3 and 4, respectively. Cells in the tables marked in red indicate formulations and cut-cases for which the generalized Rayleigh quotient increases for decreasing cut-size, i.e., for \(\chi\to+0\). Green cells signify formulations and cut-cases for which the generalized Rayleigh quotient decreases for decreasing cut-size, representing stable cases. The beige and yellow cells denote conditionally stable cases: there are restrictions on the polynomial order, spatial dimension and/or the penalty parameter for which the generalized Rayleigh quotient may or may not dominate and/or increase with decreasing cut-size. \begin{table} \begin{tabular}{|l|c|c c c|} \hline & & Consistent mass & Lumped mass & Consistent/lumped mass \\ & & (Eq. (13a)) & (Eq. (14)) & and ghost mass (Eq. (28)) \\ & & & with \(\gamma_{M}=\bar{\gamma}_{M}h^{2p+1}\) \\ \hline & \(\backslash\mathcal{O}(\bar{M})\) & \(\chi^{2pd+d}\) & \(\chi^{pd+d}\) & \(\bar{\gamma}_{M}\chi^{0}\) \\ \hline Neumann (Eq. (11)) & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) & & & \\ & \(\chi^{2pd+d-2}\) \begin{table} \begin{tabular}{|l|c|c c c|} \hline & & Consistent mass & Lumped mass & Consistent/lumped mass \\ & & (Eq. (13a)) & (Eq. (14)) & & and ghost mass (Eq. (28)) \\ \hline Neumann (Eq. (11)) & \(\chi^{2p-1}\) & \(\chi^{2p-2}\) & \(\chi^{2p-2}\) & \(\bar{\gamma}_{M}^{1}\chi^{2p-1}\) \\ Penalty (Eq. (16)) & \(\chi^{2p-1}\) & \(\chi^{2p-1}\) & \(\chi^{2p-2}\) & \(\bar{\gamma}_{M}^{1}\chi^{2p-1}\) \\ \(\text{with }\beta=\bar{\beta}(\chi h_{K})^{-1}\) & \(\bar{\beta}\chi^{2p-1}\) & \(\bar{\beta}\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ Nitsche and ghost (Eq. (24)) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ \(\beta=\bar{\beta}h_{K}^{-1}\), \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ Nitsche and ghost (Eq. (24)) & \(\bar{\beta}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\chi^{0}\) \\ Penalty (Eq. 
(16)) & \(\bar{\beta}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\chi^{0}\) \\ \(\text{with }\beta=\bar{\beta}h_{K}^{-1}\) & \(\bar{\beta}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\chi^{0}\) \\ \(\text{with }\beta=\bar{\beta}(\chi h_{K})^{-1}\) & \(\bar{\beta}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\chi^{0}\) \\ \(\text{with }\beta=\bar{\beta}h_{K}^{-1}\), \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & \(\bar{\beta}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\chi^{0}\) \\ \hline \end{tabular} \end{table} Table 3: Resulting scaling with cut size for the first sliver-cut function for the second-order problem. \begin{table} \begin{tabular}{|l|c c c c|} \hline & Consistent mass & Lumped mass & Consistent/lumped mass \\ & (Eq. (13a)) & (Eq. (14)) & & and ghost mass (Eq. (28)) \\ \hline Neumann (Eq. (11)) & \(\chi^{2p-1}\) & \(\chi^{2p-2}\) & \(\chi^{2p-2}\) & \(\bar{\gamma}_{M}^{1}\chi^{2p-1}\) \\ Penalty (Eq. (16)) & \(\chi^{2p-1}\) & \(\chi^{2p-2}\) & \(\chi^{2p-2}\) & \(\bar{\gamma}_{M}^{1}\chi^{2p-1}\) \\ \(\text{with }\beta=\bar{\beta}h_{K}^{-1}\) & \(\chi^{2p-1}\) & \(\chi^{2p-2}\) & \(\chi^{2p-2}\) & \(\bar{\gamma}_{M}^{1}\chi^{2p-1}\) \\ Nitsche (Eq. (19)) & \(\bar{\beta}\chi^{2p-1}\) & \(\bar{\beta}\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ Nitsche and ghost (Eq. (24)) & \(\bar{\beta}\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ \(\beta=\bar{\beta}h_{K}^{-1}\), \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ \(\beta=\bar{\beta}h_{K}^{-1}\), \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ \(\beta=\bar{\beta}h_{K}^{-1}\), \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\chi^{2p-2}\) & \(\bar{\beta}\bar{\gamma}_{M}^{-1}\chi^{2p-1}\) \\ \hline \end{tabular} \end{table} Table 4: Resulting scaling with cut size for the second sliver-cut function for the second-order problem. Table 5 provides a comprehensive summary of findings that follow from the scaling relations detailed in Tables 1 to 4. The following conclusions stand out: * The first column of Table 5 is fully colored red, expressing that the use of a consistent mass matrix without a form of mass scaling is not applicable for explicit immersed computation. A consistent mass matrix is unconventional for explicit analysis due to the implied cost of inversion, but in an immersed setting the largest eigenvalue may also become arbitrarily large for small cut elements, causing unfeasibly small critical time-step sizes. * The beige coloring of the first two rows of the second column in Table 5 indicates that lumping the mass matrix can mitigate the problematic cut-size dependent scaling of the maximum eigenvalue, but only when the polynomial order of the basis function is at least quadratic. This was first observed in [20]. * To enable the use of linear basis functions, a ghost mass must be added to ensure that the critical time-step sizes remain independent of the cut-size. This formulation corresponds to the last column in Table 5, which is the only column with green cells. * When a penalty method is used for the enforcement of Dirichlet constraints, the maximum eigenvalue scales linearly with the non-dimensionalized penalty parameter \(\bar{\beta}\), as is indicated by the beige cells in the second row of Table 5. 
This is a known issue [5], and this undesirable scaling cannot be fixed by adding a ghost-mass term, nor by raising the polynomial order. * A Nitsche formulation with local penalty parameter (i.e., without the ghost-stiffness term) does not yield a cut-size independent critical time-step size. The resulting scaling is reminiscent of the degenerative error bounds due to sliver cuts proven in [47]. * To adopt Nitsche's method in a stable manner, one requires both a ghost-mass term and a ghost-stiffness term. Both penalty terms must scale with the uncut-element size \(h_{K}\). To keep the maximum eigenvalue small, the ghost-mass penalty \(\bar{\gamma}_{M}\) must be of the same order of magnitude as the ghost-stiffness penalty \(\bar{\gamma}_{K}\), and the Nitsche penalty \(\bar{\beta}\) must be as small as possible. In contrast to a penalty method, using a small \(\bar{\beta}\) does not adversely impact solution quality, and hence we color this bottom-right box green. **Remark 3**: _The utilization of maximum regularity B-spline basis functions in the analysis reduces the number of cut functions that need to be considered. As demonstrated in the subfigures of the left column of Fig. 3, all basis functions are identical up to affine transformation. The adoption of lower-order continuous basis functions, such as the \(\mathcal{C}^{0}\)-continuous functions depicted in Figs. 3b, 3f and 3h, necessitates the examination of a more extensive collection of cut functions. In particular, as \(\chi\to+0\), the \(\mathcal{C}^{0}\)-continuous B-spline basis functions with small support locally approach lower-order polynomial functions, as is illustrated in Fig. 6. The lower-order polynomial functions are the critical cases in the first three rows of the middle column in Tables 1 and 4. For lower-order continuous B-splines, the polynomial order in those scaling relations should thus be replaced by \(k+1\), with \(k\) the order of continuity. This corroborates the conclusion from [20] that a sufficiently high regularity is required to mitigate the cut-size dependency of the critical time-step size._ **Remark 4**: _The scaling relations documented in Tables 3 and 4 are precisely those of Tables 1 and 2, with \(d=1\). This concurrence is anticipated, as the cut functions depicted in Figs. 5c and 5d are effectively one-dimensional. Thus, for an arbitrary \(d\)-dimensional domain, one only needs to determine the scaling relations for the \(d\)-dimensional cut functions (e.g., those in Figs. 5a and 5b), and the scaling relations for the lower-dimensional cut functions can then be deduced by successively replacing \(d\) by the positive integer values below \(d\)._ ## 4 Critical time-step analysis for a fourth-order problem While the wave equation studied in the previous section is a prototypical model equation for explicit analysis, its second-order nature limits the relevance of the conclusions from Section 3.2 when considering higher-order formulations, such as shell formulations.
To understand how the conclusions translate, we repeat the analysis of Section 3 in brief for the following fourth-order problem: \[\rho\frac{\partial^{2}}{\partial t^{2}}\phi+\Delta(\kappa\Delta \phi) =f \text{in }\Omega\times(0,T)\subset\mathbb{R}^{d}\times[0,T]\,, \tag{35a}\] \[\nabla(\kappa\Delta\phi)\cdot\mathbf{n} =q \text{on }\partial\Omega_{q}\times(0,T)\,,\] (35b) \[-\kappa\Delta\phi =m \text{on }\partial\Omega_{m}\times(0,T)\,,\] (35c) \[\nabla\phi\cdot\mathbf{n} =g \text{on }\partial\Omega_{N}\times(0,T)\,,\] (35d) \[\phi =\phi_{D} \text{on }\partial\Omega_{D}\times(0,T)\,,\] (35e) \[\phi =\phi_{0} \text{on }\Omega\times\{0\}\,,\] (35f) \[\frac{\partial}{\partial t}\phi =\dot{\phi}_{0} \text{on }\Omega\times\{0\}\,. \tag{35g}\] This equation describes the bending of an Euler-Bernoulli beam in one spatial dimension and is a shell-type analog in two dimensions [55; 56]. Equations (35d) and (35e) are essential boundary conditions, describing displacement and rotation at the edge, respectively. Figure 6: Cut basis functions for a quadratic \(\mathcal{C}^{0}\)-continuous B-spline. Equations (35b) and (35c) are natural conditions. A more direct link to shell formulations requires different forms of the natural boundary conditions, to represent the applied bending moment and shear force [56; 57], but the expressions of Eqs. (35b) and (35c) are applicable for arbitrary spatial dimension, making them more suitable for a general time-step stability analysis. Note that, due to the fourth-order parabolic nature of the PDE, two boundary conditions must be prescribed at any location on the boundary. ### Semi-discrete formulation The weak formulation for the fourth-order problem of Eq. (35) reads: \[\begin{split}&\text{For a.e. }t\in(0,T),\,\text{find}\,\phi\in H^{2}_{\phi_{D},g}( \Omega)\text{ and }\frac{\partial^{2}}{\partial t^{2}}\phi\in H^{-2}(\Omega)\text{ s.t. }\forall\,v\in H^{2}_{0,0}(\Omega):\\ &\begin{cases}\big{\langle}\frac{\partial^{2}}{\partial t^{2}} \phi,\rho v\big{\rangle}+\int\limits_{\Omega}\kappa\Delta\phi\,\Delta v\, \mathrm{d}\Omega=-\int\limits_{\partial\Omega_{q}}q\,v\,\mathrm{d}S-\int \limits_{\partial\Omega_{m}}m\,\nabla v\cdot\mathbf{n}\,\mathrm{d}S\,,\\ \phi\big{|}_{t=0}=\phi_{0}\,,\\ \frac{\partial}{\partial t}\phi\big{|}_{t=0}=\dot{\phi}_{0}\,, \end{cases}\end{split} \tag{36}\] with \(H^{2}_{\phi_{D},g}(\Omega)\) the \(H^{2}(\Omega)\) Sobolev space for which each member satisfies the trace equalities from Eq. (35d) and Eq. (35e), \(H^{-2}(\Omega)\) is the corresponding dual space, and \(\big{\langle}\cdot,\cdot\big{\rangle}\) denotes the pairing between them. If \(\frac{\partial^{2}}{\partial t^{2}}\phi\in L^{2}(\Omega)\), then \(\big{\langle}\frac{\partial^{2}}{\partial t^{2}}\phi,\rho v\big{\rangle}=\int \nolimits_{\Omega}\rho\frac{\partial^{2}}{\partial t^{2}}\phi\,v\,\mathrm{d}\Omega\). In an immersed setting, both essential constraints need to be enforced weakly. A semi-discrete formulation may again be written in the general form of: \[\underline{\breve{\mathrm{M}}}\,\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\underline {\hat{\phi}}+\underline{\breve{\mathrm{K}}}\,\underline{\hat{\phi}}= \underline{\breve{\mathrm{F}}}\,, \tag{37}\] where, as in the second-order case, the mass matrix may or may not be lumped and may or may not contain a ghost-penalty term. 
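Before detailing the weak-enforcement terms, the following minimal sketch (ours; an uncut, uniform one-dimensional discretization with assumed unit parameters) shows the assembly of the leading bending term of Eq. (36), \(\int\kappa\,N_{i}''\,N_{j}''\,\mathrm{d}x\), with cubic \(\mathcal{C}^{2}\)-continuous B-splines.

```python
# Minimal 1-D sketch (ours) of the bending stiffness of the fourth-order problem.
import numpy as np
from scipy.interpolate import BSpline

p, kappa = 3, 1.0
knots = np.concatenate(([0.0] * p, np.linspace(0.0, 1.0, 11), [1.0] * p))
nbasis = len(knots) - p - 1
d2 = [BSpline.basis_element(knots[i:i + p + 2], extrapolate=False).derivative(2)
      for i in range(nbasis)]

gp, gw = np.polynomial.legendre.leggauss(p + 1)   # Gauss points on [-1, 1]
K = np.zeros((nbasis, nbasis))
spans = np.unique(knots)
for a, b in zip(spans[:-1], spans[1:]):           # loop over knot spans
    x = 0.5 * (b - a) * gp + 0.5 * (a + b)
    w = 0.5 * (b - a) * gw
    vals = np.nan_to_num(np.array([f(x) for f in d2]))  # zero outside support
    K += kappa * (vals * w) @ vals.T

print(K.shape, np.allclose(K, K.T))               # (13, 13) True
```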
The stiffness matrix, in its most generic form, consists of the following terms: \[\begin{split}\big{[}\,\underline{\breve{\mathrm{K}}}\,\big{]}_{ij}&=\int\nolimits_{\Omega}\kappa\Delta N_{i}\Delta N_{j}\,\mathrm{d}\Omega+\int\nolimits_{\Gamma_{g}}\kappa\gamma_{K}\,\llbracket\partial_{n}^{k+1}N_{i}\rrbracket\llbracket\partial_{n}^{k+1}N_{j}\rrbracket\,\mathrm{d}S\\ &\quad+\int\nolimits_{\partial\Omega_{D}}\big{(}\nabla(\kappa\Delta N_{i})\cdot\mathbf{n}\,N_{j}+\nabla(\kappa\Delta N_{j})\cdot\mathbf{n}\,N_{i}\big{)}\,\mathrm{d}S+\int\nolimits_{\partial\Omega_{D}}\kappa\beta_{\phi}\,N_{i}N_{j}\,\mathrm{d}S\\ &\quad-\int\nolimits_{\partial\Omega_{N}}\big{(}\kappa\Delta N_{i}\,\nabla N_{j}\cdot\mathbf{n}+\kappa\Delta N_{j}\,\nabla N_{i}\cdot\mathbf{n}\big{)}\,\mathrm{d}S+\int\nolimits_{\partial\Omega_{N}}\kappa\beta_{g}\,(\nabla N_{i}\cdot\mathbf{n})(\nabla N_{j}\cdot\mathbf{n})\,\mathrm{d}S\,,\end{split} \tag{38}\] where the consistency and symmetry terms on \(\partial\Omega_{D}\) and \(\partial\Omega_{N}\) are omitted for the pure penalty formulations, and the penalty terms with \(\beta_{\phi}\) and \(\beta_{g}\) weakly enforce the essential conditions of Eqs. (35e) and (35d), respectively. For the penalty formulation, the following parameter scalings are applicable: \[\beta_{\phi}\big{|}_{K} =\bar{\beta}_{\phi}h_{K}^{-3}\,, \tag{39a}\] \[\beta_{g}\big{|}_{K} =\bar{\beta}_{g}h_{K}^{-1}\,. \tag{39b}\] For the Nitsche formulation without ghost-penalty stabilization, the positive definiteness (coercivity) requirement necessitates a minimum penalty value. We make use of: \[\beta_{\phi}\big{|}_{K} =3\sup_{v^{h}\in V^{h}}\frac{\|\nabla(\Delta v^{h})\cdot\mathbf{n}\|_{\partial\Omega_{D}\cap K}^{2}}{\|\Delta v^{h}\|_{\Omega\cap K}^{2}}=:\bar{\beta}_{\phi}\big{|}_{K}\left(\chi\,h_{K}\right)^{-3}, \tag{40a}\] \[\beta_{g}\big{|}_{K} =3\sup_{v^{h}\in V^{h}}\frac{\|\Delta v^{h}\|_{\partial\Omega_{N}\cap K}^{2}}{\|\Delta v^{h}\|_{\Omega\cap K}^{2}}=:\bar{\beta}_{g}\big{|}_{K}\left(\chi\,h_{K}\right)^{-1}. \tag{40b}\] When the stiffness matrix includes ghost-penalty stabilization, then we use the following parameter scalings, which satisfy the dimensional consistency requirement: \[\gamma_{K}\big{|}_{K} =\bar{\gamma}_{K}\,h_{K}^{2p-3}\,, \tag{41a}\] \[\beta_{\phi}\big{|}_{K} =\bar{\beta}_{\phi}h_{K}^{-3}\,, \tag{41b}\] \[\beta_{g}\big{|}_{K} =\bar{\beta}_{g}h_{K}^{-1}\,. \tag{41c}\] ### Analysis of the critical time-step size By following the methodology outlined in Section 3.2, we derive the scaling relations for the four cut functions depicted in Fig. 5. The results for the various formulations are collected in Tables 6 and 7 for the two corner-cut functions, and in Tables 8 and 9 for the two sliver-cut functions. The results for all four cut cases are combined and summarized in Table 10. The main conclusions drawn for the second-order equation carry over to this fourth-order problem: * The use of a consistent mass matrix without mass scaling results in an undesirable cut-size dependent critical time-step size for all formulations. * Lumping the mass matrix is insufficient to yield a cut-size independent scheme for spline basis functions of order \(p=2\) or even \(p=3\). To ensure a cut-size independent critical time-step size, either \(p=4\) basis functions need to be used or ghost mass needs to be added.
* For both penalty formulations, the maximum eigenvalue scales linearly with the non-dimensionalized penalty parameter \(\bar{\beta}\), irrespective of the order of the basis functions and/or the use of mass-scaling. * Both Nitsche formulations with local penalty parameters (i.e., without a ghost-stiffness term) are inapplicable. * Both Nitsche formulations require a ghost-mass term as well as a ghost-stiffness term to ensure a cut-size independent critical time-step size. The only difference compared to the results of the second-order problem, is that, in the case of mass lumping without mass scaling, this fourth-order problem requires at least quartic basis functions. For the analogous formulation for the second-order problem, quadratic basis functions were sufficient. This appears to hint at the more general condition \(p\geq s\) (or, following Remark 3, \(k+1\geq s\)), with \(s\) the order of the spatial partial differential operator. \begin{table} \begin{tabular}{|l|c|c c c|} \hline & & Consistent mass & Lumped mass & Consistent/lumped mass \\ & & (Eq. (13a)) & (Eq. (14)) & and ghost mass (Eq. (28)) \\ & & & & with \(\gamma_{M}=\bar{\gamma}_{M}h^{2p+1}\) \\ \hline & \(\diagdown\mathcal{O}(M)\) & \(\chi^{2pd+d}\) & \(\chi^{pd+d}\) & \(\bar{\gamma}_{M}\chi^{0}\) \\ \hline Neumann & \(\chi^{2pd+d-4}\) & \(\chi^{*}\) & \(\chi^{pd-4}\) & \(\bar{\gamma}_{M}^{-1}\chi^{2pd+d-4}\) \\ Penalty on \(\phi\) & \(\beta_{\phi}=\bar{\beta}_{\phi}h_{K}^{-3}\) & \(\chi^{2pd+d-4}\) & \(\chi^{pd-4}\) & \(\bar{\gamma}_{M}^{-1}\chi^{2pd+d-4}\) \\ Nitsche on \(\phi\) & \(\beta_{\phi}=\bar{\beta}_{\phi}(\chi h_{K})^{-3}\) & \(\bar{\beta}_{\phi}\chi^{2pd+d-4}\) & \(\bar{\beta}_{\phi}\chi^{pd-4}\) & \(\bar{\beta}_{\phi}\bar{\gamma}_{M}^{1}\chi^{2pd+d-4}\) \\ Penalty on \(\nabla\phi\cdot\mathbf{n}\) & \(\beta_{g}=\bar{\beta}_{g}h_{K}^{-1}\) & \(\chi^{2pd+d-4}\) & \(\chi^{pd-4}\) & \(\bar{\gamma}_{M}^{-1}\chi^{2pd+d-4}\) \\ Nitsche on \(\nabla\phi\cdot\mathbf{n}\) & \(\beta_{g}=\bar{\beta}_{g}(\chi h_{K})^{-1}\) & \(\bar{\beta}_{g}\chi^{2pd+d-4}\) & \(\bar{\beta}_{g}\chi^{pd-4}\) & \(\bar{\beta}_{g}\chi^{1}\chi^{2pd+d-4}\) \\ Nitsche on \(\phi\), \(\beta_{\phi}=\bar{\beta}_{\phi}h_{K}^{-3}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\bar{\gamma}_{K}\chi^{0}\) \\ on \(\nabla\phi\cdot\mathbf{n}\), \(\beta_{g}=\bar{\beta}_{g}(\chi h_{K})^{-1}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\bar{\gamma}_{K}\chi^{0}\) & \(\bar{\gamma}_{K}\chi^{0}\) \\ and ghost \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & & & & \\ \hline \end{tabular} * : Unstable for \(d=1\) and \(p=2\). * : Unstable for \(d=1\) and \(p=2\), and may dominate for \(d=2\) and \(p=2\). \end{table} Table 6: Resulting scaling with cut size for the first corner-cut function for the fourth-order problem. \begin{table} \begin{tabular}{|l|c|c c c|} \hline & & Consistent mass & Lumped mass & Consistent/lumped mass \\ & & (Eq. (13a)) & (Eq. (14)) & and ghost mass (Eq. 
(28)) \\ & & & & with \(\gamma_{M}=\bar{\gamma}_{M}h^{2p+1}\) \\ \hline & \(\diagdown\mathcal{O}(M)\) & \(\chi^{0}\) & \(\chi^{0}\) & \(\chi^{0}\) \\ \hline Neumann & \(\chi^{0}\) & \(\chi^{0}\) & \(\chi^{0}\) & \(\chi^{0}\) \\ Penalty on \(\phi\) & \(\beta_{\phi}=\bar{\beta}_{\phi}h_{K}^{-3}\) & \(\bar{\beta}_{\phi}\chi^{0}\) & \(\chi^{0}\) & \(\chi^{0}\) \\ Nitsche on \(\phi\) & \(\beta_{\phi}=\bar{\beta}_{\phi}h_{K}^{-3}\) & \(\bar{\beta}_{\phi}\chi^{0}\) & \(\chi^{0}\) & \(\chi^{0}\) \\ Nitsche on \(\phi\) & \(\beta_{\phi}=\bar{\beta}_{\phi}(\chi h_{K})^{-3}\) & \(\bar{\beta}_{\phi}\chi^{d-4}\) & \(\bar{\beta}_{\phi}\chi^{0}\) & \(\bar{\beta}_{\phi}\chi^{0}\) \\ Penalty on \(\nabla\phi\cdot\mathbf{n}\) & \(\beta_{g}=\bar{\beta}(\chi h_{K})^{-1}\) & \(\bar{\beta}_{g}\chi^{0}\) & \(\bar{\beta}_{g}\chi^{0}\) & \(\bar{\beta}_{g}\chi^{0}\) \\ \(\beta_{g}=\bar{\beta}_{g}h_{K}^{-1}\) & \(\bar{\beta}_{g}\chi^{0}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) \\ Nitsche on \(\phi\), \(\beta_{\phi}=\bar{\beta}h_{K}^{-3}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) \\ \(\beta_{g}=\bar{\beta}_{g}(\chi h_{K})^{-1}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) & \(\bar{\beta}_{g}\chi^{d-2}\) \\ \(\beta_{g}=\bar{\beta}_{g}(\chi h_ \begin{table} \begin{tabular}{|l|c|c c c|} \hline & & Consistent mass (Eq. (13a)) & Lumped mass (Eq. (14)) & Consistent/lumped mass and ghost mass (Eq. (28)) \\ & & & & with \(\gamma_{M}=\bar{\gamma}_{M}h^{2p+1}\) \\ \hline & \(\backslash\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ## 5 Numerical experiments In this section, we present the results of our numerical experiments, which are designed to accomplish two objectives. First, in Section 5.1, we verify the derived scaling relations and conclusions from Sections 3 and 4. Secondly, we assess whether the presence of the ghost mass negatively impacts the error behavior and convergence characteristics of the explicit scheme. To this end, we examine a linear vibrating drum in Section 5.2 and a linear Kirchhoff-Love shell in Section 5.3. ### Verification of time-step size scaling To numerically verify the scaling relations collected in Tables 1 to 4 and 6 to 9, we consider the two-dimensional domain illustrated in Fig. 7. 
The figure shows the mesh that is used for all subsequent computations, with the ghost faces highlighted. The domain cut-out involves straight edges, curved edges and positive and negative corner cuts, such that a wide variety of cut configurations may occur. To generate different cases, we randomly displace the cut-out within the domain by a distance between \(-h_{K}\) and \(h_{K}\) in the \(x\) and \(y\) directions. For each new domain, we compute the critical time-step, as defined in Eq. (1), for the different formulations and polynomial orders. All computations are performed with row-sum mass-lumped mass matrices (apart from the ghost-mass term, as addressed in Remark 2). #### 5.1.1 Neumann boundaries Figures 8 and 9 present the results for the case where the entire cut-boundary is a Neumann boundary, for the second- and fourth-order equation, respectively. Figures 8a and 9a involve basis functions with the lowest permitted polynomial order (\(p=1\), respectively \(p=2\)), and in Figs. 8b and 9b this order is incremented by one. The blue dashed lines in these figures correspond to the case without a cut-out, representing the optimal achievable \begin{table} \begin{tabular}{|l|c c c|} \hline & Consistent mass (Eq. (13a)) & Lumped mass (Eq. (14)) & Consistent/lumped mass and ghost mass (Eq. (28)) \\ \hline Neumann & & Unstable for \(p\in\{2,3\}\) & \\ Penalty on \(\phi\) & & Unstable for \(p\in\{2,3\}\) & \\ \(\beta_{\phi}=\bar{\beta}_{\phi}h_{K}^{-3}\) & & Scales with \(\bar{\beta}\) & Scales with \(\bar{\beta}_{\phi}\) \\ Nistche on \(\phi\) & & & \\ \(\beta_{\phi}=\bar{\beta}_{\phi}(\lambda h_{K})^{-3}\) & & & \\ Penalty on \(\nabla\phi\cdot\mathbf{n}\) & & Unstable for \(p\in\{2,3\}\) & \\ \(\beta_{g}=\bar{\beta}_{g}h_{K}^{-1}\) & & Scales with \(\beta\) & Scales with \(\bar{\beta}_{g}\) \\ Nistche on \(\nabla\phi\cdot\mathbf{n}\) & & & \\ \(\beta_{g}=\bar{\beta}_{g}(\chi h_{K})^{-1}\) & & & \\ Nistche on \(\phi\), \(\beta_{\phi}=\bar{\beta}h_{K}^{-3}\) & & & \\ on \(\nabla\phi\cdot\mathbf{n}\), \(\beta=\bar{\beta}_{g}(\chi h_{K})^{-1}\) & & & \\ and ghost \(\gamma_{K}=\bar{\gamma}_{K}h^{2p-1}\) & & & \\ \hline \end{tabular} \end{table} Table 10: Overview of the stability characteristics for the fourth-order problem. Worst-case scaling of the cut scenarios considered in Tables 6, 8 and 9. value. We note that these 'optimal' values do still suffer from the usual boundary outliers due to the repeating knots at the exterior boundary [58, 52, 20], but believe that this represents the relevant comparison. The black dotted lines indicate the anticipated scaling relations derived in the preceding analysis sections. For a row-sum lumped mass matrix without ghost mass, the scaling of the maximum eigenvalue is predicted to be \(\chi^{p-s}\), according to Tables 3 and 8, where \(s\in\{2,4\}\) is the order of the spatial differential operator. The critical time-step size relates to the maximum eigenvalue according to \(\Delta t_{\text{crit}}\propto(\lambda_{\text{max}}^{h})^{-\frac{1}{2}}\), and should thus scale as \(\chi^{\frac{1}{2}(s-p)}\). As the black dotted lines indicate, this is indeed the trend that the bottom green diamonds follow. Not all data points follow this trend, since the smallest \(\chi\) in the domain (on the horizontal axis) may correspond to a corner-cut function instead, which, according to Tables 1 and 6, does not cause adverse scaling. Figure 8: Neumann formulation of the second-order equation: first row in Tables 1 to 5. 
Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. Figure 7: Domain, cut-out, mesh and ghost faces. According to the results presented in Tables 5 and 10, the cut-size dependency of the critical time-step size can be mitigated by raising the polynomial order to \(p\geq s\), or by incorporating the ghost-mass term. The latter approach is confirmed to be effective in all plots, as indicated by the red markers, and the former approach is demonstrated in Fig. 8b. For unfavourably cut elements, both methods have the potential to increase the critical time-step size by orders of magnitude. For the second-order equation, quadratic basis functions are sufficient to achieve this effect, but quartic basis functions are required to eliminate the scaling for the fourth-order (shell-type) equation. In the event that the use of such high-order basis functions is not feasible, the ghost-mass term serves as an alternative solution. #### 5.1.2 Penalty formulations If the cut-boundary is a Dirichlet boundary instead, and a penalty method is used for the enforcement of the constraints, then, according to Tables 4 and 9, the maximum eigenvalue scales with \(\bar{\beta}\). This is confirmed in Figs. 10 and 11 for basis functions with different polynomial orders for the second- and fourth-order problems, respectively. For \(p\leq s\) (\(s\in\{2,4\}\) the order of the spatial differential operator) the first sliver-cut function (Tables 3 and 8) also introduces a scaling of the critical time-step with \(\chi^{\frac{1}{2}(s-p)}\) in the same manner as for the Neumann boundary formulation. Both types of scaling are observed in the various subfigures: for \(p<s\) and small \(\bar{\beta}\) the markers follow the black dotted line, but as \(\bar{\beta}\) is increased the second sliver-cut functions produce the highest eigenvalues and the dependency on \(\chi\) is no longer dominant in the considered collection of cut cases. We notice that the data points in Fig. 11c are more scattered than those in the other figures, which can be attributed to the two types of scaling of the first sliver-cut function identified in Table 8. Additional dash-dotted lines are added to highlight the second, \((\bar{\beta}_{g}^{-\frac{1}{2}}\chi^{\frac{1}{2}(3-p)})\)-order, scaling. Figure 9: Neumann formulation of the fourth-order equation: first row in Tables 6 to 10. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. Figure 10: Penalty formulations without ghost mass for the second-order problem: second row and second column in Tables 1 to 5. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. Figure 11: Penalty formulations without ghost mass for the fourth-order problem: second and fourth row and second column in Tables 6 to 10. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. Addition of the ghost-mass term only suppresses the \(\chi^{\frac{1}{2}(s-p)}\)-scaling, as it did for the pure Neumann formulation. This means that significant critical time-step size improvements can be achieved for small \(\bar{\beta}\) values, or when \(p<s\) (from Fig. 10a to Fig. 12a, from Figs. 11a and 11c to Fig. 13a, and from Figs. 11b and 11d to Fig. 13b), but not for larger penalty values, or when \(p\geq s\).
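The linear growth of the maximum eigenvalue with \(\bar{\beta}\) can already be reproduced on the simplest uncut mesh. The following sketch (ours; a uniform 1-D linear-FE discretization with assumed unit parameters, not the immersed setup of Fig. 7) penalizes a Dirichlet condition at \(x=0\) and sweeps the penalty factor.

```python
# Sketch (ours) of the lambda_max ~ beta_bar scaling for a boundary penalty.
import numpy as np
from scipy.linalg import eigh

n, kappa, rho = 50, 1.0, 1.0
h = 1.0 / n
K = (kappa / h) * (2.0 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1))
K[0, 0] = K[-1, -1] = kappa / h
M = rho * h * np.ones(n + 1)
M[0] = M[-1] = 0.5 * rho * h          # row-sum lumped mass (diagonal)

for beta_bar in [1e0, 1e2, 1e4]:
    Kb = K.copy()
    Kb[0, 0] += kappa * beta_bar / h  # boundary penalty, cf. Eqs. (17a) and (18)
    lam = eigh(Kb, np.diag(M), eigvals_only=True)[-1]
    print(f"beta_bar = {beta_bar:.0e}:  lambda_max = {lam:.3e},  "
          f"dt_crit = {2.0 / np.sqrt(lam):.3e}")
```

Past a threshold, doubling \(\bar{\beta}\) roughly doubles \(\lambda_{\text{max}}\), halving \(\Delta t_{\text{crit}}\) by a factor \(\sqrt{2}\), which mirrors the trend in the immersed experiments.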
#### 5.1.3 Nitsche formulations The results of the various Nitsche formulations are presented in Figs. 14 and 15 for the second- and fourth-order problem, respectively. Figures 14a, 15a and 15c correspond to Nitsche formulations with penalty parameters that are chosen per the inverse estimates of Eqs. (21), (40a) and (40b), which thus involve the local element size measure \(h_{c}=\chi\,h_{K}\). The four functions on cut elements induce different scaling orders. According to Tables 1 and 6, the first corner-cut function causes a \((\chi^{\frac{1}{2}(s-pd)})\)-order scaling. For the considered combinations of \(s\), \(p\) and \(d\), this reduces to a scaling with \(\chi^{0}\) for all cases. This zeroth-order scaling is observed in Figs. 14a and 15c, but not in Fig. 15a. For the Nitsche formulation used in Fig. 15a, the scaling induced by the _second_ corner-cut function dominates, as found in Table 7. Furthermore, the first sliver-cut function induces a scaling of order \(\chi^{\frac{1}{2}(s-p)}\) according to Tables 3 and 8, and the second sliver-cut function induces a scaling of order \(\chi^{\frac{1}{2}}\) (Figs. 14a and 15c) or \(\chi^{\frac{3}{2}}\) (Fig. 15a) according to Tables 4 and 9. These latter scalings are independent of \(p\), and therefore increasing the polynomial order will not improve the results. All predicted scaling trends are plotted in each of the three subfigures (sometimes overlapping), confirming that they are indeed bounds of the obtained critical time-step size. Figure 12: Penalty formulations with ghost mass for the second-order problem: second row and third column in Tables 1 to 5. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. Figure 13: Penalty formulations for both \(\phi\) and \(\nabla\phi\cdot\mathbf{n}\) with ghost mass for the fourth-order problem: second and fourth row and third column in Tables 6 to 10. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. In Figs. 14b, 15b and 15d, the results are plotted for the Nitsche formulations with penalty parameters that scale with the inverse of \(h_{K}\). These formulations incorporate a ghost-penalty term in the stiffness matrix to guarantee its positive definiteness. According to Tables 1 and 6, a function on a corner-cut element induces a detrimental cut-size dependency of the critical time-step size of order \(\chi^{\frac{1}{2}(pd+d)}\). Similarly, the first sliver-cut function causes a scaling of the order \(\chi^{\frac{1}{2}(p+1)}\), as stated in Tables 3 and 8. Both these trends are plotted in each of the respective subfigures. The positive exponent \(p\) in these relations implies that this scaling will only worsen with increasing polynomial order. However, the addition of the ghost-mass term eliminates the cut-size dependency altogether. The highest eigenvalue does still scale with \(\bar{\beta}\chi^{0}\). In Fig. 15b we observe that the red markers exhibit a smaller critical time-step size than the maximally achievable value indicated by the dashed-blue line. This indicates that the penalty value required to stabilize the Nitsche formulation is not sufficiently small to avoid affecting the highest eigenvalues.
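The convergence studies that follow advance the semi-discrete system with a Newmark-type central-difference scheme. For reference, a minimal form of such an update is sketched below (ours; the diagonal-mass assumption holds for the lumped case without ghost mass).

```python
# Minimal central-difference update (our sketch) for M a + K u = F with a
# diagonal (lumped) mass matrix; stable only for dt <= 2 / sqrt(lambda_max).
import numpy as np

def central_difference(K, M_diag, F, u0, v0, dt, nsteps):
    a0 = (F - K @ u0) / M_diag
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious step u^{-1}
    u = u0.copy()
    for _ in range(nsteps):
        a = (F - K @ u) / M_diag               # trivial solve: M is diagonal
        u_prev, u = u, 2.0 * u - u_prev + dt**2 * a
    return u
```

With ghost mass the mass matrix is no longer diagonal, and the division by `M_diag` would be replaced by a (reusable) factorized solve, as discussed in Remark 2.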
### Convergence of a linear pre-stressed membrane The results of the analysis on the scaling of the critical time-step size with the cut-element size indicate that addition of ghost mass can, in certain cases, significantly increase the critical time-step size. In particular, it enables a Nitsche formulation with a cut-size independent critical time-step size. Of course, the added ghost mass should not come at the cost of a severe accuracy reduction. In this section, we study the impact of the added ghost mass on the solution error for a linear pre-stressed membrane, i.e., for the second-order wave equation of Section 3. We consider the same geometry description as before, depicted in Fig. 7, and take as an exact solution a simple standing sine wave: \[\phi_{\rm exact}(t,x,y)=\cos(\sqrt{2}\pi t)\sin(\pi x)\sin(\pi y)\,, \tag{42}\] from which we infer the required initial and boundary conditions. In the following, we compute one full period of this oscillation, for which we use a Newmark-type central difference method for time integration. This limits the optimal convergence rates to second order. Figure 15: Nitsche formulations for the fourth-order problem, for \(p=2\): third, fourth and fifth row and second and third column in Tables 6 to 10. Critical time-step size dependency on the minimal element size fraction for 100 perturbations of the domain of Fig. 7. First, we investigate the scenario where the entire cut-out represents a Neumann boundary. According to Table 5, and as verified in Fig. 8a, the addition of ghost mass only affects the critical time-step size for basis functions of order \(p=1\). Figure 16 shows the convergence curves of the relative \(H_{0}^{1}\) and \(L^{2}\)-errors for the formulations with and without a ghost-mass term. The dashed-blue lines indicate the optimal orders of convergence. The numbers attached to the final markers in the \(L^{2}\)-error graphs denote the total number of time steps required to carry out the corresponding simulation. For this particular example, we observe a factor 3 reduction of the required number of time steps on the most refined grid. Despite this reduction, we observe that the addition of the ghost-mass term does not compromise the accuracy of the approximation. Next, we consider penalty methods with different penalty factors, without ghost mass. The error convergence curves for \(p=1\) and \(p=2\) are shown in Fig. 17. For both polynomial orders, the smallest chosen penalty parameter only marginally impacts the critical time-step size, as may be ascertained from Fig. 10. As a result, the \(p=1\) simulation on the most refined grid requires \(\sim 5\%\) more time steps than the corresponding Neumann case plotted in Fig. 16. At the same time, the error has increased significantly. This is in part due to the variational inconsistency of the penalty formulation, which causes a loss of the optimal (spatial) convergence rate. The errors may be reduced by several orders of magnitude by increasing the penalty parameter. However, an increase of the penalty parameter comes at the cost of a reduced critical time-step size. In particular, we observe a factor 5 (for \(p=1\)) and 10 (for \(p=2\)) increase of the required total number of time steps for a factor 100 increase in \(\bar{\beta}\). At the highest penalty level, the optimal orders of convergence appear to be achieved in some of the subfigures, but these convergence rates drop past a certain mesh refinement level.
Figure 16: Error convergence for a vibrating pre-stressed membrane on the domain of Fig. 7 after one full period of oscillation, for the Neumann case with and without ghost mass, with \(p=1\). Finally, we compare the performance of a penalty method with that of a Nitsche formulation. To ensure an equitable comparison, both formulations are augmented with ghost mass and make use of the same penalty parameter values. As a result, the simulation with Nitsche's method and the simulation with the penalty method require a comparable number of time steps. The relative \(H_{0}^{1}\) and \(L^{2}\)-errors are plotted in Fig. 18, for \(p=1\) and \(p=2\). For reference, the results of the penalty methods without ghost mass from Fig. 17 are overlaid. As anticipated, the addition of the ghost mass to the penalty method for these moderate penalty parameter values reduces the required number of time steps significantly for \(p=1\) and only slightly for \(p=2\). In both cases, the ghost mass only marginally affects accuracy. A jump in solution quality is achieved by switching to the Nitsche formulation. Due to the variational consistency of the formulation, both the \(H_{0}^{1}\) and \(L^{2}\)-errors converge optimally, yielding error reductions by orders of magnitude on the finer meshes. For \(p=1\), the error values even closely resemble those of the pure Neumann case in Fig. 16. Figure 17: Error convergence for a vibrating pre-stressed membrane on the domain of Fig. 7 after one full period of oscillation, for penalty methods with different penalty factors. ### Transient response of a linear Kirchhoff-Love shell To demonstrate the practical applications of ghost mass and Nitsche's method, we examine their usage in a transient simulation of a Kirchhoff-Love shell [59; 7]. As a case study, we consider a pressure shock-wave propagating through a pipe-segment with pinned support on both ends. Figure 19a shows the geometry, the material parameters and the load-function. The shock-wave travels through the entire pipe-segment in \(0.02\) seconds. Figure 19b shows the resulting displacement field of a reference computation, computed on a \(30\times 30\) grid of \(p=2\) polynomial B-splines. We then create a rectangular cut-out with corner fillets at the center of the cylinder, as depicted in Fig. 20, and treat the corresponding interior edges as immersed boundaries (or, trimmed patches). To repeat the simulation of the traveling shock-wave, we use the reference solution depicted in Fig. 19b as the manufactured solution for prescribing boundary conditions at these internal edges. Specifically, we extract the in-plane and out-of-plane
In [60], \(\beta_{1}\) is referred to as \(C^{S}_{\mathrm{pen},1}\), \(\beta_{2}\) as \(C^{S}_{\mathrm{pen},3}\), and \(\beta_{3}\) as \(C^{S}_{\mathrm{pen},4}\). Table 11 presents the critical time-step sizes for the various formulation for the example of Fig. 20, with a background mesh consisting of \(30\times 30\) quadratic B-spline elements, and with the penalty parameters chosen as \(\beta_{1}=50\), \(\beta_{2}=15\), and \(\beta_{3}=2.5\). The ghost penalty parameters for the contributions to the stiffness matrix (both for the in-plane displacement and the out-of-plane displacement field) and to the mass matrix are all set to \(0.1\). At these penalty values, the smallest eigenvalue of the system of equations arising from Nitsche's method is positive. As the critical time-step sizes in the table show, the use of ghost mass for the penalty method increases the critical time-step size by more than \(15\) times. With Figure 19: Kirchhoff-Love shell model problem. Figure 20: Immersed domain and mesh of \(30\times 30\) elements, with filled rectangular cut-out. Indicated points are measurement locations referenced in Fig. 23. the Nitsche formulation, we retrieve a critical time-step size equal to that of the uncut case, which is the optimal value. The traveling shock-wave computations are executed at the respective critical time-step size of each formulation. The evolution of the error over time is captured by the following time-dependent Bochner norm: \[\|\boldsymbol{u}-\boldsymbol{u}^{h}\|_{\Omega\times(0,t)}=\frac{1}{t}\int \limits_{0}^{t}\|\boldsymbol{u}(\tau,\cdot)-\boldsymbol{u}^{h}(\tau,\cdot)\|_ {L^{2}(\Omega)}\,\mathrm{d}\tau\,. \tag{43}\] The progression of this error is plotted in Fig. 21 for both the penalty formulation and the Nitsche formulation. The adoption of the Nitsche formulation results in a reduction of the error by well over an order of magnitude. The manifestation of the error is depicted in Fig. 22 for time \(t=0.045\). The deviation from the expected physical response is significantly more pronounced for the penalty formulation than for the Nitsche formulation (also note the change in colorbar in this regard). In particular, the error distribution in Fig. 22a indicates that penalty enforcement of the boundary conditions not only locally impacts the solution, but in fact disturbs the complete solution field. In contrast, the far-field impact of the Nitsche-based weak boundary condition enforcement is almost negligible, owing to its variational consistency. It can also be inferred from the error distribution that the axisymmetric nature of the true solution, as depicted in Fig. 19b, is lost in both finite element approximations. This loss is quantitatively assessed in Figure 23, which plots the displacement magnitudes at \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & Uncut & Penalty & Penalty formulation, & Nitsche formulation, \\ & & formulation & with ghost mass & with ghost mass \\ \hline \hline \(\Delta t_{\text{crit }}\)[ms] & 0.475 & 0.0311 & 0.393 & 0.475 \\ \hline \end{tabular} \end{table} Table 11: Critical time-step sizes for the different Kirchhoff-Love shell simulations. Figure 21: Evolution of the error in the Bochner norm for the penalty and Nitsche formulations of the Kirchhoff-Love shell. the three locations along the span of the cylinder indicated in Fig. 20. The results for the penalty method are shown in Fig. 23, while those for the Nitsche formulation are shown in Fig. 23. 
For the penalty method, the displacement magnitudes at the three points deviate increasingly from the reference solution, and also diverge with respect to each other. This erroneous behavior is significantly suppressed when the Nitsche formulation is adopted, as evidenced by the nearly overlapping displacement curves for all points. Figure 22: Magnitude of the displacement error in mm, at \(t=0.045\). Figure 23: Displacement magnitude over time at the three point locations indicated in Fig. 20. ## 6 Conclusion and outlook Immersogeometric explicit analysis offers a powerful framework for streamlining the design-to-analysis pipeline for crash-test type simulations. To ensure a robust and reliable pipeline, it is imperative that the critical time-step size of the explicit time-stepping scheme is not affected by the size of the cut (or "trimmed") elements. In this article, we have studied the dependency of the critical time-step size on the cut-size for different types of boundary conditions and different methods of enforcement. The formulations that we investigated include a pure Neumann problem, penalty enforcement of Dirichlet constraints, and Nitsche's method for enforcement of Dirichlet constraints. To ensure a positive definite stiffness matrix when Nitsche's method is used, we considered a formulation where the Nitsche penalty parameter is determined based on an element-local eigenvalue problem, leading to a cut-size dependent penalty parameter, and a formulation with an additional ghost-penalty term, which then permits a cut-size independent penalty parameter. For each formulation, we considered a consistent mass matrix and a row-sum lumped mass matrix, both with and without additional ghost-penalty based mass scaling ("ghost mass"). We have found that a formulation with a consistent mass matrix, without any form of mass scaling, always suffers from adverse scaling of the critical time-step size with element cut-size. As was first observed in [20], this problematic scaling is mitigated by row-sum mass lumping, but only when the polynomial order of the maximum regularity splines is sufficiently high. Our analysis confirms this observation: for a second-order and a fourth-order equation we show that mass lumping alone is insufficient when the order of the basis functions is lower than the order of the spatial differential operator. To enable 'lower-order' discretization (i.e., up to cubics for shell-type equations), an additional mass scaling is required. The cut-size dependency vanishes for our proposed addition of ghost mass. Our analysis also shows that penalty enforcement of Dirichlet conditions suffers from the drawback that the critical time-step size scales with the penalty parameter after a threshold value, confirming the observations in [5, 20]. While this is also the case for Nitsche's method for enforcement of Dirichlet conditions, the penalty factors required to achieve satisfactory results are generally lower. Out of the different Nitsche formulations that we considered, the only formulation for which the critical time-step size is independent of the cut-element size is one with ghost stiffness and ghost mass. One favorable property of the addition of ghost mass to the problematically cut elements is that it does not suffer from the severe variational inconsistency issues that plague more conventional mass-scaling methods.
With numerical experiments, we have demonstrated the efficacy of ghost mass, as it can be utilized without incurring negative impacts on solution quality, despite a potentially substantial increase in the critical time-step size, sometimes by orders of magnitude. For a linear wave equation and a linear Kirchhoff-Love shell model, we have shown that the enforcement of Dirichlet conditions with a variationally consistent Nitsche method, as opposed to a penalty method, may lead to reductions of the solution error by orders of magnitude at the same critical time-step size. An open research question pertains to the efficient (approximate) inversion of the mass matrix when ghost mass is added to the formulation. Suggestions for developing such a technique have been made in this work. **Acknowledgements:** S.K.F. Stoter gratefully acknowledges financial support through the Industrial Partnership Program _Fundamental Fluid Dynamics Challenges in Inkjet Printing_ (_FIP_), a joint research program of Canon Production Printing, Eindhoven University of Technology, University of Twente, and the Netherlands Organization for Scientific Research (NWO). C.V. Verhoosel and S.C. Divi acknowledge the partial support of the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017578 (SIMCor). M. Larson and E.H. van Brummelen gratefully acknowledge the insightful discussions at the special session organized by Prof. Trond Kvamsdal at IGA 2022 in Banff. All simulations have been performed using the open source software package Nutils [61].
2306.03950
MISGENDERED: Limits of Large Language Models in Understanding Pronouns
Content Warning: This paper contains examples of misgendering and erasure that could be offensive and potentially triggering. Gender bias in language technologies has been widely studied, but research has mostly been restricted to a binary paradigm of gender. It is essential also to consider non-binary gender identities, as excluding them can cause further harm to an already marginalized group. In this paper, we comprehensively evaluate popular language models for their ability to correctly use English gender-neutral pronouns (e.g., singular they, them) and neo-pronouns (e.g., ze, xe, thon) that are used by individuals whose gender identity is not represented by binary pronouns. We introduce MISGENDERED, a framework for evaluating large language models' ability to correctly use preferred pronouns, consisting of (i) instances declaring an individual's pronoun, followed by a sentence with a missing pronoun, and (ii) an experimental setup for evaluating masked and auto-regressive language models using a unified method. When prompted out-of-the-box, language models perform poorly at correctly predicting neo-pronouns (averaging 7.7% accuracy) and gender-neutral pronouns (averaging 34.2% accuracy). This inability to generalize results from a lack of representation of non-binary pronouns in training data and memorized associations. Few-shot adaptation with explicit examples in the prompt improves performance for neo-pronouns, but only to 64.7% even with 20 shots. We release the full dataset, code, and demo at https://tamannahossainkay.github.io/misgendered/
Tamanna Hossain, Sunipa Dev, Sameer Singh
2023-06-06T18:27:52Z
http://arxiv.org/abs/2306.03950v2
# MISGENDERED: Limits of Large Language Models in Understanding Pronouns ###### Abstract _Content Warning:_ This paper contains examples of misgendering and erasure that could be offensive and potentially triggering. Gender bias in language technologies has been widely studied, but research has mostly been restricted to a binary paradigm of gender. It is essential also to consider non-binary gender identities, as excluding them can cause further harm to an already marginalized group. In this paper, we comprehensively evaluate popular language models for their ability to correctly use English gender-neutral pronouns (_e.g., singular they, them_) and neo-pronouns (_e.g., ze, xe, thon_) that are used by individuals whose gender identity is not represented by binary pronouns. We introduce Misgendered, a framework for evaluating large language models' ability to correctly use preferred pronouns, consisting of (i) instances declaring an individual's pronoun, followed by a sentence with a missing pronoun, and (ii) an experimental setup for evaluating masked and auto-regressive language models using a unified method. When prompted out-of-the-box, language models perform poorly at correctly predicting neo-pronouns (averaging 7.7% accuracy) and gender-neutral pronouns (averaging 34.2% accuracy). This inability to generalize results from a lack of representation of non-binary pronouns in training data and memorized associations. Few-shot adaptation with explicit examples in the prompt improves performance for neo-pronouns, but only to 64.7% even with \(20\) shots. We release the full dataset, code, and demo at [https://tamannahossainkay.github.io/misgendered/](https://tamannahossainkay.github.io/misgendered/). ## 1 Introduction From document retrieval to virtual assistants, large language models (LLMs) (Zhang et al., 2022; Scao et al., 2022; Lewis et al., 2020) have become indispensable for various automated language processing tasks. Given their proliferation, it is vital that these LLMs are safe to use. Any biases in the model may perpetuate and amplify existing real-world harms toward already marginalized people. Efforts to address gender bias in natural language processing primarily focus on binary gender categories, female and male. They are aimed at either upstream bias, e.g., gendered associations in language models (Guo et al., 2022; Kirk et al., 2021; Dev et al., 2021; Bolukbasi et al., 2016), or downstream bias, e.g., gendered information used for decision-making in tasks such as coreference resolution (Zhao et al., 2018), machine translation (Choubey et al., 2021; Stanovsky et al., 2019), etc. Figure 1: **Evaluation examples.** Each instance begins with a declaration of an individual's preferred pronouns, followed by text where a [PRONOUN] is missing. Language models are evaluated for their ability to predict the pronoun accurately. The correct answer along with predictions from GPT-J are shown. However, this is restrictive, as it does not account for non-binary gender identities, which are becoming more commonplace to discuss openly. This can perpetuate harm against non-binary individuals through exclusion and marginalization (Dev et al., 2021). This paper comprehensively evaluates popular language models' ability to use declared third-person personal pronouns using a framework, Misgendered. It consists of two parts: (i) instances declaring an individual's pronoun, followed by a sentence with a missing pronoun (§ 3.1), and (ii) an experimental setup for evaluating masked and auto-regressive language models using a unified method (§ 3.2).
We create a template-based evaluation dataset for _gendering_ individuals correctly given a set of their preferred pronouns. Each evaluation instance begins with an individual's name and an explicit declaration of their pronouns, followed by a sentence in which the model has to predict a missing [PRONOUN]. For instance (Fig. 1), '_Aamari's pronouns are xe/xem/xyr/xyrs/xemself. Aamari is undergoing a surgery. Please pray for [PRONOUN] quick recovery._' We evaluate language models on their ability to fill in [PRONOUN] correctly, here with the possessive-dependent pronoun, _xyr_. Sentences in our evaluation cover \(5\) different pronoun forms: nominative, accusative, possessive-dependent, possessive-independent, and reflexive (_e.g., they, them, their, theirs, and themself_, respectively) for \(11\) sets of pronouns from \(3\) pronoun types: binary (_e.g., he, she_)1, gender-neutral (_e.g., they, them_), and neo-pronouns (_e.g., xe, thon_)2. We create \(10\) variations for each pronoun form and populate them with popular unisex, female, and male names, resulting in a total of \(3.8\) million instances. Footnote 1: Note a distinction between pronouns and gender identity. “Binary pronouns” refer to feminine and masculine pronouns. Individuals using binary pronouns do not necessarily have a binary gender identity. Footnote 2: We refer to gender-neutral pronouns and neo-pronouns as _non-binary pronouns_ throughout this paper; however, note that using non-binary pronouns does not imply an individual has a non-binary gender identity. Our evaluation shows that current language models are far from being able to handle gender-neutral and neo-pronouns. For direct prompting, we use models of varying sizes from seven families comprising both auto-regressive and masked language models (§ 4.1). While most models can correctly use binary pronouns (average accuracy of 75.9%), all models struggle with neo-pronouns (average accuracy of 7.7%), and most with gender-neutral pronouns as well (average accuracy of 34.2%). This poor zero-shot performance could be due to the scarcity of representation of neo-pronouns and gender-neutral pronouns in pre-training corpora (§ 4.2). For example, there are \(220\times\) more occurrences of masculine pronoun tokens in C4 (Raffel et al., 2020), the pre-training corpus for the T5 models, than of the _xe_ neo-pronouns. Additionally, we notice some memorized associations between pronouns and the gender of names. Language models identify the non-binary pronouns most accurately for unisex names, whereas the bottom-performing names are either masculine or feminine. Similarly, for binary pronouns, language models correctly predict masculine pronouns for masculine names with almost \(3\times\) more accuracy than for feminine names. Although language models do not perform well on predicting neo-pronouns in a zero-shot setting, models with few-shot learning abilities are able to adapt with a few examples (in-context learning achieves an accuracy of up to 64.7% for neo-pronouns). However, performance plateaus with more shots, and it is not clear how this method of prompting with examples can be used to mitigate bias in downstream applications. Future work should focus on further evaluation of language technologies on their understanding of non-binary pronouns and on mitigating biases. While we have made progress toward recognizing pronouns as an open class in NLP rather than a closed one, much work remains to be done.
The overarching limitations of our work are its adherence to a Western conceptualization of gender, as well as being confined to English. To facilitate further research, we release3 the full dataset, code base, and demo of our work at [https://tamannahossainkay.github.io/misgendered/](https://tamannahossainkay.github.io/misgendered/). Footnote 3: Appendix C ## 2 Background In this section, we present the social context in which our work is situated. The contemporary Western discourse regarding gender differentiates between _biological sex_ and _gender identity_. An individual's _biological sex_ is assigned at birth and is associated with physical characteristics, such as chromosomes, reproductive organs, etc. (WHO, 2021; NIH; Prince, 2005). Biological sex can be binary (female or male) or non-binary, e.g., intersex with X or XXY genotypes (NIH, 2021). On the other hand, _gender identity_ is an individual's subjective experience of their own gender, which encompasses a diverse range of experiences and expressions (WHO, 2021; NIH; Prince, 2005), e.g., cisgender, transgender, non-binary, etc. Historically, there are several cultures where gender is understood as a spectrum; for example, the Bugis people of Indonesia recognize five genders (Davies, 2007). While there are nations that legally acknowledge gender exclusively as a binary (female or male) (EqualDex, 2022), an increasing number of jurisdictions recognize gender as a broader concept, including the USA (U.S. Dept of State, 2022; EqualDex, 2022). Exclusively binary female-male third-person personal pronouns are insufficient in such a diverse and dynamic landscape of gender. Rather, expanding pronouns to include neo-pronouns, such as singular _they_, _thon_, _ze_, etc., is essential (Vance Jr et al., 2014; Markman, 2011). Spaces inclusive of LGBTQIA+ persons encourage everyone to declare what pronouns to use to refer to them (NIH, 2022, 2020). _Pronoun declarations_ often include at least two pronoun forms, such as nominative and accusative (_e.g., they/them, she/her_), but can consist of all five pronoun forms (_e.g., they/them/their/theirs/themself_). _Misgendering_, i.e., addressing individuals using gendered terms that are not aligned with their gender identity, is associated with a variety of harms (Dev et al., 2021). Note that while an expanding view of gender identity creates a corresponding need for a wider range of pronouns, we cannot infer an individual's gender identity from their preferred pronouns. For instance, the use of binary pronouns, such as _she_ or _he_, does not necessarily indicate a binary gender identity, and similarly, the use of neo-pronouns, such as _xe_, does not imply an identity outside of the female-male binary. In this paper, we aim to establish a paradigm of evaluation of gender bias in NLP which takes into account the growing use of non-binary pronouns. We evaluate language models for one type of misgendering: the use of incorrect pronouns for individuals. Figure 2: Misgendered **Framework:** We create a dataset to evaluate the ability of large language models to correctly ‘gender’ individuals. We manually write templates, each referring to an individual and containing a blank space for a pronoun to be filled in. We populate the templates with names (unisex, female, and male) and pronouns (binary, gender-neutral, and non-binary), and declare two to five pronoun forms for each individual, either _explicitly_ or _parenthetically_. We then use masked and auto-regressive LMs to predict missing pronouns in each instance utilizing a unified constrained decoding method.
## 3 Misgendered Framework The Misgendered framework for evaluating the pronoun usage abilities of language models consists of (i) instances specifying an individual's pronoun, succeeded by a sentence missing a pronoun, and (ii) a unified method for evaluating masked and auto-regressive language models. ### Dataset Construction We evaluate existing language models to assess their ability to understand and correctly use third-person personal pronouns (Figure 2). To do this, we create a dataset designed specifically for evaluating the correct _gendering_ of individuals given a set of their pronouns. To _gender_ a person correctly is to use the pronouns they prefer to refer to them. Each instance in the evaluation dataset consists of a first name and preferred pronouns at the start, followed by a manually crafted template that has a blank space for a missing [PRONOUN]. It is important to note that we only use preferred pronouns from a single pronoun group (e.g., _they/them_, _xe/xem/xyr_) and do not consider cases where an individual uses multiple sets of pronouns (e.g., _they/she_). All templates are shown in Appendix A. Popular US first names and pronouns are used to populate each template. We do not use any private or individually identifiable information. We use unisex, female, and male names per US Social Security data over the past 100 years. This limits our analysis to English and American names assigned at birth. We take a sample of 300 names from the unisex names compiled by Flowers (2015). These are names that are least statistically associated with being female or male in the USA. For female and male names, on the other hand, we take the top 100 names that are the most statistically associated with being female or male, respectively (Social Security, 2022). We manually construct ten templates for each pronoun form with CheckList (Ribeiro et al., 2020) in the loop. Evaluation instances are then completed by using sets of binary (masculine and feminine), gender-neutral (singular _they_), and neo-pronouns. For neo-pronouns, we use a list compiled by Lauscher et al. (2022). We do not use nounself, emojiself, numberself, or nameself pronouns from their compilation, as they are currently rare in usage. If there are variations in forms of the same neo-pronoun group, then we only use one of them (e.g., for _ve/vi, ver/vir, vis, vers/virs, verself/virself_, we only use _vi, vir, vis, vis, and virself_). Neither Lauscher et al. (2022) nor our list of non-binary pronouns (shown in Table 1) is exhaustive, as such lists are continually evolving. Each row of this table constitutes one possible choice of preferred pronouns and will be referred to as a **pronoun group** from here onwards, and each pronoun group will be referred to by its nominative form for short, e.g., the non-binary pronoun group _{xe, xem, xyr, xyrs, xemself}_ will be referred to as _xe_ for short.
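To make the construction concrete, the sketch below instantiates one illustrative template with a handful of names and pronoun groups; the template and name list are stand-ins for the released data, not excerpts from it.

```python
import itertools

FORMS = ["nominative", "accusative", "pos_dependent", "pos_independent", "reflexive"]
PRONOUN_GROUPS = {  # abridged from Table 1
    "she":  ["she", "her", "her", "hers", "herself"],
    "they": ["they", "them", "their", "theirs", "themselves"],
    "xe":   ["xe", "xem", "xyr", "xyrs", "xemself"],
}
NAMES = ["Aamari", "Harvest", "Katherine"]  # illustrative unisex/female picks
# one possessive-dependent template, with an explicit five-form declaration
TEMPLATE = ("{name}'s pronouns are {decl}. {name} is undergoing a surgery. "
            "Please pray for [PRONOUN] quick recovery.")

def build_instances():
    for name, (group, forms) in itertools.product(NAMES, PRONOUN_GROUPS.items()):
        decl = "/".join(forms)  # explicit declaration of all five forms
        yield {"text": TEMPLATE.format(name=name, decl=decl),
               "gold": forms[FORMS.index("pos_dependent")],
               "group": group}

for inst in build_instances():
    print(inst["gold"], "<-", inst["text"])
```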
### Evaluation Setup

Using the evaluation dataset we created, we test popular language models by direct prompting and in-context learning.

#### 3.2.1 Constrained Decoding

For both masked and auto-regressive language models, we do a _constrained decoding_ to predict the most likely pronoun _out of all pronouns of the same form_. We use a uniform framework for making predictions from both masked and auto-regressive language models. Let \(F\) be the set of pronoun forms (\(|F|=5\); columns in Table 1), and \(P\) be the set of pronoun groups (\(|P|=11\); rows in Table 1). Let \(x\) be an evaluation instance with gold pronoun \(p_{f}^{*}\) such that \(p^{*}\in P\) and \(f\in F\). Each instance has \(|P|\) inputs \(\{x(p_{f})\}\) and \(|P|\) constrained label sets \(\{y(p_{f})\}\), \(\forall p\in P\). Both inputs and labels are constructed following the pre-training design of each model.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Pronoun Type**} & \multicolumn{5}{c}{**Pronoun Form**} \\ \cline{2-6} & **Nom.** & **Acc.** & **Pos. Dep.** & **Pos. Ind.** & **Ref.** \\ \hline \multirow{2}{*}{**Binary**} & he & him & his & his & himself \\ & she & her & her & hers & herself \\ \hline **Neutral** & they & them & their & theirs & themselves \\ \hline \multirow{8}{*}{**Neo-Pronouns**} & thon & thon & thons & thons & thonself \\ & e & em & es & ems & emself \\ & ae & aer & aer & aers & aerself \\ & co & co & cos & cos & coself \\ & vi & vir & vis & vis & virself \\ & xe & xem & xyr & xyrs & xemself \\ & ey & em & eir & eirs & emself \\ & ze & zir & zir & zirs & zirself \\ \hline \hline \end{tabular} \end{table} Table 1: **Pronouns.** List of binary, gender-neutral, and neo-pronouns (Lauscher et al., 2022) we use in this paper for evaluating the ability of language models to correctly _gender_ individuals. Each row of this table consists of a _pronoun group_, with each column specifying the pronoun of that form for that group.

Inputs, \(\{x(p_{f})\}\): The inputs vary based on the type of language model being used.

* For masked models, the inputs are \(x\) with the missing [PRONOUN] replaced with the mask token. For example, for T5, the input is '_Aamari needs your history book. Could you lend it to <extra_id_0>?_'
* For auto-regressive models, the inputs are \(x\) with [PRONOUN] replaced with \(p_{f}\), \(\forall p\in P\). An example input set is {'_Aamari needs your history book. Could you lend it to him?_',..., '_Aamari needs your history book. Could you lend it to cir?_'}

Constrained Label Set, \(\{y(p_{f})\}\): The labels vary based on the pre-training design of the models.

* For T5, the labels are \(p_{f}\), \(\forall p\in P\), e.g., for accusative templates the label set is {him,..., zir}.
* For all remaining models, the labels are \(x\) with [PRONOUN] replaced with \(p_{f}\), \(\forall p\in P\). An example label set is {'_Aamari needs your history book. Could you lend it to him?_',..., '_Aamari needs your history book. Could you lend it to cir?_'}

For both masked and auto-regressive language models, the predicted output of each model is then computed using its loss function, \(\mathcal{L}\): \[\hat{y}=\operatorname*{arg\,min}_{p\in P}\mathcal{L}(x(p_{f}),y(p_{f}))\] A detailed example evaluation with model inputs, labels, and output is illustrated in Appendix B.
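The following is a minimal sketch of this constrained decoding for an auto-regressive model, scoring each candidate completion by its mean token-level sequence loss; the model choice (GPT-2) and the abridged candidate list (the accusative form of six of the eleven groups) are illustrative, not the full evaluation setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

PREFIX = "Aamari's pronouns are xe/xem/xyr/xyrs/xemself. "
TEMPLATE = "Aamari needs your history book. Could you lend it to {}?"
CANDIDATES = ["him", "her", "them", "thon", "xem", "zir"]  # abridged label set

def sequence_loss(text):
    """Mean token-level cross-entropy of the full sequence under the LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# y_hat = argmin over pronoun groups of L(x(p_f), y(p_f))
losses = {p: sequence_loss(PREFIX + TEMPLATE.format(p)) for p in CANDIDATES}
print(min(losses, key=losses.get))
```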
### Experiments

**Direct Prompting.** We directly prompt language models out of the box to test their ability to correctly predict declared pronouns. We use instances from the evaluation dataset (§ 3.1) and use a unified constrained decoding mechanism to get predictions from both masked and auto-regressive language models (§ 3.2.1). We use models4 of varying sizes from the BART (Lewis et al., 2020), T5 (Raffel et al., 2020), GPT-2 (Radford et al., 2019), GPT-J (Wang and Komatsuzaki, 2021), OPT (Zhang et al., 2022), BLOOM (Scao et al., 2022), and LLaMA (Touvron et al., 2023) families. The specific models along with their parameter counts are shown in Table 3. All computations are performed on a standard academic laboratory cluster. Footnote 4: We use the implementation from the HuggingFace library.

We study the different ways of declaring preferred pronouns, using two different declaration types and seven combinations of declared forms:

* **Declaration Type:** We declare preferred pronouns for individuals using two formats, **explicit** and **parenthetical**. In the first case, pronouns are explicitly declared as _'[Name]'s pronouns are'_ followed by their preferred pronouns. In the second case, pronouns are declared in parenthesis after the first time a person's name is used in a sentence. An example of each declaration type is shown in Figure 2.
* **Declaration Number:** We vary the number of pronouns declared between two and five. The pronoun forms that are declared for each number of declarations are shown in Table 2.

\begin{table} \begin{tabular}{l l} \hline \hline **Dec. \#** & **Pronouns Declared** \\ \hline 2 & Nom., Acc. \\ 3 & Nom., Acc., Pos. Ind. \\ 3 & Nom., Acc., Pos. Dep. \\ 4 & Nom., Acc., Pos. Ind., Ref. \\ 4 & Nom., Acc., Pos. Dep., Ref. \\ 5 & Nom., Acc., Pos. Dep., Pos. Ind., Ref. \\ \hline \hline \end{tabular} \end{table} Table 2: **Declaration Number.** The pronoun forms that are declared for each declaration number.

\begin{table} \begin{tabular}{l l c} \hline \hline **Model Family** & **Model** & **\# Parameters** \\ \hline \multicolumn{3}{l}{**Auto-regressive LM**} \\ GPT-2 & gpt2 & 124M \\ & gpt2-medium & 355M \\ & gpt2-large & 774M \\ & gpt2-xl & 1.5B \\ GPT-J & gpt-j-6B & 6.7B \\ BLOOM & bloom-560m & 560M \\ & bloom-1b1 & 1.1B \\ & bloom-3b & 3B \\ & bloom-7b1 & 7.1B \\ OPT & opt-350m & 350M \\ & opt-1.3b & 1.3B \\ & opt-2.7b & 2.7B \\ & opt-6.7b & 6.7B \\ LLaMA & llama-7B & 6.7B \\ \hline \multicolumn{3}{l}{**Span-Masked LM**} \\ BART & bart-base & 140M \\ & bart-large & 400M \\ T5 & t5-small & 60M \\ & t5-base & 220M \\ & t5-3b & 3B \\ \hline \hline \end{tabular} \end{table} Table 3: **Language Models.** Auto-regressive and span-masked models evaluated for pronoun-based misgendering, along with their parameter counts.

**Explaining Zero-Shot Observations.** In order to better understand the zero-shot performance results, we check two things. We take a look at the prevalence of pronoun tokens in the pre-training corpora of a few language models. Using the Elastic Search indices of **C4** (pre-training corpus for T5) (Raffel et al., 2020) and **Pile** (pre-training corpus for GPT-J) (Gao et al., 2020), we count the number of documents in each corpus that contain tokens for each pronoun in Table 1. We also check, for each pronoun type, whether there is a difference in performance based on the gender association of the name. Differences in performance would indicate memorization of name and pronoun relationships from the pre-training corpora of the language models.

**In-Context Learning.** In-context learning involves including training examples in the prompt, which is fed to the model along with the instance to be evaluated. This allows the model to adapt to new tasks without the need for any parameter updates. We experiment with 2, 4, 6, 10, and 20-shot settings using GPT-J-6B, OPT-6.7b, and LLaMA-7B models. These experiments are only conducted using explicit declarations of all five pronoun forms, as this was best for neo-pronouns.
We select the examples given in the prompt by randomly sampling templates, names, and pronouns that are not included in the specific instance being evaluated.

## 4 Results

We test popular language models on their ability to correctly use declared pronouns when directly prompted using our evaluation dataset (§ 3.1). We conduct a thorough analysis of how performance varies based on how pronouns were declared, the size of the models used, the form of the pronouns, and individual pronoun sets. We also illustrate the effect of using in-context learning, i.e., of providing models with examples of correct declared pronoun usage within the input prompts.

### Direct Prompting

Average accuracy for correctly gendering instances in our evaluation dataset (§ 3.1) by pronoun type across all zero-shot experiments is shown in Figure 4. On average, language models perform poorly at predicting gender-neutral pronouns (34.2% accuracy), and much worse at predicting neo-pronouns correctly (7.7% accuracy).

**Effect of declaration.** When experiments are aggregated by declaration type (Fig. 5), we see that declaring pronouns **explicitly** is slightly better for correctly predicting neo-pronouns (from 5.9% accuracy to 9.5%). However, the opposite is true for singular _they_ and binary pronouns, which both perform better with **parenthetical** declarations. Declaring more pronoun forms improves performance for neo-pronouns (Table 6). On the other hand, the number of forms declared does not have much of an effect on predicting binary pronouns, and for singular _they_, increasing the number of declared forms slightly decreases performance.

**Effect of model size.** Our experiments do not show a consistent association with size (Fig. 3). However, some model families have consistent scaling patterns for specific pronoun types. OPT's performance for gender-neutral pronouns increases sharply with size: OPT-350m has an accuracy of 21.2%, whereas the model with 6.7b parameters has an accuracy of 94.2%. OPT also shows moderate gains with scale for neo-pronouns.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Dec. \#**} & \multicolumn{3}{c}{**Pronoun Type**} \\ \cline{2-4} & **Binary** & **Neutral** & **Neo-Pronouns** \\ \hline 2 & 75.3 & **35.7** & 4.8 \\ 3 & 75.5 & 34.6 & 6.7 \\ 4 & 76.2 & 33.6 & **9.4** \\ 5 & **76.4** & 32.9 & **9.4** \\ \hline \hline \end{tabular} \end{table} Table 6: **Declaration Number.** Zero-shot gendering accuracy by the number of pronoun forms declared for each individual. Increasing the number of declared forms provides better performance for neo-pronouns, whereas for gender-neutral pronouns, the minimal declaration of only two pronouns works best.

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Declaration Type**} & \multicolumn{3}{c}{**Pronoun Type**} \\ \cline{2-4} & **Binary** & **Neutral** & **Neo-Pronouns** \\ \hline **Explicit** & 69.5 & 27.4 & **9.5** \\ **Parenthetical** & 82.2 & **40.9** & 5.9 \\ \hline \hline \end{tabular} \end{table} Table 5: **Declaration Type.** Direct prompting accuracy by the declaration used to specify an individual’s preferred pronouns. _Explicit_ declarations provide slightly better performance for neo-pronouns, whereas the opposite is true for binary and gender-neutral pronouns.
On the other hand, our analysis indicates that the performance of BLOOM for neutral pronouns exhibits a negative correlation with size, whereas it demonstrates a positive correlation for binary pronouns, and remains relatively stable for neo-pronouns.

**Effect of pronouns and pronoun forms.** As displayed in Table 7, the overall accuracies for masculine and feminine binary pronouns are similar, at 75.4% and 76.3% respectively. However, the performance for neutral pronouns is less than half of that, at an accuracy of 34.2%, with an even lower performance for neo-pronouns. Amongst the neo-pronouns, _thon_ exhibits the highest accuracy at 18.1%, followed by _xe_ at 14.1%. As demonstrated in Table 8, there seems to be an inverse correlation between the performance of binary and neo-pronouns with respect to pronoun forms. Specifically, the nominative form exhibits the second highest accuracy for binary pronouns (81.1%) but the lowest for neo-pronouns (3.2%). Conversely, the possessive-independent form presents the second highest accuracy for non-binary pronouns (10.8%) but the lowest for binary pronouns (62.6%).

We also notice an association between binary pronouns and names. The predictive accuracy for masculine pronouns is much higher when associated with male names, with accuracy 2.7 times greater than when associated with female names (Table 10). Likewise, the performance for feminine pronouns is 2.1 times higher when associated with female names rather than male ones. These findings suggest that the models may have memorized the association of certain names with specific pronouns from their training corpora.

**Corpus counts of pronouns.** We compute unigram counts for two pretraining corpora, C4 and Pile. In both cases, neo-pronouns are substantially rarer than binary pronouns (Table 11). Further, even the documents that contain non-binary pronoun tokens often do not use them semantically as pronouns (see Table 12 for examples). This means that language models pretrained on these corpora would not have instances in the data to learn about the usage of non-binary pronouns. Though the counts of _they_ are high, the top retrieved cases are of the plural usage of _they_. These trends are consistent with the text available generally on the web; see OpenWebText (Gokaslan et al., 2019) (Table 11). Notably, in all three corpora, masculine pronouns are more prevalent than feminine ones.

### In-Context Learning

LLaMA-7B accuracy for correctly predicting neo-pronouns improves as more examples are provided, with a maximum of 64.7% at 20 shots (Table 13). However, GPT-J-6B and OPT-6.7b only perform better for neo-pronouns up to 6 shots. Similar k-shot behavior, where performance decreases with high values of \(k\), has been noted in GPT-3 and OPT on RTE (Brown et al., 2020; Zhang et al., 2022). There can also generally be high variance in few-shot performance even with a fixed number of samples (Lu et al., 2021).

\begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Pronoun**} & \multicolumn{3}{c}{Gender of the Name} \\ \cline{2-4} & Female & Male & Unisex \\ \hline She & 91.4 & 44.2 & 81.9 \\ He & 34.7 & 92.1 & 83.3 \\ They & 27.3 & 28.3 & 38.3 \\ \hline \hline \end{tabular} \end{table} Table 10: **Binary and gender-neutral pronoun performance breakdown by gender association of individual names.** Models are able to predict feminine pronouns much more accurately for individuals with feminine names than masculine ones. Similarly, they are able to better predict masculine pronouns for masculine names rather than feminine ones.
\begin{table} \begin{tabular}{l l} \hline \hline **Pronoun** & **Document Excerpt** \\ \hline she (C4) & She Believed She Could So She Did Wall Art... \\ \hline they (Pile) & When they saw the courage of Peter and John and realized that they were unschooled, ordinary men, they were astonished and they took note that these men had been... \\ \hline e (Pile) & ‘E’ is for e-e-e-e-e-e-e... \\ \hline co (C4) & ... Sign Company in Colorado CITIES WE SERVE Agate, CO... \\ \hline \hline \end{tabular} \end{table} Table 12: **Excerpts from pre-training corpora.** This table shows small excerpts from a top retrieved document each for a binary (_she_), neutral (_they_), and neo-pronoun (_e, co_) from either C4 or Pile.

\begin{table} \begin{tabular}{l l l l} \hline \hline \multicolumn{2}{c}{**Top 10**} & \multicolumn{2}{c}{**Bottom 10**} \\ \hline Name & Gender & Name & Gender \\ \hline True & Unisex & Katherine & Female \\ Freedom & Unisex & Angela & Female \\ Harvest & Unisex & Helen & Female \\ Britain & Unisex & Deborah & Female \\ Germany & Unisex & Stephanie & Female \\ Indiana & Unisex & Kathleen & Female \\ Vegas & Unisex & Teresa & Female \\ Shell & Unisex & Heather & Female \\ Justice & Unisex & Judith & Female \\ Berkeley & Unisex & Margaret & Female \\ \hline \hline \end{tabular} \end{table} Table 9: **Top and bottom 10 names for neo-pronouns.** The names that models are the best at predicting non-binary pronouns for are all unisex, whereas the bottom ones are mostly gendered names, suggesting memorized association between pronouns and names.

\begin{table} \begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{**Pronoun Type**} & \multirow{2}{*}{**Pronoun Group**} & \multicolumn{3}{c}{**Corpus**} \\ \cline{3-5} & & C4 & OpenWT & Pile \\ \hline \multirow{2}{*}{Binary} & he & 552.7M & 15.8M & 161.9M \\ & she & 348.0M & 5.5M & 68.0M \\ \hline Neutral & they & 769.3M & 13.5M & 180.4M \\ \hline \multirow{8}{*}{Neo-Pronouns} & thon & 2.1M & 5.5K & 83.4K \\ & xe & 2.5M & 2.3K & 133.4K \\ & ze & 1.8M & 3.3K & 177.2K \\ & co & 172.0M & 1.3M & 27.7M \\ & e & 248.7M & 537.8K & 23.2M \\ & ae & 5.4M & 7.9K & 412.2K \\ & ey & 15.8M & 63.2K & 2.2M \\ & vi & 12.9M & 45.2K & 2.2M \\ \hline \hline \end{tabular} \end{table} Table 11: **Corpus Counts.** Count of the number of documents containing each pronoun in C4, Open Web Text, and Pile corpora. We notice dramatically fewer documents containing neo-pronouns than binary ones.

For the pronoun _they_, we see different trends from each model. For GPT-J, similar to non-binary pronouns, performance improves as more examples are provided up to 6 shots. On the other hand, for OPT-6.7b and LLaMA-7B, there is a large drop in performance from the zero-shot to the few-shot setting.

## 5 Related Work

There has been extensive work to understand and mitigate gender bias in language technologies (Bolukbasi et al., 2016; Zhao et al., 2018; Kurita et al., 2019). However, this has mostly been restricted to a binary view of gender. Recently some work has been done to explore gender bias in a non-binary paradigm. For instance, Dev et al. (2021) discuss ways in which gender-exclusivity in NLP can harm non-binary individuals. Ovalle et al.
(2023) design Open Language Generation (OLG) evaluation focused on the experiences of transgender and non-binary individuals and the everyday sources of stress and marginalization they face. Brandl et al. (2022) show that gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexities in language models. Cao and Daume III (2020) create specialized datasets for coreference resolutions with neo-pronouns, while Lauscher et al. (2022) provide desiderata for modelling pronouns in language technologies. However, these studies only focus on a few neo-pronouns (_eg. xe and ze_), and only Dev et al. (2021) and Brandl et al. (2022) evaluate misgendering but only on a few language models and in zero-shot settings. We are the first to comprehensively evaluate large language models on a wide range of pronouns and pronoun forms. ## 6 Conclusion In this work, we show that current language models heavily misgender individuals who do not use feminine or masculine personal pronouns (e.g. _he, she_). Despite being provided with explicitly declared pronouns, these models do not use the correct neo-pronouns and struggle even with gender-neutral pronouns like _they_. Our analysis suggests the poor performance may be due to the scarcity of neo pronouns in the pre-training corpora and memorized associations between pronouns and names. When prompted with a few explicit examples of pronoun use, the language models do improve, suggesting some ability to adapt to new word use. Nevertheless, it is unclear how few-shot prompting of pronoun use can mitigate bias and exclusion harms in practice in real-world downstream applications of language models. We hope researchers will expand upon our work to evaluate language technologies on their abilities to understand non-binary identities and mitigate their biases. To facilitate further research in this area, we release the full dataset, code, and demo at [https://tamanahossainkay.github.io/misgendered/](https://tamanahossainkay.github.io/misgendered/). While evaluation of misgendering is a crucial first step, future work should aim to go beyond evaluation and focus on developing techniques to correct it. Misgendering can be present in both human-written and model-generated content, especially towards non-binary and transgender individuals. Hence, it is crucial to advance efforts toward detecting misgendering and implementing corrective measures. Individuals who often fall victim to misgendering, such as non-binary and transgender people, should be empowered and given central roles in shaping the work on these topics. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Pronoun Type**} & \multirow{2}{*}{**Shot**} & \multicolumn{3}{c}{**Model**} \\ \cline{3-5} & & GPT-J-6B & OPT-6.7b & LLaMA-7B \\ \hline \multirow{6}{*}{Neutral} & 0 & 33.4 & **94.2** & **92.5** \\ & 2 & 50.9 & 69.2 & 66.1 \\ & 4 & 62.0 & 68.8 & 61.4 \\ & 6 & **66.6** & 67.9 & 70.0 \\ & 10 & 48.0 & 69.3 & 74.6 \\ & 20 & 51.1 & 68.6 & 80.7 \\ \hline \multirow{6}{*}{Neo-Pronouns} & 0 & 6.7 & 11.9 & 16.5 \\ & 2 & 30.4 & 31.7 & 51.6 \\ & 4 & 39.7 & 33.7 & 39.2 \\ & 6 & **45.4** & **38.8** & 58.1 \\ & 10 & 24.8 & 23.9 & 63.3 \\ & 20 & 30.5 & 31.8 & **64.7** \\ \hline \hline \end{tabular} \end{table} Table 13: **In-Context Learning.** Language models can adapt moderately to neo-pronouns with a few examples. We see improvement from LLaMA-7B as the number of shots is increased. We also see improvement from GPT-J and OPT-6.7b, but only up to k=6. Bold numbers represent the highest accuracy for a model and pronoun type, whereas underlined values represent the highest accuracy for a pronoun type.

## Acknowledgements

We would like to thank Yanai Elazar, Emily Denton, Pouya Pezeshkpour, Dheeru Dua, Yasaman Razeghi, Dylan Slack, Anthony Chen, Kolby Nottingham, Shivanshu Gupta, Preethi Seshadri, Catarina Belem, Matt Gardner, Arjun Subramonian, Anaelia Ovalle, and anonymous reviewers for their discussions and feedback. This work was funded in part by Hasso Plattner Institute (HPI) through the UCI-HPI fellowship, in part by NSF awards IIS-2046873, IIS-2040989, and CNS-1925741.

## Limitations

This paper evaluates language models for their ability to use gender-neutral pronouns and neo-pronouns using a template-based dataset, Misgendered. While this approach is helpful in assessing bias, the measurements can be sensitive to the choice of templates (Delobelle et al., 2022; Seshadri et al., 2022; Alnegheimish et al., 2022; Selvam et al., 2022). Consequently, our findings should not be considered as the definitive verdict on the phenomenon of misgendering by language models. There are other limitations to our work that should be considered as well. We also only conduct an upstream evaluation on language models and do not assess downstream applications. Our evaluation is also limited to a Western conception of gender and restricted to English only. We only consider names and genders assigned at birth in the United States. Subsequent changes in names or genders are not taken into account in our analysis. Furthermore, our work does not take into account individuals who use multiple sets of pronouns, such as _she/they_ combinations (Them, 2021), nor does it consider the full range of non-binary pronouns, as the list continues to expand (Lauscher et al., 2022). However, additional names (rare, self-created, or non-Western) and neo-pronouns can be directly used with our framework to further evaluate LLMs. We release our full code and dataset to make this easier. Lastly, there are larger models that were not evaluated due to limitations in our computational budget. Further research needs to be done to address these limitations for the complete assessment of accurate preferred pronoun usage by language models.

## Ethics Statement

Evaluations of gender bias in language technologies need a holistic outlook, such that they evaluate the harms of stereotyping, erasure of identities, misgendering, dead-naming, and more.
Our work attempts to address one specific type of misgendering harm and builds a framework that estimates the extent of misgendering propagated by a model under specific settings. We hope our framework enables model evaluations that are not exclusionary of gender identities. However, the absence of measured misgendering by this paradigm is not evidence of no misgendering or other gender harms at all. For responsible model deployment, it is imperative that they be appropriately scrutinized based on the context of usage.
2303.17123
Masked and Adaptive Transformer for Exemplar Based Image Translation
We present a novel framework for exemplar based image translation. Recent advanced methods for this task mainly focus on establishing cross-domain semantic correspondence, which sequentially dominates image generation in the manner of local style control. Unfortunately, cross-domain semantic matching is challenging; and matching errors ultimately degrade the quality of generated images. To overcome this challenge, we improve the accuracy of matching on the one hand, and diminish the role of matching in image generation on the other hand. To achieve the former, we propose a masked and adaptive transformer (MAT) for learning accurate cross-domain correspondence, and executing context-aware feature augmentation. To achieve the latter, we use source features of the input and global style codes of the exemplar, as supplementary information, for decoding an image. Besides, we devise a novel contrastive style learning method, for acquiring quality-discriminative style representations, which in turn benefit high-quality image generation. Experimental results show that our method, dubbed MATEBIT, performs considerably better than state-of-the-art methods, in diverse image translation tasks. The codes are available at \url{https://github.com/AiArt-HDU/MATEBIT}.
Chang Jiang, Fei Gao, Biao Ma, Yuhao Lin, Nannan Wang, Gang Xu
2023-03-30T03:21:14Z
http://arxiv.org/abs/2303.17123v1
# Masked and Adaptive Transformer for Exemplar Based Image Translation ###### Abstract We present a novel framework for exemplar based image translation. Recent advanced methods for this task mainly focus on establishing cross-domain semantic correspondence, which sequentially dominates image generation in the manner of local style control. Unfortunately, cross-domain semantic matching is challenging; and matching errors ultimately degrade the quality of generated images. To overcome this challenge, we improve the accuracy of matching on the one hand, and diminish the role of matching in image generation on the other hand. To achieve the former, we propose a masked and adaptive transformer (MAT) for learning accurate cross-domain correspondence, and executing context-aware feature augmentation. To achieve the latter, we use source features of the input and global style codes of the exemplar, as supplementary information, for decoding an image. Besides, we devise a novel contrastive style learning method, for acquiring quality-discriminative style representations, which in turn benefit high-quality image generation. Experimental results show that our method, dubbed MATEBIT, performs considerably better than state-of-the-art methods, in diverse image translation tasks. The codes are available at [https://github.com/AiArt-HDU/MATEBIT](https://github.com/AiArt-HDU/MATEBIT). + Footnote †: *Corresponding Author ## 1 Introduction Image-to-image translation aims to transfer images in a source domain to a target domain [16, 50]. Early studies learn mappings directly by Generative Adversarial Networks (GANs), and have shown great success in various applications [2, 42]. Recently, exemplar based image translation [29, 45, 30], where an exemplar image is used to control the style of translated images, has attracted a lot of attention. Such methods allow high flexibility and controllability, and have a wide range of potential applications in social networks and the metaverse. For example, people can transfer a facial sketch to an artistic portrait, in the style of oil paintings or avatars. Despite the remarkable progress, yielding high-fidelity images with consistent semantics and faithful styles remains a grand challenge. Early pioneering works [15, 21, 35] attempt to globally control the style of generated images. However, such methods ignore spatial correlations between an input image and an exemplar, and may fail to produce faithful details. Recently, some advanced methods [44, 45, 25, 49] first establish the cross-domain semantic correspondence between an input image and an exemplar, and then use it to warp the exemplar for controlling local style patterns. In these methods, the quality of generated images relies heavily on the learned correspondence [39]. Unfortunately, cross-domain semantic matching is challenging, since there is no reliable supervision on correspondence learning [45]. As a result, potential matching errors ultimately lead to degraded artifacts in generated images. To combat this challenge, we propose to boost the matching accuracy on one hand, and to diminish the role of matching in image generation on the other hand. Inspired by the great success of Transformers [6, 10, 26, 41], we first devise a _Masked and Adaptive Transformer_ (MAT) for learning accurate cross-domain correspondence and executing context-aware feature augmentation.
Previous works [44, 45, 49] have used the vanilla attention mechanism [41] for learning full correspondence. However, the initial attention typically involves ambiguous correspondences (2nd row in Fig. 1). To mitigate these limitations, in MAT, we use a masked attention to distinguish the correspondence as reliable or not, and then reliability-adaptively aggregate representations. Besides, the _Feed-Forward Network_ (FFN) [41] in vanilla transformers neglects contextual correlations inside an image. We thus replace FFN by an adaptive convolution block [28], where the coordinate attention [12] and depth-wise separable convolution [5] are used to capture contextual correlations and to improve efficiency. With a joint consideration of matching reliability and contextual correlations, MAT gradually focuses on accurate correspondences and emphasizes features of interest (3rd row in Fig. 1).

Figure 1: Visualization of correspondence maps. The red point is the query position. _Full Corr._ and _Masked Corr._ denote the full correspondence [45] and the masked one in our method, respectively. CAM denotes visualization by _Class Activation Mapping_ [48].

In addition, to boost both the semantic consistency and style faithfulness, we supplementally use semantic features of the input image and global style codes of the exemplar for decoding an image. To this end, we first design our whole network following the U-Net architecture [16]. Besides, we devise a novel contrastive style learning (CSL) framework for acquiring discriminative style representations. Recently, Zhang et al. [47] propose a similar CSL method, where the target exemplar is used as a positive sample, and the other exemplars as negative ones. Differently, we use low-quality images, generated during early training stages, as negative samples. In this way, our style codes are desired to discriminate not only subtle differences in style, but also those in perceptual quality. Ultimately, the learned _global_ style codes, cooperating with the _local_ style control induced by MAT, in turn benefit high-quality image generation. With the proposed techniques above, our full model, dubbed MATEBIT, diminishes the impact of position-wise matching on image quality, and integrates both local and global style control for image generation. Experimental results show that MATEBIT generates considerably more plausible images than previous state-of-the-art methods, in diverse image translation tasks. In addition, comprehensive ablation studies demonstrate the effectiveness of our proposed components. Finally, we perform interesting applications of photo-to-painting translation and Chinese ink painting generation.

## 2 Related Work

**Exemplar Based Image Translation.** Recently, exemplar based image translation has attracted increasing attention. For example, Park et al. [35] learn an encoder to map the exemplar image into a global style vector, and use it to guide image generation. Such a global style control strategy enables style consistency as a whole, but fails to produce subtle details. Most recently, researchers have proposed a matching-then-generation framework [39]. Specifically, they first establish dense correspondence between an input and an exemplar, and then reshuffle the exemplar to locally control the style of synthesized images. For example, Zhang et al. [45] establish position-wise correspondence based on the Cosine attention mechanism and warp the exemplar correspondingly. Afterwards, the warped image dominates the generation of images in the manner of SPADE [35]. To reduce the cost of matching in high-resolution image generation, Zhou et al.
[49] introduce a hierarchical refinement of semantic correspondence from ConvGRU-PatchMatch. Besides, Liu et al. [25] used a dynamic pruning method for learning hierarchical sparse correspondence. They also use reliability-adaptive feature integration to improve the quality of generated images. Previous methods merely use global or local style control, and the latter relies heavily on the learned correspondence. Besides, they consider little about contextual correlations inside an image. In this paper, we use both global and local style control to boost the style consistency. Besides, we take contextual correlations into consideration and execute reliability-adaptive feature augmentation. **Transformers.** Transformers [41] have shown incredible success from the field of natural language processing (NLP) [19] to computer vision (CV) [6, 26]. Multi-head attention (MHA) and FFN are key components in a Transformer, and have been used in exemplar based image translation. However, they induce unreliable matching results and neglect context correlations in feature translation. In our MAT, we combat these limitations by replacing them with a masked attention and a context-aware convolution block, respectively. Recently, researchers use semantic masks to facilitate representation learning [4, 8, 37], where a mask predictor is required. Differently, we use a ReLU function to mask over the attention layer, for distinguishing correspondence as reliable or not (Sec. 3.1). In general, MAT follows a concise and efficient architecture. **Contrastive Learning.** Contrastive learning has shown its effectiveness in various computer vision tasks [9, 13, 34]. The basic idea is to learn a representation by pushing positive samples toward an anchor, and moving negative samples away from it. Different sampling strategies and contrastive losses have been extensively explored in various downstream tasks. For example, Chen et al. [3] and He et al. [9] obtain positive samples by augmenting original data. In the field of image translation, Park et al. [34] propose patch-wise contrastive learning by maximizing the mutual information between cross-domain patches. Similarly, Zhang et al. [47] use contrastive learning for acquiring discriminative style representations. In the task of exemplar based image translation, Zhan et al. [44] use contrastive learning to align cross-domain images to a consistent semantic feature space, so as to boost the accuracy of matching. Differently, we use early generated images as negative samples, so that the learned style representations can discriminate subtle differences in both style and perceptual quality (Sec. 3.2). ## 3 The Proposed Method Given an input image \(x_{A}\) in domain \(\mathcal{A}\) and an exemplar image \(y_{B}\) in domain \(\mathcal{B}\), our goal is to generate a target image \(x_{B}\) which preserves semantic structures in \(x_{A}\) but resembles the style of similar parts in \(y_{B}\). Fig. 2 shows an overview of our translation network \(\mathcal{G}\). Specially, we first align \(x_{A}\) and \(y_{B}\) to an intermediate feature space by encoders \(\mathcal{E}_{A}\) and \(\mathcal{E}_{B}\), respectively. Afterwards, we use a _Masked and Adaptive Transformer_ (MAT) for correspondence learning and feature augmentation. Finally, a decoder \(\mathcal{D}_{B}\) produces an output image \(\hat{x}_{B}\) based on the augmented features, as well as the source features and target style codes. Details are described below. 
### Masked and Adaptive Transformer (MAT)

In order to establish accurate cross-domain correspondence, we propose a novel and concise Transformer architecture, i.e. MAT. In general, the architecture of MAT (Fig. 3b) follows that of vanilla Transformers (Fig. 3a) [41]. Differently, we use masked attention to distinguish reliable and unreliable correspondence, instead of using multi-head attention. Besides, we use _Positional Normalization_ (PONO) [23] and an _Adaptive Convolution_ (AdaConv) block [28], instead of LN and the MLP-based FFN, respectively. MAT is desired to gradually concentrate on accurate matching, and to reliability-adaptively augment representations with contextual correlations.

Figure 2: Overview of our image translation network, MATEBIT.

Figure 3: Detailed architectures. (a) Vanilla Transformer block, (b) MAT block, (c) AdaConv block, and (d) _Masked Attention_.

**Masked Correspondence Learning.** Let \(\mathbf{X}_{A}\in\mathbb{R}^{H\times W\times C}\) and \(\mathbf{Y}_{B}\in\mathbb{R}^{H\times W\times C}\) be the representations of \(x_{A}\) and \(y_{B}\) in the intermediate feature space, with height \(H\), width \(W\), and \(C\) channels. We first map \(\mathbf{X}_{A}\) to the query \(\mathbf{Q}\in\mathbb{R}^{HW\times C}\), and \(\mathbf{Y}_{B}\) to the key \(\mathbf{K}\in\mathbb{R}^{HW\times C}\) and value \(\mathbf{V}\in\mathbb{R}^{HW\times C}\), by using \(1\times 1\) convolutions, respectively. As shown in Fig. 3d, we add positional encoding (PE) to \(\mathbf{X}_{A}\) and \(\mathbf{Y}_{B}\), for embedding spatial correlations. Afterwards, we learn the initial correspondence \(\mathbf{A}\in\mathbb{R}^{HW\times HW}\) following the Cosine attention mechanism [45], i.e. \[\mathbf{A}(u,v)=\frac{\tilde{\mathbf{Q}}(u)\tilde{\mathbf{K}}(v)^{T}}{||\tilde{\mathbf{Q}}(u)||\cdot||\tilde{\mathbf{K}}(v)||}, \tag{1}\] with \(\tilde{\mathbf{Q}}(u)=\mathbf{Q}(u)-\bar{\mathbf{Q}}(u)\) and \(\tilde{\mathbf{K}}(v)=\mathbf{K}(v)-\bar{\mathbf{K}}(v)\), where \(u,v\in[1,...,HW]\) are position indices; \(\bar{\mathbf{Q}}(u)\) and \(\bar{\mathbf{K}}(v)\) are the means of \(\mathbf{Q}(u)\) and \(\mathbf{K}(v)\), respectively. \(\mathbf{A}(u,v)\) is the matching score between \(\mathbf{Q}(u)\) and \(\mathbf{K}(v)\).

Previous methods [44, 45] typically use the initial correspondence map \(\mathbf{A}\) to reshuffle an exemplar for controlling local patterns in image synthesis. However, induced by the difficulties in cross-domain correspondence learning, \(\mathbf{A}\) involves unreliable match scores (Fig. 3d). As a result, the reshuffled image will lead to implausible artifacts in generated images. To combat this limitation, we distinguish initial matching scores as reliable or not, according to their signs [32]. The masked correspondence map becomes: \[\mathbf{A}_{mask}=\mathrm{ReLU}(\mathbf{A}). \tag{2}\] In DynaST [25], two networks are used to predict the reliability mask of correspondence. However, it's challenging to effectively train these networks, because there is no supervision on matching during training. In contrast, ReLU contains no learnable parameters and ultimately leads to superior performance over DynaST (Sec. 4.1).
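A minimal PyTorch sketch of this mechanism follows: the mean-centered cosine scores of Eq. (1), the ReLU mask of Eq. (2), and, for completeness, the sharpened-softmax warping of Eq. (3) introduced in the next subsection. Shapes and the toy inputs are illustrative; the \(1\times 1\) projections and positional encodings are omitted.

```python
import torch
import torch.nn.functional as F

def masked_attention(Q, K, V, alpha=100.0):
    """Q, K, V: (HW, C) flattened features (after 1x1 conv and PE).
    Returns the warped exemplar features X_cor and the mask A_mask."""
    Qc = Q - Q.mean(dim=1, keepdim=True)        # per-position centering, Eq. (1)
    Kc = K - K.mean(dim=1, keepdim=True)
    A = F.normalize(Qc, dim=1) @ F.normalize(Kc, dim=1).T   # cosine scores
    A_mask = F.relu(A)                          # Eq. (2): sign-based reliability
    X_cor = torch.softmax(alpha * A_mask, dim=1) @ V        # Eq. (3): warp V
    return X_cor, A_mask

HW, C = 64, 128                                 # e.g., an 8x8 feature map
Q, K, V = (torch.randn(HW, C) for _ in range(3))
X_cor, A_mask = masked_attention(Q, K, V)
print(X_cor.shape, (A_mask >= 0).all().item())
```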
**Reliability-Adaptive Feature Aggregation.** For regions with reliable correspondence in \(x_{A}\), we use \(\mathbf{A}_{mask}\) to warp the value features, \(\mathbf{V}\), derived from the exemplar: \[\mathbf{X}_{cor}=\tilde{\mathbf{A}}_{mask}\mathbf{V},\text{ with }\tilde{\mathbf{A}}_{mask}=\mathrm{softmax}(\alpha\cdot\mathbf{A}_{mask}), \tag{3}\] where \(\alpha\) is a scaling coefficient to control the sharpness of the softmax function. By default, we set its value as \(100\). For regions with unreliable correspondence in \(x_{A}\), \(\mathbf{X}_{cor}\) provides an average style representation of \(\mathbf{V}\). We further extract complementary information from the query, \(\mathbf{Q}\), derived from the input. Inspired by SPADE [35], we first transfer \(\mathbf{Q}\) to the target domain by using pixel-wise modulation parameters (i.e., \(\boldsymbol{\gamma}\) for scale and \(\boldsymbol{\beta}\) for bias) learned from \(x_{A}\). The modulation is formulated by: \[\mathbf{Q}_{norm}=\boldsymbol{\gamma}(x_{A})\frac{\mathbf{Q}-\mu(\mathbf{Q})}{\sigma(\mathbf{Q})}+\boldsymbol{\beta}(x_{A}), \tag{4}\] where \(\mu(\mathbf{Q})\) and \(\sigma(\mathbf{Q})\) are the mean value and standard deviation of \(\mathbf{Q}\). Afterwards, we select the translated features of unreliably corresponded regions in \(x_{A}\) by: \[\mathbf{X}_{uncor}=(1-\sum\nolimits_{j}\mathbf{A}_{mask})\odot\mathbf{Q}_{norm}, \tag{5}\] where the summation is along the second dimension; \(\odot\) denotes point-wise product with broadcasting. Since \(\boldsymbol{\gamma}\) and \(\boldsymbol{\beta}\) are learned from the input image \(x_{A}\), the modulated features preserve the semantic information of \(x_{A}\). Besides, constraints on the generated image will push the selected features to convey the style of \(y_{B}\). Ideally, \(\mathbf{X}_{cor}\) and \(\mathbf{X}_{uncor}\) would complement each other and facilitate both semantic consistency and style relevance in image generation. To this end, we integrate \(\mathbf{X}_{cor}\), \(\mathbf{X}_{uncor}\), and \(\mathbf{Q}\) by: \[\mathbf{X}_{agg}=\mathrm{PONO}(\mathbf{X}_{cor}+\mathbf{X}_{uncor}+\mathbf{Q}). \tag{6}\] In PONO [23], features at each position are normalized independently. Compared to LN in vanilla transformers and DynaST [25], PONO boosts the flexibility in reliability-adaptive feature aggregation.

**Context-Aware Feature Augmentation.** Inspired by ConvNeXT [28], we replace FFN by an AdaConv block to position-adaptively emphasize informative representations. Besides, we use the _coordinate attention_ (CoordAtten) module [12] to capture contextual correlations. The architecture of the AdaConv block is shown in Fig. 3(c). We first use the depthwise convolution (Dwise) to update representations in each channel separately, and then use two pointwise convolutions (Pwise) to automatically emphasize representations of interest, at every position. The _Gaussian Error Linear Unit_ (GELU) activation function and _Layer Norm_ (LN) are used after the first Pwise layer [28]. Notably, CoordAtten is used after the Dwise layer for modeling long-range dependencies in an image. Specifically, CoordAtten produces cross-channel and position-sensitive attention maps, which helps our model to more accurately locate the representations of interest [12]. Finally, the output of a MAT block is obtained with a residual connection, i.e. \(\mathbf{X}_{\mathrm{MAT}}=\mathrm{AdaConv}(\mathbf{X}_{agg})+\mathbf{X}_{agg}\).
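Continuing the sketch above, the following illustrates Eqs. (4)-(6). The \(1\times 1\) convolutions predicting \(\boldsymbol{\gamma}\) and \(\boldsymbol{\beta}\) from \(x_{A}\), and the instance-level statistics used in the modulation, are simplifying assumptions rather than the exact layers of the paper.

```python
import torch
import torch.nn as nn

class ReliabilityAdaptiveAggregation(nn.Module):
    """Sketch of Eqs. (4)-(6): SPADE-like modulation of the query, selection
    of unreliably matched positions, and PONO over the aggregate."""
    def __init__(self, c_in, c_feat):
        super().__init__()
        self.to_gamma = nn.Conv2d(c_in, c_feat, kernel_size=1)
        self.to_beta = nn.Conv2d(c_in, c_feat, kernel_size=1)

    def forward(self, Q, X_cor, A_mask, x_A):
        # Q, X_cor: (B, HW, C); A_mask: (B, HW, HW); x_A: (B, c_in, H, W)
        gamma = self.to_gamma(x_A).flatten(2).transpose(1, 2)   # (B, HW, C)
        beta = self.to_beta(x_A).flatten(2).transpose(1, 2)
        mu = Q.mean(dim=(1, 2), keepdim=True)
        sigma = Q.std(dim=(1, 2), keepdim=True) + 1e-5
        Q_norm = gamma * (Q - mu) / sigma + beta                # Eq. (4)
        reliability = A_mask.sum(dim=2, keepdim=True)           # (B, HW, 1)
        X_uncor = (1.0 - reliability) * Q_norm                  # Eq. (5)
        X_agg = X_cor + X_uncor + Q                             # Eq. (6)
        # PONO: normalize each position over its channels
        return (X_agg - X_agg.mean(dim=2, keepdim=True)) / (X_agg.std(dim=2, keepdim=True) + 1e-5)

B, H, W, C = 2, 8, 8, 64
block = ReliabilityAdaptiveAggregation(c_in=3, c_feat=C)
out = block(torch.randn(B, H * W, C), torch.randn(B, H * W, C),
            torch.relu(torch.randn(B, H * W, H * W)), torch.randn(B, 3, H, W))
print(out.shape)  # torch.Size([2, 64, 64])
```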
In the implementation, we stack three MAT blocks by default to gradually refine the correspondence and to augment informative representations (Fig. 1). Empirical verifications will be given in Sec. 4.2.

**Benefits of MAT.** Fig. 3d illustrates the impact of MAT. The query point is located over the left eye of the source image. Here we show the magnitudes of its correspondence over the exemplar, in the third layer of MAT. Obviously, the original correspondence \(\mathbf{A}\) covers both eyes of the exemplar. In contrast, the masked correspondence \(\mathbf{A}_{mask}\) accurately concentrates over the left eye. Such accuracy significantly boosts the quality of the final images.

### Contrastive Style Learning (CSL)

In MATEBIT, we use the encoder \(\mathcal{E}_{B}\) to extract local style information \(\mathbf{X}_{B}\), and then an MLP to extract global style codes \(\mathbf{z}\). \(\mathbf{X}_{B}\) and \(\mathbf{z}\) perform local and global style control on generated images, respectively (Sec. 3.3). To boost the discriminative capacity of style representations, as well as the quality of generated images, we propose a novel _contrastive style learning_ (CSL) method (Fig. 4).

Figure 4: Contrastive style learning. The memory bank consists of divergent low-quality images generated in early training stages.

In our settings, the exemplars are drawn by human artists and are thus considered high-quality. In contrast, the images generated in early training stages are typically low-quality. Inspired by the idea of contrastive learning [9], we use the exemplar \(y_{B}\) as the positive sample, and a collection of early generated images as negative samples. Let \(\mathbf{z}\) denote the style codes of the generated image \(\hat{x}_{B}\), \(\mathbf{z}^{+}\) those of the exemplar \(y_{B}\), and \(\{\mathbf{z}_{1}^{-},\mathbf{z}_{2}^{-},...,\mathbf{z}_{m}^{-}\}\) the style codes of \(m\) negative samples. CSL learns style representations by maximizing the mutual information between anchors and positive samples, while minimizing that between anchors and negative samples. Our contrastive style loss is computed by:

\[\mathcal{L}_{style}=-\log\frac{\exp(\frac{\mathbf{z}^{T}\mathbf{z}^{+}}{\tau})}{\exp(\frac{\mathbf{z}^{T}\mathbf{z}^{+}}{\tau})+\sum_{j=1}^{m}\exp(\frac{\mathbf{z}^{T}\mathbf{z}_{j}^{-}}{\tau})}, \tag{7}\]

where \(\tau=0.07\) and \(m=1024\). In the implementation, we use a queue to cache negative style vectors.

### Translation network

To boost both semantic consistency and style faithfulness, we additionally use source semantic features and global style codes for decoding an image. Specifically, we design our whole translation network following U-Net (Fig. 2), where the multi-level features in \(\mathcal{E}_{A}\) are skip-connected to the decoder \(\mathcal{D}_{B}\), for supplementing informative semantic structures of the input image \(x_{A}\). Besides, we use the style codes \(\mathbf{z}\) to globally control the style of generated images, in the manner of AdaIN [14]. Specifically, \(\mathbf{z}\) is mapped to channel-wise modulating factors by fully-connected (FC) layers. In this way, we diminish the impact of correspondence learning on image generation, and provide reliable style control even for unmatched regions. In summary, our translation network allows both local and global style control, and reuses the semantic features of input images. As a result, the generated image is expected to present semantics consistent with the input \(x_{A}\) and a style faithful to the exemplar \(y_{B}\).
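Eq. (7) is a standard InfoNCE objective over style codes and can be written compactly as a cross-entropy. A minimal sketch following this reading is given below; the variable names are assumed, with negatives drawn from the cached queue.

```python
import torch
import torch.nn.functional as F

def contrastive_style_loss(z, z_pos, z_negs, tau=0.07):
    """Eq. (7) as an InfoNCE cross-entropy. z: (B, D) codes of generated
    images; z_pos: (B, D) exemplar codes; z_negs: (m, D) cached negatives."""
    l_pos = (z * z_pos).sum(dim=1, keepdim=True) / tau   # (B, 1) positive logit
    l_neg = (z @ z_negs.t()) / tau                       # (B, m) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)               # equals Eq. (7)
```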
More details of our network are available in the supplementary material.

### Loss functions

Our whole network is end-to-end optimized to jointly achieve high-fidelity image generation and accurate correspondence. Following [45], we obtain training triplets \(\{x_{A},y_{B},x_{B}\}\) from the ready-made data pair \(\{x_{A},x_{B}\}\), where \(y_{B}\) is a geometrically warped version of \(x_{B}\). The generated image is denoted by \(\hat{x}_{B}=\mathcal{G}(x_{A},y_{B})\). Our loss functions are similar to [45], except for the contrastive style loss \(\mathcal{L}_{style}\) introduced above and the structural loss \(\mathcal{L}_{str}\) below.

**Semantic Alignment Loss.** For accurate cross-domain correspondence learning, the encoders \(\mathcal{E}_{A}\) and \(\mathcal{E}_{B}\) should align \(x_{A}\) and \(x_{B}\) to consistent representations. The corresponding semantic alignment loss is:

\[\mathcal{L}_{align}=\left\|\mathcal{E}_{A}(x_{A})-\mathcal{E}_{B}(x_{B})\right\|_{1}. \tag{8}\]

**Correspondence Loss.** Ideally, if we warp \(y_{B}\) in the same way as in Eq. (3), the resulting image should be exactly \(x_{B}\). We thus constrain the learned correspondence by:

\[\mathcal{L}_{corr}=\left\|\tilde{\mathbf{A}}_{mask}^{T}y_{B}\downarrow-x_{B}\downarrow\right\|_{1}, \tag{9}\]

where \(\downarrow\) indicates down-sampling \(y_{B}\) and \(x_{B}\) to the size (i.e. width and height) of \(\mathbf{X}_{A}\).

**Perceptual Loss.** The generated image \(\hat{x}_{B}\) should be consistent with the ground truth \(x_{B}\) in terms of semantics. We thus use the perceptual loss:

\[\mathcal{L}_{perc}=\left\|\varphi_{l}(\hat{x}_{B})-\varphi_{l}(x_{B})\right\|_{1}, \tag{10}\]

where \(\varphi_{l}\) denotes the activations after layer \(relu4\_2\) in pre-trained VGG19 [40], which represent high-level semantics.

**Contextual Loss.** In addition, the generated image should be in the same style as the exemplar. Besides the contrastive style loss (Eq. (7)), we additionally use the contextual loss (CX) [31] to constrain local style consistency. The contextual loss is computed by:

\[\mathcal{L}_{ctx}=-\log\left(\sum_{l}w_{l}\mathrm{CX}(\varphi_{l}(\hat{x}_{B}),\varphi_{l}(y_{B}))\right), \tag{11}\]

where \(w_{l}\) balances the terms of different VGG19 layers.

**Structural Loss.** The generated image should preserve the semantic structures of the input image. Correspondingly, we use the _Learned Perceptual Image Patch Similarity_ (LPIPS) [46] between their boundaries as the structural loss:

\[\mathcal{L}_{str}=\mathrm{LPIPS}(\mathcal{H}(\hat{x}_{B}),\mathcal{H}(x_{B})), \tag{12}\]

where \(\mathcal{H}\) is the HED algorithm [43], which has been widely used for extracting semantic boundaries in an image.

**Adversarial loss.** Finally, we add a discriminator \(\mathcal{D}\) to distinguish real images in domain \(\mathcal{B}\) from generated images [7]. The adversarial loss is:

\[\mathcal{L}_{adv}^{\mathcal{D}} =-\mathbb{E}[h(\mathcal{D}(y_{B}))]-\mathbb{E}[h(-\mathcal{D}(\hat{x}_{B}))], \tag{13}\]
\[\mathcal{L}_{adv}^{\mathcal{G}} =-\mathbb{E}[\mathcal{D}(\hat{x}_{B})],\]

where \(h(t)=\min(0,-1+t)\) is the hinge loss function [1].
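For reference, the hinge formulation in Eq. (13) reduces to the familiar ReLU margins on real and fake scores. A minimal sketch, assuming a callable discriminator \(\mathcal{D}\):

```python
import torch.nn.functional as F

def d_hinge_loss(D, y_B, x_hat):
    """Discriminator side of Eq. (13): -E[h(D(y_B))] - E[h(-D(x_hat))]
    with h(t) = min(0, -1 + t), i.e. ReLU margins on real/fake scores."""
    return F.relu(1.0 - D(y_B)).mean() + F.relu(1.0 + D(x_hat.detach())).mean()

def g_hinge_loss(D, x_hat):
    """Generator side of Eq. (13): -E[D(x_hat)]."""
    return -D(x_hat).mean()
```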
**Total loss.** In summary, our overall objective function is:

\[\min_{\mathcal{G}}\max_{\mathcal{D}}\ \lambda_{1}\mathcal{L}_{style}+\lambda_{2}\mathcal{L}_{align}+\lambda_{3}\mathcal{L}_{corr}+\lambda_{4}\mathcal{L}_{str}+\lambda_{5}(\mathcal{L}_{perc}+\mathcal{L}_{ctx})+\lambda_{6}(\mathcal{L}_{adv}^{\mathcal{G}}+\mathcal{L}_{adv}^{\mathcal{D}}), \tag{14}\]

where the \(\lambda_{i}\) denote the weighting parameters.

## 4 Experiments

**Implementation details.** We apply spectral normalization [33] to all layers in the translation network and the discriminator. We use the Adam [20] solver with \(\beta_{1}=0\) and \(\beta_{2}=0.999\). The learning rates for the generator and discriminator are set to \(10^{-4}\) and \(4\times 10^{-4}\), respectively, following TTUR [11]. The experiments are conducted on four 24 GB RTX 3090 GPUs. Limited by the computational load, we restrict the resolution of generated images to \(256\times 256\) in all translation tasks.

**Datasets.** We mainly conduct experiments on the following datasets. (1) **CelebA-HQ** [22] contains 30,000 facial photos. We choose 24,000 samples as the training set and 3,000 as the test set. (2) **Metfaces** [18] consists of 1,336 high-quality artistic facial portraits. (3) **AAHQ** [24] consists of high-quality facial avatars. We randomly select 1,500 samples for training and 1,000 samples for testing. (4) **Ukiyo-e** [36] consists of high-quality Ukiyo-e faces. We randomly select 3,000 and 1,000 samples for training and testing, respectively. (5) **Cartoon** [36] consists of 317 cartoon faces. (6) **DeepFashion** [27] consists of 800,000 fashion images. On CelebA-HQ, we connect the face landmarks to outline the face region, and use the Canny edge detector to detect edges in the background. On DeepFashion, we use the officially provided landmarks as input. On the other datasets, we use HED [43] to obtain semantic edges.

### Comparison with state-of-the-art

We select several advanced models, including SPADE [35], CoCosNet [45], CoCosNet-v2 [49], MCL-Net [44], and DynaST [25], for comparison. For a fair comparison, we retrain their models at resolution \(256\times 256\) under the same settings as ours.

**Quantitative evaluation.** We adopt several criteria to fully evaluate the generation results. (1) The _Fréchet Inception Distance_ (FID) [38] and _Sliced Wasserstein Distance_ (SWD) [17] are used to evaluate the perceptual quality of images. (2) To assess the style relevance and semantic consistency of translated images [45], we compute the _color_, _texture_, and _semantic_ metrics based on VGG19 [40]. Specifically, the cosine similarities between low-level features (i.e.
\(relu1\_2\) and \(relu2\_2\)) are used to measure _color_ and _texture_ relevance, respectively; the average cosine similarity between high-level features (i.e. \(relu3\_2\), \(relu4\_2\), and \(relu5\_2\)) measures the _semantic_ consistency.

Table 1: Comparison on the Metfaces [18], CelebA-HQ [22], Ukiyo-e [36], Cartoon [36], AAHQ [24], and DeepFashion [27] datasets.

| Method | CelebA-HQ FID \(\downarrow\) | SWD \(\downarrow\) | Texture \(\uparrow\) | Color \(\uparrow\) | Semantic \(\uparrow\) | Metfaces FID \(\downarrow\) | SWD \(\downarrow\) | Cartoon FID \(\downarrow\) | SWD \(\downarrow\) | Ukiyo-e FID \(\downarrow\) | SWD \(\downarrow\) | AAHQ FID \(\downarrow\) | SWD \(\downarrow\) | DeepFashion FID \(\downarrow\) | SWD \(\downarrow\) | Time (s) \(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SPADE [35] | 31.5 | 26.9 | 0.927 | 0.955 | 0.922 | 45.6 | 26.9 | 97.5 | 30.5 | 45.6 | 26.9 | 79.4 | 32.1 | 36.2 | 27.8 | 0.196 |
| CoCosNet [45] | 14.3 | 15.2 | 0.958 | 0.977 | 0.949 | 25.6 | 24.3 | 66.8 | 27.1 | 38.3 | 13.9 | 62.6 | 21.9 | 14.4 | 17.2 | 0.321 |
| CoCosNet-v2 [49] | 13.2 | 14.0 | 0.954 | 0.975 | 0.948 | **23.3** | 22.4 | 66.4 | 27.0 | 32.1 | **11.0** | 62.4 | 22.8 | 13.0 | 16.7 | 1.573 |
| MCL-Net [42] | 12.8 | 14.2 | 0.951 | 0.976 | **0.953** | 23.8 | 24.5 | 67.9 | 27.9 | 32.4 | 12.4 | 65.4 | 22.2 | 12.9 | 16.2 | 0.309 |
| DynaST [25] | 12.0 | **12.4** | 0.959 | 0.978 | 0.925 | 29.2 | 28.6 | **62.8** | **26.5** | 38.9 | 14.2 | 67.2 | 24.0 | 8.4 | 11.8 | 0.214 |
| MATEBIT (ours) | **11.5** | 13.2 | **0.966** | **0.986** | 0.949 | 26.0 | **19.1** | 64.4 | 27.6 | **30.3** | LLS | **56.0** | **19.5** | **8.2** | **10.0** | **0.185** |

Figure 5: Results on the Metfaces [18], CelebA-HQ [22], Ukiyo-e [36], Cartoon [36], AAHQ [24], and DeepFashion [27] datasets.

The quantitative comparison results are shown in Table 1. Compared to existing methods, our model consistently achieves superior or highly competitive performance across all the datasets. In particular, MATEBIT significantly improves the style relevance in both texture and color. On the complicated AAHQ dataset, which contains diverse styles of avatars, MATEBIT dramatically decreases both FID and SWD. Such superiority indicates that our generated images are of better perceptual quality and present appearance consistent with semantically similar parts of the exemplars. We additionally report the average time each method takes to generate an image. Our method shows the best efficiency and is significantly faster than previous methods.

**Qualitative comparison.** Fig. 5 illustrates images generated by different methods. Obviously, previous methods present geometric distortions, blurring artifacts, inconsistent colors, or identity inconsistency. In contrast, MATEBIT consistently produces appealing results; more results are shown in Fig. 6. Specifically, our results preserve the semantic structure of input images, and present appearance consistent with semantically similar regions in exemplars. Previous methods suffer serious degradations mainly due to matching errors in full correspondence learning.
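As a concrete reading of these metrics, a small PyTorch sketch is given below; the feature indices inside `vgg19().features` are our assumption about the standard torchvision layout, and variable names are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Assumed positions of relu1_2/relu2_2/relu3_2/relu4_2/relu5_2 in vgg19().features
LAYERS = {"relu1_2": 3, "relu2_2": 8, "relu3_2": 13, "relu4_2": 22, "relu5_2": 31}
FEATS = vgg19(weights="DEFAULT").features.eval()

@torch.no_grad()
def vgg_cosine(x, y, upto):
    """Mean cosine similarity between VGG19 activations of x and y at `upto`."""
    for i, layer in enumerate(FEATS):
        x, y = layer(x), layer(y)
        if i == upto:
            break
    return F.cosine_similarity(x.flatten(1), y.flatten(1)).mean().item()

# color/texture relevance: generated image vs. exemplar at low-level layers;
# semantic consistency: generated image vs. ground truth, averaged over
# high-level layers, e.g.:
# texture = vgg_cosine(gen, exemplar, LAYERS["relu2_2"])
# semantic = sum(vgg_cosine(gen, gt, LAYERS[k])
#                for k in ("relu3_2", "relu4_2", "relu5_2")) / 3
```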
In our method, we distinguish reliable from unreliable correspondence, and reduce the reliance of image generation on matching. As a result, our method stably transfers a source image to the target style of an exemplar.

### Ablation study

**Impacts of MAT.** We present a comprehensive analysis to justify the key component in our architecture, i.e. MAT. We modify our full model by (1) removing the MAT module (i.e. _w/o MAT_), (2) removing ReLU in MAT (i.e. _w/o ReLU_), (3) replacing MAT with three-layer full correspondence learning modules [45] (i.e. _Full Corr_), and (4) replacing AdaConv with the FFN [45] (i.e. _w/ FFN_). The results in Table 2 show that removing MAT or ReLU dramatically hurts the performance. Besides, using the full correspondence learning of [45] or using the FFN also significantly decreases the texture relevance and semantic consistency. Correspondingly, these model variants lead to inferior results in terms of textures or colors, compared to our full model (Fig. 8). Recalling the visualized correspondence in Fig. 1, our method learns remarkably accurate correspondence, which ultimately benefits the quality of generated images. In addition, Fig. 7 shows that both the semantic consistency and style realism broadly improve with the number of MAT blocks and peak at three. All these observations support our motivation that MAT gradually refines cross-domain correspondence and augments informative representations for generating high-quality images.

**Impacts of the style loss.** We observe that: (1) without \(\mathcal{L}_{style}\), although the generated images show high semantic consistency, they present low style relevance; (2) \(\mathcal{L}_{\mathrm{CAST}}\) benefits the style relevance, but leads to performance inferior to \(\mathcal{L}_{style}\). These comparison results meet our expectation that our CSL method enables the learned style codes to discriminate subtle divergences between images of different perceptual quality. Such discriminability pushes the network to generate high-quality images.

**Skip connections & global style control.** In MATEBIT, we use skip connections to supplement input semantic information. Removing the skip connections dramatically hurts the semantic consistency and the quantitative results. Besides, using the global style vector \(\mathbf{z}\) improves subtle details, e.g. the colors over the mouth, rings, and hairs.

In summary, MAT learns accurate correspondence and enables context-aware feature augmentation; contrastive style learning benefits the style control and high-quality image generation; and the U-Net architecture helps preserve semantic information. Ultimately, all these benefits make our model significantly outperform previous state-of-the-art methods in generating plausible images.

### Applications

**Artistic Portrait Generation.** A potential application of our method is transferring a facial photo into an artistic portrait, in the style of an exemplar. We apply the previously learned models to randomly selected facial photos from CelebA-HQ [22]. As illustrated in Fig. 9, our method can generate appealing portraits with consistent identity and faithful style appearance.

**Chinese Ink Painting Generation.** To verify the capacity of our model in generating complex images, we additionally apply it to generate Chinese ink paintings. Specifically, we collect paintings of landscapes and facial portraits from the web, and then train and test our model on each subset, respectively. Fig.
10 illustrates the results of painting generation and photo-to-painting translation. The generated images show remarkably high quality. Besides, our model successfully captures subtle differences between exemplars, demonstrating its remarkable capacity for style control.

## 5 Conclusions

This paper presents a novel exemplar-guided image translation method, dubbed MATEBIT. Both quantitative and qualitative experiments show that MATEBIT is capable of generating high-fidelity images in a number of tasks. Besides, ablation studies demonstrate the effectiveness of MAT and contrastive style learning. Despite such achievements, the artistic portraits transferred from facial photos (Fig. 9) are inferior to those shown in Fig. 6. This may be due to subtle differences in edge maps between photos and artistic paintings. In the near future, we will explore solving this issue via semi-supervised learning or domain transfer techniques.

Figure 8: Comparison of images generated by different variants of our model, on Metfaces [18].

Figure 10: Chinese ink painting generation (1st & 3rd rows), as well as photo-to-painting translation (2nd & 4th rows).

Figure 9: Our method can transfer facial photos to artistic portraits in the style of exemplars.
2309.02213
Bayesian Bi-clustering of Neural Spiking Activity with Latent Structures
Modern neural recording techniques allow neuroscientists to obtain spiking activity of multiple neurons from different brain regions over long time periods, which requires new statistical methods to be developed for understanding the structure of the large-scale data. In this paper, we develop a bi-clustering method to cluster the neural spiking activity spatially and temporally, according to their low-dimensional latent structures. The spatial (neuron) clusters are defined by the latent trajectories within each neural population, while the temporal (state) clusters are defined by (populationally) synchronous local linear dynamics shared across different periods. To flexibly extract the bi-clustering structure, we build the model non-parametrically, and develop an efficient Markov chain Monte Carlo (MCMC) algorithm to sample the posterior distributions of model parameters. Validating our proposed MCMC algorithm through simulations, we find the method can recover unknown parameters and true bi-clustering structures successfully. We then apply the proposed bi-clustering method to multi-regional neural recordings under different experiment settings, where we find that simultaneously considering latent trajectories and spatial-temporal clustering structures can provide us with a more accurate and interpretable result. Overall, the proposed method provides scientific insights for large-scale (counting) time series with elongated recording periods, and it can potentially have applications beyond neuroscience.
Ganchao Wei
2023-09-05T13:19:50Z
http://arxiv.org/abs/2309.02213v3
# Bayesian Bi-clustering of Neural Spiking Activity

###### Abstract

Modern neural recording techniques allow neuroscientists to obtain spiking activity of multiple neurons from different brain regions over long time periods. This requires new statistical methods to be developed for understanding the structure of the large-scale data, in terms of both neuron number and recording duration. In this paper, we develop a bi-clustering method to cluster the neural spiking activity spatially and temporally, according to their low-dimensional latent structures. The spatial (neuron) clusters are defined by the latent trajectories within each neural population, while the temporal (state) clusters are defined by the local linear dynamics shared across the population. To flexibly extract the bi-clustering structure, we build the model non-parametrically, and develop an efficient Markov chain Monte Carlo (MCMC) algorithm to sample the posterior distributions of model parameters. Validating our proposed MCMC algorithm through simulations, we find that the method can recover unknown parameters and true bi-clustering structures successfully. We then apply the proposed bi-clustering method to multi-regional neural recordings under different experiment settings, where we find that simultaneously considering latent trajectories and spatial-temporal clustering structures can provide us with a more accurate and interpretable result. Overall, the proposed method provides scientific insights for large-scale (counting) time series with elongated recording periods, and it can potentially have applications beyond neuroscience.

## 1 Introduction

In neuroscience, identifying types of neurons is a longstanding challenge (Nelson et al., 2006; Bota and Swanson, 2007; Zeng, 2022). Some criteria based on features such as anatomical regions, genomics and synaptic connectivity have been proposed, and there are some Bayesian approaches to integrate these features (Jonas and Kording, 2015). On the other hand, the response pattern and interactions between neural populations may change over time, especially when the experiment stimuli change (Pooresmazeili et al., 2014; Oemisch et al., 2015; Ruff and Cohen, 2016; Steinmetz et al., 2019; Cowley et al., 2020). However, these complex dynamical observations can often be broken down into simpler units, and it can be appropriate to assume static linear dynamics within chunks of the data. Moreover, it is usually appropriate and helpful to assume that similar linear dynamics can be shared by different epochs (Zoltowski et al., 2020; Glaser et al., 2020). Here, we consider the problem of how to identify both spatial and temporal clusters of neural spikes.

Modern techniques such as high-density probes (Jun et al., 2017; Steinmetz et al., 2021; Marshall et al., 2022) allow us to obtain large-scale multi-electrode recordings from hundreds to thousands of neurons across different anatomical regions over an elongated session. Several models have been developed to extract shared latent structures from simultaneous neural recordings, assuming that the activity of all recorded neurons can be described through common low-dimensional latent states (Cunningham and Yu, 2014; Gao et al., 2017). These approaches have proven useful in summarizing and interpreting high-dimensional population activity.
Inferred low-dimensional latent states can provide insight into the representation of task variables (Churchland et al., 2012; Mante et al., 2013; Cunningham and Yu, 2014; Saxena and Cunningham, 2019) and the dynamics of the population itself (Vyas et al., 2020). Many existing approaches are based on the linear dynamical system (LDS) model (Macke et al., 2011), which is built on the state-space model and assumes that latent factors evolve with linear dynamics. Although assuming static linear dynamics over time can be valid in some tasks and in small chunks of an experiment, the assumption is not generally appropriate. To tackle nonlinear dynamics, some variants of the LDS, such as the switching-LDS (SLDS, Ghahramani and Hinton (2000); Fox (2009); Fox et al. (2008); Murphy (2012)) and the recurrent-SLDS (RSLDS, Linderman et al. (2017, 2019)), have been proposed. The non-parametric Gaussian process factor analysis (GPFA) model (Yu et al., 2009) and its variants provide a more flexible way to model nonlinear neural data, although most of these methods assume independent GPs and do not allow for interactions between latent factors. Recently, Cai et al. (2023) proposed a dependent GP method using the kernel convolution framework (KCF, Boyle and Frean (2004); Alvarez and Lawrence (2011); Sofro et al. (2017)), although their method may not be scalable to elongated neural recordings.

Several methods have been developed and implemented to analyze multiple neural populations and their interactions (Semedo et al., 2019; Glaser et al., 2020), as the interactions may occur in a low-dimensional subspace (Stavisky et al., 2017; Kaufman et al., 2014; Semedo et al., 2019). However, the neural populations are pre-specified, and the total number of clusters and the cluster memberships are generally not evaluated systematically. Some methods like the mixPLDS (Buesing et al., 2014) and the recent mixDPFA method (Wei et al., 2022, 2023) try to cluster neurons according to their latent structures, by using a mixture of LDS models. These approaches provide a more interpretable and accurate way to cluster neurons, and may be useful for identifying "functional populations" of neurons. However, these methods assume static linear dynamics and do not allow for interactions between neural populations, which can limit their usage, and may bias, or even prevent, the detection of neural populations when considering elongated recording sessions. On the other hand, for clustering structures in terms of temporal states, most methods are developed based on the SLDS, by modeling the nonlinear dynamics with local linear patterns. Instead of clustering based on linear dynamics, D'Angelo et al. (2023) recently tried to cluster the experiment periods based on the distributions of spiking amplitudes, using a nested formulation of the mixture of finite mixtures model (MFMM), i.e., exploiting the generalized MFMM (gMFMM, Fruhwirth-Schnatter et al. (2021)) prior with the common atom model (Denti et al., 2023).

In this research, we develop a bi-clustering method to cluster neural spikes both spatially (to give subject clusters) and temporally (to give state clusters), according to the latent structures of these neurons (Figure 1A). The neural population is defined via private low-dimensional latent trajectories, as in the mixPLDS (Buesing et al., 2014) or mixDPFA (Wei et al., 2022, 2023). For the state clusters, we assume the linear dynamics can be shared across different chunks, and the state clustering structures are defined in a local linear manner, as in the (R)SLDS.
Neurons in each population are assumed to have the same private latent trajectories, while all time series are assumed to switch between different states synchronously, so as to use the information from all observations. Besides extending previous clustering methods like the mixDPFA to simultaneously detect state clusters, the proposed bi-clustering method also allows for interactions between neural populations and non-stationary dynamics of the neural response, using a similar idea as Glaser et al. (2020). Simultaneously considering all these effects in the proposed bi-clustering method is necessary, since incorrect population assignments can lead to biased and inconsistent inference on the latent structure (Ventura, 2009). On the other hand, this flexibility allows for more accurate estimates of the latent trajectories, and hence leads to more accurate estimates of the subject clustering structure.

To flexibly infer the bi-clustering structure, we model it non-parametrically, so that we do not need to prespecify the numbers of subject and state clusters. Specifically, the subject clustering structure is modeled by a mixture of finite mixtures model (MFMM, Miller and Harrison (2018)) of latent trajectories, and the state clustering structure is modeled by a sticky Hierarchical Dirichlet Process Hidden Markov Model (sticky-HDP-HMM, Fox et al. (2008b)). The posteriors of the model parameters are sampled using an efficient Markov chain Monte Carlo (MCMC) algorithm, where the Polya-Gamma data augmentation technique (Polson et al., 2013) is used to handle the counting observations of neural spiking data.

The rest of this paper is structured as follows. In Section 2, we introduce the bi-clustering method for time series with counting observations, and provide brief explanations of the MCMC algorithm used to sample the posterior distributions of parameters. After validating the proposed bi-clustering method on a synthetic dataset in Section 3, we apply our method to analyze multi-regional experimental recordings from a behaving mouse under different experiment settings in Section 4. Finally, in Section 5, we conclude with some final remarks and highlight some potential extensions of our current model for future research.

## 2 Bi-clustering Model for Neural Spikes

In this section, we introduce our bi-clustering model for neural spiking activity, i.e., time series data with counting observations. The goal of the proposed model is to cluster neural spikes both spatially (to give subject clusters) and temporally (to give state clusters), based on the multi-population and multi-state latent structures. To flexibly capture the clustering structures, we build the model non-parametrically. A graphical representation of the model is summarized in Figure 1B. After introducing the model, we briefly describe how we use an MCMC algorithm to infer the model parameters.

### Multi-population and -state Linear Dynamic Model

Assume we can observe the spiking activity of \(N\) neurons up to recording length \(T\). Denote the number of counts for neuron \(i\in\{1,\ldots,N\}\) at time bin \(t\in\{1,\ldots,T\}\) as \(y_{it}\in\mathbb{Z}_{\geq 0}\), and the cluster indicator of subject \(i\) as \(z_{i}\) (i.e. the "subject indicator").
Assume the count \(y_{it}\) follows a negative-binomial distribution, where the log-mean response is modeled by a linear combination of the subject baseline \(d_{i}\), the population baseline \(\mu_{t}^{(z_{i})}\) and the \(p\)-dimensional latent factor \(\mathbf{x}_{t}^{(z_{i})}\in\mathbb{R}^{p}\) (here we assume all populations have the same latent dimension for convenience). In other words, the observation equation is as follows:

\[y_{it} \sim\text{NB}(r_{i},\mu_{it}) \tag{1}\]
\[\log\mu_{it} =d_{i}+\mu_{t}^{(z_{i})}+\mathbf{c}_{i}^{\prime}\mathbf{x}_{t}^{(z_{i})}\]

, where \(\text{NB}(r,\mu)\) denotes the negative-binomial (NB) distribution with mean \(\mu\) and variance \(\mu+\mu^{2}/r\), and \(\mathbf{c}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{p})\). The NB distribution can be replaced by a Poisson distribution when it is appropriate to assume equi-dispersion, for ease of model inference. If we further denote \(\tilde{\mathbf{x}}_{t}^{(j)}=(\mu_{t}^{(j)},\mathbf{x}_{t}^{\prime(j)})^{\prime}\) and \(\tilde{\mathbf{c}}_{i}=(1,\mathbf{c}_{i}^{\prime})^{\prime}\), then \(\log\mu_{it}=d_{i}+\tilde{\mathbf{c}}_{i}^{\prime}\tilde{\mathbf{x}}_{t}^{(z_{i})}\). To simplify notation, unless otherwise specified, we refer to \(\tilde{\mathbf{x}}_{t}^{(j)}\) as the "latent factor" of cluster \(j\), which also includes the population baseline.

Figure 1: **Model overview.** **A**. The goal of our proposed model is to do clustering both spatially and temporally (i.e. "bi-clustering") for neural spike data (time series data with counting observations), according to their latent structures. The neural spiking counts are determined by low-dimensional latent factors, specific to the spatial (subject) clustering assignment (e.g. green, blue and red). On the other hand, all time series are assumed to switch between states synchronously, and are temporally clustered according to different states of linear dynamics (e.g. gray and white). **B**. Graphical model of the proposed bi-clustering model. All prior parameters are summarized as \(\theta\), and parameters such as \(d_{i}\) and \(\mathbf{c}_{i}\) are dropped for simplicity.

Although each neural population is modeled with private latent factors, there usually exist some interactions between clusters (Musall et al., 2019; Stringer et al., 2019), and these interactions can change over time (Ruff and Cohen, 2016; Steinmetz et al., 2019; Cowley et al., 2020), especially when the external condition changes. On the other hand, the interactions between neural populations and the common inputs received by all neurons suggest that neurons in different clusters may synchronize their response states over time. Therefore, to allow for interactions between populations and to model the synchronous state switching, we stack the latent factors of all clusters together, and assume all latent factors evolve in a conditionally linear manner, given the discrete latent states \(\xi_{t}\) shared across clusters, as in Glaser et al. (2020). In other words, the state clustering structure is defined by the local linear dynamics of the latent factors, assuming that complex dynamics can be decomposed into simple linear units and that small chunks of the neural response can be sufficiently described by an LDS model. Specifically, assuming there are \(k\) unique clusters, i.e., \(|\{z_{i}\}_{i=1}^{N}|=k\), the cluster-stacked latent factors (including population baselines) are denoted as \(\tilde{\mathbf{X}}_{t}=(\tilde{\mathbf{x}}_{t}^{\prime(1)},\ldots,\tilde{\mathbf{x}}_{t}^{\prime(k)})^{\prime}\in\mathbb{R}^{k(p+1)}\).
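For concreteness, a small NumPy sketch of drawing counts from observation equation (1) is given below; the parametrization maps the mean \(\mu\) and dispersion \(r\) to NumPy's \((n,p)\) convention, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_counts(d, C_tilde, x_tilde, r):
    """Draw y_it ~ NB(r_i, mu_it) with log mu_it = d_i + c~_i' x~_t (Eq. 1).
    d: (N,) baselines, C_tilde: (N, p+1) loadings, x_tilde: (p+1, T), r: (N,)."""
    mu = np.exp(d[:, None] + C_tilde @ x_tilde)   # (N, T) mean responses
    # NumPy's NB(n, p) has mean n(1-p)/p, so mean mu corresponds to p = r/(r+mu)
    return rng.negative_binomial(r[:, None], r[:, None] / (r[:, None] + mu))
```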
To capture the temporal dynamics of the data, we further put an AR(1) structure on the latent factors \(\tilde{\mathbf{X}}_{t}\). In other words, given the discrete latent state \(\xi_{t}\) at time \(t\) (i.e. the "state indicator", shared across subject clusters), \(\tilde{\mathbf{X}}_{t}\) is assumed to evolve linearly with Gaussian noise as follows:

\[\tilde{\mathbf{X}}_{t+1}=\mathbf{b}_{\xi_{t}}+\mathbf{A}_{\xi_{t}}\tilde{\mathbf{X}}_{t}+\mathbf{\epsilon}_{\xi_{t}} \tag{2}\]

, where \(\mathbf{\epsilon}_{\xi_{t}}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{\xi_{t}})\) and \(\tilde{\mathbf{X}}_{1}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{k(p+1)})\). To make the model identifiable, we further assume that 1) \(\tilde{\mathbf{x}}_{1:T}^{(j)}\) is zero-centered, i.e. \(\sum_{t=1}^{T}\tilde{\mathbf{x}}_{t}^{(j)}=\mathbf{0}\), and 2) \(\tilde{\mathbf{x}}_{1:T}^{(j)}\tilde{\mathbf{x}}_{1:T}^{\prime(j)}\) is diagonal, where \(\tilde{\mathbf{x}}_{1:T}^{(j)}=(\tilde{\mathbf{x}}_{1}^{(j)},\ldots,\tilde{\mathbf{x}}_{T}^{(j)})\in\mathbb{R}^{(p+1)\times T}\) (Fokoue and Titterington, 2003).

In summary, given that neuron \(i\) belongs to cluster \(z_{i}=j\), the counting series is generated by the negative-binomial linear model \(\mathcal{M}\) defined in (1) as \((y_{i1},\ldots,y_{iT})^{\prime}\sim\mathcal{M}(d_{i},\mathbf{c}_{i},\tilde{\mathbf{x}}_{1:T}^{(j)})\), where the prior for \(\tilde{\mathbf{x}}_{1:T}^{(j)}\) is denoted as \(\mathcal{H}\). The within- and between-population linear dynamics at the \(t\)-th step are captured by the dynamical parameters \((\mathbf{b}_{\xi_{t}},\mathbf{A}_{\xi_{t}},\mathbf{Q}_{\xi_{t}})\), where \(\xi_{t}\) is the state indicator at \(t\), and the prior of \((\mathbf{b}_{l},\mathbf{A}_{l},\mathbf{Q}_{l})\) is denoted as \(\mathcal{S}\). To cluster both spatially (subject clusters) and temporally (state clusters) in a flexible way, we model each of these two clustering structures non-parametrically as follows.

### Subject Clustering Model

For the subject clustering structure, the number of neural populations should be finite but unknown. Therefore, we put a prior on the number of subject clusters \(|\{z_{i}\}_{i=1}^{N}|=k\) as in Wei et al. (2023), which leads to the mixture of finite mixtures model (MFMM) as follows:

\[\begin{array}{ll}K\sim f_{k},&f_{k}\;\text{is a p.m.f. on}\;\{1,2,\ldots\},\\ \boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{k})\sim\operatorname{Dir}_{k}(\gamma,\ldots,\gamma)&\text{given}\;K=k,\\ z_{1},\ldots,z_{N}\stackrel{{ i.i.d.}}{{\sim}}\boldsymbol{\pi}&\text{given}\;\boldsymbol{\pi},\\ \tilde{\boldsymbol{x}}_{1:T}^{(1)},\ldots,\tilde{\boldsymbol{x}}_{1:T}^{(k)}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{H}&\text{given}\;k,\\ (y_{i1},\ldots,y_{iT})^{\prime}\sim\mathcal{M}(d_{i},\boldsymbol{c}_{i},\tilde{\boldsymbol{x}}_{1:T}^{(z_{i})})&\text{given}\;d_{i},\boldsymbol{c}_{i},\tilde{\boldsymbol{x}}_{1:T}^{(z_{i})},z_{i},\text{for}\;i=1,\ldots,N,\end{array} \tag{3}\]

where p.m.f. denotes the probability mass function. By using the MFMM, we can integrate field knowledge about the number of clusters into our analysis by specifying \(f_{k}\). In the analyses of this paper, we assume \(k\) follows a geometric distribution, i.e., \(k\sim\text{Geometric}(\zeta)\) with p.m.f. \(f_{k}(k\mid\zeta)=(1-\zeta)^{k-1}\zeta\) for \(k=1,2,\ldots\), and we set \(\gamma=1\).
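A minimal simulation of the switching dynamics in Eq. (2), given a state sequence \(\xi_{1:T}\), could look as follows; the names and shapes are our illustration.

```python
import numpy as np

def simulate_latents(b, A, Q, xi, x1, rng):
    """Simulate stacked factors per Eq. (2):
    X~_{t+1} = b_{xi_t} + A_{xi_t} X~_t + eps_{xi_t}, eps ~ N(0, Q_{xi_t}).
    b: (L, D), A: (L, D, D), Q: (L, D, D), xi: (T,) states, x1: (D,) start."""
    T, D = len(xi), len(x1)
    X = np.zeros((T, D))
    X[0] = x1
    for t in range(T - 1):
        l = xi[t]
        X[t + 1] = b[l] + A[l] @ X[t] + rng.multivariate_normal(np.zeros(D), Q[l])
    return X
```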
For general use of the proposed method in problems where the number of subject clusters can potentially grow to infinity, a mixture model such as the Dirichlet process mixture model (DPMM) may be conceptually more appropriate. See Miller and Harrison (2018) for a more detailed discussion.

### State Clustering Model

For the state clustering structure, as the number of states can potentially grow to infinity, we model the discrete state \(\xi_{t}\) by a sticky Hierarchical Dirichlet Process Hidden Markov Model (sticky-HDP-HMM) proposed by Fox et al. (2008b) as follows:

\[\begin{array}{ll}\beta\sim\text{GEM}(\eta),\\ \psi_{l}\stackrel{{ i.i.d.}}{{\sim}}\operatorname{DP}\Big{(}\alpha+m,\frac{\alpha\beta+m\delta_{l}}{\alpha+m}\Big{)},\\ \xi_{t}\sim\psi_{\xi_{t-1}},\\ (\boldsymbol{b}_{l},\boldsymbol{A}_{l},\boldsymbol{Q}_{l})\stackrel{{ i.i.d.}}{{\sim}}\mathcal{S},&\text{for}\;l=1,2,\ldots,\\ \tilde{\boldsymbol{X}}_{t+1}\sim\mathcal{N}(\boldsymbol{b}_{\xi_{t}}+\boldsymbol{A}_{\xi_{t}}\tilde{\boldsymbol{X}}_{t},\boldsymbol{Q}_{\xi_{t}})&\text{for}\;t=1,\ldots,T,\end{array} \tag{4}\]

where GEM denotes the stick-breaking process, \(\delta_{l}\) denotes the point mass at index \(l\), and DP denotes the Dirichlet process. The sticky-HDP-HMM extends the HDP-HMM with a "sticky" parameter \(m>0\) to encourage longer state durations, and hence can handle the rapid-switching problem to some degree. More careful methods for modeling the state duration and state transitions are discussed further in Section 5.

### Model Inference

We do Bayesian inference on the proposed bi-clustering model by an efficient MCMC algorithm. In each sampling iteration, there are four key steps: 1) sample the dynamical latent factors \(\tilde{\mathbf{x}}_{1:T}^{(z_{i})}\); 2) sample the remaining subject-specific parameters in observation equation (1), including the dispersion \(r_{i}\), the subject baseline \(d_{i}\) and the factor loading \(\tilde{\mathbf{c}}_{i}\); 3) sample the temporal states \(\xi_{t}\) and the corresponding dynamical parameters \(\mathbf{b}_{l},\mathbf{A}_{l},\mathbf{Q}_{l}\) for each sampled state; and 4) sample the subject cluster indices \(z_{i}\). The details of the sampling procedures can be found in appendix Section A; we briefly introduce the key sampling method for each step here.

In step 1), the full conditional distribution of the latent factors \(\tilde{\mathbf{x}}_{1:T}^{(j)}\) is equivalent to the posterior distribution of a negative-binomial dynamic GLM (NB-DGLM), which has no closed form. However, the NB distribution falls within the Polya-Gamma (PG) augmentation scheme (Polson et al., 2013; Windle et al., 2013; Linderman et al., 2016), and therefore we can sample it in closed form by introducing PG augmented variables. Conditioning on the auxiliary variables \(\omega_{it}\), the transformed "effective" observations \(\hat{y}_{it}\) have a Gaussian likelihood, and hence we can sample the posterior of \(\tilde{\mathbf{x}}_{1:T}^{(j)}\) using the forward-filtering-backward-sampling (FFBS, Carter and Kohn (1994); Fruhwirth-Schnatter (1994)) algorithm. For the Poisson observation model, we can treat the data as coming from the NB distribution, use the samples as proposals, and add one more Metropolis-Hastings (MH) step to accept or reject the proposal. In the Poisson case, the dispersion \(r_{i}\) becomes a tuning parameter used to achieve a desirable acceptance rate (Wei et al., 2022, 2023).
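To illustrate step 1), the sketch below implements a generic FFBS pass for a linear-Gaussian state-space model; in the actual sampler, the observations would be the PG-transformed effective observations \(\hat{y}_{it}\) with heteroscedastic noise, which we simplify here to a fixed \(\mathbf{R}\).

```python
import numpy as np

def ffbs(y, C, R, A, b, Q, m0, P0, rng):
    """One posterior draw of x_{1:T} for y_t = C x_t + N(0, R),
    x_{t+1} = b + A x_t + N(0, Q): Kalman filter forward, sample backward."""
    T, p = len(y), len(m0)
    ms, Ps = np.zeros((T, p)), np.zeros((T, p, p))
    m, P = m0, P0
    for t in range(T):                              # forward filtering
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        m, P = m + K @ (y[t] - C @ m), P - K @ C @ P
        ms[t], Ps[t] = m, P
        if t < T - 1:                               # one-step prediction
            m, P = b + A @ m, A @ P @ A.T + Q
    x = np.zeros((T, p))
    x[-1] = rng.multivariate_normal(ms[-1], Ps[-1])
    for t in range(T - 2, -1, -1):                  # backward sampling
        J = Ps[t] @ A.T @ np.linalg.inv(A @ Ps[t] @ A.T + Q)
        mean = ms[t] + J @ (x[t + 1] - (b + A @ ms[t]))
        cov = Ps[t] - J @ A @ Ps[t]
        x[t] = rng.multivariate_normal(mean, cov)
    return x
```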
In step 2), the sampling of \(d_{i}\) and \(\tilde{\mathbf{c}}_{i}\) is a regular NB regression problem, and we again use the PG data augmentation technique to sample them. The dispersion parameter \(r_{i}\) is updated via a Gibbs sampler, using the method described in Zhou et al. (2012), as the gamma distribution is the conjugate prior for \(r_{i}\) under the compound Poisson representation. In step 3), the discrete states \(\xi_{t}\) are sampled by a weak-limit Gibbs sampler for the sticky-HDP-HMM, as in Fox et al. (2008b). The weak-limit sampler constructs a finite approximation to the HDP transition prior with finite Dirichlet distributions, as the infinite limit converges in distribution to a true HDP. Given the latent factors \(\tilde{\mathbf{x}}_{1:T}^{(j)}\) and the state indicators \(\xi_{t}\), we can update the dynamical parameters \((\mathbf{b}_{l},\mathbf{A}_{l},\mathbf{Q}_{l})\) for each state separately in closed form. In step 4), given the parameters in observation equation (1), we sample the subject cluster indices \(z_{i}\) using the algorithm for the MFMM proposed by Miller and Harrison (2018), which is analogous to the partition-based algorithm for the DPMM (Neal, 2000). When sampling the clustering assignments \(z_{i}\) for such high-dimensional time series with large \(T\), if we evaluate the full likelihood given samples of \(\mathbf{c}_{i}\), as in Gaussian MFA (Fokoue and Titterington, 2003), the chain has very poor mixing. Instead, we evaluate the marginalized likelihood by integrating out the subject-specific loading \(\mathbf{c}_{i}\), as in Wei et al. (2023). The marginalized likelihood is evaluated by a Laplace approximation.

The Python implementation of the NB and Poisson bi-clustering models is available at [https://github.com/weigcdsb/bi_clustering](https://github.com/weigcdsb/bi_clustering), and additional details of the MCMC sampling can be found in appendix Section A.

## 3 Simulation

To validate and illustrate the proposed bi-clustering method, we simulate neural spikes from the NB bi-clustering generative model defined in equations (1) and (2). In this simulation, we generate 3 clusters with 10 neurons in each cluster (\(N=30\) in total). The recording length is \(T=500\), and the dimensions of \(\mathbf{x}_{1:T}^{(j)}\) are all \(p=2\). For each neuron, the individual baseline is generated by \(d_{i}\sim N(0,0.5^{2})\), the factor loading is generated by \(\mathbf{c}_{i}\sim N(\mathbf{0},\mathbf{I}_{2})\), and the dispersions are all \(r_{i}=10\). The latent factors of the three clusters \(\{\tilde{\mathbf{x}}_{1:T}^{(j)}\}_{j=1}^{3}\) are generated from two discrete states, and the state indicator \(\xi_{t}\in\{1,2\}\) is generated from a semi-Markov chain (Sansom and Thomson, 2001; Yu, 2010), to encourage longer state durations. These states correspond to two sets of linear dynamics: 1) an independent state, where \(\mathbf{A}\in\mathbb{R}^{9\times 9}\) is diagonal, and 2) an interactive state, where \(\mathbf{A}\) is a random rotation of an orthogonal matrix, and hence there are interactions between clusters. The bias term is \(\mathbf{b}=\mathbf{0}\) and the noise covariance is \(\mathbf{Q}=\mathbf{I}_{9}\cdot 10^{-2}\) for both states.

We then apply the proposed bi-clustering method to the simulated data. Although a formal analysis would require running a longer MCMC chain, we show results from a short chain (1000 iterations) here to illustrate the method. The MCMC chain starts from 1 subject cluster and 6 uniformly distributed temporal states.
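The simulated state indicator can be generated with a simple semi-Markov construction like the one below; the Poisson duration law and its mean are hypothetical choices for illustration, as the duration distribution is not specified here.

```python
import numpy as np

def semi_markov_states(T, n_states=2, mean_dur=50, seed=1):
    """Generate xi_1:T by alternating states with random dwell times."""
    rng = np.random.default_rng(seed)
    xi, s = [], 0
    while len(xi) < T:
        dur = 1 + rng.poisson(mean_dur - 1)        # dwell time in state s
        xi.extend([s] * dur)
        s = rng.choice([k for k in range(n_states) if k != s])
    return np.array(xi[:T])
```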
The results shown here summarize the posterior samples from iteration 250 to 1000. First, to evaluate the inferred clustering structures, we check the similarity matrices for both the state (Figure 3A) and subject clusters (Figure 3B). Entry \((i,j)\) of a similarity matrix is the posterior probability that data points \(i\) and \(j\) belong to the same cluster. Both the state and subject similarity matrices are sorted according to the true clustering assignments; hence, if the algorithm recovers the simulated cluster structures, the diagonal blocks will have high posterior probability, which is the pattern shown in Figures 3A and 3B. The histograms of posterior samples (Figure 3C) show that our method can successfully recover the numbers of subject and state clusters. To represent the temporal clustering structure more intuitively, we also provide single point estimates of \(\xi_{t}\) (Figure 3D) obtained by maximizing the posterior expected adjusted Rand index (maxPEAR, Fritsch and Ickstadt (2009)), which performs better than other point estimates such as the MAP estimate. All these results show that we can successfully recover the clustering structures in both the spatial and temporal dimensions, including the numbers of clusters.

On the other hand, an advantage of the proposed bi-clustering method is that it simultaneously provides unbiased estimates of the latent trajectories for each cluster, which can be very helpful for scientific interpretation and insights. For example, here we show the latent trajectories for the cluster that most of neurons 1-5 belong to in Figure 3E (subjects 1-5 also form a maxPEAR subject cluster). The fitting results show that simultaneously considering the subject and state clustering structures is necessary for estimating the latent structures.

We also show the performance of the Poisson bi-clustering model (i.e., replacing the NB distribution in observation equation (1) with a Poisson distribution), to illustrate the usage of the "simplified version" of the model. The clustering results are summarized by similarity matrices (Figure 3F) and the maxPEAR estimate of the states (the third bar in Figure 3D). Since the over-dispersion is not severe in this simulation example (\(r_{i}=10\)), the Poisson version can also recover the true clustering structures, but with some bias in the estimation of the latent trajectories. However, for data with large over-dispersion (which is common in real data), the incorrect equi-dispersion assumption will strongly distort the clustering structures, and it would be necessary to use the more flexible NB bi-clustering model.

Figure 2: **Simulations.** Here, we show the results for posterior samples from iteration 250 to 1000 for each MCMC chain on the simulated dataset. These results are from the NB bi-clustering model if not specified. **A**. The posterior similarity matrix for the temporal states, ordered according to the ground-truth states, showing the clustering structure relative to the ground truth. **B**. Spatially, the similarity matrix for subjects, ordered according to the true subject clusters. **C**. The histograms of posterior samples of the numbers of state clusters (truth = 2) and subject clusters (truth = 3). **D**. The maxPEAR estimates (point estimates) of the discrete states for the NB and Poisson bi-clustering models, compared to the true temporal states. **E**.
The inferred latent trajectories for the detected cluster that most of subjects 1-5 belong to, where the black lines are the truths, the blue lines are posterior means, and the shaded light blue regions are 95% highest posterior density (HPD) regions. **F**. The similarity matrices of the state and subject clusters for the Poisson bi-clustering model, sorted in the same order as panels **A** and **B**, respectively.

## 4 Application

We then apply our bi-clustering method to the Allen Institute Visual Coding Neuropixels dataset. The dataset contains neural spiking activity from multiple brain regions of an awake mouse, under different visual stimuli. See Siegle et al. (2021) for a more detailed data description. Here, we use the electrophysiology session 719161530 to investigate the bi-clustering structures of neurons from three anatomical sites, under three consecutive experimental epochs. After excluding neurons with response rates below 1 Hz, 78 neurons are included in the following analysis. Among these neurons, 37 come from the hippocampal CA1, 20 come from the lateral posterior nucleus of the thalamus (LP) and 21 come from the primary visual cortex (VISp). The neural spikes are recorded while the mouse is exposed to three consecutive visual stimuli: spontaneous (S, lasting 30.025 s), natural movie (N, lasting 300.251 s) and again spontaneous (S, lasting 30.025 s). Here, we rebin the data at 500 ms, and hence \(T=720\). For a formal application, a smaller bin size may be needed for higher resolution. The binned spiking counts of these 78 neurons are shown in Figure 4A.

Then, we fit the data with both the NB and Poisson bi-clustering models, and run two independent chains for each. The results from all four chains can be found in appendix Section B, Figure B. Although a formal analysis may also require running a long MCMC chain and tuning some key parameters such as the latent dimension \(p\) and the sticky parameter \(m\) in Equation (4), we here run 1000 iterations using \(p=2\) and \(m=10\), simply to illustrate the proposed method on real data. Since these neurons come from three brain regions, we set the prior for the number of subject clusters as \(k\sim\text{Geometric}(0.415)\), such that \(P(k\leq 3)=0.8\).

For the subject clustering structure, the NB bi-clustering model detects around 13 clusters, and the posterior similarity matrix sorted by the maxPEAR estimate is shown in Figure 4C-i. Generally, the method detects one large neural population with high "confidence", along with several weaker clusters. We further sort the similarity matrix according to the anatomical labels, to examine the relationship between the subject clustering results and the anatomy (Figure 4C-ii). The re-sorted results show that most neurons of the largest detected cluster come from CA1, while some neurons in LP and VISp are also included. Moreover, although most identified subject clusters consist of neurons from the same anatomical area, there are some mismatches between these two criteria. In particular, some neurons in CA1 are grouped into the same cluster as neurons in LP or in VISp. This may imply that there are some "functional interactions" between CA1 and LP, and between CA1 and VISp, as claimed by Wei et al. (2023). We also compare the subject clustering results from the Poisson bi-clustering model and the mixDPFA, and sort the similarity matrices using the same order as in Figure 4C-ii.
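Two computations used in this section can be made explicit: the posterior similarity matrix, whose \((i,j)\) entry estimates the co-clustering probability, and the calibration of the geometric prior. A short NumPy sketch, with names assumed:

```python
import numpy as np

def similarity_matrix(Z):
    """Posterior co-clustering probabilities from MCMC draws.
    Z: (n_draws, N) integer labels; entry (i, j) estimates P(z_i = z_j | data)."""
    n_draws, N = Z.shape
    sim = np.zeros((N, N))
    for z in Z:
        sim += (z[:, None] == z[None, :])
    return sim / n_draws

# Calibrating the geometric prior: P(k <= 3) = 1 - (1 - zeta)^3 = 0.8
# gives zeta = 1 - 0.2 ** (1 / 3) ~ 0.415, the value used above.
zeta = 1 - 0.2 ** (1 / 3)
```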
When assuming equi-dispersion and fitting the Poisson bi-clustering model, there are more mismatches between the detected clusters and the anatomy (Figure 4C-iii), which may suggest that some spurious interactions are detected when ignoring the over-dispersion. On the other hand, the mixDPFA assumes Poisson-distributed spikes and ignores potential state changes over time. The mixDPFA can hardly detect clusters, except for some neurons in CA1 (Figure 4C-iv). These results are consistent with previous findings for the mixDPFA, which show that the detected neural populations may change under different experimental settings if static dynamics are assumed (Wei et al., 2023). Overall, these results suggest that it is necessary to consider both the over-dispersion and the time-varying nonlinear dynamics to obtain unbiased estimates of the clustering structures.

For the state clustering structure, the algorithm detects around 10 clusters, and we show the similarity matrix (Figure 4B) and the maxPEAR estimate (Figure 4D). These results do not show as clear a pattern as the subject clusters. This may be resolved by tuning the sticky parameter to encourage longer state durations, or by modeling the state duration more carefully with, e.g., the HDP-HSMM (Johnson and Willsky, 2013). However, there seems to be a clear state change between the first spontaneous epoch and the natural movie. Moreover, there may also be a state change around the middle of the natural movie. Overall, the state clustering structures suggest that there are more state subgroups than experimental epochs, and the neurons may change states even within the same experimental setting.

Finally, we also show the details of the largest maxPEAR subject cluster. The largest maxPEAR cluster has 17 neurons, containing 9 neurons from CA1, 4 neurons from LP and 4 neurons from VISp. The spiking counts of these neurons (Figure 4E) suggest a periodic pattern, i.e., alternating strong and weak responses, in the middle portion of the natural movie epoch. The observed pattern is captured by the latent trajectories that most of these neurons belong to (Figure 4E).

## 5 Discussion

In this paper, we introduce a Bayesian nonparametric method to cluster neural spiking activity both spatially and temporally, according to the latent structures (trajectories) of each neural population. Compared to other clustering methods for time series (e.g., distance-based methods), the clustering structures defined by latent trajectories can be more meaningful for scientific problems and can provide insights into large-scale, complicated time series data. Moreover, simultaneously considering the subject and state clustering structures provides unbiased and consistent estimates of the latent structures, and vice versa.

Although the proposed method can simultaneously cluster neural spikes spatially and temporally, there are some potential improvements. First, the subject clustering structures are modeled by the MFMM, which respects the finite nature of the number of neural populations. However, the uncertainty of the clustering results can be large in some cases, and hence it may be better to consider the generalized MFMM (gMFMM), which can provide greater efficiency in cluster estimation (Fruhwirth-Schnatter et al., 2021). Moreover, the common atom specification of the gMFMM (Denti et al., 2023; D'Angelo et al., 2023) can provide flexibility in partition estimation, resolve degeneracy issues and, more importantly, allow us to borrow information from different trials and neural populations.
Second, we currently pre-specify and assume that all clusters share the same latent factor dimension \(p\), for convenience. However, this assumption may be inappropriate for real data applications, and it would be more flexible to infer \(p\) at the same time. Previously, Wei et al. (2023) sampled the latent dimension by a birth-and-death MCMC (BDMCMC) (Fokoue and Titterington, 2003; Stephens, 2000) with the marginalized likelihood, which requires very little mathematical sophistication and is easy to interpret. Some other methods, such as the multiplicative gamma process prior (Bhattacharya and Dunson, 2011), the multiplicative exponential process prior (Wang et al., 2016), the beta process prior (Paisley and Carin, 2009; Chen et al., 2010) and the Indian buffet process prior (Knowles and Ghahramani, 2007, 2011; Rockova and George, 2016), may also be useful.

Figure 3: **Application in multi-regional neural data under different experiment epochs.** **A**. Here, we apply our method to multi-regional Neuropixels data, which contains neural spikes from 3 regions (CA1, LP and VISp) across 3 periods with different visual stimuli: spontaneous (S), natural movie (N) and spontaneous (S). The results from iteration 250 to 1000 for each chain are shown here. **B**. The similarity matrix of the state clusters for the NB bi-clustering model. **C**. The similarity matrices of the neuron clusters sorted by the maxPEAR estimate for the NB bi-clustering model (NB-maxPEAR, upper-left). The clustering results sorted by both NB-maxPEAR and anatomical sites for three different clustering models (NB bi-clustering, Poisson bi-clustering and mixDPFA) are also shown here for comparison. **D**. The maxPEAR estimates of the discrete states for the NB and Poisson bi-clustering models. **E**. The largest maxPEAR cluster contains 9 neurons from CA1, 4 neurons from LP and 4 neurons from VISp. The upper panel shows the observed neural spikes. The lower panel shows the latent trajectories that most of these neurons belong to, where the blue lines are posterior means and the shaded light blue regions are 95% HPD regions.

Third, when clustering the temporal states, we used the sticky-HDP-HMM to handle the rapid-switching issue. However, this method is restricted to geometric state durations and does not allow for learning state-specific duration information. When applied to the Neuropixels data, the states still appear to switch too fast. These limitations may require us to model the state duration more carefully, e.g., by the HDP-HSMM (Johnson and Willsky, 2013). Moreover, neither the sticky-HDP-HMM nor the HDP-HSMM allows the transition of the discrete latent state \(\xi_{t}\) to depend on the latent trajectories \(\tilde{\mathbf{x}}_{t}\). Therefore, it may be possible to combine the idea of the recurrent HMM (Linderman et al., 2017) with the HDP-HSMM, which may lead to a method like an HDP-recurrent-HSMM, for instance. Finally, although the MCMC algorithm developed here is quite efficient, a deterministic approximation such as variational inference may be more computationally efficient and more attractive for scientific applications.

To sum up, as the scale of neural spiking data becomes large both spatially and temporally, understanding the latent structures of multiple populations under different conditions can be a major statistical challenge. Here, we provide a way to extract the spatio-temporal clustering structure according to latent trajectories.
Compared to other clustering methods, the proposed bi-clustering method provides meaningful and scientifically interpretable clustering structures, and simultaneously provides unbiased inference on the latent trajectories of each neural population. Although the proposed bi-clustering method is motivated by problems in neuroscience, it can potentially be used to extract insightful latent structures (bi-clusterings and trajectories) from general large-scale (counting) time series.
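As a companion to the cluster summaries used above (posterior similarity matrices and maxPEAR point estimates), the following minimal sketch, which is our own illustration rather than the paper's implementation, shows how both can be computed from MCMC cluster-label draws; the true maxPEAR estimate maximizes the posterior expected adjusted Rand index, approximated here by restricting to the sampled partitions.

```python
# Sketch: posterior similarity matrix and a maxPEAR-style point estimate
# from MCMC label draws (rows = iterations, columns = neurons).
import numpy as np
from sklearn.metrics import adjusted_rand_score

def similarity_matrix(draws):
    """Fraction of iterations in which each pair of items co-clusters."""
    n_iter, n = draws.shape
    sim = np.zeros((n, n))
    for z in draws:
        sim += (z[:, None] == z[None, :])
    return sim / n_iter

def max_pear(draws):
    """Sampled partition maximizing the average adjusted Rand index
    against all sampled partitions (a surrogate for maxPEAR)."""
    scores = [np.mean([adjusted_rand_score(z, w) for w in draws]) for z in draws]
    return draws[int(np.argmax(scores))]

rng = np.random.default_rng(0)
toy_draws = rng.integers(0, 3, size=(50, 20))  # toy label draws, not real output
print(similarity_matrix(toy_draws).shape)
print(max_pear(toy_draws))
```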
2310.00567
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks
Recent works have shown that deep neural networks are vulnerable to adversarial examples that find samples close to the original image but can make the model misclassify. Even with access only to the model's output, an attacker can employ black-box attacks to generate such adversarial examples. In this work, we propose a simple and lightweight defense against black-box attacks by adding random noise to hidden features at intermediate layers of the model at inference time. Our theoretical analysis confirms that this method effectively enhances the model's resilience against both score-based and decision-based black-box attacks. Importantly, our defense does not necessitate adversarial training and has minimal impact on accuracy, rendering it applicable to any pre-trained model. Our analysis also reveals the significance of selectively adding noise to different parts of the model based on the gradient of the adversarial objective function, which can be varied during the attack. We demonstrate the robustness of our defense against multiple black-box attacks through extensive empirical experiments involving diverse models with various architectures.
Quang H. Nguyen, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D. Doan
2023-10-01T03:53:23Z
http://arxiv.org/abs/2310.00567v1
# Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks

###### Abstract

Recent works have shown that deep neural networks are vulnerable to adversarial examples that find samples close to the original image but can make the model misclassify. Even with access only to the model's output, an attacker can employ black-box attacks to generate such adversarial examples. In this work, we propose a simple and lightweight defense against black-box attacks by adding random noise to hidden features at intermediate layers of the model at inference time. Our theoretical analysis confirms that this method effectively enhances the model's resilience against both score-based and decision-based black-box attacks. Importantly, our defense does not necessitate adversarial training and has minimal impact on accuracy, rendering it applicable to any pre-trained model. Our analysis also reveals the significance of selectively adding noise to different parts of the model based on the gradient of the adversarial objective function, which can be varied during the attack. We demonstrate the robustness of our defense against multiple black-box attacks through extensive empirical experiments involving diverse models with various architectures.

## 1 Introduction

Modern deep neural networks have demonstrated remarkable performance in various complex tasks, including image classification and face recognition, among others. However, prior works have pointed out that deep learning models are sensitive to small changes in the input and can be fooled by carefully chosen and imperceptible perturbations Szegedy et al. (2014); Goodfellow et al. (2015); Papernot et al. (2016); Madry et al. (2018). These adversarial attacks can be generally classified into white-box and black-box attacks. In a white-box setting, strong attacks such as Projected Gradient Descent (PGD) Madry et al. (2018) can generate effective adversarial examples by leveraging the information inside the model. However, in practical scenarios such as machine learning as a service (MLaaS), the well-trained models and the training datasets are often inaccessible to the users, especially in the era of large models. Hence, query-based black-box attacks become the primary threats in most real-world applications, where the adversary is assumed to have no knowledge of the model architecture and parameters. This paper proposes a lightweight, plug-and-play defensive method that can significantly decrease the success rate of query-based black-box attacks, including both score-based and decision-based attacks Ilyas et al. (2018, 2019); Andriushchenko et al. (2020); Guo et al. (2019); Al-Dujaili and O'Reilly (2020); Liu et al. (2019); Chen and Gu (2020); Chen et al. (2020); Rahmati et al. (2020). Adversarial examples generated through query-based attacks involve iterative procedures that rely either on local search techniques making small incremental modifications to the input, or on optimization methods leveraging estimated gradients of the adversary's loss with respect to the input. However, issuing numerous queries is time-consuming and may raise suspicion due to the presence of many similar queries. Hence, the objective of the defense is to perplex the adversary when it attempts to generate adversarial examples. Our proposed method accomplishes this by introducing noise into the feature space.
Unlike previous randomized defense approaches that solely rely on empirical evaluations to showcase effectiveness, this paper provides both theoretical analysis and empirical evidence to demonstrate improved robustness. Our analysis also highlights the importance of strategically introducing noise to specific components of the model based on the gradient of the adversarial objective function, which can be dynamically adjusted throughout the attack process. Our contributions can be summarized as follows:

* We investigate the impact of randomized perturbations in the feature space and its connection to the robustness of the model to black-box attacks.
* We design a simple yet effective and lightweight defense strategy that hampers the attacker's ability to approximate the direction toward adversarial samples. As a result, the success rate of the attacks is significantly reduced.
* We extensively evaluate our approach through experiments on both score-based and decision-based attacks. The results validate our analysis and demonstrate that our method enhances the robustness of the randomized model against query-based attacks.

## 2 Related Works

### Adversarial Attacks

Extensive research has been conducted on white-box attacks, focusing on the generation of adversarial examples when the attacker possesses complete access to the target model. Over the years, various notable methods have emerged as representative approaches in this field, including the fast gradient sign method (FGSM) Goodfellow et al. (2015), the Jacobian-based Saliency Map Attack (JSMA) Papernot et al. (2016a), and PGD Madry et al. (2018). In contrast to white-box attacks, the black-box scenario assumes that the attacker lacks access to the target model, making it a more challenging situation. However, this is also a more realistic setting in real-world applications, where the adversary would not have access to the model parameters. One approach in black-box attacks involves utilizing white-box techniques on substitute models to create adversarial examples, which can subsequently be applied to black-box target models Papernot et al. (2017). However, the effectiveness of transfer-based attacks can vary significantly due to several practical factors, such as the initial training conditions, model hyperparameters, and constraints involved in generating adversarial samples Chen et al. (2017). This paper focuses on the defense against query-based attacks instead.

### Query-based Black-box Attacks

Query-based attacks can be largely divided into score-based attacks and decision-based attacks, based on the accessible model output information. Score-based attacks leverage the output probability or logit of the targeted model, allowing the attacker to manipulate the scores associated with different classes. On the other hand, decision-based queries provide the attacker with hard labels, restricting access to only the final predictions without any probability or confidence values. We list the query-based attacks used in this paper below:

**Natural Evolutionary Strategies (NES) Ilyas et al. (2018)** is one of the first query-based attacks that use natural evolution strategies to estimate the gradient of the model with respect to an image \(x\). By exploring the queries surrounding \(x\), NES effectively gauges the model's gradient, enabling it to probe and gain insights into the model's behavior.
**SignHunt Al-Dujaili & O'Reilly (2020)** is another score-based attack, which flips the sign of the perturbation based on the sign of the estimated gradient to improve the query efficiency.

**Square attack Andriushchenko et al. (2020)** is a type of score-based attack that differs from gradient approximation techniques. Instead, it employs random search to update square-shaped regions located at random positions within the images. This approach avoids relying on gradient information and introduces a localized square modification to the image.

**RayS Chen & Gu (2020)** is a decision-based attack that solves a discrete problem to find the direction with the smallest distance to the decision boundary while using a fast check step to avoid unnecessary searches.

**SignFlip Chen et al. (2020)** is an \(\ell^{\infty}\) decision-based attack that alternately projects the perturbation to a smaller \(\ell^{\infty}\) ball and flips the sign of some randomly selected entries in the perturbation.

### Defensive Methods against Query-based Attacks

In the recent literature, several defensive solutions have been proposed to counter adversarial examples. One such solution involves the detection of malicious queries by comparing them with previously observed normal queries Chen et al. (2020); Li et al. (2022); Pang et al. (2020). This approach aims to identify anomalous patterns in queries and flag them as potential adversarial examples. Additionally, adversarial training has also been utilized to enhance the model's robustness Cohen et al. (2019); Wang et al. (2020); Sinha et al. (2017); Zhang et al. (2020). Adversarial training involves training the model on both regular and adversarial examples to improve its ability to withstand adversarial attacks. However, it is computationally expensive, especially when dealing with large and complex datasets. In some cases, adversarial training may also inadvertently harm the model's overall performance. In contrast, this paper focuses on approaches that involve incorporating noise or randomness into the model, thereby providing the adversary with distorted information. The underlying intuition behind these defense mechanisms is to deceive the attacker by introducing perturbations in the model's prediction process. By altering certain signals, the defenses aim to mislead the attacker and divert them from their intended direction. To achieve this, various techniques are employed to modify the input data or manipulate the model's internal workings. For instance, some defenses may introduce random noise or distortion to the input samples, making them less susceptible to adversarial perturbations. This noise acts as a smokescreen, confusing the attacker and making it harder for them to generate effective adversarial examples. We list the defensive methods evaluated in this paper below:

**Random Noise Defense (RND) Qin et al. (2021)** is a lightweight defense that adds Gaussian noise to the input for each query. This work also theoretically shows RND's effectiveness against query-based attacks.

**Small Noise Defense (SND) Byun et al. (2021)** is also a randomized defense that uses a small additive input noise to neutralize query-based attacks.

**Adversarial Attack on Attackers (AAA) Chen et al. (2022)** directly optimizes the model's logits to confound the attacker towards incorrect attack directions.
## 3 Method

### Problem Formulations

**Adversarial attack.** Let \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\) be the victim model, where \(d\) is the input dimension, \(K\) is the number of classes, and \(f_{k}(x)\) is the predicted score of class \(k\) for input \(x\). Given an input example \((x,y)\), the goal of an adversarial attack is to find a sample \(x^{\prime}\) such that \[\arg\max_{k}f(x^{\prime})\neq y,\quad\text{s.t.}\quad d(x,x^{\prime})\leq\epsilon, \tag{1}\] where \(d(x,x^{\prime})\) is the distance between samples \(x\) and \(x^{\prime}\). In practice, the distance can be the \(\ell^{2}\)-norm, \(\|x-x^{\prime}\|_{2}\), or the \(\ell^{\infty}\)-norm, \(\|x-x^{\prime}\|_{\infty}\). This adversarial task can be framed as a constrained optimization problem. More particularly, the attacker tries to solve the following objective \[\min_{x^{\prime}}\mathcal{L}(f(x^{\prime}),y),\quad\text{s.t.}\quad d(x,x^{\prime})\leq\epsilon, \tag{2}\] where \(\mathcal{L}(.,.)\) is a loss function designed by the attacker. In practice, a common loss function \(\mathcal{L}\) is the max-margin loss, as follows: \[\mathcal{L}(f(x),y)=f_{y}(x)-\max_{i\neq y}f_{i}(x). \tag{3}\]

**Score-based attack.** For a query-based attack, the attacker can only access the input and output of the model; thus, the attacker cannot compute the gradient of the objective function with respect to the input \(x\). However, the attacker can approximate the gradient using the finite difference method: \[\hat{\nabla}\mathcal{L}=\sum_{u}\frac{\mathcal{L}(f(x+\eta u),y)-\mathcal{L}(f(x),y)}{\eta}u,\quad\text{where }u\sim\mathcal{N}(0,\mu I). \tag{4}\] Another approach to minimize the objective function is via random search. Specifically, the attacker proposes an update \(u\) and computes the value of \(\mathcal{L}\) under this update to determine if \(u\) can help improve the value of the objective function. Formally, the proposed \(u\) is selected if \(\mathcal{L}(f(x+u),y)-\mathcal{L}(f(x),y)<0\); otherwise it is rejected.

**Decision-based attack.** In contrast to score-based attacks, hard-label attacks find the direction that has the shortest distance to the decision boundary. The objective function of an untargeted hard-label attack can be formulated as follows: \[\min_{d}g(d)\quad\text{where}\quad g(d)=\min\big\{r:\arg\max_{k}f(x+rd/\|d\|_{2})\neq y\big\}. \tag{5}\] This objective function can be minimized using binary search, in which the attacker queries the model to find the distance \(r\) for a particular direction \(d\). To improve the querying efficiency, binary search can be combined with fine-grained search, in which the radius is iteratively increased until the attacker finds an interval that contains \(g(d)\). The gradient of \(g(d)\) can also be approximated by the finite difference method: \[\hat{\nabla}g(d)=\sum_{u}\frac{g(d+\eta u)-g(d)}{\eta}u. \tag{6}\] Similar to the case of score-based attacks, the attacker can also search for the optimal direction. Given the current best distance \(r_{\mathrm{opt}}\), a proposed direction \(d\) is eliminated if it cannot flip the prediction using the current best distance \(r_{\mathrm{opt}}\); otherwise binary search is used to compute \(g(d)\), which becomes the new best distance.

**Randomized model.** In this work, we consider a randomized model \(f_{\mathrm{rand}}:\mathbb{R}^{d}\rightarrow\mathcal{P}(\mathbb{R}^{K})\) that maps a sample \(x\in\mathbb{R}^{d}\) to a probability distribution on \(\mathbb{R}^{K}\).
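Before turning to the behavior of the randomized model, the attack primitives just formalized can be made concrete. The sketch below is a toy illustration of the finite-difference estimator of Eq. (4) and the random-search accept/reject step, with a hypothetical `loss` oracle standing in for \(\mathcal{L}(f(\cdot),y)\); it is not the implementation of any of the attacks cited above.

```python
# Sketch: the two score-based attack primitives. `loss(x)` is assumed to
# query the model and return L(f(x), y) for a fixed label y.
import numpy as np

def fd_gradient(loss, x, eta=1e-2, mu=1.0, n_queries=50):
    """Finite-difference gradient estimate as in Eq. (4), averaged over draws."""
    g = np.zeros_like(x)
    base = loss(x)
    for _ in range(n_queries):
        u = np.random.normal(0.0, np.sqrt(mu), size=x.shape)
        g += (loss(x + eta * u) - base) / eta * u
    return g / n_queries

def random_search_step(loss, x, scale=0.05):
    """Accept a random proposal u only if it decreases the objective."""
    u = np.random.uniform(-scale, scale, size=x.shape)
    return x + u if loss(x + u) < loss(x) else x
```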
Given an input \(x\) and an attack query, the corresponding output is a vector drawn from \(f_{\mathrm{rand}}(x)\). We assume that the randomized model \(f_{\mathrm{rand}}\) is 'nice'; that is, the mean and variance of \(f_{\mathrm{rand}}(x)\) exist for every \(x\). Finally, we define adversarial samples for a randomized model. Since the model has stochasticity, the prediction returned by the model for a sample \(x\) can be inconsistent across queries; i.e., the same sample can be correctly predicted in one application of \(f_{\mathrm{rand}}\) and incorrectly predicted later in another application of \(f_{\mathrm{rand}}\). For this reason, adversarial attacks are successful if the obtained adversarial example can fool the randomized model in the majority of its applications on the example.

**Definition 1** (Attack Success on Randomized Model).: _Given a datapoint \(x\) with label \(y\) and a positive real number \(\epsilon\), a point \(x^{\prime}\) is called an adversarial sample in a closed ball of radius \(\epsilon\) around \(x\) with respect to the model \(f_{\mathrm{rand}}\) if \(\|x^{\prime}-x\|_{p}<\epsilon\) and_ \[\arg\max\mathbb{E}[f_{\mathrm{rand}}(x^{\prime})]\neq y.\]

### Randomized Feature Defense

Our method is based on the assumption that the attacker relies on the model's output to find the update vector toward an adversarial example. Consequently, if the attacker receives unreliable feedback from the model, it will be more challenging for the attacker to infer good search directions toward the adversarial sample. In contrast to previous inference-time randomization approaches, we introduce stochasticity to the model by perturbing the hidden features of the model. Formally, let \(h_{l}\) be the \(l\)-th layer of the model; we sample an independent noise vector \(\delta\) and forward \(h_{l}(x)+\delta\) to the next layer. For simplicity, \(\delta\) is sampled from a Gaussian distribution \(\mathcal{N}(0,\Sigma)\), where \(\Sigma\) is a diagonal matrix, or \(\mathcal{N}(0,\nu I),\nu\in\mathbb{R}\). The detailed algorithm is presented in Algorithm 1. Let \(f_{\rm rand}\) be the proposed randomized model corresponding to the original \(f\). When the variance of the injected noise is small, we can assume that the small noise diffuses but does not shift the prediction.

**Assumption 1**.: _The mean of the randomized model \(f_{\rm rand}\) at input \(x\) is exactly the prediction of the original model for \(x\):_ \[\mathbb{E}[f_{\rm rand}(x)]=f(x).\]

By Definition 1, adversarial samples of the original model are adversarial samples of the randomized model. Therefore, the direction that the attacker seeks is also that of the original model. Recall that the attacker finds this direction by either finite differences or random search. In our method, when the model is injected with independent noise, the value of the objective \(\mathcal{L}\) is affected. If \(\mathcal{L}(f_{\rm rand}(x+\eta u),y)-\mathcal{L}(f_{\rm rand}(x),y)\) oscillates among applications of \(f_{\rm rand}\), the attacker is likely misled and selects a wrong direction. For random-search attacks, when the sign of \(\mathcal{L}(f_{\rm rand}(x+\eta u),y)-\mathcal{L}(f_{\rm rand}(x),y)\) and the sign of \(\mathcal{L}(f(x+\eta u),y)-\mathcal{L}(f(x),y)\) differ, the attacker chooses the action opposite to the optimal one. In other words, the attacker can either accept a bad update or reject a good one in a random search.
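The noise-injection step just described admits a very short sketch; the version below is a simplified illustration assuming a PyTorch model with the perturbed layer addressed by name (the paper's Algorithm 1 is the authoritative description).

```python
# Sketch: add N(0, nu*I) noise to the output of one intermediate layer at
# inference time via a forward hook (a hook returning a value replaces the output).
import torch

def add_feature_noise(model, layer_name, nu):
    def hook(module, inputs, output):
        return output + nu ** 0.5 * torch.randn_like(output)
    layer = dict(model.named_modules())[layer_name]
    return layer.register_forward_hook(hook)

# Hypothetical usage: every query then sees a fresh draw from f_rand(x).
# handle = add_feature_noise(model, "features.20", nu=1e-3)
```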
### Robustness to Score-based Attacks

In this section, we present the theoretical analysis of the proposed defense against score-based attacks.

**Theorem 1**.: _Assume the proposed random vector \(u\) is sampled from a Gaussian \(\mathcal{N}(0,\mu I)\), the model is decomposed into \(f=g\circ h\), and the defense adds a random noise \(\delta\sim\mathcal{N}(0,\nu I)\) to the output of \(h\). At input \(x\), the probability that the attacker chooses an opposite action positively correlates with_ \[\arctan\left(-\left(\frac{2\nu}{\mu}\frac{\|\nabla_{h(x)}(\mathcal{L}\circ g)\|_{2}^{2}}{\|\nabla_{x}(\mathcal{L}\circ f)\|_{2}^{2}}\right)^{-0.5}\right).\]

This theorem states that the robustness of the randomized model is controlled by both (i) the ratio between the defense and attack noises and (ii) the ratio of the norm of the gradient with respect to the feature \(h(x)\) and the norm of the gradient with respect to the input \(x\). Since \(\arctan\) is monotonically increasing, the model becomes more robust if the ratio \(\frac{2\nu}{\mu}\frac{\|\nabla_{h(x)}(\mathcal{L}\circ g)\|_{2}^{2}}{\|\nabla_{x}(\mathcal{L}\circ f)\|_{2}^{2}}\) is high. Intuitively, the perturbations added by the attacker and by the defense each induce a corresponding noise in the output; if the attack noise is dominated by the defense noise, the attacker cannot perceive how its update affects the model. Note that the \(\arctan\) function is bounded, which means that at some point the robustness saturates as the ratio increases. While the first ratio is predetermined before an attack, the second ratio varies during the attack as the input \(x\) is sequentially perturbed, since it depends on the gradient of the objective function. To understand this behavior of the randomized model during the attack, we perform the following experiment. First, we compute the ratio of the norms of the gradients at \(h(x)\) and at \(x\). To simulate an attacker, we perform a single gradient descent step with respect to \(\mathcal{L}\). The distributions of the ratios on the raw and perturbed images at different layers are shown in Figure 1. We can observe that these ratios become higher when the data are perturbed toward the adversarial samples. In other words, the randomized model is more robust during the attack.

### Robustness to Decision-based Attacks

In decision-based attacks, the attacker finds the optimal direction \(d_{\rm opt}\) and the corresponding distance \(r_{\rm opt}\) to the decision boundary such that \(r_{\rm opt}\) is minimal. We use the objective function \(\mathcal{L}(f(x),y)\) to understand how our method affects decision-based attacks. Indeed, \(\mathcal{L}\) measures how close the prediction is to the true label: \(\mathcal{L}\leq 0\) if the prediction is incorrect and \(\mathcal{L}>0\) otherwise. To estimate \(g(d)\), the attacker can use binary search. Similar to score-based attacks, when noise is injected into the model, the function \(g(d)\) becomes stochastic, which eventually affects the attack. Unfortunately, the distribution of \(g(d)\) (under binary search with randomness) does not have an analytical form. Nevertheless, we can still use an analysis similar to that of the last section to understand the robustness of our method. To avoid performing a binary search on uninformative directions, the attacker relies on best-radius searching.
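For concreteness, the binary-search estimate of \(g(d)\) mentioned above can be sketched as follows; `flips` is a hypothetical hard-label oracle returning whether the prediction at \(x+r\,d/\|d\|_{2}\) differs from \(y\), and this is a generic illustration rather than code from the attacks evaluated later.

```python
# Sketch: estimate the boundary distance g(d) of Eq. (5) by bisection,
# assuming flips(r_hi) is True (the radius r_hi already flips the label).
def boundary_distance(flips, r_hi, tol=1e-3):
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flips(mid):
            hi = mid   # label flips at mid: the boundary is closer
        else:
            lo = mid   # label unchanged: the boundary is farther
    return hi
```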
Given the current best distance \(r_{\mathrm{opt}}\), for every new direction \(d\), the attacker verifies if the distance along \(d\) to the boundary is shorter than \(r_{\mathrm{opt}}\) by querying \(x+r_{\mathrm{opt}}d/\|d\|_{2}\). When adding noise to the features \(h(x)\) of \(f=g\circ h\) and linearizing the function at the current input \(x\), we have \[\mathcal{L}(f_{\mathrm{rand}}(x+r_{\mathrm{opt}}d/\|d\|_{2}),y)\approx\mathcal{L}(g(h(x)+r_{\mathrm{opt}}J_{h}(x)d/\|d\|_{2}+\delta),y) \tag{7}\] \[\approx\mathcal{L}(f(x),y)+r_{\mathrm{opt}}\nabla_{x}\mathcal{L}(f(x),y)d/\|d\|_{2}+\nabla_{h(x)}\mathcal{L}(g(h(x)),y)\delta \tag{8}\] \[\approx(r_{\mathrm{opt}}-g(d))\nabla_{x}\mathcal{L}(f(x),y)d/\|d\|_{2}+\nabla_{h(x)}\mathcal{L}(g(h(x)),y)\delta, \tag{9}\] where \(J_{h}(x)\) is the Jacobian matrix of \(h\) evaluated at \(x\), since \(\mathcal{L}(f(x),y)+g(d)\nabla_{x}\mathcal{L}(f(x),y)d/\|d\|_{2}\approx\mathcal{L}(f(x+g(d)d/\|d\|_{2}),y)=0.\) If \(\delta\sim\mathcal{N}(0,\nu I)\), the variance of \(\nabla_{h(x)}\mathcal{L}(g(h(x)),y)\delta\) is \(\nu\|\nabla_{h(x)}\mathcal{L}(g(h(x)),y)\|_{2}^{2}\). When this value is large, it can dominate the other terms and increase the chance of flipping the sign of the loss function \(\mathcal{L}\). In other words, when \(\mathcal{L}\) has a high variance, the attacker is more likely to misjudge the direction.

### The Effect of Randomized Features on Accuracy

Let \(\mathcal{D}\) be the data distribution. Without any attack or defense, the accuracy of the model is \[\mathrm{Acc}(f):=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[\mathbb{1}\left(f(x)=y\right)]=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[\mathbb{1}\left(\mathcal{L}(f(x),y)>0\right)]. \tag{10}\] When injecting noise into the model, it becomes a robust, stochastic model \(f_{\mathrm{rand}}:\mathbb{R}^{d}\rightarrow\mathcal{P}(\mathbb{R}^{K})\). The clean accuracy of the randomized model is \[\mathrm{Acc}(f_{\mathrm{rand}})=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}\underset{y^{\prime}\sim f_{\mathrm{rand}}(x)}{\mathbb{E}}[\mathbb{1}(y^{\prime}=y)]=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}\underset{y^{\prime}\sim f_{\mathrm{rand}}(x)}{\mathbb{E}}[\mathbb{1}(\mathcal{L}(y^{\prime},y)>0)]. \tag{11}\] Adding noise \(\delta_{2}\sim\mathcal{N}(0,\nu_{2}I)\) to the features at layer \(h\) of the model \(f=g\circ h\) results in: \[\mathrm{Acc}(f_{\mathrm{rand}})=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}\ \underset{\delta_{2}\sim\mathcal{N}(0,\nu_{2}I)}{\mathbb{E}}[\mathbb{1}(\mathcal{L}(g(h(x)+\delta_{2}),y)>0)] \tag{12}\] \[\approx\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}\ \underset{\delta_{2}\sim\mathcal{N}(0,\nu_{2}I)}{\mathbb{E}}[\mathbb{1}\left(\mathcal{L}(f(x),y)+\nabla_{h(x)}(\mathcal{L}\circ g)\delta_{2}>0\right)] \tag{13}\] \[=\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}\ \underset{\delta_{2}^{\prime}\sim\mathcal{N}(0,\nu_{2})}{\mathbb{E}}[\mathbb{1}\left(\mathcal{L}(f(x),y)/\|\nabla_{h(x)}(\mathcal{L}\circ g)\|_{2}+\delta_{2}^{\prime}>0\right)]. \tag{14}\] It means that the accuracy of a randomized model depends on the objective function and its gradient, which vary for different data points.

Figure 1: The ratio of the norm of the gradient of \(\mathcal{L}\) at selected hidden layers and at the input of VGG19 on CIFAR10, before and after perturbation. Full results are provided in the supplementary material.

Figure 2: Distributions of the magnitude of the robustness to query-based attacks computed at the input and at selected hidden layers of VGG19 on CIFAR10.
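The gradient-norm ratio appearing in Theorem 1 and in the accuracy expression above can be measured directly with automatic differentiation. A sketch, assuming the model is available as PyTorch submodules `h` and `g` with \(f=g\circ h\):

```python
# Sketch: squared gradient norm of the loss at the hidden feature h(x)
# relative to the squared gradient norm at the input x.
import torch
import torch.nn.functional as F

def grad_norm_ratio(h, g, x, y):
    x = x.clone().requires_grad_(True)
    feat = h(x)
    feat.retain_grad()                     # keep the gradient at h(x)
    F.cross_entropy(g(feat), y).backward()
    return (feat.grad.norm() ** 2 / x.grad.norm() ** 2).item()
```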
These ratios of \(\mathcal{L}\) and its gradient computed at the input and at hidden layers are different. If \(\mathcal{L}\) is small at samples that have a large gradient norm when noise is injected at a layer, these samples will likely be misclassified, while the correctly classified samples have a low magnitude of robustness (i.e., \(\nu\|\nabla_{h(x)}(\mathcal{L}\circ g)\|_{2}^{2}\) is small, as discussed in Theorem 1 and Section 3.4). In contrast, if the gradient norm with respect to the randomized layer is large for samples that have large \(\mathcal{L}\), the robustness of the model on the correctly classified samples will be high; thus, adding noise to this layer makes the model more robust against black-box attacks. We conduct the following experiment to understand how the defense affects the whole dataset. We first compute the ratios of \(\mathcal{L}\) and its gradient for all samples and keep the top 99% of the values. Essentially, the standard deviation of the defensive noise that makes the accuracy drop by \(1\%\) is proportional to the value below which \(1\%\) of the ratios in the dataset fall. The product of this value and the gradient norm represents the robustness over the dataset, shown in Figure 2. We can observe that the ratio distributions when randomizing the input and the hidden features are similar at the first few layers of the model; however, these ratios are higher at the deeper layers. This means that randomizing the model at these layers makes it more robust than adding noise to the input layer when the defender desires similar clean accuracy in the randomized models.

## 4 Experiments

In this section, we evaluate the empirical performance of the proposed randomized feature defense.

### Experimental Setup

**Datasets**. We perform our experiments on two widely used benchmark datasets in adversarial robustness: CIFAR10 Krizhevsky & Hinton (2009) and ImageNet Russakovsky et al. (2015). We randomly select \(1000\) images covering every class from the studied dataset in each experiment.

**Defenses**. In addition to the proposed defense, we also include the related input defenses Qin et al. (2021); Byun et al. (2021) in our evaluation. Note that an empirical robustness comparison of all adversarial defenses is beyond the scope of the paper, since our objective is to theoretically and empirically study the effectiveness of the randomized feature defense. We also evaluate the AAA defense Chen et al. (2022) against decision-based attacks and compare it with randomized defenses.

**Attacks**. For score-based attacks, we consider the gradient-estimation method NES Ilyas et al. (2018) and the random-search methods Square Andriushchenko et al. (2020) and SignHunt Al-Dujaili & O'Reilly (2020). For decision-based attacks, we consider RayS Chen & Gu (2020) and SignFlip Chen et al. (2020b).

**Models**. We consider \(6\) victim models on ImageNet, including \(2\) convolutional models, VGG19 Simonyan & Zisserman (2015) and ResNet50 He et al. (2016), and \(2\) transformer models, ViT Dosovitskiy et al. (2021) and DeiT Touvron et al. (2021). For the experiments on CIFAR10, we finetuned VGG19, ResNet50, ViT, and DeiT with an input size of \(224\times 224\).

**Evaluation protocol**. For a fair comparison, we report each defense's robustness performance at the corresponding hyperparameter configuration that achieves a specific drop (i.e., \(\approx\)1% or \(\approx\)2%) in clean-data accuracy.
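One simple way to realize this protocol is to bisect the defense noise scale against held-out accuracy. The sketch below is our own illustration, assuming a hypothetical `accuracy(nu)` evaluator and that clean accuracy decreases monotonically in \(\nu\):

```python
# Sketch: find the largest noise scale nu whose clean-accuracy drop stays
# within a target budget (e.g., ~1%), by bisection.
def calibrate_nu(accuracy, target_drop=0.01, nu_hi=1.0, iters=20):
    base = accuracy(0.0)
    lo, hi = 0.0, nu_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if base - accuracy(mid) > target_drop:
            hi = mid   # too much accuracy lost: shrink the noise
        else:
            lo = mid   # within budget: try more noise
    return lo
```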
In practice, a defender always considers the trade-off between robustness and clean-data performance, with a priority on satisfactory clean-data performance; achieving higher robustness at the cost of a significant drop in clean-data accuracy is usually not acceptable.

### Performance against Score-based Attacks

On ImageNet, we report the accuracy under attack of \(6\) models and \(3\) score-based attacks in Table 1. As we can observe, while the attacks achieve close to a \(0\%\) failure rate on the base models (i.e., without any defense), both randomized feature and input defenses significantly improve the models' robustness against score-based attacks. Furthermore, for Square attack and SignHunt, which are strong adversarial attack baselines, the randomized feature defense consistently achieves better performance on all \(6\) models, which supports our theoretical analysis in Section 3. For instance, while the base VGG19 model is severely vulnerable, our randomized feature defense achieves \(22.2\%\) robust accuracy after \(10000\) queries, also significantly better than the randomized input defense (\(17.8\%\) robust accuracy). On the transformer-based DeiT, our randomized feature defense achieves the best robust accuracy of \(69.1\%\) under Square attack, while the robust accuracy of the randomized input defense is \(2\%\) lower. For the NES attack, the randomized-feature VGG19 shows the best robustness. In summary, the randomized feature defense consistently achieves high robustness on most models, except ResNet50, where the robustness is similar to the randomized input defense. We also observe similar robustness results in the CIFAR10 experiments with ResNet50, VGG19, DeiT, and ViT for the \(3\) attacks. As we can observe in Table 2, randomized feature and input defenses are effective against score-based attacks. Similar to ImageNet, the randomized feature defense achieves significantly better robustness than the randomized input defense in most experiments. For Square attacks on ResNet50 and DeiT, while the best robustness is achieved by the randomized input defense, the randomized feature defense is more robust when the defender sacrifices \(2\%\) clean-data accuracy.

**Dynamic Analysis of Robustness.** As the adversary increases the magnitude of the perturbation, the attack becomes more effective, since the misleading probability decreases as shown in Theorem 1. The adversary can vary the square size for Square attack, the exploration step for NES, and the budget for SignHunt (since SignHunt sets the finite-difference probe to the perturbation bound). Table 3 reports the robustness of the models under stronger attacks from these adversaries for different values of \(\nu\). We can observe that increasing the strength of the attack leads to lower robustness among all the defenses. However, at the selected defense noise scales corresponding to the same clean-accuracy drop, our defense is still more robust; this can be explained by the analysis in Sections 3.3 and 3.5. A larger attack perturbation may also cause the approximation in the attack to be less accurate, which leads to a drop in the attack's effectiveness; for example, the robustness increases from \(89.6\%\) to \(91.4\%\) when the NES perturbation magnitude increases in the VGG19 experiments (with similar observations in ViT).
Table 1: Defense Performance in ImageNet. The clean-data accuracy of the robust models is allowed to drop either \(\approx 1\%\) or \(\approx 2\%\).
Table 2: Defense Performance in CIFAR10. The clean-data accuracy of the robust models is allowed to drop either \(\approx 2\%\) or \(\approx 4\%\).

**Combined with Adversarial Training (AT).** We evaluate the combination of our defense and AT on a CIFAR10/ResNet20 model against score-based attacks with 1000 queries and observe significantly improved robustness, as shown in Table 4.

### Performance against Decision-based Attacks

Table 5 reports the performance of VGG19 and ResNet50 against \(2\) decision-based attacks on CIFAR10. Besides randomized feature and input defenses, we also include the AAA defense, which optimizes a perturbation that does not change the prediction. While AAA is optimized for score-based attacks directly and thus is successful in fooling these attacks (as seen in Table 3 in the Supplementary), the results show that AAA is not effective in defending against decision-based attacks, while randomized feature and input defenses improve the robustness. An interesting observation is that the RayS attack is more effective than score-based attacks although it only uses hard labels, even when there are defenses.

### Relationship Between the Gradient Norm and the Robustness to Score-Based Attacks

In Table 6, we provide the corresponding accuracy under attack on CIFAR10 with 1000 queries (when a single layer is randomized with a fixed value of \(\nu\)) and the mean of the gradient norm at that layer. As we can observe, as the gradient norm increases (as we originally observe in Figure 1), the robustness also increases, thus verifying our theoretical results.

### Performance against Adaptive Attacks

We conduct experiments with adaptive attacks that apply Expectation Over Transformation (EOT) Athalye et al.
(2018), in which the attacker queries a sample \(M\) times and averages the outputs to cancel the randomness. Table 7 shows the robust accuracy of VGG19 and ResNet50 on CIFAR10 against the EOT attack with \(M=5\) and \(M=10\). Note that with EOT, the number of updates in the attack is \(M\) times less than that of a normal attack with the same query budget. For this reason, we report the results for adaptive attacks with both \(1000\) queries and \(M\times 1000\) queries. We can observe that EOT can mitigate the effect of randomized defenses even with the same number of queries; however, the feature defense still yields better performance.

## 5 Conclusion and Future Work

In this work, we study the effectiveness of the randomized feature defense against query-based attacks, including score-based and decision-based attacks. We provide an analysis that connects the robustness to the variance of the noise and the local behavior of the model. Our empirical results show that the randomized defense helps improve the performance of the model under query-based attacks, with a trade-off in clean accuracy. Future work will be directed toward an analysis covering black-box attacks that transfer adversarial samples from a surrogate model to the target model.
2303.06990
Randomness-free Test of Non-classicality: a Proof of Concept
Quantum correlations and non-projective measurements underlie a plethora of information-theoretic tasks, otherwise impossible in the classical world. Existing schemes to certify such non-classical resources in a device-independent manner require seed randomness, which is often costly and vulnerable to loopholes, for choosing the local measurements performed on different parts of a multipartite quantum system. In this letter, we propose and experimentally implement a semi-device independent certification technique for both quantum correlations and non-projective measurements without seed randomness. Our test is semi-device independent in the sense that it requires only prior knowledge of the dimensions of the parts. We experimentally show a novel quantum advantage in correlated coin tossing by producing specific correlated coins from pairs of photons entangled in their transverse spatial modes. We establish the advantage by showing that the correlated coin obtained from the entangled photons cannot be obtained from two 2-level classical correlated coins. The quantum advantage requires performing qubit trine positive operator-valued measures (POVMs) on each part of the entangled pair, thus also certifying such POVMs in a semi-device-independent manner. This proof of concept firmly establishes a new cost-effective certification technique for both generating non-classical shared randomness and implementing non-classical measurements, which will be important for future multi-party quantum communications.
Zhonghua Ma, Markus Rambach, Kaumudibikash Goswami, Some Sankar Bhattacharya, Manik Banik, Jacquiline Romero
2023-03-13T10:44:16Z
http://arxiv.org/abs/2303.06990v3
# Randomness-free Test of Non-classicality: a Proof of Concept

###### Abstract

Quantum correlations and non-projective measurements underlie a plethora of information-theoretic tasks, otherwise impossible in the classical world. Existing schemes to certify such non-classical resources in a _device-independent_ manner require seed randomness--which is often costly and vulnerable to loopholes--for choosing the local measurements performed on different parts of a multipartite quantum system. In this letter, we propose and experimentally implement a _semi-device-independent_ certification technique for both quantum correlations and non-projective measurements without seed randomness. Our test is _semi-device-independent_ in the sense that it requires only prior knowledge of the dimension of the parts. By producing specific correlated coins from pairs of photons entangled in their transverse spatial modes, we experimentally show a novel quantum advantage in correlated coin tossing. We establish the advantage by showing that the correlated coin procured from the entangled photons cannot be obtained from any two 2-level classical correlated coins. The quantum advantage requires performing qubit trine positive operator-valued measures (POVMs) on each part of the entangled pair, thus also certifying such POVMs in a _semi-device-independent_ manner. This proof of concept firmly establishes a new _cost-effective_ certification technique both for generating non-classical shared randomness and for implementing non-classical measurements, which will be important for future multi-party quantum communications.

_Introduction.-_ Correlations play an integral role in information processing, be it classical or quantum. Nature presents us with composite systems consisting of correlations among multiple subsystems that cannot be explained if the subsystems are separable [1, 2, 3, 4, 5, 6]. Characterizing such non-classical correlations has been central to quantum theory. Aside from testing Bell inequalities, recent developments in quantum technology provide us with the tools to detect non-classicality of correlations either as a pseudo-telepathy game [7, 8] or in a communication task assisted by those correlations [9, 10]. Both cases involve randomizing over the choice of inputs (measurement settings in the first case, and preparation and measurement in the second case). In this work, we implement a new technique for detecting non-classical correlations which does not require costly seed randomness for the inputs. As a trade-off, the experimenter is required to know only an upper bound on the dimension of the subsystems in use; hence the technique is semi-device-independent. Besides its foundational interest, this new tool paves the way for a cost-effective characterization of non-classical resources in quantum information and computation. We follow an operational approach by considering the task of generating shared randomness between two distant parties. Shared randomness (SR), also known as public/correlated randomness (as opposed to private randomness [11]), can be thought of as a joint probability distribution, over random variables shared between two distant parties, that cannot be factorized. A well-known quantifier for such correlations is the mutual information, which serves as a bona fide measure of the distant parties agreeing on a string of measurement outcomes, given a common source [12, 13, 14].
Based on this quantity, shared randomness has been established as a useful resource in a number of tasks--privacy amplification, simultaneous message passing, simulation of noisy classical and quantum channels, secret sharing, simulation of quantum correlations, and Bayesian game theory, to name a few [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Generation of this resource from some physical system is, therefore, a question of utmost practical relevance. In an operational theory, SR between two distant parties can be obtained from a bipartite system prepared in some correlated state. In practice, the two parties could each be given a part of a correlated pair of classical or quantum coins, which they could use for "coin-tossing". Each party performs a local operation on their respective part of the composite system, which results in correlated outcomes and hence SR. Here we demonstrate an experimental quantum advantage in generating SR between two parties. Particularly, we show that a two-qubit system prepared in a maximally entangled state can yield some desired SR that otherwise is not possible to obtain from the corresponding classical system--two 2-level correlated classical coins, which we call a two-2-coin. This in turn establishes a non-classical feature of the two-qubit system that distinguishes it from its classical counterpart. Importantly, in our case a single measurement--a positive operator-valued measure (POVM)--is performed on each part of the entangled pair. Therefore, unlike Bell tests (see [29, 30, 31]), no randomization over the choice of local measurements is required for establishing this non-classicality. In the experiment, we use transverse-mode entangled photon pairs produced via the degenerate spontaneous parametric down-conversion method as the two-qubit system. As it turns out, the advantage is established through the payoff of a game played between two distant parties, which is different from the mutual information between the random variables possessed by the spatially separated parties [32]. The payoff is upper bounded by a threshold value when the parties share a two-2-coin state, whereas a better payoff can be obtained from a two-qubit singlet state, even when the state is noisy. The resulting quantum advantage demands generalized measurements, _viz._ POVMs [33; 34], on the local parts of the shared entangled state, as it is not possible through local projective measurements, _a.k.a._ von Neumann measurements [35], followed by local post-processing of the outcome statistics. A payoff exceeding the classical threshold value certifies that the measurements are not projective, and thus establishes a semi-device-independent test of generalized measurements.

_Correlated coin tossing.-_ The operational utility of SR can be understood within the framework of resource theory. This resource-theoretic framework, in fact, finds successful application in quantifying a variety of quantum resources (see [36] and references therein). The methodology of this highly versatile framework starts with partitioning the states into two disjoint groups: resource-free states and resourceful states. Based on natural restrictions placed on physical systems, a collection of operations is identified as free, which keeps the set of free states invariant. The framework then introduces the concept of resource monotones, which put a quantitative order among the resourceful states.
Sources of two random variables \(X\) and \(Y\) held by two distant parties, Alice and Bob, will not yield any SR whenever the joint probability distribution is in product form, _i.e._, \(P(X,Y)=P(X)P(Y)\); here \(P(Z)\equiv\{p(z)\mid p(z)\geq 0\ \&\ \sum_{z\in Z}p(z)=1\}\) denotes a probability distribution on \(Z\). On the other hand, the joint source produces a nonzero amount of shared randomness when the distribution cannot be written as a product. The amount of SR can be faithfully quantified by the entropic function called mutual information, \(I(X{:}Y):=H(X)+H(Y)-H(X,Y)\), where \(H(Z):=-\sum p(z)\log_{2}p(z)\) denotes the Shannon entropy associated with the source \(P(Z)\) [37]. A source \(P(Z)\) can be converted into a different one \(P^{\prime}(Z^{\prime})\) by a stochastic map \(S^{Z\to Z^{\prime}}\), which can be represented as a \(|Z^{\prime}|\times|Z|\) matrix having non-negative real elements with the entries in each column adding up to unity [38]. While constructing the resource theory of SR, the free operations on a bipartite source \(P(X,Y)\) are given by products of stochastic maps applied on the individual parts; _i.e._, instead of a general stochastic matrix of the form \(S^{XY\to X^{\prime}Y^{\prime}}\), only products of local stochastic matrices \(S^{X\to X^{\prime}}\) and \(S^{Y\to Y^{\prime}}\) are allowed as free. For convenience, the free operations can be represented as a tensor product, \(S^{X\to X^{\prime}}\otimes S^{Y\to Y^{\prime}}\) [32]. Physically, SR can be obtained from a composite system prepared in some correlated state which is shared between the distant parties. Alice and Bob perform local operations on their respective parts of the composite state, resulting in random but correlated outcomes. Within the framework of generalized probability theory, the state space of such a bipartite system is described by \(\Omega_{A}\otimes\Omega_{B}\), where \(\Omega_{K}\) denotes the marginal state space [39]. For instance, the state space of a \(d\)-level classical system is the \(d\)-simplex, a convex set embedded in \(\mathbb{R}^{d-1}\) having \(d\) extremal points. A coin-tossing experiment corresponds to the case \(d=2\) and leads to the state space \(\mathfrak{C}^{2}\equiv\{(p(\mathrm{h}),p(\mathrm{t}))^{\mathrm{T}}\mid p(\mathrm{h}),p(\mathrm{t})\geq 0\ \&\ p(\mathrm{h})+p(\mathrm{t})=1\}\); \(p(\mathrm{h})\) and \(p(\mathrm{t})\) respectively denote the probabilities of obtaining the outcomes 'head' and 'tail'. The state space \(\mathfrak{C}(2)\equiv\mathfrak{C}^{2}_{A}\otimes\mathfrak{C}^{2}_{B}\) of a two-2-coin shared between Alice and Bob is given by \(\mathfrak{C}(2)\equiv\{(p(\mathrm{hh}),p(\mathrm{ht}),p(\mathrm{th}),p(\mathrm{tt}))^{\mathrm{T}}\mid p(\mathrm{ij})\geq 0,\ \forall\ \mathrm{i},\mathrm{j}\in\{\mathrm{h},\mathrm{t}\},\ \&\ \sum_{\mathrm{i},\mathrm{j}}p(\mathrm{ij})=1\}\). The quantum analogue of the two-2-coin is a two-qubit system associated with the Hilbert space \(\mathbb{C}^{2}_{A}\otimes\mathbb{C}^{2}_{B}\), and the corresponding state space is given by \(\mathcal{D}(\mathbb{C}^{2}_{A}\otimes\mathbb{C}^{2}_{B})\), where \(\mathcal{D}(\mathbb{H})\) denotes the set of density operators acting on the Hilbert space \(\mathbb{H}\). From a quantum state \(\rho_{AB}\in\mathcal{D}(\mathbb{C}^{2}_{A}\otimes\mathbb{C}^{2}_{B})\), Alice and Bob can generate shared randomness by performing local measurements on their respective parts (see Fig. 1).
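As a concrete handle on the quantifier defined above, the following minimal sketch (ours, not from the paper) computes \(I(X{:}Y)\) for a joint distribution \(P(X,Y)\) given as a matrix:

```python
# Sketch: mutual information I(X:Y) = H(X) + H(Y) - H(X,Y), in bits.
import numpy as np

def mutual_information(P):
    P = np.asarray(P, dtype=float)
    px, py = P.sum(axis=1), P.sum(axis=0)
    mask = P > 0
    return float(np.sum(P[mask] * np.log2(P[mask] / np.outer(px, py)[mask])))

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # perfectly correlated: 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # product source: 0.0
```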
Generalizing the notion of \(\mathfrak{C}(2)\), the state space of a two-d-coin is defined as \(\mathfrak{C}(d)\equiv\{(p(11),p(12),\cdots,p(\mathrm{dd}))^{\mathrm{T}}\mid p(\mathrm{ij})\geq 0,\ \forall\ \mathrm{i},\mathrm{j}\in\{1,\cdots,\mathrm{d}\},\ \&\ \sum_{\mathrm{i},\mathrm{j}}p(\mathrm{ij})=1\}\). By \(\Theta\mathrm{C}(2\to d)\) we denote the set of two-d-coin states that can be obtained from the set of two-2-coin states \(\mathfrak{C}(2)\) by applying free local stochastic maps \(S^{2\to d}_{A/B}\) on Alice's and Bob's parts of the states. Similarly, \(\Theta\mathrm{Q}(2\to d)\) denotes the set of two-d-coin states obtained by performing \(d\)-outcome local measurements on Alice's and Bob's parts of the bipartite quantum states \(\mathcal{D}(\mathbb{C}^{2}_{A}\otimes\mathbb{C}^{2}_{B})\). We are now in a position to present our first result, stated in the following proposition (proof deferred to the Appendix).

**Proposition 1**.: _For every \(d\geq 3,\ \Theta\mathrm{C}(2\to d)\subseteq\Theta\mathrm{Q}(2\to d)\subsetneq\mathfrak{C}(d)\), whereas \(\Theta\mathrm{Q}(2\to 2)=\mathfrak{C}(2)=\Theta\mathrm{C}(2\to 2)\)._

As evident from Proposition 1, a quantum advantage in SR generation is possible if we consider the generation of a higher-dimensional correlated coin state starting from a lower-dimensional correlated coin state. More particularly, a proper set inclusion relation \(\Theta\mathrm{C}(2\to d)\subsetneq\Theta\mathrm{Q}(2\to d)\) for some \(d\geq 3\) establishes a quantum advantage in correlated coin state generation, which we aim to establish experimentally through a game introduced in [32].

Figure 1: The setup. A trusted source emits bipartite correlated systems of local dimension \(2\), which are measured by spatially separated black-box devices with outputs \(a\in\{1,\ldots,d\}\) and \(b\in\{1,\ldots,d\}\). The sets of observed joint probability distributions \(P(a,b)\) for \(d>2\) are different for classical and quantum sources.

The game \(\mathbb{G}(n)\) involves two employees, Alice and Bob, working in an organization. There are \(n\) different restaurants \(\{r_{1},\cdots,r_{n}\}\) where the employees can choose to buy their daily drink. The organization follows a reimbursement policy to pay back the bill. To this end, each day's bills of Alice and Bob are collected in a log to calculate the joint probability \(P(ij)\) that Alice visits restaurant \(r_{i}\) while Bob visits restaurant \(r_{j}\). Alice and Bob are reimbursed only if they end up going to different restaurants, _i.e._, \(r_{i}\neq r_{j}\). At the same time, the game requires minimising the chance of any restaurant being boycotted, which would happen if Alice and Bob frequented two different but fixed restaurants. Hence we define the payoff as \(\mathcal{R}(n):=\min_{i\neq j}P(ij)\), to ensure that trade is distributed among all the restaurants. The employees are non-communicating. However, they possess a bipartite state with two-level subsystems, choose strategies with the help of free operations, and accordingly try to maximize their payoff. If no restriction is put on the amount of SR, then Alice and Bob can obtain the maximum payoff \(\mathcal{R}_{\max}(n)=1/(n^{2}-n)\) [32]. The situation becomes interesting if restrictions are imposed on the amount of SR. Let \(\mathcal{R}_{\max}^{\mathfrak{C}(m)}(n)\) denote the maximum payoff achieved in \(\mathbb{G}(n)\) when the parties are allowed to share a two-\(m\)-coin.
For a pair \((m,n)\) with \(m<n\), the cases \(\mathcal{R}_{\max}^{\mathfrak{C}(m)}(n)<\mathcal{R}_{\max}(n)\) open up a scope for quantum advantage.

_Quantum advantage in correlated coin tossing_.- Perfect success in the \(\mathbb{G}(3)\) game requires Alice and Bob to share the coin \(\mathcal{C}_{ac}(3):=\frac{1}{6}\left(0,1,1,1,0,1,1,1,0\right)^{\mathrm{T}}\in\mathfrak{C}(3)\). As it turns out, this particular coin cannot be generated from any of the coin states in \(\mathfrak{C}(2)\) by applying free operations, and the optimal payoff Alice and Bob can ensure with a \(\mathfrak{C}(2)\) coin is \(\mathcal{R}_{\max}^{\mathfrak{C}(2)}(3)=1/8<1/6=\mathcal{R}_{\max}(3)\) (see Appendix). When allowed to share a two-qubit state, Alice and Bob can start their protocol by sharing a noisy singlet state \[\mathcal{W}_{p}:=p\left|\psi^{-}\right\rangle\left\langle\psi^{-}\right|+(1-p)\frac{\mathbf{I}_{2}}{2}\otimes\frac{\mathbf{I}_{2}}{2},\ \ p\in[0,1]; \tag{1}\] where \(\left|\psi^{-}\right\rangle:=\frac{1}{\sqrt{2}}(\left|0\right\rangle\otimes\left|1\right\rangle-\left|1\right\rangle\otimes\left|0\right\rangle)\), with \(\{\left|0\right\rangle,\left|1\right\rangle\}\) denoting the eigenstates of the Pauli \(\sigma_{z}\) operator. Both of them perform the _trine_ POVM \[\mathcal{M}\equiv\{e_{i}:=\frac{1}{3}(\mathbf{I}_{2}+\hat{n}_{i}.\sigma)\}:\ \ \hat{n}_{i}:=(\sin\theta_{i},0,\cos\theta_{i})^{\mathrm{T}},\quad\text{where}\ \ \theta_{1}=0,\ \theta_{2}=2\pi/3,\ \theta_{3}=4\pi/3, \tag{2}\] on their respective qubits. This results in a shared coin state \(\mathcal{C}_{p}(3):=(f_{p},s_{p},s_{p},s_{p},f_{p},s_{p},s_{p},s_{p},f_{p})^{\mathrm{T}}\in\mathfrak{C}(3)\), with \(f_{p}:=(1-p)/9,\ s_{p}:=(2+p)/18\). Manifestly, this amounts to a payoff \[\mathcal{R}_{p}(3):=\min_{i\neq j}P(ij)=(2+p)/18, \tag{3}\] if Alice and Bob visit the \(i^{th}\) restaurant when the \(i^{th}\) outcome clicks in their respective measurements. A quantum advantage is ensured whenever \(\mathcal{R}_{p}(3)>1/8\). As it turns out, the quantum states \(\mathcal{W}_{p}\in\mathcal{D}(\mathbb{C}^{2}\otimes\mathbb{C}^{2})\) become advantageous over the classical two-2-coin states \(\mathfrak{C}(2)\) in playing the \(\mathbb{G}(3)\) game whenever \(p>1/4\), with the perfect entangled state yielding perfect success [32]. In view of an adversarial scenario, one may consider a third party, Eve, and ask: is it possible for Eve to obtain information about \(P(X,Y)\), the joint probability distribution of the random variables \(X\) and \(Y\) that are the outcomes of Alice's and Bob's measurements? The answer to such a question certainly depends on the computational power of Eve and the constraints on the system shared between Alice and Bob. In the case where Alice and Bob achieve perfect success in \(\mathbb{G}(3)\), it is not possible for Eve to obtain information about \(P(X,Y)\), provided that Eve's power is restricted to processing quantum systems only and that the dimensions of Alice's and Bob's local subsystems are bounded from above (in our case \(\mathrm{dim}=2\)). This follows from the observation that perfect success necessitates a maximally entangled two-qubit state to be shared between Alice and Bob, which cannot be correlated, even classically, with a third party. However, a general answer to the question of security in this setting for an arbitrary success probability \(\mathcal{R}_{p}(3)>1/8\) is left as a question for future research.
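The classical bound quoted above can be probed numerically. The sketch below, our own sanity check rather than a proof, randomly searches over shared two-2-coin states and local \(2\to 3\) stochastic maps; the resulting payoff of \(\mathbb{G}(3)\) approaches but never exceeds \(\mathcal{R}_{\max}^{\mathfrak{C}(2)}(3)=1/8\).

```python
# Sketch: random search over classical strategies for G(3); the payoff
# min_{i != j} P(ij) stays at or below 1/8.
import numpy as np

rng = np.random.default_rng(1)
best = 0.0
for _ in range(100_000):
    c = rng.dirichlet(np.ones(4)).reshape(2, 2)  # shared two-2-coin state
    SA = rng.dirichlet(np.ones(3), size=2).T     # 3x2 column-stochastic map
    SB = rng.dirichlet(np.ones(3), size=2).T
    P = SA @ c @ SB.T                            # induced two-3-coin state
    best = max(best, min(P[i, j] for i in range(3) for j in range(3) if i != j))
print(best)  # <= 0.125, up to the limits of the random search
```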
_Randomness-free test of non-classicality.-_ A quantum measurement is most generally described by a POVM, which is a collection of positive semidefinite operators \(\{e_{i}\}_{i=1}^{k}\), with \(\sum_{i}e_{i}=\mathbf{I}_{d}\), where \(\mathbf{I}_{d}\) is the identity operator acting on the Hilbert space \(\mathbb{C}^{d}\) associated with the system [33; 43]. Projective measurements are special cases, where the \(e_{i}\)'s correspond to mutually orthogonal projectors \(\pi_{i}\)'s. For a qubit, such a measurement can have only two outcomes: \(\{\pi_{i}:=\left|\psi_{i}\right\rangle\left\langle\psi_{i}\right|\ \mid\ \left\langle\psi_{i}|\psi_{j}\right\rangle=\delta_{ij}\ \text{for}\ i,j=0,1\}\). A \(k\)-outcome POVM \(\{e_{i}\}_{i=1}^{k}\) is called projective simulable if the outcome probabilities of the POVM elements can be obtained by coarse-graining the outcome probabilities of some \(d\)-outcome projective measurement with \(d<k\), _i.e._, \(\forall\ i\in[1,k]\), \(e_{i}=\sum_{j}P_{ij}\pi_{j}\), with \(\{\pi_{j}\}_{j\in[1,d]}\) being a \(d\)-outcome projective measurement and \(\{P_{ij}\}_{i}\) denoting probability distributions. For instance, the noisy qubit measurement \(\sigma_{\hat{n}}(\lambda)\equiv\{\frac{1}{2}(\mathbf{I}_{2}\pm\lambda\hat{n}.\sigma)\ |\ \lambda\in(0,1)\}\) can be simulated through the projective measurement \(\sigma_{\hat{n}}\equiv\{\frac{1}{2}(\mathbf{I}_{2}\pm\hat{n}.\sigma)\}\), since \(\frac{1}{2}(\mathbf{I}_{2}\pm\lambda\hat{n}.\sigma)=\frac{1+\lambda}{4}(\mathbf{I}_{2}\pm\hat{n}.\sigma)+\frac{1-\lambda}{4}(\mathbf{I}_{2}\mp\hat{n}.\sigma)\) [34; 44]. Not all POVMs are projective simulable, and such measurements are known to be useful for a number of information-theoretic tasks [45; 46; 47; 48; 49]. Our non-monopolizing social subsidy game provides a way to certify such qubit measurements in a semi-device-independent manner. Denoting the set of all qubit projective simulable measurements as \(\mathfrak{P}\mathfrak{S}(2)\), the result is formally stated as the following proposition.

**Proposition 2**.: _The maximum payoff \(\mathcal{R}_{max}^{\mathfrak{P}\mathfrak{S}(2)}(3)\) of the game \(\mathbb{G}(3)\), achievable when the players are restricted to performing measurements from the set \(\mathfrak{P}\mathfrak{S}(2)\), is upper bounded by \(\mathcal{R}^{\mathfrak{C}(2)}_{max}(3)\)._

The claim of Proposition 2 follows from the fact that, given the dimension \(d\) of the local sub-systems, the joint outcome probabilities obtained from any quantum state and projective measurement are the diagonal elements of the density matrix (the state) when written in the same basis as the projective measurement. Thus the same statistics can also be obtained from a classically correlated (diagonal) state and measurements in the computational basis. A payoff higher than the maximum classical payoff therefore certifies that the qubit measurements performed by the players are not projective simulable [50]. We highlight that this certification technique is semi-device-independent, with the experimenter requiring only knowledge of the local dimension (in this case \(d=2\)) of the state shared between the players. As shown in [51], certification of a POVM is possible even in a device-independent manner. However, such a device-independent certification demands the violation of a suitably designed Bell-type inequality, and hence requires each of the parties involved in the Bell test to randomly perform incompatible measurements on their part of the shared system.
Note that the technique of [51] detects non-projective measurements only if the subsystem dimension is \(d=2\), which is further guaranteed by a CHSH violation. In contrast, our semi-device-independent scheme requires a single measurement device for each of the parties, getting rid of the seed randomness in the inputs to the measurement devices. _Experimental results.-_ The quantum coin used in the experiment is a pair of photons entangled in their transverse spatial modes, produced via degenerate spontaneous parametric downconversion (SPDC) [40]. We show our experimental schematic in Fig. 2. A 405-nm continuous-wave ultraviolet laser is incident on a 5-mm-thick beta-barium borate (BBO) crystal (after passing through a pair of lenses, L1 and L2, of focal length 50 mm to control collimation). While passing through the crystal, a UV photon is probabilistically downconverted to a pair of infrared photons at a wavelength of 810 nm. We block the remaining UV photons with a long-pass filter (LP) and separately image the infrared photons onto two spatial light modulators (SLM1 and SLM2) through the combination of lenses L3 (f = 150 mm), L4 (f = 300 mm) and the beam splitter (BS). The SPDC process conserves optical orbital angular momentum (OAM), denoted by \(\ell\), which can take on any integer value, and hence is theoretically infinite-dimensional (in practice, the apertures in the experiment limit the dimensionality). We use the SLMs to project the entangled state onto the qubit space with \(\ell=\pm 3\), i.e. the Bell state \(|\psi^{+}\rangle\) with \(|0\rangle\equiv|\ell=-3\rangle\) and \(|1\rangle\equiv|\ell=3\rangle\). Choosing a value for \(|\ell|\) is a trade-off between high extinction of the two qubit states and high coincidence count rates. As the number of measurements is limited to nine settings in our experiment, we prioritised higher extinction, while compensating for lower rates with longer integration times. The amplitude of the individual contributions of the two-qubit state is balanced via angle tuning of the BBO crystal [41]. We implement each element of the trine POVM of Eq. (2) (denoted \(\Pi_{i}\) hereafter) by programming an appropriate hologram onto the SLMs. The photons reflected from the SLMs are coupled to single-mode fibres via lenses L5 and L6. The single-mode fibres are then connected to high-efficiency superconducting nanowire single-photon detectors (SNSPDs). The probability of a party going to a particular restaurant is given by the probability of the corresponding POVM outcome, which can be obtained from photon counts. To get the joint probability, we record the coincidence count (\(C_{ij}\)) between Alice's \(i\)th and Bob's \(j\)th measurement outcome. This is done via a time-tagging module (TT20, Swabian Instruments) by integrating for 3600 seconds per data point. We normalise the coincidence counts to evaluate the joint probability \(P(ij)\) for Alice and Bob going to the \(i^{th}\) and \(j^{th}\) restaurant, i.e., \(P(ij)=C_{ij}/\sum_{i,j}C_{ij}\), and evaluate the payoff of Eq. (3). The entangled photons produced by SPDC are in the state \(|\psi^{+}\rangle\), which is transformed into the \(|\psi^{-}\rangle\) state by applying a \(\sigma_{z}\) rotation to one of the photons. Alternatively, we program the hologram for measuring \(\sigma_{z}\Pi_{i}\sigma_{z}\) for one of the photons, where the \(\Pi_{i}\)'s are the POVM elements as defined in Eq. (2). In the same manner, for the noisy case where the quantum coin is in the noisy state of Eq. (1),
instead of generating \(\mathcal{W}_{p}\), we add the noise to the measurements. The state \(\mathcal{W}_{p}\) signifies that one of the subsystems of the singlet state \(|\psi^{-}\rangle\) undergoes a depolarising channel of strength \(p\), _i.e._, the state remains unchanged with a probability \((1{+}3p)/4\) or undergoes any of the three Pauli rotations, each with a probability \((1{-}p)/4\).

Figure 2: Experimental Setup. A 405-nm pump laser (pump) goes through a nonlinear crystal (BBO) producing pairs of entangled photons at 810 nm. A long-pass filter (LP) separates the pump from the single photons, which are then split probabilistically by a 50/50 beamsplitter (BS) between the signal and idler arms representing Alice’s and Bob’s shares. The combination of the spatial light modulators (SLM1 and SLM2), single-mode fibres (yellow curves), and single-photon detectors (not shown) corresponds to measurements of the transverse mode of the single photons. The output of the single-photon detectors is fed to a coincidence circuit (CC) to record the correlations. Lenses (Ls) are placed along the optical path to optimise mode matching.

In our experiment, we introduce this depolarising channel in the measurement settings by implementing the POVM elements \(\Pi_{i}\) affected by the noise. This can be done by measuring \(\{\Pi_{i}\}\) with a probability of \((1{+}3p)/4\) and measuring \(\{\sigma_{j}\Pi_{i}\sigma_{j}\}\) with a probability of \((1{-}p)/4\), where the \(\{\sigma_{j}\}\) represent the Pauli operations. Experimentally, we implement this noisy POVM by performing a weighted time average over the relevant measurements. For a total acquisition time of \(T\), we measure \(\{\Pi_{i}\}\) for a duration of \(T(1{+}3p)/4\) and measure each \(\{\sigma_{j}\Pi_{i}\sigma_{j}\}\) for a duration of \(T(1{-}p)/4\). Thus the temporal degree of freedom is used as a pointer for the Kraus operators of the depolarising channel, and time-averaging erases this pointer information, leading to a statistical mixture of the Kraus operators [42]. Results obtained in the experiment are depicted in Fig. 3. The payoffs computed from the probabilities obtained in our experiment are all above the classical limit of 0.125 (dashed green line) for \(p>0.6\), with the highest value being \(0.150\pm 0.003\), obtained for the noiseless case. The experimental payoffs as a function of the noise (denoted by the depolarisation strength \(p\)) are given by the blue dots. The ideal payoffs are given by the dash-dotted orange line (i.e., if we had a perfect maximally entangled state for the correlated coins and perfect POVMs). The discrepancies between the experiment and the ideal case can be accounted for by our imperfect entangled state; the purple solid line is the expected payoff given the entangled state that we obtained from SPDC (with 97.0% fidelity to \(|\psi^{+}\rangle\) and purity of 95.2%). Nevertheless, our experiment firmly establishes a quantum advantage in correlated coin-tossing even with a significant amount of noise. Apart from establishing the advantage in generating shared randomness, our experiment also has another interesting implication, as it constitutes a semi-device-independent certification of non-projective measurement.
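The time-averaging trick above works because the depolarising channel is unital and self-dual: applying its Kraus operators to the POVM elements reproduces exactly the statistics of applying the channel to the state. A minimal numpy check of this identity (our own illustration, with a randomly chosen state and effect) is sketched below:

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def twirl(X, p):
    """Kraus form used above: keep X with probability (1+3p)/4 and
    conjugate by each Pauli with probability (1-p)/4. On states this
    is exactly the depolarising channel of strength p."""
    return (1 + 3 * p) / 4 * X + sum((1 - p) / 4 * (s @ X @ s) for s in paulis)

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T / np.trace(A @ A.conj().T)      # a random qubit state
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
E = B @ B.conj().T / (2 * np.trace(B @ B.conj().T))  # a random effect, E <= I

p = 0.37
state_noise = np.trace(E @ twirl(rho, p))    # depolarise the state
meas_noise = np.trace(twirl(E, p) @ rho)     # depolarise the POVM element
print(np.allclose(state_noise, meas_noise))  # True: the statistics coincide
```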
_Discussions.-_ Tests of the quantum nature of physical systems are complicated by the requirement of randomness in the inputs to such tests. For example, the true randomness that quantum systems are known to exhibit can only be certified in a device-independent manner using Bell's theorem [54], which in turn needs true randomness (however small) in the inputs [55]; at least qualitatively, the argument is circular. The quantum advantage for shared randomness processing that we experimentally demonstrate in this paper is important as it provides a way to test non-classical correlations without the need for true randomness, the only test of this kind. Our method, certifying both non-classical correlations and generalised measurements, is semi-device-independent, requiring only knowledge of the dimensionality of the subsystem. We show that a two-qubit system prepared in a maximally entangled state can yield a desired correlated coin state that is impossible to obtain from any pair of correlated classical 2-level coins, leading to a higher payoff in our two-party game. In contrast to the advantage in randomness processing demonstrated in [57; 58], which involves the probability distribution of one random variable, our work focuses on a new kind of quantum advantage in generating _shared_ randomness between two distant parties, involving two random variables and their joint probability distributions. This latter quantum advantage will find use in distributed computational tasks, as in the example of the game we show here. Our work sets the stage for further studies of quantum advantage in multi-party shared randomness processing for qubits or even higher-dimensional systems. Given that randomness processing is an important computational primitive, we envision our work will be useful for information processing in future quantum networks. _Acknowledgements.-_ We thank Valerio Scarani for helpful discussions. SSB acknowledges support from the Foundation for Polish Science (IRAP project, ICTQT, contract no. MAB/2018/5, co-financed by EU within the Smart Growth Operational Programme). This research was supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQUS, CE170100009) and Discovery Project (DP200102273). MB acknowledges funding from the National Mission in Interdisciplinary Cyber-Physical Systems from the Department of Science and Technology through the I-HUB Quantum Technology Foundation (Grant no: I-HUB/PDF/2021-22/008), support through the research grant of an INSPIRE Faculty fellowship from the Department of Science and Technology, Government of India, and the start-up research grant from SERB, Department of Science and Technology (Grant no: SRG/2021/000267). JR is supported by a Westpac Bicentennial Foundation Research Fellowship.

Figure 3: Payoff of the classical optimum strategy vs. the quantum strategy. The optimal classical payoff of \(0.125\) is shown as the dashed green line. The ideal quantum payoff following Eq. (3) is plotted as a dash-dotted orange line. Theoretically expected payoffs considering the imperfect entangled state are shown as the solid purple line. Payoffs obtained in experiments are shown as blue dots (along with error bars), and they are all above the classical limit for \(p>0.6\).
2307.14863
IML-ViT: Benchmarking Image Manipulation Localization by Vision Transformer
Advanced image tampering techniques are increasingly challenging the trustworthiness of multimedia, leading to the development of Image Manipulation Localization (IML). But what makes a good IML model? The answer lies in the way to capture artifacts. Exploiting artifacts requires the model to extract non-semantic discrepancies between manipulated and authentic regions, necessitating explicit comparisons between the two areas. With the self-attention mechanism, naturally, the Transformer should be a better candidate to capture artifacts. However, due to limited datasets, there is currently no pure ViT-based approach for IML to serve as a benchmark, and CNNs dominate the entire task. Nevertheless, CNNs suffer from weak long-range and non-semantic modeling. To bridge this gap, based on the fact that artifacts are sensitive to image resolution, amplified under multi-scale features, and massive at the manipulation border, we formulate the answer to the former question as building a ViT with high-resolution capacity, multi-scale feature extraction capability, and manipulation edge supervision that could converge with a small amount of data. We term this simple but effective ViT paradigm IML-ViT, which has significant potential to become a new benchmark for IML. Extensive experiments on five benchmark datasets verified our model outperforms the state-of-the-art manipulation localization methods.Code and models are available at \url{https://github.com/SunnyHaze/IML-ViT}.
Xiaochen Ma, Bo Du, Zhuohang Jiang, Ahmed Y. Al Hammadi, Jizhe Zhou
2023-07-27T13:49:27Z
http://arxiv.org/abs/2307.14863v3
# IML-ViT: Benchmarking Image Manipulation Localization by Vision Transformer ###### Abstract Advanced image tampering techniques are increasingly challenging the trustworthiness of multimedia, leading to the development of Image Manipulation Localization (IML). But what makes a good IML model? The answer lies in the way to capture artifacts. Exploiting artifacts requires the model to extract non-semantic discrepancies between manipulated and authentic regions, necessitating explicit comparisons between the two areas. With the self-attention mechanism, naturally, the Transformer should be a better candidate to capture artifacts. However, due to limited datasets, there is currently no pure ViT-based approach for IML to serve as a benchmark, and CNNs dominate the entire task. Nevertheless, CNNs suffer from weak long-range and non-semantic modeling. To bridge this gap, based on the fact that artifacts are sensitive to image resolution, amplified under multi-scale features, and massive at the manipulation border, we formulate the answer to the former question as building a ViT with high-resolution capacity, multi-scale feature extraction capability, and manipulation edge supervision that could converge with a small amount of data. We term this simple but effective ViT paradigm IML-ViT, which has significant potential to become a new benchmark for IML. Extensive experiments on five benchmark datasets verified our model outperforms the state-of-the-art manipulation localization methods. Code and models are available at [https://github.com/SunnyHaze/IML-ViT](https://github.com/SunnyHaze/IML-ViT) ## 1 Introduction As multimedia editing technology advances, we increasingly need advanced Image Manipulation Localization (IML) methods to cope with existing tampered images and avoid security threats [27]. As shown in Figure 1, this task aims to detect whether images have been modified and to localize the modified regions at the pixel level in a segmentation manner. Image manipulation can be generally classified into three types [30, 27]: (1) _splicing_: copying a region from an image and pasting it into another image; (2) _copy-move_: cloning a region within an image; (3) _inpainting_: erasing regions from an image and filling the missing regions with visually plausible contents. As shown in Table 1, most existing methods for IML tasks greatly benefit from tracing artifacts with various CNN-based feature extractors. "Artifacts" refer to unique visible traces (as shown in Figure 1) and invisible low-level feature inconsistencies (e.g., in noise or high frequencies) resulting from manipulation. As tampering aims to deceive the audience by creating semantically meaningful and perceptually convincing images, visual traces typically manifest at a non-semantic level, distributed in textures around the manipulated area. Additionally, low-level features, like noise inconsistencies introduced by different cameras, can also serve as crucial evidence to reveal manipulated regions within the authentic area.
Thus, based on previous works, _the key to IML lies in capturing the artifacts by identifying non-semantic visible traces and low-level inconsistencies._ However, convolution propagates information in a _collective_ manner, making CNNs more suitable for semantic-related tasks, such as object detection, rather than tracing non-semantic artifacts that often surround an object. Further, to identify low-level inconsistencies, we need to explicitly compare the relationships between different regions. But in deeper networks, CNNs may overlook global dependencies [22], rendering them less effective in capturing differences between regions. Given the weaknesses of CNNs in non-semantic and long-distance modeling, we ask: _Is there any other optimal backbone for solving IML tasks?_

Figure 1: **An example of three types of manipulations and their corresponding visible artifacts.** Visible traces include distortions, sudden changes, or anomalies caused by tampering operations, which are frequently found at the junction between two regions and appear in very detailed positions. For a better view, zooming in is recommended.

Considering the goal of capturing the feature discrepancies between the manipulated and authentic regions, we argue that self-attention should be a better solution for IML, _as self-attention can explicitly model relationships between any areas regardless of their visual semantic relevance, especially for non-adjacent regions_. The performance boost achieved by SPAN [13] highlights the effectiveness of integrating self-attention structures into convolutional layers. Furthermore, as artifacts are often distributed at the patch level rather than at the pixel or image level, the Vision Transformer (ViT) [8] naturally becomes the ideal choice to trace artifacts and make comparisons. While ViT may be suitable for IML tasks, directly applying the original ViT architecture is insufficient. We suggest that IML involves three key discrepancies from traditional segmentation tasks, which have not yet received sufficient attention in previous IML methods, as supported by Table 1. These discrepancies are: **High resolution** While semantic segmentation and IML share similar inputs and outputs, IML tasks are more information-intensive, focusing on detailed artifacts rather than macro-semantics at the object level. Existing methods use various extractors to trace artifacts, but their resizing pre-processing already harms these first-hand artifacts. Therefore, preserving the _original resolution_ of the images is crucial to retain essential artifacts for the model to learn. **Edge supervision** As mentioned earlier, IML's primary focus lies in detecting the distinction between the tampered and authentic regions. This distinction is most pronounced at the boundary of the tampered region, whereas typical semantic segmentation tasks only require identifying information within the target region. From another perspective, it becomes evident that visible artifacts are more concentrated along the periphery of the tampered region rather than within it (as shown in Figure 1). Consequently, the IML task must guide the model to concentrate on the manipulated region's edges and learn their distribution for better performance. **Multi-scale supervision** The percentage of tampered area to the total area varies significantly across different IML datasets. CASIAv2 [7] contains a considerable amount of sky-replacement tampering, whereas Defacto [23] mostly consists of small object manipulations.
On average, CASIAv2 has 7.6% of pixels as tampered areas, while Defacto has only 1.7%. Additionally, IML datasets are labor-intensive to build and often limited in size, which poses challenges in bridging the gap between datasets. Therefore, incorporating multi-scale supervision from the pre-processing and model design stages is essential to enhance generalization across different datasets. In this paper, we present IML-ViT, an end-to-end ViT-based model that solves IML tasks. Regarding the three key discrepancies, we devise IML-ViT with the following components: 1) a ViT that accepts high-resolution input, where most of the global attention blocks are replaced with windowed attention as a trade-off for time complexity, and whose parameters are initialized with ImageNet-1k MAE [11] pre-training; 2) a _simple feature pyramid_ [18] network to introduce multi-scale supervision; 3) a morphology-based edge loss strategy to ensure edge supervision.

\begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline **Methods** & **View** & **Training Dataset For Evaluation** & **Backbone** & **Resolution** \\ \hline ManTra-Net [30], 2019 & \begin{tabular}{l} BayarConv (Noise) \\ SRM filter (Noise) \\ \end{tabular} & Private, 102,028 images & wider VGG & 512\(\times\)512 \\ \hline SPAN [13], 2020 & \begin{tabular}{l} BayarConv (Noise) \\ SRM filter (Noise) \\ \end{tabular} & \begin{tabular}{l} Private, 102,028 images \\ (copied parameters from ManTra-Net) \\ \end{tabular} & \begin{tabular}{l} wider VGG \\ self-attention \\ \end{tabular} & \begin{tabular}{l} raw \& 224x224 \\ (resized feature) \\ \end{tabular} \\ \hline CR-CNN [32], 2020 & \begin{tabular}{l} BayarConv (Noise) \\ SRM filter (Noise) \\ \end{tabular} & Public: CASIA v2 & mask R-CNN & short side to 600 \\ \hline GSR-Net [35], 2020 & Edge Prediction & Public: CASIA v2 & DeepLab & 300x300 \\ \hline MVSS-Net [2], 2021 & \begin{tabular}{l} BayarConv (Noise) \\ Sobel (Edge) \\ \end{tabular} & Public: CASIA v2 or Defacto & FCN & 512\(\times\)512 \\ \hline MM-Net [33], 2021 & BayarConv (Noise) & Private: synthesized & mask R-CNN & short side to 800 \\ \hline TransForensics [10], 2021 & - & Public: CASIA v2, COVERAGE and IMD2020 & FCN + Transformer blocks & 512\(\times\)512 \\ \hline ObjectFormer [28], 2022 & High-Frequency & Private: large synthesized & CNN + Transformer & 256x256 \\ \hline _IML-ViT (this paper)_ & Edge supervision & Public: CASIA v2 & ViT & \begin{tabular}{l} 1024x1024 \\ (zero-padding) \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 1: **Overview of the state-of-the-art end-to-end models for image manipulation localization.** _View_ can be regarded as prior knowledge widely accepted in the field of manipulation detection. Edge information can better trace visible artifacts, while noise and high-frequency features mainly compare the low-level differences between tampered and authentic regions.

The overview of IML-ViT is shown in Figure 2. To the best of our knowledge, ObjectFormer [28] and TransForensics [10] are the only Transformer-related models solving IML tasks. However, their backbones differ significantly from ViT, as we will explain in the Related Works section. Thus, IML-ViT can be regarded as the pioneering model that utilizes ViT as the backbone for tackling IML tasks. The experiments were conducted based on a common evaluation protocol [28, 36, 2, 30, 35] to measure the generalizability and performance of our IML-ViT.
The model is trained on the CASIAv2 [7] dataset and then tested on five other public datasets. The results demonstrate that IML-ViT has surpassed all SoTA models, indirectly validating the reliability of the three key aspects of IML that we propose. Thus, we believe that IML-ViT is a powerful candidate to become a new SoTA model for IML. With no specialized modules, IML-ViT has the potential to serve as a simple yet high-performing benchmark for IML and demonstrates that IML tasks can be solved without manually designed feature extractors or complicated feature fusion, steering existing IML research toward new paradigms. In summary, our contributions are as follows: * We reveal the essential discrepancies between IML and traditional segmentation tasks by identifying three requirements that were overlooked by previous studies: high resolution, multi-scale supervision, and edge supervision. * Targeting these three requirements, we modify the components of ViT and establish IML-ViT, the first ViT-based model for image manipulation localization. * Extensive experiments show that IML-ViT outperforms state-of-the-art models in both \(F_{1}\) and AUC scores on five public benchmark datasets, verifying the solidity of the three requirements we propose. ## 2 Related works ### Transformer-based IML Method Currently, there are two Transformer-based models in the field of IML, namely ObjectFormer [28] and TransForensics [10]. Though named "Trans" and "Former", these two models are hardly in line with ViT in overall structure and design philosophy. In particular, unlike ViT, which directly embeds the patched images before encoding, both methods utilize several CNN layers to initially extract feature maps and subsequently employ Transformers for further encoding, leading to the neglect of crucial first-hand low-level information. Moreover, in ObjectFormer (OF)'s encoder, the "query" inputs are learnable vectors representing object prototypes \(o_{i}\), not image embeddings. As a result, it focuses on capturing dependencies between object prototypes and image tokens, whereas a standard ViT encoder solely models the dependencies between image embeddings. Besides, OF is pre-trained with a large tampering-oriented synthesized private dataset, while IML-ViT achieves superior performance with pre-training on the more accessible ImageNet-1k dataset and outperforms OF. On the other hand, the primary distinction between TransForensics and ViT lies in how the Transformer blocks are applied. While ViT uses these blocks sequentially, TransForensics employs them in parallel, wherein each feature map of an FCN output is decoded with a Transformer block, then fused together for the final output. In short, IML-ViT can be considered the first IML method with a vanilla ViT as its backbone and could easily benefit from recently advanced algorithms related to ViT, proving that IML tasks do not require complex designs. **Paradigm of IML** Research in the early years focused on detecting a single kind of manipulation, with studies on copy-move [3, 25], splicing [4, 14, 15], and removal (inpainting) [38], respectively. However, since the specific type of tampering is unknown in practice, after 2018, general manipulation detection has become the focus of research. Many existing works follow the paradigm of "feature extraction + backbone inference", especially designing extractors to exploit tamper-related information from artifacts. Yang _et al._ design CR-CNN [32] with a noise-sensitive BayarConv [1] as the first convolution layer.
Zhou _et al._ develop an SRM filter to mine the difference in noise distribution to support decision-making in their RGB-N network [37], while Wu _et al._ [30] and Hu _et al._ [13] combine SRM, BayarConv, and Conv2D as the first layer of their models. Besides noise-related extractors, Wang _et al._ [28] employ a DCT transform to extract high-frequency features, which are then combined with RGB features and fed into a transformer decoder. Chen _et al._ suggest MVSS-Net [2] with a Sobel-supervised edge branch and a BayarConv noise branch; dual attention is then utilized to fuse them. Nevertheless, a feature may only be effective for a single type of tampering; e.g., noise is more sensitive to splicing from different images but less effective for copy-move within the same image. To avoid this issue, our IML-ViT aims to step out of the "extraction + fusion" paradigm and let the model itself learn as much knowledge as possible from the datasets rather than simply rely on _a priori_ knowledge. **Resolution of IML** Resolution is an important aspect but has long been neglected. Wu _et al._ [30] reported that re-scaling images can harm the performance of their model. However, we observed that most existing methods re-scale images to a uniform size during pre-processing for parallel training purposes (see the _Resolution_ column of Table 1). In most cases, the images were down-sampled, destroying the aspect ratio and compressing the original information in the dataset. This can greatly affect the distribution of noise, edge, and high-frequency features in the raw image, thereby weakening the performance of the model. In this paper, we make a novel attempt to preserve the original resolution of the images as much as possible by zero-padding them to 1024x1024 for parallel computation. This process leaves the freshest information for the model to mine the differences between the manipulated and authentic regions and makes the best possible use of the dataset. ## 3 Proposed Method In this section, we introduce our IML-ViT paradigm. As shown in Figure 2, it consists of three main components: (1) a windowed ViT to balance the high-resolution inputs and the space complexity; (2) a _simple feature pyramid_ network (SFPN) to introduce multi-scale features; and (3) a lightweight MLP decoder head with additional edge supervision, which aids in focusing on artifact-related features and ensures stable convergence. ### ViT Backbone **High resolution** The ViT encoder aims to mine the detailed artifacts and explore the differences among suspicious areas. Thus, it is essential to **preserve the original resolution of each image** to avoid downsampling that could potentially distort the artifacts. However, when training in parallel, all images within a batch must have the same resolution. To reconcile these demands, we adopt a novel approach that has not been applied to any IML method before. Rather than simply rescaling images to the same size, we pad images and ground-truth masks with zeros, placing the image on the top-left side, to match a larger constant resolution. This strategy maintains crucial low-level visual information of each image, allowing the model to explore better features instead of depending on handcrafted prior knowledge. To implement this approach, we first adjust the embedding dimensions of the ViT encoder to the larger scale.
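As an illustration of the padding strategy just described, a minimal PyTorch-style sketch is given below (our own pseudocode, not the released implementation); the resize fallback for images whose longer side exceeds 1024 follows the Implementation paragraph in the Experiments section:

```python
import torch
import torch.nn.functional as F

def pad_to_constant(img, mask, size=1024):
    """Zero-pad an image (3, h, w) and its ground-truth mask (1, h, w)
    to (., size, size), keeping the original content at the top-left
    corner. Images whose longer side exceeds `size` are first resized
    (longer side -> size) while preserving the aspect ratio."""
    _, h, w = img.shape
    if max(h, w) > size:
        scale = size / max(h, w)
        h, w = int(h * scale), int(w * scale)
        img = F.interpolate(img[None], size=(h, w), mode='bilinear',
                            align_corners=False)[0]
        mask = F.interpolate(mask[None].float(), size=(h, w),
                             mode='nearest')[0]
    # F.pad takes (left, right, top, bottom) for the last two dims
    pad = (0, size - w, 0, size - h)
    return F.pad(img, pad), F.pad(mask, pad)

x = torch.rand(3, 384, 256)
m = (torch.rand(1, 384, 256) > 0.5).float()
xp, mp = pad_to_constant(x, m)
print(xp.shape, mp.shape)   # torch.Size([3, 1024, 1024]) twice
```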
**Windowed attention** To balance the computation cost from high resolution, we adopt a technique from previous works [17, 18], which periodically replaces part of the global attention blocks in ViT with windowed attention blocks. This method ensures global information propagation while reducing complexity. **MAE pre-train** We initialize the ViT with parameters pre-trained on ImageNet-1k [5] with the Masked Auto Encoder (MAE) [11]. This self-supervised method can greatly alleviate the over-fitting problem and helps the model generalize, as supported by Table 3. More specifically, we represent input images as \(X\in\mathbb{R}^{3\times h\times w}\), and ground-truth masks as \(M\in\mathbb{R}^{1\times h\times w}\), where \(h\) and \(w\) correspond to the height and width of the image, respectively. We then pad them to \(X_{p}\in\mathbb{R}^{3\times H\times W}\) and \(M_{p}\in\mathbb{R}^{1\times H\times W}\). Balancing computational cost against the resolution of the datasets we employ in Table 2, we take \(H=W=1024\) as constants in our implementation. Then \(X_{p}\) is passed into the windowed ViT-Base encoder with 12 layers, with a complete global attention block retained every 3 layers. The above process can be formulated as follows: \[G_{e}=\mathcal{V}(X_{p})\in\mathbb{R}^{768\times\frac{H}{16}\times\frac{W}{16}} \tag{1}\] where \(\mathcal{V}\) denotes the ViT, and \(G_{e}\) stands for the encoded feature map. The number of channels, 768, keeps the information density the same as that of the RGB image at the input, since \(768\times\frac{H}{16}\times\frac{W}{16}=3\times H\times W\). ### Simple Feature Pyramid To introduce multi-scale supervision, we adopt the _simple feature pyramid_ network (SFPN) after the ViT encoder, as suggested in ViTDet [34]. This method takes the single output feature map \(G_{e}\) from the ViT and then uses a series of convolutional and deconvolutional layers to perform up-sampling and down-sampling to obtain multi-scale feature maps: \[F_{i}=\mathcal{C}_{i}(G_{e})\in\mathbb{R}^{C_{S}\times\frac{H}{2^{i+2}}\times\frac{W}{2^{i+2}}},\ i\in\{1,2,3,4\} \tag{2}\] where \(\mathcal{C}_{i}\) denotes the convolution series, and \(C_{S}\) is the output channel dimension for each layer in the SFPN. This multi-scale method does not change the base structure of ViT, which allows us to easily introduce recent ViT advances to the backbone. ### Light-weight Predict Head For the final prediction, we aimed to design a head that is simple enough to reduce memory consumption while also demonstrating that the improvements come from the advanced design of the ViT encoder and the multi-scale supervision. Based on these ideas, we adopted the decoder design from SegFormer [31], which outputs a smaller predicted mask \(M_{e}\) with a resolution of \(1\times\frac{H}{4}\times\frac{W}{4}\). The lightweight All-MLP decoder first applies a linear layer to unify the channel dimension. It then up-samples all the features to the same resolution of \(C_{D}\times\frac{H}{4}\times\frac{W}{4}\) with bilinear interpolation and concatenates all the features together, as shown in Figure 3. Finally, a series of linear layers is applied to fuse all the layers and make the final prediction. We can formulate the prediction head as follows: \[P=MLP\{\odot_{i}(W_{i}F_{i}+b_{i})\}\in\mathbb{R}^{\frac{H}{4}\times\frac{W}{4}\times 1} \tag{3}\] Here, \(P\) represents the predicted probability map for the manipulated area, \(\odot\) denotes the concatenation operation, and \(MLP\) refers to an MLP module.
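To make Eqs. (2) and (3) concrete, the following compact PyTorch sketch builds a simple feature pyramid over a single ViT feature map and a SegFormer-style MLP head. The specific up/down-sampling operators and channel widths here are our assumptions for illustration, not the exact released configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFeaturePyramid(nn.Module):
    """Eq. (2): map the single ViT feature G_e (stride 16) to four
    scales H/8, H/16, H/32, H/64, each with C_S channels."""
    def __init__(self, in_dim=768, c_s=256):
        super().__init__()
        self.scale_ops = nn.ModuleList([
            nn.ConvTranspose2d(in_dim, in_dim, 2, stride=2),  # x2 up -> H/8
            nn.Identity(),                                    # H/16
            nn.MaxPool2d(2),                                  # H/32
            nn.Sequential(nn.MaxPool2d(2), nn.MaxPool2d(2)),  # H/64
        ])
        self.out_convs = nn.ModuleList(
            [nn.Conv2d(in_dim, c_s, 1) for _ in range(4)])

    def forward(self, g_e):
        return [conv(op(g_e))
                for op, conv in zip(self.scale_ops, self.out_convs)]

class MLPHead(nn.Module):
    """Eq. (3): unify channels, up-sample all scales to H/4 x W/4,
    concatenate, and fuse with linear (1x1 conv) layers."""
    def __init__(self, c_s=256, c_d=128):
        super().__init__()
        self.linears = nn.ModuleList(
            [nn.Conv2d(c_s, c_d, 1) for _ in range(4)])
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * c_d, c_d, 1), nn.ReLU(), nn.Conv2d(c_d, 1, 1))

    def forward(self, feats, out_hw):
        ups = [F.interpolate(lin(f), size=out_hw, mode='bilinear',
                             align_corners=False)
               for lin, f in zip(self.linears, feats)]
        return self.fuse(torch.cat(ups, dim=1))  # logits, 1 x H/4 x W/4

g_e = torch.rand(1, 768, 64, 64)                 # ViT output for 1024x1024
feats = SimpleFeaturePyramid()(g_e)
pred = MLPHead()(feats, out_hw=(256, 256))
print([f.shape[-1] for f in feats], pred.shape)  # [128, 64, 32, 16], (1,1,256,256)
```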
### Edge Supervision Loss To account for the fact that artifacts are typically more prevalent at the edges of tampered regions, where the differences between manipulated and authentic areas are most noticeable, we developed a strategy that places greater emphasis on the boundary region of the manipulated area. Specifically, we generate a binary edge mask \(M^{\star}\) from the original mask image \(M\) using mathematical morphology operations, including dilation (\(\oplus\)) and erosion (\(\ominus\)) [26], followed by taking the absolute values of the result. The formula we use to generate the edge mask is: \[M^{\star}=|(M\ominus B(k))-(M\oplus B(k))| \tag{4}\] where \(B(x)\) generates a \((2x+1)\times(2x+1)\) _cross_ matrix, in which only the \(x^{th}\) column and \(x^{th}\) row have a value of 1 while the rest of the matrix contains 0s. The integer value \(x\) is selected to be approximately equal to the width of the white area in the boundary mask. Examples of the edge mask generated using this approach are shown in Figure 4. **Combined Loss** To compute the loss function, we first pad the ground-truth mask \(M\) and the edge mask \(M^{\star}\) to the size of \(H\times W\), and refer to them as \(M_{p}\) and \(M^{\star}_{p}\), respectively. We then calculate the final loss using the following formula: \[\mathcal{L}=\mathcal{L}_{seg}(P,M_{p})+\lambda\cdot\mathcal{L}_{edge}(P*M^{\star}_{p},M_{p}*M^{\star}_{p}) \tag{5}\] where \(*\) denotes the point-wise product, which restricts the maps to the edge region. Both \(\mathcal{L}_{seg}\) and \(\mathcal{L}_{edge}\) are binary cross-entropy loss functions, and \(\lambda\) is a hyper-parameter that controls the balance between the segmentation and edge detection losses. By default, we set \(\lambda=20\) to guide the model to focus on the edge regions. While this strategy is straightforward, as we will discuss in the Experiments section, it remarkably accelerates model convergence, stabilizes the training process, and mitigates potential NaN (Not-a-Number) issues. Therefore, we consider this strategy to be powerful prior knowledge for IML problems, deserving attention in future research.

Figure 4: **Examples of generating the edge mask \(M^{\star}\); white represents the manipulated area.** Here \(k\) is set to 5 while the image size is 1024x682. The absolute value operation ensures that, whether the tampered region or the non-tampered region dominates, the mask only emphasizes the junction of the two.

Figure 3: **Diagrams of the predict head.** The rectangles on the left represent the outputs of the SFPN.

Figure 2: **Overview of the general structure of IML-ViT.**
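A minimal sketch of the edge mask of Eq. (4) and the combined loss of Eq. (5) is given below (our own illustration, using scipy morphology and BCE on probability maps; the variable names and toy data are hypothetical):

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import binary_dilation, binary_erosion

def cross(k):
    """B(k): a (2k+1) x (2k+1) cross matrix whose middle row and middle
    column are 1 and all other entries are 0."""
    b = np.zeros((2 * k + 1, 2 * k + 1), dtype=bool)
    b[k, :] = b[:, k] = True
    return b

def edge_mask(mask, k=5):
    """Eq. (4): M* = |(M erosion B(k)) - (M dilation B(k))|, a band
    around the boundary of the manipulated region."""
    m = mask.astype(bool)
    er = binary_erosion(m, structure=cross(k)).astype(np.float32)
    di = binary_dilation(m, structure=cross(k)).astype(np.float32)
    return np.abs(er - di)

def combined_loss(probs, gt, gt_edge, lam=20.0):
    """Eq. (5): full-mask BCE plus lambda-weighted BCE restricted to
    the edge band via the point-wise product with M*."""
    l_seg = F.binary_cross_entropy(probs, gt)
    l_edge = F.binary_cross_entropy(probs * gt_edge, gt * gt_edge)
    return l_seg + lam * l_edge

gt = np.zeros((64, 64), dtype=np.float32)
gt[16:48, 16:48] = 1.0                      # a toy tampered region
m_star = torch.from_numpy(edge_mask(gt))
probs = torch.sigmoid(torch.randn(64, 64))  # stand-in prediction P
print(combined_loss(probs, torch.from_numpy(gt), m_star))
```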
## 4 Experiments ### Experimental setup **Datasets** To make a fair comparison with the state-of-the-art, we mainly evaluate the performance of our model based on a commonly used protocol [2, 35, 13] for image tampering localization. We train the model on the CASIAv2 [7] dataset1 and then test it on other, smaller public datasets, including CASIAv1 [7], NIST16 [9], COVERAGE [29], Columbia [12] and Defacto [23]; see Table 2. However, there are no authentic images as negative examples in the Defacto dataset. Following the approach of MVSS-Net, we randomly selected 6000 untouched images from the MS-COCO dataset [19] and combined them with 6000 images from Defacto to create the Defacto-12k dataset for validation. Footnote 1: We noticed some resolution errors in the public CASIAv2 dataset. Therefore, we have released an adjusted version of CASIAv2 with corrected ground truth. Details can be found at [https://github.com/SunnyHaze/CASIA2.0-Corrected-Groundtruth](https://github.com/SunnyHaze/CASIA2.0-Corrected-Groundtruth) **Evaluation Criteria** We evaluate our model using the pixel-level \(F_{1}\) score with a fixed threshold of \(0.5\) and the Area Under the Curve (AUC), which are commonly used evaluation metrics in previous works. However, it is worth noting that AUC can be influenced by an excessive number of true-negative pixels in common IML datasets, leading to an overestimation of model performance. Nevertheless, our model achieves state-of-the-art performance in both \(F_{1}\) score and AUC. In some previous methods, the _optimal F1 score_ is used for evaluation, where the best \(F_{1}\) score is selected by testing the prediction probability map at different thresholds. However, this approach becomes impractical in real-world scenarios since the optimal threshold for the real-world distribution is typically unknown. Instead, we report the pixel-level \(F_{1}\) score using a uniform threshold of \(0.5\), providing a more practical metric for evaluating the model's performance. **Implementation** Our IML-ViT model is implemented with PyTorch and trained on NVIDIA RTX 3090 GPUs for 200 epochs with a batch size of 1 on each GPU (16GB of graphics memory). A batch accumulation strategy is applied with an effective batch size of 32. We pad all images to a resolution of 1024x1024, except for those that exceed this limit; for larger images, we resize the longer side to 1024 while maintaining the aspect ratio. Before training, following MVSS-Net, common data augmentation techniques were applied, including re-scaling, flipping, blurring, rotation, and various naive manipulations (e.g., randomly copy-moving or inpainting rectangular areas within a single image). We initialize ViT-B with MAE pre-trained weights on ImageNet-1k and use the AdamW optimizer [21] with a base learning rate of 1e-4, scheduled with a cosine decay strategy [20]. Early stopping was employed during training. ### Ablation Studies To better evaluate the contribution of each component to the model performance, we conducted experiments with multiple settings and compare them with the _full setup_ to test the four aspects we are most concerned about. For _initialization_, besides the _full setup_ with MAE pre-training on ImageNet-1k, we test Xavier initialization and ordinary ViT pre-training on ImageNet-21k by classification. To explore the impact of _high resolution_, we simply resized all images to 512x512 during training instead of using our padding strategy. For _edge supervision_ and _multi-scale supervision_, we evaluate them by removing the respective structures from the model. During the ablation studies, we trained the model _only_ with manipulated images in CASIAv2 and evaluate its pixel-level \(F_{1}\) score on the CASIAv1, COVERAGE, Columbia, and NIST16 datasets. The experiment setups and results are shown in Table 3. In conclusion, our findings are: **MAE pretrain is mandatory.** Indeed, dataset insufficiency is a significant challenge in building ViT-based IML methods. As shown in Table 2, public datasets for IML are small in size, which cannot satisfy the appetite of a vanilla ViT. As shown in the _w/o MAE_ rows of Table 3, using Xavier initialization to train the model resulted in complete non-convergence.
However, while regular ViT pre-training initialization with ImageNet-21k achieves acceptable performance on CASIAv1, which is homologous to CASIAv2, it exhibits poor generalization ability on other, non-homologous datasets. The results indicate that MAE greatly alleviates the problems of non-convergence and over-fitting of ViT on small datasets. This suggests that MAE pre-training is indispensable for ViT-based image manipulation localization, and demonstrates that MAE is a powerful and effective method for downstream tasks with limited datasets. **Edge supervision is crucial.** The performance of IML-ViT without edge loss shows significant variability with different random seeds, all eventually leading to gradient collapse, where the \(F_{1}\) score reaches 0 and the loss becomes NaN, as shown in Figure 5. On the other hand, when employing edge loss, all performance plots for IML-ViT exhibit consistent behavior similar to the blue line in Figure 5, enabling fast convergence and smooth training up to 200 epochs. Furthermore, Table 3 confirms the contribution of the edge loss to the final performance. In summary, these results demonstrate that edge supervision effectively stabilizes IML-ViT convergence and can serve as highly efficient prior knowledge for IML problems.

\begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{**Usage**} & \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c}{**Type**} & \multicolumn{3}{c}{**Manipulation type**} & \multicolumn{2}{c}{**Resolution**} \\ \cline{3-9} & & **Authentic** & **Manipulated** & **copy** & **splice** & **inpaint** & **min** & **max** \\ \hline Train & CASIAv2 & 7491 & 3063 & 3235 & 1828 & 0 & 240 & 800 \\ \hline \multirow{5}{*}{Test} & CASIAv1 & 800 & 920 & 459 & 461 & 0 & 256 & 384 \\ & NIST16 & 0 & 564 & 68 & 288 & 208 & 480 & **5616** \\ & COVERAGE & 100 & 100 & 100 & 0 & 0 & 1585 & 572 \\ & Defacto-12k & 6000 & 6000 & 2000 & 2000 & 2000 & 120 & 640 \\ & Columbia & 183 & 180 & 0 & 180 & 0 & 568 & **1152** \\ \hline \hline \end{tabular} \end{table} Table 2: **Details of the six datasets in our experiments.**

**High resolution is effective for artifacts.** The improved performance shown in Table 3 for the _full setup_ model across four datasets validates the effectiveness of the high-resolution strategy. However, it is essential to note that the NIST16 dataset shows limited improvement when using higher resolutions. This observation can be attributed to the fact that the NIST16 dataset contains numerous images with resolutions exceeding 2000, and down-sampling these images to 1024 for testing may lead to considerable distortion of the original artifacts, consequently reducing the effectiveness of the learned features. Nevertheless, considering the SoTA score achieved, it becomes evident that IML-ViT can flexibly infer the manipulated area based on the richness of different information types. **Multi-scale supervision helps generalize.** After applying the SFPN, there is not much improvement in the average \(F_{1}\); the performance even slightly decreases on CASIAv1 and Coverage. But in exchange, the results on NIST16 gain a larger boost. Since CASIAv2 and CASIAv1 are homologous datasets, good performance on CASIAv1 cannot reflect the generalization ability well. Coverage, as a dataset with only 100 manipulated images, is somewhat limited as well.
Therefore, it is a worthwhile trade-off to achieve more improvement on NIST16, which has more images and diverse manipulation types. This validates that the SFPN does bring certain generalization benefits to the model. ### Comparison with SoTA **Evaluation barrier** While recent studies have introduced numerous state-of-the-art models, it remains challenging to compare them on an equal footing. This is partly due to the lack of publicly available code for the models and training processes [13, 28], as well as the utilization of massive synthesized datasets that are inaccessible to the wider research community [30, 33]. Therefore, we urge the community to adopt open-source practices and call for the generation strategies of large-scale datasets to be assessed separately from the model performance itself. Such measures are vital for ensuring fairness and promoting continued advancements in this field. To demonstrate the potential of IML-ViT as an effective benchmark, we conduct a comprehensive comparison with existing models in a fair manner.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{6}{c}{**Pixel-level \(F_{1}\) score**} \\ \cline{2-7} & **CASIAv1** & **Columbia** & **NIST16** & **Coverage** & **Defacto-12k** & **MEAN** \\ \hline HP-FCN*, ICCV19 [16] & 0.154 & 0.067 & 0.121 & 0.003 & 0.055 & 0.080 \\ ManTra-Net*, CVPR19 [30] & 0.155 & 0.364 & 0 & 0.286 & 0.155 & 0.192 \\ CR-CNN*, ICME20 [32] & 0.405 & 0.436 & 0.238 & 0.291 & 0.132 & 0.300 \\ GSR-Net*, AAAI20 [35] & 0.387 & 0.613 & 0.283 & 0.285 & 0.051 & 0.324 \\ MVSS-Net*, ICCV21 [2] & 0.452 & 0.638 & 0.292 & 0.453 & 0.137 & 0.394 \\ MVSS-Net (re-trained) & 0.435 & 0.303 & 0.203 & 0.329 & 0.097 & 0.270 \\ MVSS-Net++*, PAMI22 [6] & 0.513 & 0.660 & 0.304 & **0.482** & 0.095 & 0.411 \\ _IML-ViT (ours)_ & **0.658** & **0.836** & **0.339** & 0.425 & **0.156** & **0.482** \\ \hline \hline \end{tabular} \end{table} Table 4: **Cross-dataset evaluation of SoTA models.** Except for ManTra-Net and HP-FCN, which were trained on a privately synthesized dataset, all methods were trained on the CASIAv2 dataset. The best scores are highlighted in bold. Symbol ’*’ marks results quoted from the MVSS-Net paper [2]. We re-train MVSS-Net with their officially released code.

Figure 5: **Impact of the proposed edge loss on training.** Note that the settings here follow Table 4 instead of Table 3.
\begin{table} \begin{tabular}{l|c|c c c|c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Test Goal**} & \multirow{2}{*}{**init method**} & \multicolumn{3}{c|}{**Components**} & \multicolumn{2}{c}{**CASIAv1**} & \multicolumn{2}{c}{**Coverage**} & \multicolumn{2}{c}{**Columbia**} & \multicolumn{2}{c}{**NIST16**} & \multicolumn{2}{c}{**MEAN**} \\ \cline{3-15} & & **H-Reso** & **S-FPN** & **Edge** & **F1** & **AUC** & **F1** & **AUC** & **F1** & **AUC** & **F1** & **AUC** & **F1** & **AUC** \\ \hline w/o MAE & Xavier & + & + & + & 0.1035 & - & 0.0439 & - & 0.0744 & - & 0.0632 & - & 0.0713 & - \\ & ViT-B ImNet-21k & + & + & + & 0.5114 & - & 0.1854 & - & 0.2287 & - & 0.1811 & - & 0.2767 & - \\ w/o high resolution & MAE ImNet-1k & - & + & + & 0.5061 & 0.8166 & 0.2324 & 0.825 & 0.5409 & 0.842 & 0.2987 & 0.8212 & 0.3945 & 0.8262 \\ w/o multi-scale & MAE ImNet-1k & + & - & + & 0.5996 & 0.8627 & **0.4457** & **0.8352** & 0.6125 & 0.8350 & 0.1841 & 0.6767 & 0.4605 & 0.8024 \\ w/o edge supervision & MAE ImNet-1k & + & + & - & 0.5432 & 0.8573 & 0.2688 & 0.8272 & 0.3008 & 0.7617 & 0.2347 & 0.7078 & 0.3369 & 0.7885 \\ _Full setup_ & MAE ImNet-1k & + & + & + & **0.5886** & **0.8668** & 0.3277 & 0.8264 & **0.7458** & **0.9076** & **0.2993** & **0.7706** & **0.4900** & **0.8429** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study of IML-ViT.** Each model is trained for 200 epochs on the CASIAv2 dataset without authentic images. The best scores are marked in bold.

**Cross-dataset comparison** Since MVSS-Net [2] has already conducted a detailed evaluation on a fair cross-dataset protocol, we directly quote their results here and train our models with the same protocol, i.e., training on both authentic and manipulated images of the CASIAv2 dataset and testing on public datasets. The results measured by \(F_{1}\) score are listed in Table 4. We also compare with closed-source methods that only report their \(F_{1}\) scores on CASIAv1 in Table 5. Moreover, ObjectFormer [28] and CFL-Net [24] evaluate their models, fine-tuned on CASIAv2, with AUC. Although this metric may overestimate performance, IML-ViT still surpasses them, as shown in Table 6. Overall, our model has achieved state-of-the-art performance compared to existing models evaluated under this fair cross-dataset protocol. Figure 6 qualitatively illustrates the high-quality, clear-boundary predictions of our model trained on CASIAv2 and tested on various datasets with different preferences of manipulation types.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **CASIAv1** & **Coverage** & **Columbia** & **NIST16** & **MEAN** \\ \hline ObjectFormer & 0.882 & - & - & - & - \\ CFL-Net & 0.863 & - & - & 0.799 & - \\ _IML-ViT (Ours)_ & **0.931** & **0.918** & **0.962** & **0.818** & **0.917** \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison of AUC for models trained on CASIAv2.**

Figure 6: **Testing results of IML-ViT trained on CASIAv2, compared to ManTra-Net and MVSS-Net across 4 datasets.** Each dataset has its preference for manipulation types. Columns from left to right: input image, ground truth, ManTra-Net, MVSS-Net, and IML-ViT (ours). Zoom in for a better view.

\begin{table} \begin{tabular}{l l l} \hline \hline **Method** & **Pre-train** & \(F_{1}\) **(\%)** \\ \hline RGB-N, CVPR18 [36] & ImageNet & 40.8 \\ SPAN, ECCV20 [13] & Private synthesized dataset & 38.2 \\ ObjectFormer, CVPR22 [28] & Private synthesized dataset & 57.9 \\ _IML-ViT (Ours)_ & MAE on ImageNet-1k & **73.4** \\ \hline \hline \end{tabular} \end{table} Table 5: **Performance comparison with closed-source methods.** All methods above are fine-tuned with CASIAv2 and tested on the CASIAv1 dataset.

**Mix-dataset comparison** Some methods reported their performance based on mixed datasets, which were randomly split into train-validate-test sets, introducing sampling bias. Particularly, in the case of NIST16, there exists a considerable
number of duplicated samples, and random splitting might cause the same image to appear repeatedly in both the train and test sets, resulting in artificially inflated metrics. This makes them unsuitable as reliable benchmarks. Nonetheless, the results in Table 7 show that, under the same evaluation criteria, IML-ViT also outperforms these methods. ### Robustness Evaluation JPEG compression and Gaussian blur are common attacks against image manipulation localization. Hence, we further carried out experiments on the resistance to these two operations on CASIAv1. The evaluation results2 are shown in Figure 7. IML-ViT exhibited excellent resistance to JPEG compression and Gaussian blur and consistently maintained the best performance among the five models. Footnote 2: Performance against JPEG compression is quoted from MVSS-Net, while performance against Gaussian blur is retested by us using the publicly available model, because we found a significant discrepancy between our tests and the performance reported in their paper. ## 5 Conclusions This paper introduces IML-ViT, the first image manipulation localization model based on ViT. Extensive experiments on five public datasets demonstrate that IML-ViT achieves SoTA performance and generalization ability, validating the reliability of the three core elements of the IML task proposed in this study: high resolution, multi-scale supervision, and edge supervision. Further, IML-ViT proves the effectiveness of self-attention in capturing non-semantic artifacts. Its simple structure also makes it a promising benchmark in this field.

\begin{table} \begin{tabular}{l|c|c c c c|c c} \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Datasets (Train/validate/test split)**} & \multicolumn{4}{c|}{**Pixel-level AUC**} & \multicolumn{2}{c}{**Pixel-level \(F_{1}\)**} \\ \cline{3-8} & & **COVER** & **NIST16** & **CASIA** & **IMD-20** & **CASIA** & **COVER** \\ \hline TransForensics, ICCV21 & COVER + CASIA + IMD20 (8:1:1) & 0.884 & - & 0.850 & 0.848 & 0.627 & 0.674 \\ _IML-ViT (Ours)_ & COVER + CASIA + IMD20 (8:1:1) & 0.912 & 0.821* & **0.961** & **0.943** & **0.825** & **0.815** \\ \hline ObjectFormer, CVPR22 & COVER (4:1); NIST (4:1); CASIA (v2:v1) & **0.957** & 0.996 & 0.882 & - & 0.579 & 0.758 \\ \hline CFL-Net, WACV23 & NIST16 + CASIA + IMD20 (8:1:1) & - & **0.997** & 0.863 & 0.899 & - & - \\ _IML-ViT (Ours)_ & NIST16 + CASIA + IMD20 (8:1:1) & 0.801* & **0.997** & **0.959** & **0.941** & 0.820 & 0.505* \\ \hline \end{tabular} \end{table} Table 7: **Mix-dataset results.** \(*\) marks cross-dataset results.

Figure 7: **Robustness evaluation against JPEG compression and Gaussian blur on CASIAv1.** The red dashed line represents the \(F_{1}\) score when all predictions are classified as positive. When the result is lower than this line, we consider the model to be less effective than random guessing, losing its localization ability.
Our model's \(F_{1}\) score falls to this baseline later than those of the other models, and it consistently maintains a relatively high position, demonstrating its better resistance.
2308.13989
LDL: Line Distance Functions for Panoramic Localization
We introduce LDL, a fast and robust algorithm that localizes a panorama to a 3D map using line segments. LDL focuses on the sparse structural information of lines in the scene, which is robust to illumination changes and can potentially enable efficient computation. While previous line-based localization approaches tend to sacrifice accuracy or computation time, our method effectively observes the holistic distribution of lines within panoramic images and 3D maps. Specifically, LDL matches the distribution of lines with 2D and 3D line distance functions, which are further decomposed along principal directions of lines to increase the expressiveness. The distance functions provide coarse pose estimates by comparing the distributional information, where the poses are further optimized using conventional local feature matching. As our pipeline solely leverages line geometry and local features, it does not require costly additional training of line-specific features or correspondence matching. Nevertheless, our method demonstrates robust performance on challenging scenarios including object layout changes, illumination shifts, and large-scale scenes, while exhibiting fast pose search terminating within a matter of milliseconds. We thus expect our method to serve as a practical solution for line-based localization, and complement the well-established point-based paradigm. The code for LDL is available through the following link: https://github.com/82magnolia/panoramic-localization.
Junho Kim, Changwoon Choi, Hojun Jang, Young Min Kim
2023-08-27T02:57:07Z
http://arxiv.org/abs/2308.13989v1
# LDL: Line Distance Functions for Panoramic Localization ###### Abstract We introduce LDL, a fast and robust algorithm that localizes a panorama to a 3D map using line segments. LDL focuses on the sparse structural information of lines in the scene, which is robust to illumination changes and can potentially enable efficient computation. While previous line-based localization approaches tend to sacrifice accuracy or computation time, our method effectively observes the holistic distribution of lines within panoramic images and 3D maps. Specifically, LDL matches the distribution of lines with 2D and 3D line distance functions, which are further decomposed along principal directions of lines to increase the expressiveness. The distance functions provide coarse pose estimates by comparing the distributional information, where the poses are further optimized using conventional local feature matching. As our pipeline solely leverages line geometry and local features, it does not require costly additional training of line-specific features or correspondence matching. Nevertheless, our method demonstrates robust performance on challenging scenarios including object layout changes, illumination shifts, and large-scale scenes, while exhibiting fast pose search terminating within a matter of milliseconds. We thus expect our method to serve as a practical solution for line-based localization, and complement the well-established point-based paradigm. The code for LDL is available through the following link: [https://github.com/82magnolia/panoramic-localization](https://github.com/82magnolia/panoramic-localization). ## 1 Introduction Estimating the location of a mobile device or agent with respect to a 3D map, widely referred to as visual localization, has vast applications in robotics and AR/VR. Compared to perspective images, which are more widely used for localization, panorama images provide a \(360^{\circ}\) field of view that contains ample visual evidence from the holistic scene context. In this light, there have been recent advances in visual localization using panoramic images [7, 8, 26, 27] that demonstrate reasonably stable localization, with state-of-the-art methods leveraging a two-step process of candidate pose selection and refinement [27, 43]. Figure 1: Overview of our approach. LDL assumes a 3D map equipped with lines and local features, and similarly preprocesses the 2D panorama prior to localization. LDL then selects candidate poses by matching 2D, 3D line distance functions through decomposition along principal directions that effectively represent the sparse geometry of lines. Finally, the selected poses are refined via local feature matching [44] and PnP-RANSAC [15, 29]. Nevertheless, many existing methods for this task have limitations in computational efficiency and robustness, mainly stemming from the costly or unstable pose selection process. As global feature descriptors [23, 3] or a large number of colored points [26, 27] are the main components for this step, the pipelines can be memory and compute intensive or fragile to large illumination changes [26, 27]. To overcome such limitations, we explore the alternative direction of using _lines_ as the major cue for panoramic localization. Lines have a number of desirable properties compared to commonly used raw color, semantic labels or learned global features [26, 8, 43].
First, due to the long-standing work in line segment extraction [18, 59, 19, 55], it is cheap and stable to extract line segments even amidst dramatic changes in illumination or moderate motion blur. Second, lines are sparse representations of a scene and can potentially lead to small memory consumption and computation. Nevertheless, line segments alone are visually ambiguous compared to other localization cues (color, global features, etc.), which makes them harder to tailor for successful localization. While there exist prior works in line-based visual localization [16, 33, 57], many focus on using lines for _pose refinement_ after finding coarse poses from conventional global feature comparisons [57, 16] or exhibit unstable performance compared to conventional point-based methods [33]. Further, prior works often involve expensive line-specific feature extraction to distinguish contexts and establish one-to-one line correspondences [57]. LDL is a fast and robust localization method that leverages the holistic context from lines in panoramas and 3D maps to effectively find the camera pose. In contrast to previous works [57, 16], we retain our focus on using line segments for _pose search_ based on the observation that conventional point-based matching [44, 12] performs stably once given a good initial pose. As shown in Figure 1, given a panoramic image of an unknown location, we utilize the distribution of extracted line segments and compare it against those in the pre-captured 3D map. First, the candidate pose selection step rapidly evaluates an immense set of poses within a matter of milliseconds and selects the coarse poses to further optimize. Here LDL compares the distribution of lines in 2D and 3D evaluated on their spherical projections using distance functions, as shown in Figure 1. The distance function imbues relative spatial context even in featureless regions and quickly matches poses without establishing explicit correspondences between detected lines. We further enhance the discriminative power of distance functions by decomposition, and separately evaluate lines aligned with each principal directions. Once a small set of initial poses are found, LDL refines them with PnP-RANSAC [15, 29], where we leverage powerful local features from recent works [44, 12] to establish good 2D-3D correspondences. We evaluate LDL in various indoor scenes where it performs competitively against all tested baselines while demonstrating robust performance in scenes with object changes or large illumination shifts. Further, LDL exhibits an order-of-magnitude faster runtime compared to global feature comparison [23, 17, 3] due to the efficient formulation. By only using the geometric information of lines and pre-trained visual features, we expect LDL to serve as a practical localization algorithm that could enhance and complement existing visual localization techniques. ## 2 Related Work Line-Based LocalizationInspired by abundant straight-lines and rectangular structures in man-made objects, many works attempt visual localization with line segments [2, 58, 57, 16, 52, 33]. Micusik et al. [33] utilize the line segments extracted from the 3D model to directly match line segments in images by comparing the Chamfer distance in 2D and 3D. However, lines, even when perfectly matched, are inherently subject to ambiguity along the line direction. Yoon et al. 
[57] suggest removing such ambiguities by treating points on a line segment as verbal tokens in natural language processing, where line features are learned using Transformers [53]. Such learning-based approaches are trained with a database of pose-annotated images or require additional computation [57, 16, 58]. Further, these approaches only use lines for pose refinement, assuming a coarse pose estimate to be given via global feature comparisons [17, 3]. LDL takes a different approach and focuses on robust _pose selection_ based on lines. We compare LDL against existing approaches for line-based localization, where LDL performs competitively against these methods while balancing robustness and efficiency. Point-Based LocalizationMost visual localization algorithms follow a point-based paradigm, focusing on sparse feature point correspondences [45, 44, 46, 47, 48, 30, 10, 41, 42, 43, 36, 41, 43], dense matching via coordinate regression of scene points [5, 30], or minimizing color discrepancies of dense 3D points via gradient descent [26, 27]. Conventional approaches using a perspective camera input take a two-step approach, where coarse poses are first estimated using global feature descriptors [17, 3] and refined with PnP-RANSAC from local feature matches [44, 12, 31] or dense matches from scene coordinate regression [48, 30]. Recent panoramic localization methods [27, 8, 7, 26] also follow a similar two-step approach, where exemplary methods find candidate poses via color distribution matching and refine them using gradient descent optimization [26, 27]. While these algorithms can robustly handle a modest range of scene changes due to the holistic view from panoramas, the algorithms can still fail with significant changes in illumination. We compare LDL against exemplary point-based methods and demonstrate that line segments could be effectively utilized for accurate and robust localization even without the costly calculation of global features or color matching. ## 3 Method LDL aims at finding the pose at which the query image \(I\) is taken with respect to a 3D scene, where Figure 1 depicts the localization steps taken by LDL. We first represent the 3D scene using a line map equipped with local feature descriptors for keypoint locations, and similarly acquire line segments and local descriptors for the query image prior to localization (Section 3.1). We then estimate the three principal directions for 2D and 3D by voting, from which we can deduce a set of rotations considering the sign and permutation ambiguity (Section 3.2). Given the fixed set of candidate rotations, we construct an initial set of possible poses incorporating translations. We generate the decomposed line distance functions at each pose and choose the promising poses by comparing the distance functions with a robust loss function (Section 3.3). As the final step, the selected poses are refined by performing PnP-RANSAC [15] using feature matches [44] with the query image (Section 3.4). ### Localization Input Preparation Map BuildingLDL operates using a 3D map consisting of line segments and local features. We build such a map starting from a colored point cloud \(P=\{X,C\}\). To obtain the 3D line segments we use the line extraction method from Xiaohu et al. [54], which can quickly process point clouds containing millions of points within a few seconds. 
We further remove short, noisy line segments from the raw detection with a simple filtering step: given the point cloud bounding box of size \(b_{x}\times b_{y}\times b_{z}\), we filter out 3D line segments shorter than \(\lambda(b_{x}+b_{y}+b_{z})/3\) with \(\lambda=0.1\) in all our experiments. The 2D line segments are then filtered with an adaptive length threshold to match the filtering rate of 3D line segments. Specifically, we choose the threshold value such that the ratio of lines filtered in 2D equals that in 3D. To obtain local features embedded in the 3D map, we first render synthetic views at various locations using the point cloud color values. Specifically, we project the input point cloud \(P{=}\{X,C\}\) at a virtual camera and assign the measured color \(Y(u,v){=}C_{n}\) at the projected location of the corresponding 3D coordinate \((u,v){=}\Pi(RX_{n}+t)\) to create the synthetic view \(Y\). We then extract local features for each synthetic view \(Y\) using SuperPoint [12], and back-project the local features to their 3D locations, which in turn results in keypoint descriptors embedded in 3D space. Note that while we illustrate map building using a colored point cloud, our setup can also work with line-based SfM maps [32, 39, 40] since the input to our pipeline is lines and associated local features. Panorama Pre-processingSimilar to map building, we extract line segments and local features from the query panorama image. We use LSD [18] to acquire line segments, which is a robust line detection algorithm that can stably extract lines even under motion blur or lighting changes. To remove noisy line detections as in the 3D case, we filter 2D line segments with an adaptive length threshold to match the filtering rate of 3D line segments. Specifically, for each scene we choose the threshold value such that the ratio of lines filtered in 2D equals that in 3D. Then, we extract local feature descriptors using SuperPoint [12], where the results will later be used for pose refinement in Section 3.4. Figure 2: Motivation for (a) utilizing and (b) decomposing line distance functions. (a) Line distance functions disambiguate regions with dense lines. Given two candidate poses close (**A**) and far (**B**) from ground truth, Chamfer distance falsely favors **B** near dense lines, whereas distance functions correctly rank the poses. (b) Decomposition further reduces ambiguities from rotation by separately considering line segments with varying directions. Given an original view close to the ground truth (green) and a rotated view (red), the decomposition better distinguishes the two views by correctly selecting the original view over the rotated view. ### Candidate Rotation Estimation Given the detected line segments, LDL first estimates a set of feasible rotations by extracting principal directions, which we define as the most common line directions in 2D and 3D. Let \(L_{2D}=\{l\}\) denote the line segments in 2D, where \(l=(s,e)\) is a tuple of start point \(s\in\mathbb{S}^{2}\) and end point \(e\in\mathbb{S}^{2}\). Note that we operate on the spherical projection space and treat lines and points on panoramas as arcs and points on the unit sphere \(\mathbb{S}^{2}\) respectively. Similarly, let \(L_{3D}=\{\tilde{l}\}\) denote the line segments in 3D, with \(\tilde{l}=(\tilde{s},\tilde{e})\) being a tuple containing start and end points \(\tilde{s},\tilde{e}\in\mathbb{R}^{3}\). LDL estimates the vanishing point and votes for the principal directions in 2D and 3D. 
In 2D we first extract vanishing points by finding the points of intersection of extended 2D line segments. The 2D principal directions \(P_{2D}{=}\{p\}\) are defined as the top \(k_{2D}\) vanishing points containing the most incident lines, where \(p\in\mathbb{R}^{3}\) is a unit norm vector denoting the vanishing point location in the sphere. Similarly, the 3D principal directions \(P_{3D}{=}\{\tilde{p}\}\) are defined as the top \(k_{3D}\) most common line directions from 3D line segments obtained via voting. Note that the 3D direction \(\tilde{p}\in\mathbb{R}^{3}\) is also normalized. LDL estimates the feasible candidate rotations up to uncertainty in the combinatorial ambiguities when matching the principal directions in 2D and 3D. Specifically, we select triplets of directions from \(P_{2D}\) and \(P_{3D}\), yielding a total of \(\binom{k_{2D}}{3}\times\binom{k_{3D}}{3}\times 3!\times 2^{3}\) possible combinations, additionally considering the sign and permutation ambiguity. For each pair of triplets, we apply the Kabsch algorithm [25] to find the optimal rotation that aligns the 2D directions to 3D directions. Discarding infeasible rotations that have large mean squared error, we obtain \(N_{r}\) rotations. The possible rotations are further filtered using line distance function presented in the next section. ### Line Distance Functions for Pose Selection We propose line distance functions to efficiently evaluate a large pool of poses and select promising candidate poses. The initial pool of poses is the combination of possible translations with the rotations found in the previous section. To this end, \(N_{t}\) translations are chosen within grid partitions of the 3D point cloud, where details are explained in the supplementary material. The resulting \(N_{t}\times N_{r}\) poses are ranked using line distance functions. Distance Function DefinitionDistance functions are designed to compare the holistic spatial context captured from the large field of view in panorama images. They are defined for every point including void regions without any lines and can quickly rank poses. Compared to Chamfer distance or learned line embeddings used in prior work [33, 57], LDL does not attempt pairwise matching between lines, which is often costly and can incur failure modes. For example, it is ambiguous to correctly match between densely packed lines as shown in Figure 2a. A line distance function is a dense field of distance values to detect lines in the 2D query image or the spherical projection at an arbitrary pose in 3D. For a point \(x\) on the unit sphere \(\mathbb{S}^{2}\), the 2D line distance function is given as \[f_{2D}(x;L_{2D})=\min_{l\in L_{2D}}D(x,l). \tag{1}\] Here \(D(x,l)\) is the spherical distance from \(x\) to line segment \(l=(s,e)\), namely \[D(x,l)=\left\{\begin{array}{ll}\sin^{-1}|\langle x,\frac{s\times e}{\|s \times e\|}\rangle|&\text{if }x\in\mathcal{Q}(s,e)\\ \min(\cos^{-1}\langle x,e\rangle,\cos^{-1}\langle x,s\rangle)&\text{ otherwise,}\end{array}\right. \tag{2}\] where \(\mathcal{Q}(s,e)\) is the spherical quadrilateral formed from \(\{s,e,\pm(s\times e)/\|s\times e\|\}\). Similarly, the 3D line distance function is defined for each candidate rotation \(R\in SO(3)\) and translation \(t\in\mathbb{R}^{3}\). 
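As a concrete illustration of Equation 2 before completing the 3D case: the sketch below is an illustrative reimplementation, not the released code. The membership test for the spherical quadrilateral \(\mathcal{Q}(s,e)\) is realized by checking whether the projection of \(x\) onto the segment's great circle falls between the endpoints, which is one standard way to express that condition.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def arc_distance(x, s, e, tol=1e-9):
    """Spherical distance (radians) from unit vector x to the arc s->e (Eq. 2).

    Assumes s and e are distinct unit vectors. If the projection of x onto the
    arc's great circle falls between s and e (i.e. x lies in Q(s, e)), return
    arcsin(|<x, n>|) with n the great-circle normal; otherwise return the
    angular distance to the nearer endpoint.
    """
    n = unit(np.cross(s, e))
    proj = x - np.dot(x, n) * n          # project onto the great-circle plane
    if np.linalg.norm(proj) > tol:
        proj = unit(proj)
        # "Between s and e" holds when the two sub-arcs sum to the full arc.
        full = np.arccos(np.clip(np.dot(s, e), -1.0, 1.0))
        part = (np.arccos(np.clip(np.dot(s, proj), -1.0, 1.0))
                + np.arccos(np.clip(np.dot(proj, e), -1.0, 1.0)))
        if abs(part - full) < 1e-6:
            return np.arcsin(min(1.0, abs(np.dot(x, n))))
    # Outside Q(s, e): fall back to the nearer endpoint.
    return min(np.arccos(np.clip(np.dot(x, s), -1.0, 1.0)),
               np.arccos(np.clip(np.dot(x, e), -1.0, 1.0)))

def line_distance_function(x, segments):
    """f(x; L) = min over segments of the arc distance (Eq. 1)."""
    return min(arc_distance(x, s, e) for s, e in segments)
```

The scalar form is kept for readability; stacking segment endpoints into arrays and vectorizing the minimum over segments is the natural optimization when many poses must be scored.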
Using the spherical projection function \(\Pi(\cdot):\mathbb{R}^{3}\rightarrow\mathbb{S}^{2}\) that maps a point in 3D to a point on the unit sphere, the 3D line segment \(\tilde{l}=(\tilde{s},\tilde{e})\) is projected to 2D under the candidate transformation as \(l=(\Pi(R\tilde{s}+t),\Pi(R\tilde{e}+t))\). For simplicity, let \(\Pi_{L}(\tilde{l};R,t)\) denote the projection of a line segment in 3D to the spherical surface. Then the 3D line distance function is defined as follows, \[f_{3D}(x;L_{3D},R,t)=\min_{\tilde{l}\in L_{3D}}D(x,\Pi_{L}(\tilde{l};R,t)). \tag{3}\] As shown in Figure 3, one can expect poses closer to the ground truth to have similar 2D and 3D line distance functions. Therefore, we evaluate \(N_{t}\times N_{r}\) poses according to the similarity of line distance functions. We apply a robust loss function that measures inlier counts to quantify the affinity of the line distance functions. For each candidate pose \(\{R,t\}\) we count the number of points whose distance functions differ below a threshold \(\tau\), \[L(R,t)=-\sum_{q\in Q}\mathbbm{1}\{|f_{2D}(q;L_{2D})-f_{3D}(q;L_{3D},R,t)|<\tau\}, \tag{4}\] where \(\mathbbm{1}\{\cdot\}\) is the indicator function and \(Q\subset\mathbb{S}^{2}\) is a set of query points uniformly sampled from a sphere. The loss function only considers inlier counts, and thus is robust to outliers from scene changes or line misdetections. We validate the efficacy of the robust loss function in Section 4.2. Distance Function Decomposition To further enhance pose search using line distance functions, we propose to decompose the distance functions along three principal directions. While line distance functions provide useful evidence for line-based localization, they lack a sense of direction as in Figure 2b, where the distance functions alone cannot effectively distinguish rotated views at a fixed translation. We split line segments along the principal directions used for rotation estimation and define separate line distance functions for each group of lines, as shown in Figure 3. Recall from Section 3.2 that each candidate rotation \(R\) is obtained from a pair of triplets in 2D and 3D principal directions denoted as \(\hat{P}_{2D}^{R}=\{p_{1},p_{2},p_{3}\}\) and \(\hat{P}_{3D}^{R}=\{\tilde{p}_{1},\tilde{p}_{2},\tilde{p}_{3}\}\). We associate line segments that are parallel to directions in \(\hat{P}_{2D}^{R},\hat{P}_{3D}^{R}\), leading to three groups of line segments \(L_{2D}^{R}=\{L_{2D}^{1},L_{2D}^{2},L_{2D}^{3}\}\) and \(L_{3D}^{R}=\{L_{3D}^{1},L_{3D}^{2},L_{3D}^{3}\}\) in 2D and 3D, respectively. We separately define line distance functions for the three groups using Equation 2, namely \(f_{2D}(x;L_{2D}^{i})\) and \(f_{3D}(x;L_{3D}^{i},R,t)\) for \(i=1,2,3\). Then the robust loss function in Equation 4 can be modified to accommodate the decomposed distance functions, \[L(R,t)=-\sum_{i=1}^{3}\sum_{q\in Q}\mathbbm{1}\{|f_{2D}(q;L_{2D}^{i})-f_{3D}(q;L_{3D}^{i},R,t)|<\tau\}. \tag{5}\] We validate the importance of distance function decomposition in Section 4.2. ### Candidate Pose Refinement After we select the top \(K\) poses from the pool of \(N_{t}\times N_{r}\) poses with the loss function values from Equation 5, we refine them using local feature matching as shown in Figure 1. Here we utilize the cached local features from Section 3.1.
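Before detailing the refinement, the scoring step of Equations 4 and 5 can be summarized concretely. The sketch below reuses `arc_distance` and `line_distance_function` from the previous snippet; `project_segment` is a hypothetical stand-in for \(\Pi_{L}\), and the per-direction grouping of segments is assumed to be given rather than computed here.

```python
import numpy as np

def project_segment(seg3d, R, t):
    """Pi_L: map a 3D segment (s~, e~) to a spherical arc under pose (R, t)."""
    s3, e3 = seg3d
    s = R @ s3 + t
    e = R @ e3 + t
    return s / np.linalg.norm(s), e / np.linalg.norm(e)

def robust_score(query_points, groups_2d, groups_3d, R, t, tau=0.1):
    """-L(R, t): inlier count summed over the three decomposed groups (Eq. 5)."""
    inliers = 0
    for segs2d, segs3d in zip(groups_2d, groups_3d):  # one group per principal direction
        projected = [project_segment(l, R, t) for l in segs3d]
        for q in query_points:
            f2d = line_distance_function(q, segs2d)
            f3d = line_distance_function(q, projected)
            inliers += abs(f2d - f3d) < tau
    return inliers  # higher is better
```

Ranking all \(N_{t}\times N_{r}\) candidate poses then amounts to calling `robust_score` per pose and keeping the top \(K\); since only \(|Q|=42\) query points are evaluated per group, the cost per pose stays small.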
Specifically, for each selected pose we first retrieve the set of visible 3D keypoints at that pose and perform local feature matching against the 2D keypoints in the query image. In this process we use SuperGlue [44] for feature matching and select the candidate pose with the most matches. Finally, we apply PnP-RANSAC [21, 29, 15] on the matched 2D and 3D keypoint coordinates to obtain a refined pose estimate. Backed by local feature matching that operates stably given decent coarse estimates from line distance functions, LDL can robustly function as an effective localization method, which we further verify in Section 4. ## 4 Experiments We evaluate LDL in various localization scenarios and analyze its performance. Our method is mainly implemented using PyTorch [35], and is accelerated with a single RTX 2080 GPU. In all our experiments we set the number of principal directions as \(k_{2D}\)=\(20,k_{3D}\)=\(3\), the inlier threshold \(\tau\)=\(0.1\), and the number of query points as \(|Q|\)=\(42\). We report the full hyperparameter setup in the supplementary material. Following prior works [26, 7, 8], we report the median translation and rotation errors along with the localization accuracy, where a prediction is considered correct if the translation error is below 0.1 m and the rotation error is below 5\({}^{\circ}\). Datasets We evaluate LDL on two indoor localization datasets: Stanford 2D-3D-S [4] and OmniScenes [26]. Stanford 2D-3D-S [4] contains 1413 panorama images from 272 rooms subdivided into six areas. Each area has diverse indoor scenes such as offices, hallways, and auditoriums where repetitive structure and featureless regions are present. OmniScenes contains 4121 panorama images from seven 3D scans, where the panorama images are captured with cameras either handheld or robot mounted, and at different times of day including large changes in furniture configurations. The dataset has three splits (Robot, Handheld, Extreme) that are recorded in scenes with/without changes, where images in the Extreme split are captured under large camera motion. Baselines We compare LDL against three point-based baselines (PICCOLO, CPO, structure-based) and two line-based baselines (Chamfer distance-based, Line Transformer [57]). PICCOLO (PC) [26] and the follow-up work CPO [27] are optimization-based algorithms that find the pose by minimizing the color discrepancy between the point cloud and the query image. The structure-based approach [51, 43] (SB) is one of the most prominent methods for visual localization using perspective cameras. We implement a method for panorama images, where candidate poses are retrieved from an image database using a global feature extractor [17] and further refined using SuperGlue [44] matches. For fair comparison, we undistort the panorama image into cubemaps and perform feature matching, where the results are then fed to PnP-RANSAC for refinement. Figure 3: Line distance function visualization and decomposition at the ground truth pose \(R^{*},t^{*}\). LDL decomposes distance functions using principal directions and enhances their expressiveness. In addition, we construct the database of pose-annotated images by rendering synthetic views at various locations in the colored point cloud. The Chamfer distance-based approach (CD), inspired by Micusik et al. [33], ranks candidate poses by comparing the spherical Chamfer distance of line segments in the synthetic views against the query image. Line Transformer by Yoon et al.
[57] (LT) ranks candidate poses using Transformer-based [53] matching learned for each line segment. As this baseline also requires a pose-annotated database, we construct a synthetic database similar to the structure-based approach, and apply the undistortion process for fair comparison. We provide additional details about the baselines in the supplementary material. ### Localization Evaluation Stanford 2D-3D-S We first assess the localization performance of LDL against the baselines in the Stanford 2D-3D-S dataset, as shown in Table 1. LDL performs competitively against the strong baselines (Structure-based and Line Transformer) that apply powerful neural networks for candidate pose search. While the dataset contains hallways and auditoriums with large featureless regions or repetitive structure, LDL leverages the holistic distribution of lines using distance functions and shows stable performance without resorting to costly neural network computations. Further, LDL shows superior performance when compared against the Chamfer distance-based method, which indicates that solely focusing on line matches for ranking candidate poses can lead to suboptimal performance. OmniScenes We additionally compare LDL against the baselines in the OmniScenes dataset, as shown in Table 3. Unlike the Stanford 2D-3D-S dataset, all images exhibit blur from camera motion and approximately half of the images contain changes in object layout. In splits not containing changes, LDL performs competitively against the baselines, which supports our claim that line distance functions enable effective pose search without using neural networks. Further, LDL attains the highest accuracy in splits containing scene changes, and notably in the Extreme split that contains the largest amount of motion blur. This is due to the stable line extraction [59, 18, 19, 55] that enables resilience against motion blur, and the robust distance function comparison (Equation 4) that rejects outliers for handling scene changes. We further verify the importance of each component of LDL in Section 4.2. Illumination Robustness Evaluation To validate the illumination robustness of LDL, we measure localization performance after applying synthetic color variations. We select Room 3 from the Extreme split in OmniScenes for evaluation. As shown in Figure 4, the image gamma, white balance, and average intensity are modified to arbitrary values, where further details are deferred to the supplementary material. We report the results of LDL along with the baselines in Table 2. CPO, PICCOLO, and the structure-based baseline all suffer from performance degradation, as the color values are directly utilized for finding initial poses. Notably, Yoon et al. [57] also shows a performance drop, as Transformer line features are affected by the illumination changes of the image. As LDL relies on the spatial structure of line segments for candidate pose search, it is robust to illumination variations, leading to stable performance across all color variations.
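To make the color variations concrete: per the supplementary material, the gamma change sets the image gamma to 0.2, the intensity change lowers each pixel intensity by 25%, and the white balance change applies channel gains \((0.7,0.9,0.8)\). A minimal numpy sketch, assuming float RGB images in \([0,1]\):

```python
import numpy as np

def gamma_change(img, gamma=0.2):
    """Apply a gamma curve (the appendix uses gamma = 0.2)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def intensity_change(img, scale=0.75):
    """Lower the average intensity (the appendix lowers each pixel by 25%)."""
    return np.clip(img * scale, 0.0, 1.0)

def white_balance_change(img, gains=(0.7, 0.9, 0.8)):
    """Per-channel gains, i.e. the diagonal matrix diag(0.7, 0.9, 0.8)."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)
```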
Further, note that while all the methods excluding PICCOLO [26] and CPO [27] use local feature matching for pose refinement, there is a large performance gap between LDL and the other methods. This validates our focus on designing a stable candidate pose selection method, as modern feature descriptors and matching algorithms [12, 13, 43, 44] are fairly robust against adversaries such as illumination changes. \begin{table} \begin{tabular}{l|c c c c c c|c c c c c c|c c c c c c} \hline \hline & \multicolumn{6}{c|}{\(t\)-error (m)} & \multicolumn{6}{c|}{\(R\)-error (\({}^{\circ}\))} & \multicolumn{6}{c}{Accuracy} \\ Dataset & PC & SB & CD & LT & CPO & LDL & PC & SB & CD & LT & CPO & LDL & PC & SB & CD & LT & CPO & LDL \\ \hline Area 1 & 0.02 & 0.02 & 0.12 & 0.02 & **0.01** & 0.02 & 0.46 & 0.62 & 1.14 & 0.62 & **0.25** & 0.54 & 0.66 & 0.89 & 0.50 & **0.90** & **0.90** & 0.86 \\ Area 2 & 0.76 & 0.04 & 1.16 & 0.04 & **0.01** & 0.02 & 2.25 & 0.72 & 11.54 & 0.72 & **0.27** & 0.66 & 0.45 & 0.76 & 0.35 & 0.74 & **0.81** & 0.77 \\ Area 3 & 0.02 & 0.03 & 0.79 & 0.02 & **0.01** & 0.02 & 0.49 & 0.57 & 4.54 & 0.55 & **0.24** & 0.54 & 0.57 & **0.92** & 0.36 & 0.88 & 0.78 & 0.89 \\ Area 4 & 0.18 & 0.02 & 0.33 & 0.02 & **0.01** & 0.02 & 4.17 & 0.57 & 1.97 & 0.56 & **0.28** & 0.48 & 0.49 & **0.91** & 0.46 & **0.91** & 0.83 & 0.88 \\ Area 5 & 0.50 & 0.03 & 0.95 & 0.03 & **0.01** & 0.02 & 14.64 & 0.69 & 41.84 & 0.65 & **0.27** & 0.54 & 0.44 & 0.80 & 0.36 & 0.79 & 0.74 & 0.81 \\ Area 6 & 0.01 & 0.02 & 0.50 & 0.02 & **0.01** & 0.02 & 0.31 & 0.63 & 1.20 & 0.60 & **0.18** & 0.50 & 0.69 & **0.88** & 0.47 & 0.87 & 0.90 & 0.83 \\ \hline Total & 0.03 & 0.03 & 0.73 & 0.02 & **0.01** & 0.02 & 0.63 & 0.63 & 2.30 & 0.63 & **0.24** & 0.53 & 0.54 & **0.85** & 0.39 & 0.84 & 0.83 & 0.83 \\ \hline \hline \end{tabular} \end{table} Table 1: Localization performance evaluation in Stanford 2D-3D-S [4], compared against PICCOLO (PC) [26], structure-based approach (SB), Chamfer distance-based approach (CD), Line Transformer (LT) [57], and CPO [27]. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline & \multicolumn{6}{c}{Accuracy} \\ Dataset & PC & SB & CD & LT & CPO & LDL \\ \hline Original & 0.45 & 0.69 & 0.21 & 0.68 & 0.72 & **0.89** \\ Gamma & 0.00 & 0.63 & 0.47 & 0.59 & 0.00 & **0.82** \\ Intensity & 0.00 & 0.56 & 0.40 & 0.58 & **0.80** & 0.76 \\ White Balance & 0.00 & 0.62 & 0.32 & 0.67 & 0.74 & **0.91** \\ \hline \hline \end{tabular} \end{table} Table 2: Localization accuracy on synthetic color variations applied to Room 3 in the Extreme split from OmniScenes [26]. Figure 4: Color variations for evaluating illumination robustness. ### Performance Analysis Candidate Pose Search Evaluation To evaluate the efficacy of line distance functions for candidate pose search, we compare the retrieval accuracy of LDL against NetVLAD [3], which is a widely used global feature extractor [23, 57, 43]. Note that NetVLAD is used as the candidate pose selection module in the structure-based baseline. We use the Extreme split from OmniScenes for evaluation, where the translation and rotation error recall curves along with the runtime for processing a single candidate pose are reported in Figure 5. For fair comparison we use the identical pool of translations for both methods with \(N_{t}=50\) and assign a large number of candidate rotations for NetVLAD with \(N_{r}=216\). Additional setup details are reported in the supplementary material.
While neural network-based pose search methods can perform city-scale search [3, 17, 20], the line distance functions in LDL exhibit performance competitive with NetVLAD in indoor environments. The distance functions provide highly discriminative spatial context, which enables effective pose search. Furthermore, the runtime for pose search in LDL is much shorter than that of NetVLAD, due to the highly efficient computation of distance functions conducted only on sparse sphere points. This is in contrast to NetVLAD, where visual features are computed with a neural network for each view. The line distance functions enable quick and effective pose initialization, which in turn allows LDL to be used in various practical localization scenarios. Runtime Analysis We analyze the runtime of LDL in Table 4b, where we decompose the runtime for localizing a single query image from OmniScenes [26]. We assume that 3D scanning along with map building is done offline and only consider the computation time for online operations, namely 2D line segment extraction, candidate pose selection, and refinement. Overall, the pose selection process including rotation estimation and distance function computation exhibits a small runtime for both CPU and GPU, which validates the efficiency of our proposed line-based pose search. Nevertheless, the pose refinement exhibits a relatively larger runtime, which is mainly due to the large number of features in panoramas compared to normal images with a smaller field of view. While we focused on pose search and used off-the-shelf local feature matching algorithms for pose refinement [12, 44], devising highly efficient feature matching algorithms tailored specifically for panoramas is left as future work. Scalability Analysis We assess the scalability of LDL to large-scale indoor scenes using the OmniScenes [26] dataset. While the previous set of experiments assumes room-scale localization scenarios, here we test LDL using the entire OmniScenes dataset as the 3D map. \begin{table} \end{table} Table 4: Multi-room localization compared against the structure-based method (SB) with various numbers of candidate poses (\(K\)), and runtime analysis of LDL. Figure 5: Pose error recall and runtime comparison between candidate pose search using LDL and NetVLAD [3]. \begin{table} \end{table} Table 3: Localization performance evaluation in OmniScenes [26], considering both scenes with and without object layout changes. Table 4 shows the localization results, where LDL is compared against the structure-based method at various numbers of candidate poses (\(K\)). LDL exhibits performance on a par with the structure-based method, which indicates that line distance functions can scalably handle large scenes consisting of multiple rooms. Nevertheless, scaling LDL to even larger scenes (e.g. building-scale scenes as in InLoc [51]) is left as future work. Privacy Preservation Analysis While the main goal of LDL is to offer fast and robust localization based on lines, we find that with a small modification our method can offer light-weight privacy protection in client-server localization scenarios [6, 11, 14, 49, 50]. Following prior works [34, 50], we consider the case where a client using an edge device wants to localize oneself against a 3D line map stored in the cloud. Privacy breaches occur if the service provider maliciously tries to view the visual data captured by the client.
This is possible even when only the local feature descriptors are shared between the client and server, by using feature inversion methods [37] that reconstruct the original image from a sparse set of local features as shown in Figure 6. By changing LDL to only exploit local features near lines during refinement, we can prevent privacy breaches including feature inversion attacks without largely sacrificing localization performance. First, as LDL uses line segments for candidate pose selection, the clients only need to share the extracted line segments with the service providers for initial pose search, instead of the entire view that would be needed for global feature-based methods. Second, as only local features near line segments are shared with the service provider for pose refinement, feature inversion methods cannot faithfully recover the original visual content. We validate this claim with a small set of experiments performed in the Stanford 2D-3D-S dataset [4], where we filter descriptors whose spherical distances to the nearest line segment are over 0.05 rad. As shown in Figure 6, this line-based filtering degrades the quality of feature inversion attacks by hiding objects that potentially contain sensitive information while only incurring small drops in localization accuracy. We report additional details and results regarding the potential of LDL for privacy preservation in the supplementary material. ### Ablation Study We ablate the distance function decomposition, the number of query points, and the robust loss function, which are key components of LDL, in the OmniScenes Extreme split. In Table 5, LDL is first compared against the baseline that does not apply decomposition and uses the loss function in Equation 4. Decomposition leads to a large performance gain, as the distance functions are further disambiguated and split into each principal direction. We further test the effect of the number of query points \(|Q|\) on evaluating the robust loss function. While increasing the number of query points enhances performance, the improvement is not as significant and incurs additional computation. Conversely, using a smaller number of query points leads to ambiguities in distance function matching, exhibiting poor performance. The number of query points \(|Q|=42\) balances both the computational efficiency and localization accuracy of LDL. We finally validate the robust loss function in Equation 5 by comparing LDL against variants using other loss functions: L1, L2, Huber, and Median loss. Here we report results from the Wedding Hall scene, as this scene contains drastic scene changes with large amounts of outliers. As shown in Table 5, the inlier counting proposed in Equation 5 attenuates outliers and exhibits optimal performance, demonstrating the effectiveness of the robust loss function. ## 5 Conclusion We presented LDL, a fast and robust algorithm for panorama to point cloud localization using line segments. LDL benefits from the illumination-robustness of line segments and the holistic context of panoramas by using a novel formulation based on line distance functions. The distance functions effectively handle visual ambiguities of line segments, as they provide spatial meaning to void regions often neglected by existing line-based localization methods.
In addition, by evaluating distance functions only on sparsely sampled query points, LDL performs rapid candidate pose search with accuracy on a par with learning-based global feature extractors. As a result, LDL performs robust localization in various challenging scenarios with a short runtime. We expect LDL to complement and enhance the currently prevalent point-based localization algorithms for highly robust and practical localization. Figure 6: Visualization of feature inversion attacks on panoramic inputs along with the localization accuracy before and after line-based feature filtering. \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & \(t\)-error (m) & \(R\)-error (\({}^{\circ}\)) & Acc. \\ \hline w/o Decomposition & 1.00 & 3.97 & 0.37 \\ \hline w/ \(|Q|=10\) & 0.04 & 0.85 & 0.77 \\ w/ \(|Q|=21\) & 0.04 & 0.71 & 0.88 \\ w/ \(|Q|=84\) & **0.03** & **0.66** & **0.95** \\ \hline Ours (\(|Q|=42\)) & **0.03** & 0.72 & 0.92 \\ \hline \hline \end{tabular} \begin{tabular}{l|c c c} \hline \hline Method & \(t\)-error (m) & \(R\)-error (\({}^{\circ}\)) & Acc. \\ \hline w/ L1 Loss & 0.08 & 1.38 & 0.55 \\ w/ L2 Loss & 0.17 & 1.48 & 0.34 \\ w/ Huber Loss & 0.11 & 1.39 & 0.50 \\ w/ Median Loss & 0.08 & **1.22** & 0.55 \\ \hline Ours & **0.07** & **1.22** & **0.68** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study of various components of LDL. Acknowledgements This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218601), Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub), and Samsung Electronics Co., Ltd. Young Min Kim is the corresponding author. ## Appendix A Details on LDL Principal Direction Computation We explain the details of principal direction computation. Recall that the principal directions in 2D and 3D are defined as the top \(k_{2D}\) and \(k_{3D}\) most common line directions. The 2D principal directions are extracted from vanishing points. When parallel lines are projected on an image, they appear to converge at a point, which is referred to as a vanishing point. To locate vanishing points, we extrapolate detected line segments and find their intersections. Since we are using panoramic images, we use spherical projections of lines and vanishing points. Specifically, we create a uniform spherical grid and count the number of intersection points in each grid cell, which we referred to as 'voting' in the main paper. We select the top \(k_{2D}\) grid locations with the most votes as the 2D principal directions. For 3D principal directions, we similarly aggregate votes for 3D line directions and extract the top \(k_{3D}\) votes. Note that we fix the filtering parameters for all our experiments and LDL achieves competitive results. Line Filtering Prior to localization, recall from Section 3.1 that LDL filters short lines. Specifically, given the point cloud with the bounding box size of \(b_{x}\times b_{y}\times b_{z}\), we filter out 3D line segments shorter than \(\lambda(b_{x}+b_{y}+b_{z})/3\), where \(\lambda=0.1\) in all our experiments. The 2D line segments are then filtered to match the filtering rate of 3D line segments. Note that the threshold parameter \(\lambda\) does not play a critical role in performance.
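A minimal sketch of this filtering step follows. The paper states the criterion but not the exact procedure for the adaptive 2D threshold; dropping the shortest arcs until the 2D filtering rate matches the 3D one is one way to realize it, and the data layout below (numpy endpoints, with 2D endpoints as unit vectors) is an assumption for illustration.

```python
import numpy as np

def filter_3d_lines(lines3d, bbox_size, lam=0.1):
    """Keep 3D segments longer than lam * (b_x + b_y + b_z) / 3."""
    threshold = lam * np.mean(bbox_size)
    kept = [(s, e) for s, e in lines3d if np.linalg.norm(e - s) > threshold]
    rate = 1.0 - len(kept) / max(len(lines3d), 1)   # fraction filtered out
    return kept, rate

def filter_2d_lines(lines2d, target_rate):
    """Drop the shortest 2D arcs so the 2D filtering rate matches the 3D one."""
    def arc_len(seg):
        s, e = seg
        return np.arccos(np.clip(np.dot(s, e), -1.0, 1.0))
    ordered = sorted(lines2d, key=arc_len)          # shortest first
    n_drop = int(round(target_rate * len(lines2d)))
    return ordered[n_drop:]
```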
Figure A.1 shows the median localization error measured in Room 1 from OmniScenes [5]. The errors are nearly constant with respect to varying \(\lambda\). Spherical Quadrilateral for Computing Line Distance Functions We illustrate the spherical quadrilateral used for computing distance functions from Section 3. As shown in Figure A.2, given a line segment \(l\) on a sphere with a start point \(s\) and an end point \(e\), the spherical quadrilateral \(\mathcal{Q}(s,e)\) is formed by connecting \(\{s,e,\pm(s\times e)/\|s\times e\|\}\). The spherical quadrilateral is used in Equation 2 to compute the distance \(D(x,l)\) from a point \(x\) to a line segment \(l\) on a sphere. Here, \(D(x,l)\) is computed differently depending on whether \(x\) lies on \(\mathcal{Q}(s,e)\). The 2D and 3D line distance functions (Equation 1, 3) are further built upon this definition of \(D(x,l)\). Hyperparameter Setup Here we report the hyperparameter setup of LDL. As explained in Section 3, from \(N_{t}\times N_{r}\) poses we select \(K\) candidate poses by comparing the distance functions with the robust loss function in Equation 5. Recall that we use the candidate rotation estimation step in Section 3.2 to choose \(N_{r}\) rotations. For the \(N_{t}\) translations, we follow the design choice of prior works [7, 8, 26, 27] and employ uniform grid partitions for Stanford 2D-3D-S [4] and centroids of octrees as in Rodenberg _et al_. [38] for OmniScenes [26]. We set \(K{=}20,N_{t}{=}800\) for OmniScenes [26] and \(K{=}20,N_{t}{=}1700\) for Stanford 2D-3D-S [4]. We use an increased number of translations for Stanford 2D-3D-S to cope with large scenes such as auditoriums and hallways. Nevertheless, note that LDL can quickly search promising candidate poses: even in Stanford 2D-3D-S candidate pose search finishes within 0.02 seconds. Potential for Privacy Preservation As explained in Section 4.2, while the primary goal of LDL is to offer fast and robust localization, our approach can also be extended to offer low-cost protection against various privacy breaches in client-server localization. To cope with edge devices having limited computing power, modern location-based services employ a client-server localization setup [11, 6] where the visual data of the edge device is shared with the service provider [11, 37]. Figure A.1: Localization error against line threshold parameter \(\lambda\). Figure A.2: Given a line segment \(l\) (red), the distance (green) from point \(x\) to \(l\) is defined depending on whether \(x\) lies on the spherical quadrilateral (blue) \(\mathcal{Q}(s,e)\). Based on the shared information, the service provider performs the actual localization pipeline and returns the estimated 6DoF pose to the edge device user. We adapt LDL to the client-server localization scenario while offering privacy protection by having the edge device user share only lines and local features near lines during localization. Specifically, as shown in Figure A.3, we modify the pose refinement phase of LDL to operate using local features near lines, instead of all the visible local features used for the original refinement explained in Section 3.4. Here we only consider line segments whose lengths are over a designated threshold as explained in Section 3.1 and whose directions are parallel to one of the 2D principal directions. Such a modification results in privacy protection against feature inversion attacks [11, 34, 37], which take local feature vectors as input and output an image reconstruction.
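In code, the line-based feature filtering reduces to a thresholded nearest-line test on top of the `arc_distance` helper from the earlier sketch; the names and data layout here are illustrative assumptions, not the authors' implementation.

```python
def filter_features_near_lines(keypoints, descriptors, segments, max_dist=0.05):
    """Keep only keypoints within max_dist rad of some retained line segment.

    keypoints: list of unit vectors on the sphere; descriptors: matching list
    of feature vectors; segments: list of (s, e) unit-vector endpoint pairs.
    """
    if not segments:                      # nothing to anchor to: share nothing
        return [], []
    kept_kpts, kept_descs = [], []
    for kp, desc in zip(keypoints, descriptors):
        if min(arc_distance(kp, s, e) for s, e in segments) < max_dist:
            kept_kpts.append(kp)
            kept_descs.append(desc)
    return kept_kpts, kept_descs
```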
Note that LDL naturally offers privacy protection during pose selection as it only uses line segments for this phase and thus does not require the clients to share their entire view with the service provider. We further demonstrate the potential of LDL for privacy protection through experiments shown in Section B.5. ## Appendix B Additional Experimental Results ### Additional Ablation Study Choice of Query Point Locations We report the impact of choosing uniformly sampled query points for evaluating distance functions. Recall that we rank \(N_{t}\times N_{r}\) poses with the robust loss function in Equation 5, where the query points \(Q\) are uniformly sampled from a unit sphere. We compare LDL against a variant that uses query points sampled along the 2D line segment locations. Namely, this variant only considers regions with line segments, in contrast to LDL that equally considers regions lacking lines. We make quantitative evaluations between LDL and the variant using the Stanford 2D-3D-S [4] dataset. For fair comparison, we use the same hyperparameters as the original implementation of LDL. As shown in Table B.1, the uniform sampling employed in LDL leads to a large performance improvement. By fairly using all regions on the sphere, LDL effectively utilizes the spatial context from the line distance functions and performs effective localization. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{\(t\)-error (m)} & \multicolumn{2}{c|}{\(R\)-error (\({}^{\circ}\))} & \multicolumn{2}{c}{Accuracy} \\ Area & LDL & LDL\({}^{\text{LS}}\) & LDL & LDL\({}^{\text{LS}}\) & LDL & LDL\({}^{\text{LS}}\) \\ \hline Area 1 & **0.02** & **0.02** & **0.54** & 0.60 & **0.86** & 0.75 \\ Area 2 & **0.02** & 0.05 & **0.66** & 0.79 & **0.77** & 0.57 \\ Area 3 & **0.02** & 0.03 & **0.54** & 0.73 & **0.89** & 0.69 \\ Area 4 & **0.02** & **0.02** & **0.48** & 0.57 & **0.88** & 0.72 \\ Area 5 & **0.02** & 0.03 & **0.54** & 0.61 & **0.81** & 0.59 \\ Area 6 & **0.02** & **0.02** & **0.50** & 0.58 & **0.83** & 0.66 \\ \hline Total & **0.02** & 0.03 & **0.53** & 0.64 & **0.83** & 0.66 \\ \hline \hline \end{tabular} \end{table} Table B.1: Ablation study of uniformly sampling query points on the unit sphere. LDL is compared against a variant using query points sampled along 2D line segment locations (LDL\({}^{\text{LS}}\)) in the Stanford 2D-3D-S dataset [4]. Choice of Loss Function We validate the robust loss function in Equation 5 by comparing LDL against variants using other loss functions: L1, L2, Huber, and Median loss. Here we report results from the Wedding Hall scene in OmniScenes, as this scene contains drastic scene changes with large amounts of outliers. As shown in Table B.2, the inlier counting proposed in Equation 5 attenuates outliers in the Extreme split and exhibits optimal performance, demonstrating the effectiveness of the robust loss function. \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & \(t\)-error (m) & \(R\)-error (\({}^{\circ}\)) & Acc. \\ \hline w/ L1 Loss & 0.08 & 1.38 & 0.55 \\ w/ L2 Loss & 0.17 & 1.48 & 0.34 \\ w/ Huber Loss & 0.11 & 1.39 & 0.50 \\ w/ Median Loss & 0.08 & **1.22** & 0.55 \\ \hline Ours & **0.07** & **1.22** & **0.68** \\ \hline \hline \end{tabular} \end{table} Table B.2: Ablation study on the choice of loss functions evaluated in OmniScenes [26]. Figure A.3: Client-server localization setup using LDL. (a) The edge device user captures the raw 2D data and shares the lines and local features near lines with the service provider. The service provider provides the 6DoF pose using the shared information along with the 3D map. (b) While the service provider can attempt feature inversion attacks by training neural networks that learn image reconstructions from local feature inputs, this cannot fully recover the sensitive visual details for LDL as only a fraction of information is shared.
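Returning to the query-point ablation above: one detail the text leaves open is how the uniform query points \(Q\) are generated. A Fibonacci lattice is a standard choice for near-uniform spherical samples, sketched below; note that \(|Q|=42\) also equals the vertex count of a once-subdivided icosahedron, so an icosphere would be an equally plausible reading.

```python
import numpy as np

def fibonacci_sphere(n=42):
    """n near-uniform query points on the unit sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                   # uniform spacing in z
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

The line-sampled variant (LDL\({}^{\text{LS}}\)) compared in Table B.1 can reuse the same interface, with points drawn along detected 2D arcs instead.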
### Evaluation in Noisier Maps In the main paper, we extract 3D lines from point clouds obtained using Matterport 3D scanners [1]. Here we run LDL on noisier line maps created using structure-from-motion (SfM) and Line3D++ [22]. As shown in Figure B.4, these maps are noisier than those from 3D scans. Table B.3 shows the localization results from Rooms 3 and 5 in OmniScenes under the different types of line maps (note that the new pipeline did not produce reliable maps in the other scenes). Even though LDL was run with the exact same hyperparameters as in the main paper, it shows only a small performance drop, which indicates that it can robustly handle noisier SfM-based line maps generated without 3D scanners. ### Additional Evaluation in Large-Scale Maps In the main paper we demonstrated that LDL can perform competitively against the structure-based method in large scenes by testing multi-room localization in OmniScenes [23]. To further show the scalability of LDL, we evaluate on 20 office rooms from Stanford 2D-3D-S [3], and localize each image against the jointly composed 3D map. The 20 office rooms contain similar structures, as shown in Figure B.5. Even in such conditions, LDL shows similar performance against the structure-based method, as shown in Table B.4. While scalability has not been the main goal of this paper, LDL shows the potential to be deployed in large-scale localization settings containing visual ambiguities.
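For completeness, the threshold-based accuracies reported in the tables below can be computed from estimated and ground-truth poses as in the following sketch; the rotation error uses the standard geodesic (trace) formula, and the function names are hypothetical, not the authors' evaluation script.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (meters) and geodesic rotation error (degrees)."""
    t_err = np.linalg.norm(t_est - t_gt)
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return t_err, r_err

def accuracy(preds, gts, t_thresh=0.1, r_thresh=5.0):
    """Fraction of queries with t-error < t_thresh and R-error < r_thresh."""
    errs = [pose_errors(Re, te, Rg, tg) for (Re, te), (Rg, tg) in zip(preds, gts)]
    return float(np.mean([(t < t_thresh) and (r < r_thresh) for t, r in errs]))
```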
\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Accuracy (0.05 m, \(5^{\circ}\)) & PC & CPO & SB & LT & CD & LDL \\ \hline Robot & 0.69 & 0.89 & **0.99** & **0.99** & 0.31 & **0.98** \\ Hand & 0.81 & 0.80 & 0.95 & 0.95 & 0.29 & **0.97** \\ Change Robot & 0.41 & 0.59 & 0.93 & 0.94 & 0.30 & **0.95** \\ Change Hand & 0.47 & 0.60 & **0.92** & 0.90 & 0.30 & **0.92** \\ Extreme & 0.41 & 0.59 & 0.89 & 0.88 & 0.29 & **0.92** \\ \hline \hline Accuracy (0.2 m, \(5^{\circ}\)) & PC & CPO & SB & LT & CD & LDL \\ \hline Robot & 0.70 & 0.89 & **1.00** & **1.00** & 0.34 & **0.99** \\ Hand & 0.81 & 0.81 & 0.98 & 0.98 & 0.32 & **0.99** \\ Change Robot & 0.41 & 0.59 & 0.98 & **0.99** & 0.33 & **0.98** \\ Change Hand & 0.48 & 0.60 & **0.97** & **0.97** & 0.34 & **0.97** \\ Extreme & 0.42 & 0.60 & 0.96 & 0.96 & 0.34 & **0.98** \\ \hline \hline Accuracy (0.2 m, \(10^{\circ}\)) & PC & CPO & SB & LT & CD & LDL \\ \hline Robot & 0.70 & 0.89 & **1.00** & **1.00** & 0.34 & **0.99** \\ Hand & 0.81 & 0.81 & 0.98 & 0.98 & 0.32 & **0.99** \\ Change Robot & 0.41 & 0.59 & 0.98 & **0.99** & 0.33 & **0.98** \\ Change Hand & 0.48 & 0.60 & **0.97** & **0.97** & 0.34 & **0.97** \\ Extreme & 0.42 & 0.60 & 0.96 & 0.96 & 0.34 & **0.98** \\ \hline \hline \end{tabular} \end{table} Table B.6: Localization accuracy at various translation and rotation thresholds in the OmniScenes [26] dataset.
\begin{table} \begin{tabular}{l|c c c} \hline \hline Method & \(t\)-error (m) & \(R\)-error (\({}^{\circ}\)) & Acc. \\ \hline SfM & **0.03** & 0.80 & 0.85 \\ 3D Scan & **0.03** & **0.71** & **0.98** \\ \hline \hline \end{tabular} \end{table} Table B.3: Evaluation results of LDL on noisier line maps obtained using structure from motion and Line3D++ [22]. Figure B.5: Top-down view of offices in Stanford 2D-3D-S [3]. \begin{table} \end{table} Table B.7: Privacy-preservation evaluation of modified LDL using line-based feature filtering evaluated in the Stanford 2D-3D-S dataset [4]. The simple filtering incurs only a small drop in localization accuracy while largely increasing the image error metrics. Figure B.6: Privacy-utility curve drawn from various values of line-based filtering thresholds in the Stanford 2D-3D-S dataset. While the reconstruction quality of feature inversion attacks largely degrades as we filter out more feature points, the localization accuracy remains relatively constant. In Figure B.6, we report image discrepancy metrics measured against the original image along with the localization accuracy using various line-based filtering threshold values. While the discrepancy values increase largely, the localization accuracy remains relatively constant. Thus the modified version of LDL can balance privacy protection and accurate localization, suggesting its future potential as a robust privacy-preserving localization algorithm. Nevertheless, the current modification cannot fully hide keypoints from large structures such as walls and ceilings. While these regions typically do not contain sensitive visual information, some users may want their entire views to be hidden from service providers. Developing a more secure line-based localization algorithm that could alleviate a wider range of concerns is left as future work. ## Appendix C Baseline Details In this section, we describe the details for implementing the baselines compared against LDL. We implement PICCOLO [26] and CPO [27] from the publicly available codebases. Below we describe the structure-based, Chamfer distance-based, and Line Transformer-based approaches.
Structure-Based Approach As explained in Section 4, the structure-based approach first finds promising candidate poses using robust image retrieval and then refines poses using PnP-RANSAC from feature matches. For image retrieval we use NetVLAD [3], which is a widely used image retrieval method that outputs a global feature vector for each image. To deploy NetVLAD in our setup, we first render \(N_{t}\times N_{r}\) synthetic views from the point cloud. Here we use \(N_{t}=100\) candidate translations and \(N_{r}=216\) candidate rotations uniformly sampled from \(SO(3)\). Then, we extract the global features for each synthetic view and the query image, and choose the top \(K=20\) synthetic views whose feature vectors are closest to the query image. As the final step, we perform feature matching [44] from each selected synthetic view against the query image, and choose the final view with the most matches. To ensure fair comparison, we undistort the selected view and the query panorama into cubemaps and separately perform feature matching for each pair of faces. The matches are then aggregated to perform refinement via PnP-RANSAC [15]. Chamfer Distance-Based Approach Inspired by Micusik et al. [33], the Chamfer distance-based approach first selects poses that best align 3D lines against lines in the query image, where the Chamfer distance is used to evaluate the potential matchings. The selected poses are then refined with PnP-RANSAC, similar to the structure-based approach. To elaborate, we find the top \(K=20\) poses from an initial pool of \(N_{t}\times N_{r}\) poses, where the poses are ranked by measuring the Chamfer distance between the projected line segments in 3D and those in the query image. We set \(N_{t}\) and \(N_{r}\) identical to LDL and use the principal directions for deducing a set of candidate rotations. As the final step, we render views at the selected \(K\) poses and perform feature matching against the query image for refinement via PnP-RANSAC. Line Transformer-Based Approach Based on Yoon et al. [57], the Line Transformer-based approach finds candidate poses attaining the most line matches with the query image, and refines poses using PnP-RANSAC. For establishing line matches, we first render \(N_{t}\times N_{r}\) synthetic views from the point cloud where we set \(N_{t}=100\) and \(N_{r}=216\). Then, the top \(K_{1}=100\) poses are selected whose NetVLAD [3] features are closest to the query image. This intermediate step is necessary as the line transformer features are computationally expensive and thus could not be naively evaluated for all \(N_{t}\times N_{r}\) views. For each synthetic view from the selected poses, we extract line Transformer embeddings and establish matchings with the query image. Similar to the structure-based baseline, we convert panoramas to cubemaps during the line matching process. Finally, we select the top \(K_{2}=20\) poses that have the most line matches, and refine them via PnP-RANSAC. ## Appendix D Details on Experimental Setup In this section, we provide additional details for experiments presented in Section 4 and Section B. Illumination Robustness Evaluation To evaluate the robustness of LDL against illumination shifts, we apply synthetic color variations to images in Room 3 from OmniScenes [26]. We consider three synthetic color variations, where qualitative examples are shown in Figure 4: average intensity, gamma, and white balance change. For average intensity change we lower each pixel intensity by 25%.
For gamma change, we set the image gamma to 0.2. For white balance change, we apply the following transformation matrix to the raw RGB color values: \(\begin{pmatrix}0.7&0&0\\ 0&0.9&0\\ 0&0&0.8\end{pmatrix}\). A sketch of these color variations is given at the end of this appendix. **Candidate Pose Search Evaluation.** We compare LDL against NetVLAD [3] for candidate pose search using the Extreme split from OmniScenes. The recall curves in Figure 5 are obtained by measuring the localization performance of both methods prior to pose refinement. As mentioned in Section 4.2, we use the identical set of translations with \(N_{t}=50\) for both methods and associate a large number of candidate rotations \(N_{r}=216\) with NetVLAD to ensure fair comparison. Such measures are taken for rotations since LDL estimates rotations using combinatorial matchings of principal directions, which makes the number of candidate rotations vary for each query image. We empirically find that fewer than 30 candidate rotations remain after discarding infeasible rotations, and thus setting \(N_{r}=216\) for NetVLAD would provide enough candidate rotations to achieve competitive performance against LDL. **Feature Inversion Network for Privacy Evaluation.** To evaluate the privacy protection of LDL against feature inversion attacks, we train a fully-convolutional neural network \(F_{\Theta}(\cdot)\) that takes a sparse feature map \(D\in\mathbb{R}^{H\times W\times C}\) as input and produces image reconstructions. The feature map stores local feature descriptors \(\mathbf{f}\in\mathbb{R}^{C}\) at keypoint locations \((i_{\text{kpt}},j_{\text{kpt}})\), namely \(D(i_{\text{kpt}},j_{\text{kpt}})=\mathbf{f}\), and zero values for other regions. For the inversion network, we use a similar U-Net architecture as in Ng et al. [34], where the only difference is in the input channel dimension, which we set to 256 instead of 128 to match the SuperPoint [12] descriptor dimensions. Then for training, we use the entire Matterport3D [9] dataset, where we use the first \(90\%\) of the 9581 panorama images for training and the rest for validation. We follow the training procedure of Ng et al. [34] and use the perceptual loss and mean absolute error (MAE) loss, where we employ Adam [28] with a learning rate of \(1e{-}4\) for optimization. In our experiments, we use the trained network to reconstruct panoramas from the local feature descriptors, and we report the reconstruction results along with the image error metrics in Section 4 and Section B. To elaborate, during evaluation we first extract local features for each query image in the Stanford 2D-3D-S dataset [4] and run feature inversion, where the results are then compared against the original panorama image. **3D Line Maps for Localization.** In Figure D.8, we show visualizations of 3D lines used as input to LDL. Despite the reliability of the 3D line extraction algorithm of Xiaohu et al. [54], the extracted lines are still quite noisy. To cope with the noisy detections, LDL employs a length-based filtering scheme to only keep long, salient lines and resorts to matching the _distribution_ of lines using line distance functions instead of trying to establish direct one-to-one matchings as in previous works [33, 57].
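The three color variations described above could be sketched as follows, assuming float RGB images in \([0,1]\); the gamma convention (out = in\({}^{\gamma}\)) and the helper names are our own assumptions rather than the authors' exact implementation. Since the white-balance matrix is diagonal, applying it reduces to per-channel scaling.

```python
import numpy as np

def intensity_change(img):
    """Lower each pixel intensity by 25% (img: float RGB in [0, 1])."""
    return 0.75 * img

def gamma_change(img, gamma=0.2):
    """Apply a gamma curve; out = in ** gamma is one common convention."""
    return np.clip(img, 0.0, 1.0) ** gamma

def white_balance_change(img):
    """Scale the R, G, B channels by the diagonal white-balance matrix."""
    return img * np.array([0.7, 0.9, 0.8])

img = np.random.rand(512, 1024, 3)       # stand-in for a panorama image
variants = [intensity_change(img), gamma_change(img), white_balance_change(img)]
```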
2303.07456
5G-Advanced Towards 6G: Past, Present, and Future
Since the start of 5G work in 3GPP in early 2016, tremendous progress has been made in both standardization and commercial deployments. 3GPP is now entering the second phase of 5G standardization, known as 5G-Advanced, built on the 5G baseline in 3GPP Releases 15, 16, and 17. 3GPP Release 18, the start of 5G-Advanced, includes a diverse set of features that cover both device and network evolutions, providing balanced mobile broadband evolution and further vertical domain expansion and accommodating both immediate and long-term commercial needs. 5G-Advanced will significantly expand 5G capabilities, address many new use cases, transform connectivity experiences, and serve as an essential step in developing mobile communications towards 6G. This paper provides a comprehensive overview of the 3GPP 5G-Advanced development, introducing the prominent state-of-the-art technologies investigated in 3GPP and identifying key evolution directions for future research and standardization.
Wanshi Chen, Xingqin Lin, Juho Lee, Antti Toskala, Shu Sun, Carla Fabiana Chiasserini, Lingjia Liu
2023-03-13T20:32:25Z
http://arxiv.org/abs/2303.07456v1
# 5G-Advanced Towards 6G: Past, Present, and Future ###### Abstract Since the start of 5G work in 3GPP in early 2016, tremendous progress has been made in both standardization and commercial deployments. 3GPP is now entering the second phase of 5G standardization, known as _5G-Advanced_, built on the 5G baseline in 3GPP Releases 15, 16, and 17. 3GPP Release 18, the start of 5G-Advanced, includes a diverse set of features that cover both device and network evolutions, providing balanced mobile broadband evolution and further vertical domain expansion and accommodating both immediate and long-term commercial needs. 5G-Advanced will significantly expand 5G capabilities, address many new use cases, transform connectivity experiences, and serve as an essential step in developing mobile communications towards 6G. This paper provides a comprehensive overview of the 3GPP 5G-Advanced development, introducing the prominent state-of-the-art technologies investigated in 3GPP and identifying key evolution directions for future research and standardization. 5G, 5G-Advanced, 6G, MIMO, AI, ML, Full-Duplex, Green Networks, UAV, RIS, XR, Positioning, NTN, IAB ## I Introduction The ever-increasing demands for mobile broadband and expansion to the so-called vertical industries (e.g., automotive, satellite, internet of things) continue driving evolution in standard bodies such as the 3rd generation partnership project (3GPP), most recently evidenced by the successful standardization of the fourth generation (4G) long-term evolution (LTE) and the fifth generation (5G) new radio (NR). In particular, 5G NR has witnessed three releases of standardization, providing a robust framework supporting a comprehensive set of use cases over a wide range of spectrum which can be deployed in various scenarios (indoor or outdoor, macro or small cells, etc.). The next major step in the evolution of 5G towards the sixth generation (6G) mobile networks is called _5G-Advanced_, a new term approved by 3GPP in April 2021 as a response to a new era of 5G. Built on the strong 5G foundation, 5G-Advanced will introduce numerous new capabilities to boost the performance and to enable or expand new use cases and verticals to use 5G technology. At the same time, several forward-looking topics as part of 5G-Advanced will also build the bridge towards 6G technology development. ### _The Path to 5G_ When the standardization for LTE began in 3GPP in 2005, the framework for LTE was designed to optimize data throughput and voice over internet protocol (VoIP) capacity in macro deployments. Over the course of the subsequent and continuing LTE evolution, additional use cases and scenarios were gradually introduced [1], including the support of broadcast services, the extension to heterogeneous networks, the support of device-to-device (D2D) and vehicle-to-everything (V2X) communications, the expansion into unlicensed and shared spectrum, and the addition of machine type communications (MTC) in two flavors targeting different deployments and use cases: enhanced MTC (eMTC) and narrow-band internet of things (NB-IoT). These additional use cases and scenarios not only enriched LTE's offerings, but also provided a valuable learning experience for future generations of standardization, where the standardization has to consider at the very beginning a variety of different use cases, spectrum types, and deployment scenarios. 
The first release of 5G, starting with Release 15, introduced a solid baseline for rolling out 5G, first with non-standalone (NSA) networks together with LTE and more recently, standalone (SA) 5G networks with 5G core but without the need of using LTE [2]. One important aspect of 5G standardization is the easy migration from LTE to 5G in commercial deployments. At the physical layer, 5G NR is designed to be compatible with LTE, e.g., by supporting also a one-millisecond sub-frame duration and the 15 kHz sub-carrier tone-spacing. Fig. 1 illustrates the timeline for 5G standardization in 3GPP, starting with the first 5G workshop in 2015, which identified three high-level use cases for 5G [3]: 1. Enhanced mobile broadband (eMBB), 2. Massive MTC (mMTC), and 3. Ultra-reliable and low latency communications (URLLC). Release 15 was designed to be unified, accommodating and flexible in consideration of the different performance requirements (e.g., in terms of throughput, reliability, latency, coverage, and capacity) of different use cases. It is also scalable, easily adapting to different spectrum and service requirements. In particular, the spectrum supported by 5G is classified into two frequency ranges (FR): FR1 and FR2. FR1 covers 410 MHz to 7.125 GHz while FR2 is further divided into FR2-1 and FR2-2, covering 24.25 GHz to 52.6 GHz and 52.6 GHz to 71 GHz, respectively. Fig. 1: An illustration of 5G timeline in 3GPP. Releases 16 and 17 addressed the necessary enhancements as a continuation from Release 15 as well as adding capabilities for various vertical segments. The inclusive and future-compatible 5G framework facilitates its evolution in supporting or expanding features enabling new or additional services and use cases, as witnessed in Releases 16 and 17. In particular, these features include [1]: * MTC support, particularly, the accommodation of eMTC/NB-IoT into 5G and the introduction of reduced capability (RedCap) user equipment (UE) targeting new types of devices (e.g., wearables, surveillance cameras, and industrial sensors); * Unlicensed and shared spectrum (e.g., 5 GHz and lower 6 GHz bands); * Non-terrestrial networks (NTNs) primarily for satellite communications, supporting both eMBB and MTC services; * NR sidelink, supporting V2X, public safety, and network controlled interactive services; * 5G broadcast and multicast; * URLLC and industrial internet of things (IIoT). The solid start of 5G in Release 15 and the expansion of 5G into the above vertical areas in Releases 16 and 17 provide a comprehensive set of standardized features in 5G, equipping operators and vendors with a rich set of enablers for successful commercial deployments. There was a lot of attention on IIoT with work on URLLC as well as on private networks and positioning, enabling the use of 5G with industrial automation. At the same time, 3GPP standardization continues its work in boosting the radio performance and improving key metrics such as UE power consumption, uplink coverage or achievable system performance especially with 5G multiple-input multiple-output (MIMO) evolution. ### _5G-Advanced towards 6G_ 3GPP is now entering the second phase of 5G standardization, known as 5G-Advanced, built on the 5G baseline in 3GPP Releases 15, 16, and 17. 5G-Advanced will further expand and extend the 5G capabilities and use cases in many ways, starting with Release 18 to be ready in early 2024 and continuing for further releases in Release 19 and beyond [4]. The key 5G-Advanced topics are introduced in this paper. 
This paper also looks into future evolution beyond 5G-Advanced, i.e., 6G, and discusses technologies that are expected to come next. Although 6G standardization has not yet started, there have been a plethora of 6G initiatives around the globe, driven by research interest, industry expectations, and strategic government plans. In November 2019, China's Ministry of Science and Technology set up a working group called "China 6G Wireless Technology Task Force" responsible for the national 6G research and development and another working group consisting of government agencies to promote the development of 6G technology. In the U.S., the alliance for telecommunications industry solutions (ATIS) launched the Next G Alliance in October 2020 to advance North American leadership in 6G. Japan established the Beyond 5G Promotion Consortium and Beyond 5G New Business Strategy Center in December 2020 to promote beyond 5G/6G development in Japan. Europe has also launched various 6G initiatives, notably the Hexa-X project launched in January 2021, which aims to shape the European 6G vision and develop key 6G technologies to enable the vision. In June 2021, South Korea established a 6G implementation plan to lay the groundwork for 6G research and development, which aims to launch commercial 6G services by around 2028. This paper is an attempt to summarize and provide an overview of many of the exciting developments in 5G-Advanced towards 6G. The rest of this paper is organized as follows. Section II provides an overview of 5G-Advanced in more detail to explain the Release 18 contents. Further evolution of MIMO is covered in Section III, followed by positioning evolution in Section IV. Section V discusses the topological evolution with elements like uncrewed aerial vehicles (UAVs), NTNs and network-controlled repeater nodes. Section VI describes the support for extended reality (XR) services, which includes augmented reality (AR), virtual reality (VR), or mixed reality (MR) type of services. The sidelink evolution is introduced in Section VII, while artificial intelligence (AI) and machine learning (ML) for the NR air interface and for the 5G radio access network (RAN) are covered in Section VIII. The paper continues with the discussion of duplex evolution studies, and green networks (aiming to reduce the energy consumed by both networks and UEs) in Section IX and Section X, respectively. Section XI presents additional 6G candidate technologies, followed by the concluding remarks in Section XII. ## II 5G-Advanced: Key Technologies The first three 5G releases, as described in Section I-A, provide a comprehensive package of features in a forward-compatible framework, enabling successful 5G commercial deployments not only for the traditional eMBB services but also for expansion into the vertical domains. While it is still imperative to continue evolving 5G in response to the ever-increasing immediate commercial needs, it is also necessary to start exploring new areas to further unleash the potential of 5G. This is particularly important considering the mounting interest and efforts toward 6G in academia and various fora and alliances, as discussed in Section I-B. Release 18 will be the first release of 5G-Advanced. Therefore, how to determine the set of features to be included in Release 18 requires meticulous planning and careful discussion.
To that end, a 3GPP workshop focusing on RAN-related features was organized in June 2021, which attracted over 500 contributions from about 80 companies, organizations, and research entities [5]. The workshop was primarily done via conference calls, with more than 1,200 registered participants. The contributions to the workshop were coarsely classified into three different categories [5]: * eMBB-driven evolution; * Non-eMBB driven evolution; * Cross-functionalities (or new areas) for both eMBB-driven and non-eMBB driven evolution. From these contributions, it can be observed that the proposed evolution directions are generally balanced in terms of: * Mobile broadband evolution vs. further vertical domain expansion; * Immediate vs. longer term commercial needs; * Device evolution vs. network evolution. These balanced evolution directions formed the basis for the subsequent discussion, leading to the final approval of the set of features in December 2021 [6]. These features can be roughly categorized into three different categories (eMBB/non-eMBB/new areas), as shown in Table I. Note that such classification may be subjective and thus arguable. However, it should provide a high-level insight of the overall evolution directions of 5G-Advanced. In the sequel, we will dive into the details of some of the Release 18 features, including massive MIMO, positioning, topological aspects, XR, sidelink, AI/ML, duplexing, and green networks. We will introduce these features based on the efforts not only within, but also outside, 3GPP. ## III Massive MIMO Evolution In this section, we first introduce the massive MIMO evolution in 3GPP standards and then discuss in detail multi-user (MU-MIMO) related aspects as well as how AI/ML can be used for MIMO operation. After that, we describe future evolution of massive MIMO towards 6G. ### _Evolution of Massive MIMO in 3GPP Standards_ 5G NR was standardized assuming the use of two-dimensional array with a large number of antenna elements, often called massive MIMO, so that beamforming and spatial multiplexing can be performed in a very flexible manner utilizing both horizontal and vertical directions as illustrated in Fig. 2. This approach was first taken in the full dimension MIMO (FD-MIMO) in LTE-Advanced Pro in Release 13 [7] and was exploited in the first version of 5G standard in Release 15 [8]. By forming a narrow beam, the base station can concentrate its transmission energy along a desired direction and can improve the strength of the signal received from a desired direction. Utilizing a two-dimensional antenna array makes it possible to shape beams in both horizontal and vertical directions, resulting in improved capability of distinguishing the signals to and from multiple devices. The massive MIMO in 5G is the essential technology for coverage enhancement, improved single user throughput, and cell throughput improvement through the aggressive use of multi-user multiplexing in spatial domain (a.k.a., MU-MIMO). In 5G, downlink (DL) beamforming and MU-MIMO require the base station to know about the DL channel state, which can be provided from the channel state information (CSI) report from the UE as well as the sounding reference signal (SRS) transmitted by the UE utilizing the channel reciprocity in case of a time division duplex (TDD) system. The UE can generate its CSI report by monitoring the CSI reference signal (CSI-RS) transmitted by the base station either per antenna element or per beam. 
The latter helps reduce the amount of time-frequency resource for CSI-RS transmissions. Support of high-resolution CSI feedback plays a crucial role for the MU-MIMO operation. For the purpose of DL data reception, the demodulation reference signal (DMRS) is specified in NR, for which the same precoding as the data is assumed. For the high frequency band operation, 5G supports analog beamforming in addition to digital precoding and beamforming. This helps reduce the implementation complexity caused by the use of several hundreds of antenna elements to realize the required coverage by overcoming the increased propagation loss. \begin{table} \begin{tabular}{c l} \hline \hline Category & \multicolumn{1}{c}{Feature} \\ \hline eMBB & - MIMO evolution (both downlink and uplink) \\ & - Further NR coverage enhancements \\ & - Enhancement of NR dynamic spectrum sharing (DSS) \\ & - NR network-controlled repeaters \\ & - Multi-carrier enhancements \\ & - NR mobility enhancements \\ & - Dual Tx/Rx multi-subscriber identity modules (MUSIM) \\ & - In-device co-existence (IDC) enhancements for NR and multi-radio dual-connectivity (MR-DC) \\ & - Mobile terminated small data transmission (SDT) for NR \\ & - Mobile IAB \\ & - Further enhancement of data collection for self-organizing networks (SON) and minimization of drive test (MDT) in NR and LTE/NR dual-connectivity (EN-DC) \\ & - Enhancement on NR quality-of-experience (QoE) management and optimization for diverse services \\ \hline Non-eMBB & - NR sidelink relay enhancements \\ & - NR sidelink evolution \\ & - Expanded and improved NR positioning \\ & - Further NR RedCap UE complexity reduction \\ & - Radio enhancements for XR \\ & - NR NTN enhancements \\ & - IoT NTN enhancements \\ & - Enhancements of NR multicast and broadcast services \\ & - NR support for dedicated spectrum with less than 5 MHz bandwidth for FR1 \\ & - NR support for UAV \\ \hline New Areas & - Study on AI/ML for NR air interface \\ & - AI/ML for NG-RAN \\ & - Study on evolution of NR duplex operation \\ & - Network energy savings \\ & - Study on low-power wake-up signal/receiver \\ & - Study on enhancement for resiliency of gNB-central unit (gNB-CU) \\ \hline \hline \end{tabular} \end{table} TABLE I: 3GPP RAN Release-18 package and rough classification. Fig. 2: Illustration of the concept of Massive MIMO. The analog beamforming results in the limitation that all simultaneous channels and signals formed by the same set of antenna elements have to be in the same analog beam. To cover all directions with this limitation, the so-called beam sweeping operation is supported. Flexible beam management functionality is supported to form and manage beams, which is essential for high frequency (e.g., millimeter wave (mmWave)) band systems. After the first version of 5G standard in Release 15, 3GPP specified enhancements on various aspects of MIMO functionalities in Release 16 and Release 17 [9, 10]. Some representative examples are described below. First, Release 16 specified enhancements for reduction of the CSI feedback overhead via spatial and frequency domain compression. This makes it possible to improve the cell throughput through MU-MIMO operation without causing too much feedback overhead. Second, various enhancements for beam management were specified. For example, Release 16 specified layer 1 (L1) signal-to-interference-plus-noise ratio (SINR)-based beam measurement to facilitate interference-aware beam selection.
Measures were specified for reduction of signalling overhead and latency, including L1/layer 2 (L2) signalling-based joint indication of DL and uplink (UL) beams in Release 17, among others. Third, while Release 15 mainly focused on single TRP-based operation, operation of multiple transmit and receive points (TRPs) was enhanced in Releases 16 and 17. In particular, Release 16 specified non-coherent joint transmission (NCJT) from multiple TRPs to improve DL data rates and spectral efficiency, while Release 17 introduced the enhancement on CSI report to represent joint channels across multiple TRPs involved in NCJT operation. Furthermore, Release 17 extended the multi-TRP operation to involve TRPs of multiple cells and specified the support for beam management across multiple TRPs. The scope of NR MIMO evolution in Release 18 is defined in [11]. Release 18 gives an emphasis on enhancements for UL MIMO especially targeting non-smartphone type devices such as fixed wireless access (FWA), vehicles, and industrial devices. These include the use of 8 transmit antennas to support 4 or more streams per UE, and the simultaneous transmission from multiple panels particularly for mmWave band. CSI reporting may be enhanced for high and medium UE velocities that were not the main focus of the previous releases. Continuing the enhancement on multi-TRP operation, the L1/L2-signaling based joint indication of DL and UL beams specified in Release 17 will be extended to the case of multiple TRPs. CSI acquisition may also be enhanced for coherent joint transmission (CJT), a new target scenario in Release 18, with a number of antennas distributed over multiple TRPs. To improve the performance of MU-MIMO operation in both DL and UL, a larger number (up to 24) of orthogonal DMRS ports may be specified for cyclic prefix (CP) orthogonal frequency-division multiplexing (OFDM). MU-MIMO is a critical operation mode for realizing the performance benefit of the massive MIMO system and was first introduced for LTE-Advanced in 3GPP Release 10 [12]. Scheduling and precoding of transmitted signals for multiple UEs are fundamental operations of MU-MIMO and can be very challenging in the massive MIMO system due to the underlying massive number of antennas at the base station. Success of AI/ML in various areas motivates the research for its utilization for MU-MIMO scheduling and precoding. In addition, AI/ML is also being investigated to improve the performance of MIMO receiver operations such as channel estimation and symbol detection. The following section introduces the academic research results on MU-MIMO and the application of AI/ML to MIMO. ### _MU-MIMO Operation and AI/ML-Enabled MIMO_ Unlike other wireless networks, cellular networks need to strike a balance between cell-average throughput and cell-edge throughput. Accordingly, MU-MIMO scheduling becomes a central piece to achieve this objective. Many signal processing and/or optimization-based scheduling algorithms were introduced in the literature tailoring towards different performance metrics. For example, maximum throughput scheduling has been introduced to maximize the sum throughput across all the users in the network [13] without considering fairness among users. To address this issue, round robin scheduling has been introduced where users take turns to obtain the radio resources so that the resources can be allocated fairly among all users. 
Since the round robin scheduling does not consider the network throughput, this type of scheduler is also referred to as blind equal throughput (BET) scheduler [14]. To cover both network throughput and user fairness, the generalized proportional fair (GPF) scheduler has been introduced [15] (see the sketch below). For delay-sensitive real-time traffic, the modified largest weighted delay first (MLWDF) scheduler [16] and the exponential proportional fair (EXP) scheduler [17] have been introduced for cellular networks. In [18], a multi-phase optimization-based scheduler for MU-MIMO has been introduced by leveraging large-scale parallel computation to speed up scheduling. Meanwhile, in cellular networks, the input information to the network scheduler is usually the feedback information from the mobile users. Very often this information is either coarse (due to the payload limitation and granularity on channel feedback) or erroneous (due to feedback error). Since conventional scheduling methods solely rely on the input information, their scheduling decisions are far from being optimal especially when the input information is not accurate. Motivated by the success of AI/ML, the scheduling problem can be formulated as an optimal control problem of Markov decision process (MDP), which can be solved using deep reinforcement learning (DRL). In [19], a DRL-based scheduling strategy has been introduced under the assumption that all resource block groups (RBGs) are allocated to a single user in each transmission time interval. In [20], a DRL-based scheduling strategy has been introduced to adopt the same policy network to allocate every RBG so that the significant training costs and convergence issues are avoided. In [21], a DRL-based scheduler was introduced where the action is restricted to only determine the number of RBGs allocated to each user. Even though these initial investigations rely on certain idealized assumptions, they shed light on applying ML models to solve the challenging problem of MU-MIMO scheduling in an efficient and resilient way. It is expected that the frameworks of DRL and/or multi-agent DRL will play important roles in reducing the computational complexity of MU-MIMO scheduling.
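To make the proportional fair rule referenced above concrete, the following is a minimal single-carrier sketch. The exponential averaging window, the GPF exponents, and the synthetic rate model are illustrative assumptions, not a deployed scheduler implementation.

```python
import numpy as np

def gpf_schedule(rates, T=100.0, alpha=1.0, beta=1.0):
    """Generalized proportional fair scheduling on a single carrier.

    rates: (n_slots, n_users) achievable instantaneous rates (from CSI
    feedback). Each slot the user maximizing r_i**alpha / R_i**beta is
    served, where R_i is an exponentially averaged throughput with
    window T; alpha = beta = 1 gives the classic proportional fair rule.
    """
    n_slots, n_users = rates.shape
    avg = np.full(n_users, 1e-6)              # avoid division by zero
    served = np.zeros(n_users)                # total bits delivered per user
    for t in range(n_slots):
        u = int(np.argmax(rates[t] ** alpha / avg ** beta))
        served[u] += rates[t, u]
        delivered = np.zeros(n_users)
        delivered[u] = rates[t, u]
        avg = (1.0 - 1.0 / T) * avg + delivered / T
    return served

rng = np.random.default_rng(0)
means = np.array([10.0, 5.0, 1.0])            # near, mid, and cell-edge users
rates = rng.exponential(means, size=(10000, 3))
print(gpf_schedule(rates))                    # cell-edge user still gets served
```

MU-MIMO precoding is conducted once the scheduling decision is made. Despite offering better performance, non-linear MU-MIMO precoding schemes, such as dirty paper coding (DPC) or vector perturbation, are not practical for MIMO systems due to their high implementation complexity. Simple linear processing techniques have been shown to offer significant performance gains for MU-MIMO systems, especially for massive MIMO systems. Maximum ratio transmission (MRT) and zero forcing (ZF)-based methods for MU-MIMO precoding [22, 23] have been introduced. Low-complexity MU-MIMO precoding strategies that utilize the direction of arrival (DoA) estimation have also been introduced for massive FD-MIMO systems [24]. Similar to the MU-MIMO scheduling, AI/ML tools have also been utilized in MU-MIMO precoding to alleviate the computational burden especially for massive MIMO systems. For example, [25] introduces a deep neural network architecture by unfolding a parallel gradient projection algorithm to solve the weighted sum rate maximization problem for a multi-user massive MIMO system. In [26], a graph neural network-based design has been introduced to solve the joint beamforming and antenna selection problem with imperfect CSI for a MU-MIMO system.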
In [27], a general form of iterative algorithm-induced deep-unfolding neural network is developed in matrix form to reduce the computational complexity for MU-MIMO precoding. Other than being utilized in MU-MIMO scheduling and precoding, AI/ML tools have also been recently applied to various other MIMO operations. For example, various deep learning methods have been applied to the receiver processing of MIMO operations such as channel estimation and symbol detection. * **AI/ML for MIMO channel estimation**: A convolutional neural network (CNN) is adopted in [28] to learn the parameters of the minimum mean square error (MMSE) channel estimator. Channel estimation was treated as a super-resolution problem in [29, 30], where ChannelNet [29] and ReEsNet [30] have shown their capability of substantially improving the channel estimation quality. However, these works rely on offline training, where the artificially generated offline training data is assumed to have the same statistical properties as the online testing one, which cannot be guaranteed in a practical system. A reinforcement learning-based successive denoising method is introduced in [31] to improve the mean square error of least-squares channel estimation. This method does not need labelled data for training. However, it requires channel power and number of channel taps as prior knowledge, and the training requires hundreds of OFDM subframes to converge. StructNet introduced in [32] provides a real-time online learning channel estimation method, which only requires over-the-air pilot symbols for training and converges fast. * **AI/ML for MIMO symbol detection**: A deep learning framework, called DetNet, has been introduced in [33] for MIMO symbol detection. In [34], a deep learning-based symbol detector, MMNet, is constructed through unfolding existing optimization-based symbol detection methods to deep neural networks. A special recurrent neural network (RNN) with extremely low training complexity and overhead, reservoir computing, has been introduced for MIMO-OFDM symbol detection in [35, 36, 37]. Domain knowledge of the MIMO-OFDM waveform has been incorporated in the design of underlying reservoir computing in [38] to improve system performance without a substantial increase in complexity. Furthermore, it enables a real-time online learning-based symbol detection for MIMO-OFDM systems [39, 40]. In [41], a multi-mode reservoir computing scheme has been introduced to harness the tensor structure of the massive FD-MIMO channel to boost the detection performance for multi-user massive MIMO networks. ### _Future Evolution_ There are active research activities for the evolution of massive MIMO technologies towards 6G. This subsection briefly introduces the evolution of massive MIMO for mmWave bands, modular massive MIMO, immense MIMO, and holographic MIMO, among others. #### III-C1 mmWave massive MIMO The combination of mmWave and massive MIMO embodies the advantages of both technologies, such as large bandwidth, high beamforming gain, and compact form factor due to small wavelengths [42, 43, 44]. For a broadband mmWave signal, the array pattern varies over frequencies, thus a large antenna array is unable to generate beams pointing toward the same direction for a wide range of subcarriers. This is known as beam squint [45, 46], which poses challenges for transceiver design for mmWave massive MIMO systems.
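A back-of-the-envelope sketch of the effect: for a phase-shifter array whose phase shifts are set for the carrier \(f_{c}\) and steering angle \(\theta_{0}\), the beam peak at frequency \(f\) moves to \(\arcsin((f_{c}/f)\sin\theta_{0})\). The 28 GHz carrier, 400 MHz bandwidth, and 60-degree steering angle below are illustrative assumptions; for large arrays the resulting offset becomes comparable to the beamwidth.

```python
import numpy as np

# Phase shifts chosen for the carrier f_c steer the main lobe to theta_0,
# but at subcarrier frequency f the peak of a phased array moves to
# arcsin((f_c / f) * sin(theta_0)): the band edges squint away from theta_0.
f_c, bw = 28e9, 400e6                      # illustrative mmWave numerology
theta_0 = np.deg2rad(60.0)                 # intended steering angle at f_c
f_edges = f_c + np.array([-bw / 2, bw / 2])
theta_f = np.arcsin(np.clip(f_c / f_edges * np.sin(theta_0), -1.0, 1.0))
print(np.rad2deg(theta_f - theta_0))       # roughly [+0.71, -0.69] degrees
```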
Three beamforming architectures are generally considered for mmWave massive MIMO: analog beamforming, digital beamforming, and hybrid analog/digital beamforming. Analog beamforming, which usually relies on phased arrays, enjoys low hardware cost and low implementation complexity, but it is unable to deal with the beam squint effect since each phase shifter can only provide an identical phase shift over different frequencies. Advanced codebook design has been proposed to deal with this problem [45], but its benefit is limited when either the transmission bandwidth or the number of antennas becomes large. True-time-delay lines can compensate the delays of different antennas and naturally produce frequency-dependent phase shift, but such a configuration increases hardware complexity and limits the transmission rate. While beam squint can be well handled by digital beamforming, the resultant hardware cost and complexity are high. Low-resolution analog-to-digital converters (ADCs) can significantly reduce cost and energy consumption, but it becomes difficult to correctly decode the received symbols via spatial oversampling, thus decoding with the beam squint effect under low-resolution ADCs remains an open problem. For hybrid beamforming, despite various hybrid array architectures, each radio frequency (RF) chain is connected to multiple antenna elements, thus the corresponding drawbacks of analog beamforming still exist and necessitate further study. #### III-C2 Modular massive MIMO Besides the high-frequency spectrum such as mmWave and THz bands, the low band realm is still expected to play an essential role in future generations of wireless communications to offer wide and reliable coverage. It can be difficult to exploit massive MIMO in the low frequency band due to the half wavelength spacing requirement between antenna elements. To address this issue, architectures like distributed massive MIMO and cell-free massive MIMO [47, 48], coordinated multipoint (also known as network MIMO), and distributed antenna systems have been studied. Recently, a concept named modular massive MIMO (mmMIMO) has been propounded [49], which consists of a variety of predefined basic antenna modules that can be flexibly combined to build a single system. Initial simulation results and field tests have shown the huge potential of mmMIMO in improving throughput. Nevertheless, a few key issues need to be solved to ensure the commercial success of mmMIMO, including the effect of asynchronous reception among various antenna modules [50], DL and UL reciprocity calibration, CSI acquisition and reporting, among others. In addition, enormous work is needed to accurately quantify the benefits of mmMIMO. #### III-C3 Immense MIMO There is increasing interest in utilizing arrays with an extremely large number (e.g., hundreds or even thousands) of antenna elements, exhibited in different flavors including further evolution of 5G massive MIMO (denoted _extreme MIMO_ in [51]) and extremely large aperture array (ELAA). For convenience, in this article, we group them together under the umbrella term _immense MIMO_. Extreme MIMO is mainly targeted for centimeter-wave (cmWave), mmWave, and (sub-)THz ranges. For example, in the case of a cmWave range such as 7-24 GHz, extreme MIMO aims to improve the throughput performance while reusing 5G base station sites [52]. The increased number of antenna elements poses technical challenges including computational complexity, CSI feedback overhead, power consumption, etc.
In ELAA, better spatial resolution is achieved thanks to the increased array aperture [53, 54]. Furthermore, the superior spatial resolution is likely to make the channels nearly orthogonal among different users, providing a favorable propagation condition. Besides enhancing mobile broadband services, the large spatial resolution of an ELAA can also be exploited for spatial multiplexing of an unprecedented number of MTC devices. The growth of the number of antennas results in an enlarged array size, which gives rise to new phenomena including the near-field effect and spatial non-stationarity. While there exists some work on the characterization of the near-field effect and spatial non-stationarity for ELAA or similar architectures [55, 56, 57, 58], more comprehensive studies and measurement campaigns are needed to thoroughly explore the associated channel models and beam/wavefront management strategies. #### III-C4 Holographic MIMO Another evolution trend of massive MIMO is holographic MIMO, sometimes also named large intelligent surface (LIS) in the literature, in which the element spacing can be smaller than a half wavelength, and the entire array can be regarded as a continuous electromagnetic aperture in its ultimate form [59, 60, 61]. A continuous aperture is able to create and detect electromagnetic waves with arbitrary spatial frequencies (i.e., wavenumbers) without undesired sidelobes, and offer fine controllability of the array radiation pattern. It can also facilitate the sensing and imaging of RF signals [62], which is promising in positioning along with integrated sensing and communication [63]. Furthermore, holographic MIMO enables highly efficient near-field power transfer [59]. Currently, holographic MIMO can be implemented by either a tightly coupled array of discrete active antennas [62], or low-profile metamaterial/metasurface elements that do not require bulky electronic components [64, 65, 66] and are excited by an RF source. The exploitation of the full potential of holographic MIMO requires accurate understanding of its physical properties such as channel characteristics and beamforming methodologies. While research work exists on some aspects of the channel modeling for holographic MIMO [60, 67, 68], more comprehensive channel models and real-world measurements are needed to facilitate system design and performance evaluation. Additionally, the theoretical performance limits of holographic MIMO are worth investigating, which may necessitate new communication theories, such as electromagnetic information theory, i.e., the combination of information and electromagnetic theories [69, 70]. ## IV Positioning Evolution In this section, we first describe 5G positioning service requirements and positioning architecture, then provide an overview of the 3GPP Release 17 NR positioning methods and ongoing Release 18 NR positioning work, followed by a future outlook on positioning evolution. ### _5G Positioning Services_ 5G positioning services aim to support verticals and applications with high positioning accuracies [71]. Example verticals and applications that benefit from high accuracy positioning include eHealth, factories, transportation, logistics, mission critical services, and regulatory requirements. 5G system can satisfy different levels of services and requirements (e.g., decimeter-level positioning accuracy, 99.9% positioning service availability, and sub-second positioning service latency [72]).
5G system supports the combination of 3GPP and non-3GPP positioning technologies such as global navigation satellite system (GNSS), terrestrial beacon system, sensors, and wireless local area network (WLAN) and Bluetooth-based positioning. The service-based architecture of 5G core network used for positioning services and corresponding network functions (NFs), NF services, and procedures are specified in [73], while the UE positioning architecture, functional entities, and operation to support positioning methods in next generation RAN (NG-RAN) are defined in [74]. Fig. 3 shows the UE positioning architecture applicable to NG-RAN [74]. The positioning information may be requested by and reported to a client within or attached to the core network or in the UE. When an access and mobility management function (AMF) initiates or receives a request for positioning service for a target UE, the AMF sends a positioning service request to a location management function (LMF) via the NL1 interface. The LMF processes the position service request by, e.g., transferring assistance data to the target UE and position estimation of the target UE. An LMF may have a proprietary signaling connection to an evolved serving mobile location center (E-SMLC) which may enable the LMF to access information from evolved universal terrestrial radio access (E-UTRA). An LMF may also have a proprietary signaling connection to a secure user plane location (SUPL) location platform (SLP) which is responsible for positioning over user plane. A UE supporting SUPL is known as SUPL enabled terminal (SET). The NG control plane interface (NG-C) connects the AMF to an NG-RAN node (either a gNodeB (gNB) or a next-generation eNodeB (ng-eNB)). The connection between the gNB and the ng-eNB is via the Xn interface. The target UE is connected to the ng-eNB via the LTE-Uu interface and to the gNB via the NR-Uu interface. The TRP in NG-RAN can be a transmission point for transmitting downlink positioning reference signal (PRS), a reception point for performing uplink SRS measurements, or a combination of both a transmission point and a reception point. The LTE positioning protocol (LPP), initially defined for LTE, has been extended to support the positioning signaling between an LMF and a target UE in 5G positioning [75, 76]. The positioning signaling between an LMF and an NG-RAN node (gNB or ng-eNB) is transported by the NR positioning protocol-annex [77]. ### _NR Positioning_ Determining the position of a target UE involves two main steps: measurements and position estimation based on the measurements. The position estimation is usually computed with respect to network nodes with known positions. In UE-assisted mode, the UE provides the measurements to a location server which computes the position estimation. In UE-based mode, the UE computes the position estimation based on the measurements. In network-based mode, the network performs the measurements and computes the position estimation. Positioning errors occur because the measurements are affected by noise, interference, multipath effects, etc. NR provides improved positioning accuracies by supporting larger signal bandwidths, denser network deployment, and more antennas for transmission and reception [78]. Larger signal bandwidth improves signal temporal resolution. Denser network deployment increases the probability of line-of-sight (LOS) channel conditions between TRP and UE. 
More antennas for transmission and reception improve signal directivity and thus facilitate angular measurement based positioning methods. The main NR positioning methods are summarized as follows [73]. * **Enhanced cell ID (E-CID)**: In cell ID based positioning, the position of the target UE is estimated to be the position of the UE's serving base station. E-CID enhances the performance of cell ID based positioning by using additional radio resource management (RRM) measurements such as reference signal received power (RSRP) and reference signal received quality (RSRQ). * **Downlink time difference of arrival (DL-TDOA)** positioning is based on time-of-arrival (TOA) measurements of DL PRSs from multiple TRPs received at the UE. The TDOA values, referred to as DL reference signal time difference (DL-RSTD) measurements, are calculated from the TOA measurements and can be used to compute position estimation (a simplified numerical sketch is given after this list). * **Uplink time difference of arrival (UL-TDOA)** positioning is based on TOA measurements of a UL signal from the UE received at multiple TRPs. Since the UL TOA is measured at the TRPs relative to a common time reference, the TOAs are referred to as relative TOA (RTOA) measurements. The RTOA measurements of the TRPs are sent to a location server, which calculates TDOAs and uses the TDOAs to compute position estimation. * **Multi-round trip time (multi-RTT)** positioning is based on two-way TOA measurements, which are used to derive the UE Rx-Tx time difference and gNB Rx-Tx time difference measurements. The RTT between the UE and the gNB can be determined from the reported UE Rx-Tx time difference and gNB Rx-Tx time difference measurements. Position estimation can be computed by using multiple RTT values. * **Downlink angle-of-departure (DL-AoD)** positioning is based on per-beam RSRP measurements of downlink PRSs from multiple TRPs received at the UE. The per-beam RSRP measurements, combined with additional information such as the TRP positions and DL PRS beam information (e.g., beam azimuth and elevation angular information), can be used to estimate AoD values, which in turn can be used to compute position estimation. * **Uplink angle-of-arrival (UL-AoA)** positioning is based on AoA measurements of a UL signal from the UE received at multiple TRPs. The azimuth and zenith of AoA measurements of the TRPs are sent to a location server, which computes position estimation. NR PRSs are fundamental for positioning measurements. A DL PRS resource can be located anywhere in the frequency grid and has a configurable bandwidth with a comb-like pattern in the frequency domain. The DL PRS resource may span multiple consecutive symbols in a slot and have multiple repetitions within one transmission period. Fig. 3: UE positioning architecture applicable to NG-RAN. The repetitions of the DL PRS resource are transmitted with the same DL transmit beam which allows the UE to sweep its receive beams over the repetitions of the DL PRS resource. A TRP can transmit multiple DL PRS resources, each of which may be transmitted with a different DL transmit beam. The UL PRS design is based on UL SRS with enhancements for the positioning purpose, such as spatial relations and transmission power control with respect to neighbor TRPs.
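As a simplified illustration of how DL-RSTD measurements can be turned into a position estimate (referenced in the DL-TDOA bullet above), the following Gauss-Newton least-squares sketch assumes known TRP positions, a 2D geometry, and noise-free measurements; it is not the 3GPP-specified processing.

```python
import numpy as np

def dl_tdoa_position(trps, rstd, x0, iters=20):
    """Gauss-Newton least-squares position fix from DL-RSTD measurements.

    trps: (N, 2) TRP positions, with trps[0] as the timing reference.
    rstd: (N-1,) RSTD values converted to meters, where
          rstd[i-1] = ||x - trps[i]|| - ||x - trps[0]||.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(trps - x, axis=1)
        resid = rstd - (d[1:] - d[0])                 # measurement residuals
        # Jacobian of the predicted RSTD with respect to the UE position.
        J = (x - trps[1:]) / d[1:, None] - (x - trps[0]) / d[0]
        dx, *_ = np.linalg.lstsq(J, resid, rcond=None)
        x = x + dx
    return x

trps = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_ue = np.array([120.0, 340.0])
d = np.linalg.norm(trps - true_ue, axis=1)
rstd = d[1:] - d[0]                                     # noise-free demo
print(dl_tdoa_position(trps, rstd, x0=[250.0, 250.0]))  # ~[120. 340.]
```

In Release 18, 3GPP continues NR positioning evolution [79].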
The scope of the work includes studying solutions for sidelink positioning, examining the positioning support for RedCap devices, and investigating solutions to further improve accuracy, integrity, and power efficiency for 5G positioning services. The detailed solutions are now part of Release 18 [80]. Positioning is a valuable service that can find applications in diverse use cases. As described in this section, 3GPP has been working on enhancing positioning capabilities of 5G systems over several releases. Future new services and applications will demand ultra-high accuracy positioning (e.g., below centimeter-level accuracy) within a few tens of milliseconds of latency, which will exceed the positioning capabilities of the current 5G systems. Besides positioning accuracy and latency, other metrics such as reliability, availability, power consumption, scalability, security, and privacy will also be essential design considerations for future positioning services. The following section introduces the ongoing research on positioning, which might become part of the 3GPP positioning work on the path to 6G. ### _Future Evolution_ To meet the diverse extreme positioning requirements on the path to 6G, it is vital to utilize a combination of positioning technologies and exploit different signals such as satellite signals, communication signals, ultrasonic sound, and visible light [81]. As an advanced technology, large antenna arrays will become more prevalent across different frequencies towards 6G. They can provide high spatial resolution and are thus capable of achieving high positioning accuracy. Reconfigurable intelligent surfaces (RISs) that reflect radio waves in preferred directions can also be exploited for localization and mapping [82]. LOS/non-LOS (NLOS) path identification in multipath propagation environments is critical for geometry-based positioning methods. THz technology with ultra-wide signal bandwidth and large antenna arrays can achieve extremely fine time and spatial resolution, enable accurate LOS/NLOS detection, and thus have the potential to improve positioning accuracy significantly [83]. In addition, THz sensing can be used to construct 3D images of the environment. It can enable simultaneous localization and mapping (SLAM) by combining high-resolution RF imaging with range and Doppler information [84]. THz positioning techniques hold great potential for supporting ultra-high accuracy positioning services on the path to 6G. Further development of the current SLAM techniques towards exploiting THz localization will be an essential evolution direction for improving positioning services. Many geometry-based positioning methods rely on TOA or TDOA measurement of the signals. Tight synchronization down to the sub-nanosecond level is vital for these methods to achieve centimeter-level positioning accuracy [85]. Further advancement of synchronization technology will be crucial to support ultra-high accuracy positioning. Carrier phase positioning (CPP) is another promising positioning technique that uses the phase-locked loop to measure the phase information of the received signal and derives the geometric distance between transmitter and receiver from the measured phase (see the sketch below). It has been used in GNSS to achieve centimeter-level positioning accuracy [86]. However, it is challenging for GNSS to offer centimeter-level positioning accuracy in dense urban or indoor areas.
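A minimal sketch of the carrier phase observable referenced above: with wavelength \(\lambda\), the measured fractional phase constrains the range only up to an unknown integer number of wavelengths, which is why integer ambiguity resolution is central to CPP. The 3.5 GHz carrier and the range value are illustrative assumptions.

```python
import numpy as np

c = 299792458.0
lam = c / 3.5e9                  # ~8.57 cm wavelength at a 3.5 GHz carrier
true_range = 47.3812             # meters, unknown to the receiver

# The phase-locked loop observes only the fractional carrier phase phi,
# so the range is constrained to d = lam * (N + phi / (2 pi)) for an
# unknown integer N: sub-wavelength precision, ambiguous in ~8.6 cm steps.
phi = 2 * np.pi * ((true_range / lam) % 1.0)
candidates = lam * (np.arange(548, 559) + phi / (2 * np.pi))
print(candidates)                # one hypothesis per wavelength; N = 553 is true
```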
Incorporating CPP into cellular positioning was studied in [79] and CPP is being included together with other positioning enhancements for 5G-Advanced as part of Release 18 [80]. CPP for cellular positioning will also be an interesting evolution direction towards 6G [87]. Traditional geometry-based positioning methods have difficulty in achieving high positioning accuracy in scenarios with heavy NLOS paths, such as indoor factory environments. AI/ML-based positioning with fingerprinting or ray tracing can overcome the challenge and has attracted much attention [88][89]. Indeed, 3GPP studies AI/ML-based positioning as one of the use cases for augmenting the 5G air interface with AI/ML in Release 18 [90]. However, to make AI/ML-based positioning a success, several key aspects require further investigation, such as data collection for AI/ML model training, performance validation, and model generalization capability. Last but not least, though data communication and positioning can coexist in mobile networks, their integration has not been tight thus far. With the emergence of integrated sensing and communication towards 6G (see Section XI-B for more detail), data communication and positioning integration may become tighter in 6G [91]. Joint design of communication and positioning to improve both functionalities will be an important evolution direction on the path to 6G and beyond [92], which will be further discussed under the category of joint communication and sensing (JCS) in Section XI-B. ## V Topological Evolution In this section, we introduce the major topological evolution in 3GPP standards including integrated access and backhaul (IAB), repeaters, UAVs, and NTNs. After that, we describe future topological evolution towards 6G, with a focus on RIS. ### _IAB Evolution_ The support for the IAB node was introduced in Release 16 specifications following the study as captured in [93]. IAB aims to provide an alternative for fiber or microwave link based backhaul, by using 5G radio interface based either in-band or out-of-band backhauling. Additional enhancements for IAB in Release 17 [94] include improved robustness and load balancing, and reduced service interruption by enhancing topology adaptation, routing, and transport. Duplex enhancements (i.e., simultaneous transmission and/or reception of child and parent links for an IAB node) and efficiency enhancements (e.g., power control enhancements, extension to frequency/spatial domain multiplexing among backhaul and access links) were also introduced. Release 18 work on IAB [95] consists of enhancing IAB operation with mobile IAB nodes. Such nodes would be assumed to be placed in a vehicle (e.g., in a bus) providing coverage for UEs in the vehicle or in the close proximity of the vehicle. The UEs have no visibility regarding whether the serving cell is a mobile IAB node or not (i.e., vs. a regular gNB), although the service may entail increased latency due to the extra hops in IAB for the connection. ### _Repeater Evolution_ Traditional RF repeaters [96] are simple but their usage in TDD bands may be compromised, since these repeaters merely boost the received signals irrespective of link directions. A "smarter" repeater, on the other hand, can take into account the control information from a gNB, e.g., the TDD configuration in use, such that a transmission may be omitted or appropriately directed as desired, depending on the TDD configuration. These smart repeaters are also known as network-controlled repeaters [97]. 
In Release 18, the support for network-controlled repeater would be done by enabling necessary control information from the network towards the repeater. The necessary information may include the UL/DL TDD configuration, repeater ON/OFF configuration, and beamforming for improved repeater operation [97]. The work is based on the outcome of the study in 3GPP as described in [98], with the results captured in [99]. Controlling the repeater's ON/OFF status is important, e.g., for energy savings when the carrier frequency is turned off for a period of time. There is also a possibility to control the gain and power used by the repeater for more efficient network interference management. ### _UAV in 5G_ In Release 18, the work on connected UAV covers several key aspects [100, 101]. UAVs are supported to report to networks information such as flight path or height for proper operation. UAV identification is important to help control where the UAVs fly, which can be done by identifying UAVs which could be causing problems (e.g., flying in areas not permitted for UAV use) to ensure UAVs to fly only in authorized air space. 3GPP is also addressing remote UAV identification broadcast to allow for meeting requirements expected to be part of regulatory requirements in some regions. Besides the pure UAV identification broadcast, considerations have also included potentially more elaborated aspects of detecting other UAVs and avoiding them in order to prevent collisions. It is also critical to consider the impact of UAVs on the network, as UAVs associated with heavy traffic (e.g., video feed) may easily create lots of interference in the network. This is due to the fact that a UAV up in the air may have LOS connections with many cells. With the use of 5G beamforming capabilities, directive/beamforming antennas may replace omni-directional transmission such that data flow from a UAV can be more directive, with this aspect being studied also for Release 18. This is illustrated in Fig. 4. In such a case, the interference caused to the network can be heavily reduced. In addition, UAV operation can be more reliable since UAVs would experience less interference from different gNB transmitters. Further aspects of UAV operation have been discussed, including optimization of handover procedures, especially for conditional handover in connection with UAV connectivity. In case that a UAV has a predetermined flight path (and known by the network), one could expect more possibilities to prepare for handover in advance. ### _NTN Evolution_ Besides terrestrial topological evolutions, 3GPP has been working on non-terrestrial topological evolution to support satellite communication. Historically, the terrestrial and satellite communication systems have been developed and evolved independently. Despite its high data rate and low latency, the terrestrial mobile system covers about merely 20% of the land area which is only 6% of the entire earth surface [102]. In contrast, the satellite system can offer global coverage and higher survivability in case of disasters such as earthquakes, but suffers long transmission distance hence high latency. Therefore, 5G strives for the integration of terrestrial and satellite communication systems, aiming at obtaining ubiquitous coverage and high quality of service. 
The most popular conventional satellite communication standard is the digital video broadcasting - satellite - second generation (DVB-S2) standard and its extension (DVB-S2X) from the European Technical Standards Institute (ETSI) [103]. Meanwhile, 5G NR based satellite communication has been widely discussed by standard organizations such as 3GPP and International Telecommunication Union (ITU) [104]. Compared with 5G NR, the target spectral efficiency is lower in DVB-S2X, and certain functions, such as uplink synchronization, hybrid automatic repeat request (HARQ), control channel, discontinuous reception (DRX), RRM measurement, mobility management and core network, are in lack. Therefore, DVB-S2X is more suitable for fixed wireless access rather than mobile communication service. In 3GPP, NTN was first introduced in 5G NR Release 17 for a unified standard for both the terrestrial communication and the satellite communication, the details of which can be found in [105, 106, 107]. Compared with terrestrial communication, Fig. 4: Operation with different UAV antennas. satellite communication has its unique characters in terms of propagation channel, propagation delay, satellite mobility, cell radius, multi-layer coverage, and so on [108]. The 3GPP NTN work has been addressing the following main technical challenges: * **Challenges in transmission systems**: In the satellite-ground integrated network transmission, Doppler frequency shift, frequency management and interference, power limitation, and timing advance are urgent problems to be solved. For Doppler frequency shift, 5G adopts multi-carrier OFDM in the transmission system, and its subcarrier spacing design does not consider the impact of large Doppler frequency shift, which can bring interference between subcarriers. In terms of power limitation, it is necessary to ensure high frequency band utilization and reduce the signal peak-to-average ratio. With regard to timing advance, the rapid change of wireless link transmission delay may necessitate dynamic updates of the timing advance of each terminal to ensure that all UL transmissions are synchronized. * **Challenges in access and resource management**: Taking LEO satellites as an example, the RTT between an LEO satellite and an earth terminal ranges from 1.2 ms to 13.3 ms, while that for typical terrestrial communications is only up to 1 ms. The long delay of the satellite-ground integrated network brings challenges to the access control, HARQ, and other processes of medium access control (MAC) and radio link control (RLC) layers. In terms of access control, in order to support the effective integration of terrestrial and satellite systems, it is necessary to design reasonable access mechanisms such as pre-grant, semi-continuous scheduling, and grant free access. For HARQ, which has strict requirements on time, the round-trip time usually exceeds the maximum timer length of HARQ. In the scheduling process of MAC and RLC layers, the long delay of the satellite system will also affect the timelines of scheduling, thus its scheduling delay parameters need to be adjusted. * **Challenges in mobility management**: In the satellite-ground integrated network, the challenge of mobility management is even severer. According to the communication level, it can be divided into network-level handover and link-level handover. 
According to the application scenarios, it can be divided into inter-cell handover on the ground, handover between satellites and ground cells, handover between satellite cells, and inter-satellite handover.

The main solution standardized in 3GPP Release 17 to address the above challenges is to enable the NTN network to broadcast satellite ephemeris-related data. Furthermore, the NTN UE is assumed to be equipped with GNSS capabilities. With ephemeris data and GNSS capabilities, the NTN UE can calculate the relative speed and RTT between the UE and the satellite to precompensate its UL frequency and transmission timing. The UE can also utilize the ephemeris data to predict the trajectory of the LEO satellites over time, facilitating mobility management in NTN networks. We refer interested readers to [104] for more details. The latest Release-18 NTN objectives focus on the applicability of the solutions developed for general NR coverage enhancement to NTN, particularly for uplink channels [109]. Additional evolution of NTN in 3GPP is possible to further improve user experience. In particular, at a certain point in the future, satellite communication may be integrated within 6G systems, deployed at high, medium and low orbits, and working jointly with terrestrial communication. The terrestrial network may be the core of the integrated system and control the entire space-based communication system.

### _Future Evolution through RIS_

We anticipate that 3GPP will continue topological evolution to embrace emerging technology trends on the path to 6G. One potential major topological evolution can be the inclusion of RIS. Broadly speaking, RIS is an umbrella term for two-dimensional metamaterial-based arrays that can manipulate electromagnetic waves via anomalous reflection, refraction, polarization transformation, among other functionalities [110, 111]. Herein we focus on RISs configured as anomalous reflective and/or refractive surfaces that can customize the propagation environment by reflecting and/or refracting signals to desired directions.

Fig. 5: Illustration of anticipated typical use cases of RISs.

Fig. 5 illustrates typical use cases for RISs. Among the various types of RISs, the reflective RIS is the most common; it reflects signals from the source node to the destination node while consuming little energy, leading to high spectral efficiency and energy efficiency. The power allocation, phase shift design, beamforming strategies, channel estimation schemes, and a variety of other aspects regarding reflective RISs have been widely studied in the literature [112, 113, 114, 115, 116]. Note that both reflective RISs and relays can provide range extension. Compared with the conventional relay, RIS has some distinguishing features in terms of hardware complexity, power budget, noise, and average end-to-end SNR [117], as listed in Table II. Qualitatively, an RIS can be regarded as a full-duplex MIMO relay without self-interference and signal power amplification [118]. Overall, RIS-assisted transmission may outperform relay-aided transmission in terms of data rate if the size of the RIS is sufficiently large [117], but the performance gain may be limited by the quantization of the phase shifts. Thus, it remains subject to more comprehensive studies whether RISs or relays/repeaters are more suitable for commercial deployment, especially when considering various factors including cost, use cases, and maintenance.
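To give a feel for the SNR behavior listed in Table II, the following minimal Python sketch (not a 3GPP algorithm; the element counts and i.i.d. Rayleigh channels are illustrative assumptions) co-phases the elements of a reflective RIS so that the cascaded paths add constructively; the received power then grows roughly with the square of the number of elements N, which is the regime in which an electrically large RIS can rival a relay:

```python
import numpy as np

rng = np.random.default_rng(1)
for N in (16, 64, 256):  # number of RIS elements (illustrative)
    # i.i.d. Rayleigh channels: base station -> RIS and RIS -> UE
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    theta = -np.angle(h * g)                  # co-phase each cascaded path
    eff = np.sum(h * np.exp(1j * theta) * g)  # effective end-to-end channel
    print(f"N = {N:3d}: received power ~ {np.abs(eff)**2:8.1f}")
```

The phase choice simply cancels the phase of each cascaded coefficient, so the effective channel becomes the sum of the magnitudes |h_n||g_n|; squaring this sum yields the familiar N^2 power scaling.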
To strive for better performance and to take advantage of both techniques, some researchers proposed combining relays with RISs and named this architecture hybrid relay-RIS (HR-RIS), in which a single or a few elements of the RIS act as active relays, while the remaining ones only reflect the incident signals. HR-RIS potentially excels in spectral efficiency and energy efficiency in harsh scenarios, such as when the transmit power is low and/or when the HR-RIS is located far away from the transmitter [119]. Another type of RIS deployment further exploits its transmissive properties to extend its application scenarios [120, 121], such as the simultaneous transmitting and reflecting RIS (STAR-RIS) [122], where part of the incident signal is reflected into the space in front of the RIS and the remaining part is refracted to the other side of the RIS. The substrate of a STAR-RIS can be optically transparent such that it does not interfere aesthetically or physically with the surrounding environment or people's line of sight. The power ratio of the reflected and penetrated signals is determined by the hardware structure and needs to be appropriately optimized to enhance the performance across all users. Besides theoretical analysis and simulations, early prototyping and testing have also been conducted to demonstrate the performance gain brought by RISs, including a recent multi-frequency field trial using off-the-shelf 5G UEs and reference signals defined in 3GPP 5G standards [123], which showed 15 to 35 dB RSRP enhancement by RISs. Despite the great potential of RIS and some pioneering field trials, a number of key challenges need to be handled in practical implementation, e.g.:

* Inconsistent phase offset of an RIS across subcarriers for broadband communications,
* Channel estimation and feedback overhead for passive RISs,
* Efficient power supply for RISs,
* Performance degradation owing to element failure, and
* Ratio of active and passive elements in an RIS considering the tradeoff among channel estimation complexity, spectral efficiency, and energy efficiency.

Therefore, compared with its counterparts such as small cells and relays, whether RIS can achieve eventual commercial success requires further investigation and more extensive trials.

## VI XR Evolution

In this section, we will start by describing XR-specific service requirements, followed by several key considerations in serving XR in 3GPP. We will also provide a brief overview of the efforts related to XR outside 3GPP.

### _XR-specific Service Requirements_

XR is an umbrella term covering applications including AR, VR, or other forms of expanded and immersive reality applications. XR as a service differs from traditional mobile broadband services, as it demands low latency and high data rates in line with the XR-codec periodicity. In particular, video codecs often have a variable frame rate, which is not necessarily evenly aligned with the frame structures used in 5G networks. A typical frame rate of 60 frames per second (fps) results in a periodicity of approximately 16.67 ms, which naturally does not map well onto the 10 ms 5G frame structure with a transmission time interval of either 1 or 0.5 ms, depending on the frequency band and the resulting sub-carrier spacing in use for the sub-6 GHz frequencies.
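The mismatch between the video frame period and the radio frame can be made concrete with a few lines of arithmetic. The short sketch below (the frame rate and radio frame duration are the figures quoted above; the loop bound is arbitrary) shows how the arrival offset of successive 60 fps video frames drifts within the 10 ms radio frame:

```python
fps = 60.0
frame_period_ms = 1000.0 / fps   # ~16.667 ms between video frames
radio_frame_ms = 10.0            # 5G radio frame duration

for k in range(4):               # a few consecutive video frames
    t = k * frame_period_ms
    print(f"video frame {k}: arrives at {t:7.3f} ms, "
          f"offset {t % radio_frame_ms:.3f} ms within the radio frame")
```

The offsets cycle through 0, 6.667, and 3.333 ms before repeating every 50 ms, so a scheduler locked to the radio frame cannot simply reserve a fixed slot position for every video frame.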
Besides the actual video transmission, it is necessary to enable relatively frequent control or pose update signaling, and to keep the experienced delay below 10 ms, given that in some of the traffic models in use [124] the control/pose update signaling is generated every 4 ms. Furthermore, XR applications will likely have different detailed timing characteristics, and thus the envisioned solutions need to be configurable enough to match them.

### _Key Considerations for the Support of XR in 3GPP_

The first key step in supporting XR traffic efficiently is to identify such traffic in the first place, which may be performed, for example, by extending the 5G quality-of-service (QoS) flow identifiers to enable the detection of the packets that are part of an XR service. Once XR packets are identified, the scheduling functionality needs to be adapted to the timing characteristics of a given XR service, such as the already mentioned frame rate as well as the jitter that can be tolerated. Additionally, when dealing with services like VR, there is also a trade-off between latency and the required maximum data rate. When the connection has low enough latency, the system can send accurately only the content covering the current field of view, as it can react fast enough to changes of gaze focus. A larger latency, instead, must be compensated by also sending with high accuracy those parts of the view that would not necessarily be in the current field of view otherwise. Of course, this also requires that the servers providing the service are not physically located too far from the mobile user, in order to benefit from taking the gaze focus into account, as shown in Fig. 6.

TABLE II: Comparisons between RISs and relays

| | Relay | RIS |
| --- | --- | --- |
| Hardware complexity | A series of active electronic components | Low-power and low-complexity electronic circuits (switches or variants) |
| Power budget | A large number of active electronic components with high power consumption; total RF power is often allocated between the transmitter and the relay | Almost all RF power is allocated to the transmitter instead of between the transmitter and the RIS |
| Noise | Additive noise or loop-back self-interference | Phase noise |
| Average end-to-end SNR | Proportional to the inverse of the transmission distance | Electrically large RIS: proportional to the inverse of the transmission distance; electrically small RIS: proportional to the inverse of the square of the transmission distance [117] |

Another important factor in XR applications is UE power saving, as it is desirable to use small and light devices for XR services. The continuous use of the high data rates suited for low latency causes the mobile devices to consume too much power. Exploiting the moments when no XR data is arriving for DRX operation and other reductions in the receiver processing pipeline allows the UE to save power and, in turn, improves the usability of XR services. Additionally, creating some breaks in the data stream is beneficial from the network power consumption point of view as well. To make XR a true mobile service, the achievable mobility performance is also important.
While the reliability of the connection is fundamental, it is also essential to minimize the interruption duration, as otherwise the interruption in the data flow becomes visible in the end-user video stream, or the motion-related control feedback experiences too much delay. Many other improvements foreseen by 5G-Advanced will benefit XR service provision, especially improved uplink coverage and system capacity. In addition to the radio interface aspects, edge computing-related improvements have a direct relationship with XR, as XR application processing in the network needs to be performed sufficiently close to the end user so as to reduce the total end-to-end latency. 3GPP has now established traffic models and evaluation assumptions for XR evaluation over NR [125] and studied the potential improvements for XR delivery over 5G-Advanced [126], and XR improvements are now being specified for Release 18 [126]. Importantly, this makes the delivery of XR services more efficient and, thus, available to a larger number of simultaneous XR users than in the first-phase 5G networks. As XR requires end-to-end handling, not just in the radio access network but also in what concerns edge computing, a significant deal of work addresses core network-related improvements, especially looking at QoS evolution and application awareness to meet the requirements of XR services [127]. In addition, related to the provision of XR services, 3GPP is active in developing standards for appropriate audio and video codecs, as they will also help deliver XR services to a larger number of users.

### _XR outside 3GPP_

In recent years, XR has attracted a great deal of interest from both research and commercialization perspectives. While the XR commercial market size is already in the order of billions of US dollars, further significant market growth is expected in the upcoming years with the development of advanced commercial products. XR devices may be tethered by cables or supported wirelessly. The former facilitate satisfying the extreme requirements, but limit users' mobility and quality of experience. Wireless-connected XR devices can instead leverage advanced wireless technologies to eliminate cables, enabling more enjoyable user mobility anywhere and at any time [128, 129]. While XR may be provided in LTE networks [130], 5G NR is expected to take the VR/AR experience to the next level [131], thanks to extremely high throughput (multi-Gbps), ultra-low latency (down to 1 ms), and uniform experience (even at the cell edge). The work [132] underscores the importance of VR technology as a disruptive use case for 5G, and it presents a list of research avenues (e.g., caching, short-range wireless communications, context information and analytics, computer vision and media, etc.) and scientific challenges (e.g., "Shannon-like" theory to maximize users' immersive experience, quality-rate-latency tradeoff, in-VR vs. in-network computation, scalability, privacy, localization, and tracking accuracy). Furthermore, [133] lists a set of technical enablers including mmWave communications, edge computing, proactive computing and caching, multi-connectivity, and multicasting, while [134] focuses on the reliability and latency achieved for VR services over a THz cellular network.
The study [135] covers resource management for a network of VR users over small cell networks. The problem is formulated as a non-cooperative game, from which a distributed algorithm is proposed that leverages ML to predict VR QoS for improved QoS utility.

## VII Sidelink Evolution

In this section, we first introduce the sidelink evolution in 3GPP standards and then discuss its safety applications. After that, we describe future sidelink evolution towards 6G, with a focus on mesh networking.

### _Sidelink Evolution in 3GPP Standards_

Before LTE Release 12, standardization of over-the-air transmissions in 3GPP was primarily focusing on DL and UL. The backhaul link in the relaying context was standardized in Release 10, where a relaying node is assumed to be in-band and half-duplex [136]. With the increasing need for D2D communications, 3GPP started to study and specify D2D in Release 12 [137] and further evolved it into V2X using the so-called sidelink, as illustrated in Fig. 7. Fig. 8 illustrates the timeline of sidelink standardization in 3GPP. The standardization started from Release 12 D2D communications, followed by LTE V2X support from Release 14, and 5G NR V2X support from Release 16. The standardization of V2X in LTE and 5G NR is commonly referred to as cellular V2X or C-V2X.

Fig. 6: Illustration of XR service.

The D2D standardization in Release 12 covers discovery, synchronization, broadcast and groupcast communications [138]. Discovery is targeted towards commercial applications, where a discovery signal can be periodically transmitted with roughly 200 bits of information. Communications are for public safety applications only, without any physical layer feedback such as HARQ-acknowledgement (HARQ-ACK) or CSI. The physical layer structure for D2D is completely compatible with LTE for smooth co-existence, where D2D may only occupy LTE UL subframes. The resources for D2D can be either allocated by an eNB or randomly selected. D2D communications can also be supported for cases when a UE is out of network coverage or partially in coverage. Enhanced D2D operations were specified in LTE Release 13 for both discovery and communications [139]. Discovery was expanded to accommodate inter-frequency and inter-public land mobile network (PLMN) cases, and can be performed with a transmission and reception gap. Out-of-coverage discovery is supported as well. Enhancements to D2D communications include layer-3 UE-based relay, priority handling, and multiple destination transmissions.

Starting from LTE Release 14, V2X is supported in 3GPP. The D2D interface (also known as the PC5 interface) is used to support not only vehicle-to-vehicle (V2V), but also vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P) and vehicle-to-network (V2N) communications, going beyond safety and collision avoidance [140]. The primary target spectrum is the unlicensed intelligent transportation systems (ITS) spectrum [141]. During the standardization, the maximum vehicle speed was assumed to be 250 km/h, implying a maximum relative speed of 500 km/h. At a carrier frequency around 6 GHz, this results in a Doppler shift of about 2.7 kHz and consequently channel variations within a 1 ms subframe, bringing various design challenges. Wireless resources for V2X can be either scheduled by an eNB or selected in an autonomous manner with the help of sensing. The sensing is performed based on a combination of priority information, energy sensing, and sidelink control channel decoding.
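The quoted Doppler figure follows directly from the classical relation f_d = (v/c) f_c. The following quick check (values taken from the text above) reproduces the roughly 2.7 kHz shift:

```python
c = 3.0e8            # speed of light (m/s)
fc = 6.0e9           # carrier frequency (Hz)
v = 500.0 / 3.6      # 500 km/h maximum relative speed, in m/s

fd = v / c * fc      # maximum Doppler shift
print(f"max Doppler shift: {fd:.0f} Hz")  # ~2778 Hz, i.e., about 2.7 kHz
```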
Congestion control is also supported, such that resource utilization becomes restricted upon congestion based on a channel busy ratio (CBR). Prioritization between the access link transmissions and V2X transmissions may be necessary. Additional enhancements for V2X were also made in LTE Release 15 [142], focusing on aggregating multiple carriers and enhancing autonomous resource allocation.

The work on 5G NR V2X started in Release 16 [143]. 5G NR V2X is designed to be backward compatible at upper layers so that it can co-exist with LTE V2X (as in Releases 14 and 15). In particular, Release-16 V2X is designed to support Release-14 and Release-15 V2X for safety use cases. Additionally, Release-16 V2X enables new services such as real-time updates and coordinated driving, especially those requiring high reliability and QoS support. Different from Release-14 and Release-15 V2X, where only broadcast messages are supported, Release-16 V2X additionally supports groupcast and unicast. Spectral efficiency is significantly improved, e.g., via a higher modulation order (up to 256-quadrature amplitude modulation (256QAM)) and MIMO transmissions (up to two layers). Similar to the access link, flexible numerologies are also supported, e.g., with subcarrier spacings of 15, 30, 60, and 120 kHz. To cope with extremely high mobility, a range of DMRS patterns are supported with different time-domain densities. Enhancements in Release-17 NR V2X [144] are mainly focused on two aspects: power savings and resource allocation. Power savings are achieved via the support of partial sensing and sidelink DRX operations. Resource allocation can be improved significantly when two or more UEs coordinate with each other and share the resource availability/non-availability information, where the information may be assessed based on various factors (e.g., sensing and capability). These resource allocation enhancements may bring improved reliability, reduced latency, and reduced UE power consumption. NR sidelink will be further enhanced in Release 18 in a more expanded manner [145, 80, 146], including the following areas:

* Sidelink carrier aggregation (CA) operation;
* Sidelink in unlicensed spectrum;
* Enhanced sidelink operation for FR2 licensed spectrum;
* Mechanism(s) for co-channel coexistence for LTE sidelink and NR sidelink;
* Sidelink relay enhancements;
* Sidelink positioning and ranging.

With the above items, it is expected that not only will the support of C-V2X be more comprehensive and effective, but the support of other services (e.g., network controlled interactive services via sidelink) will also be enabled or enhanced.

Fig. 7: Illustration of various link types in 3GPP.

Fig. 8: Sidelink standardization timeline in 3GPP.

### _C-V2X for Safety Applications_

Given the compelling need to reduce car accidents and traffic fatalities involving vulnerable road users (VRUs), such as pedestrians, bicyclists and e-kick scooters, safety applications enabled by mobile communications have become crucial to ITS. Such applications are particularly relevant in the presence of obstacles, such as buildings, that prevent a driver or the sensors aboard an autonomous/automated vehicle from realizing the danger in a timely manner. Most of these applications leverage messages such as the ETSI cooperative awareness messages (CAMs), which can be periodically broadcast by the vehicle's radio interface and include the vehicle's position, speed, acceleration, and heading [147].
Additionally, as discussed in [148], safety applications can often benefit from messages generated by road infrastructure or smart city entities (e.g., cameras, and traffic controllers or dynamic map generators located at the edge of the network infrastructure), as they can effectively complement the local view that each single vehicle conveys through the transmission of, e.g., CAMs. Several studies have therefore appeared in the literature proposing solutions for safety applications enabled by C-V2X, where the X indeed stands for everything, including other vehicles or VRUs, the network infrastructure, or an edge server. The recent focus on C-V2X as the communication technology for connected cars has also been motivated by two important facts. Firstly, as reported in the previous section, the 3GPP C-V2X standardization efforts aim at meeting increasingly challenging requirements for advanced services. Secondly, there is a clear regulation shift towards C-V2X, confirmed by, e.g., the Federal Communications Commission (FCC) regulations that have changed the use of the 5.9 GHz frequency band reserved for ITS operations from dedicated short range communications (DSRC) to C-V2X. A useful taxonomy of the types of V2X communication and the applications that they can support can be found in [149] and is highlighted in Table III. Specifically, the table lists the main safety applications enabled by C-V2X and the type of communication they require, ranging from periodic V2V/V2I/V2P messages, to V2V/V2I event-driven messages, V2V bi-directional and multi-hop communication, and V2P/V2N bi-directional communication. Relevant examples of solutions that appeared in the literature, targeting the support of safety applications through the aforementioned types of messages, include [150], where V2V periodic messages allow for collective perception sharing based on Release-14 LTE PC5 mode 4, and [151], which envisions mobile base stations deployed on UAVs to improve the connectivity offered to vehicles by terrestrial base stations. The use of V2V event-driven messages, combined with CAM transmissions, has instead been leveraged in [152] to implement a cooperative lane merging application using the LTE PC5 mode 4 technology. Interestingly, [153] and [154] have highlighted the limitations of the configuration of PC5 mode 4 interfaces in Release-14 LTE and, especially, of the autonomous sidelink resource allocation algorithm to transmit V2V event-driven messages - an issue that has since been overcome in Release-16 NR PC5 mode 2 [155]. For example, the sensing window of 1 s used in Release-14 LTE PC5 mode 4 works well for V2V periodic messages, but it is too long for asynchronous traffic, in which case a 100 ms window turns out to perform much better. Furthermore, it is worth mentioning that the combination of V2V periodic and event-driven messages has been successfully leveraged to also support platooning applications [156]. As far as V2V bi-directional messages are concerned, they are useful in the see-through application, where a vehicle requests and receives light detection and ranging (LiDAR) or camera data from another vehicle, so it can "see" beyond the visibility range of its radio and sensors. To support the exchange of such bi-directional information, [157] has investigated the use of mmWave communications, while the results of field tests with pre-5G NR Uu interfaces have been reported in [158].
Finally, V2V multi-hop messages have been used mainly to extend the propagation of safety messages in vehicular networks, as in [159], where vehicle clustering and Release-14 LTE PC5 mode 3 were used. Many safety applications for vehicles and VRUs are also supported through V2I messages, which are periodic and event-driven, and sent from vehicles to the infrastructure and then re-broadcast by the latter through infrastructure-to-vehicle (I2V) or infrastructure-to-pedestrian (I2P) messages, so that they can be disseminated over a larger area and leveraged at each mobile entity as input to, e.g., a collision avoidance algorithm [160, 161, 162, 163]. I2V messages can also be effectively used for in-vehicle replication of road signaling (e.g., traffic light information) and to support autonomous driving by providing vehicles with information generated by road infrastructure entities or traffic controllers [164, 165], as well as for dynamic map download and update [166].

TABLE III: Road safety applications supported by different message types

| Message Type | Supported Applications |
| --- | --- |
| V2V periodic | Cooperative collision warning, intersection movement assistance, slow vehicle warning, cooperative glare reduction, collective perception |
| V2V event-driven | Stationary vehicle warning, emergency electronic brake lights, queue/traffic jam ahead warning, road condition warning |
| V2V bi-directional | See-through |
| V2V multi-hop | Coverage extension of ITS messages |
| V2P periodic | Pedestrian collision warning to vehicles, vehicle collision warning to pedestrians |
| V2I periodic | In-vehicle signage, curve speed warning |
| V2I event-driven | Infrastructure-based collision warning, warning of vulnerable road user presence, infrastructure-based traffic jam ahead warning, infrastructure-based road condition warning |
| V2N bi-directional | Dynamic map download and update |

### _Future Evolution_

5G NR-based sidelink enables UE-to-network and UE-to-UE relaying to significantly improve network performance for C-V2X and public safety. Looking ahead to 6G, it is possible for sidelink to expand existing C-V2X capabilities and to support new use cases such as aerial networking with flexible network architectures. Furthermore, on top of the single-hop relay functionality that is already supported by the existing 3GPP sidelink, multi-hop routing/networking will allow multiple UEs and UAVs to connect and create a mesh network under various coverage scenarios for different applications and use cases.

mmWave D2D relaying has been introduced in [167] to improve the coverage and spectral efficiency of a wireless network. With the help of D2D relaying, it is possible to form a mesh network with dynamic network topologies and architectures. Meanwhile, it is also possible to create a three-dimensional (3D) mesh network by incorporating UAVs and NTN nodes such as high altitude platform stations (HAPS) and satellites. For example, a swarm UAV network has been introduced in [168] for substantial throughput enhancement. Recently, there has also been tremendous interest in planning novel LEO satellite systems, called mega-constellations, to form a multi-layer mesh network [167]. Routing and resource allocation are critical for mesh networks, and these problems will become even more challenging in a 3D network with multiple layers.
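As a toy illustration of the routing problem in such a multi-layer mesh, the sketch below runs plain Dijkstra over a hypothetical 3D topology (node names and link costs are invented for illustration; real deployments would use far richer metrics and, as discussed next, possibly AI/ML-driven path selection):

```python
import heapq

def dijkstra(graph, src):
    """Least-cost paths from src over a weighted mesh graph (adjacency dict)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical 3D mesh: ground UEs, a UAV relay, and a LEO satellite,
# with edge weights as illustrative one-hop delays in ms.
mesh = {
    "ue1": {"ue2": 1.0, "uav": 2.0},
    "ue2": {"ue1": 1.0, "uav": 1.5},
    "uav": {"ue1": 2.0, "ue2": 1.5, "leo": 8.0},
    "leo": {"uav": 8.0},
}
print(dijkstra(mesh, "ue1"))  # e.g., ue1 reaches the LEO layer via the UAV
```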
Traditionally, cooperative communication and routing strategies [169, 170, 171, 172] have been introduced for D2D mesh networks to improve the underlying network performance. Random network coding-based cooperative routing strategies have also been introduced in swarm UAV networks to reduce transmission delay with increased communication reliability [173]. For more complicated multi-layer mesh networks, it is envisioned that AI/ML could be adopted to adjust routing paths [174], to conduct distributed and dynamic resource allocation [175], and to optimize handover decisions within and among layers of the 3D NTN architecture [176].

## VIII AI/ML Evolution

In this section, we first provide an overview of the 3GPP management system for network slicing and data analytics. Then we introduce the 3GPP work on AI/ML for NG-RAN and NR air interfaces. After that, we discuss the AI-native air interface towards 6G.

### _Management System and Data Analytics_

One of the key features of 5G is the ability to support network slicing, i.e., to create multiple logical networks on a shared infrastructure spanning from the core network to the radio access network. Each network slice is composed of one or more NFs, which can be physical or virtual (PNF and VNF, respectively). Furthermore, each network slice is characterized by a set of key performance indicators (KPIs) and, possibly, isolation requirements. In order to deploy, orchestrate, and manage a network slice, and hence the NFs therein, 3GPP has defined a 5G management system, as illustrated in Fig. 9. A network slice subnet (NSS) provides the management aspects of the NFs managed through the 5G management system, along with the corresponding required computing, memory, and networking resources [177]. The set of NFs, either physical or virtual, composing a network service represents the actual implementation of an NSS, i.e., a network slice subnet instance (NSSI). The NSS management system in Fig. 9 includes the main components, namely the NSS management function (NSSMF) and the NF management function (NFMF), as well as the interface with another key element of the 3GPP network functions virtualization (NFV) architecture, i.e., the NFV management and orchestration (NFV-MANO). Importantly, the 3GPP Release-17 management and orchestration architecture framework [177] also introduces a service based management architecture (SBMA) for the 5G management system, as depicted in Fig. 10. This comprises management functions (MFs) that provide multiple management services (MnS) to manage single functions (NFMF), network slice subnets (NSSMF), communication services (CSMF), and the exposure of such services towards external entities (EGMF). It is worth noticing that the SBMA also comprises the management data analytics function (MDAF), which is in charge of delivering analytics services for automated network management and orchestration. Such data-driven decisions drive the logic of the NSMF and NSSMF, which manage the lifecycle of the network slices and the orchestration of their resources. It is also worth remarking that the MDAF receives as input monitoring data on the existing NSS or VNF performance, collected from the network (gNBs in the RAN or specific core network functions) and its management system, as well as analytics data offered by the network data analytics function (NWDAF) [178].
In other words, the MDAF and the NWDAF operate at the management level and at the core network level, respectively, with, e.g., the former being concerned with information on the load level of a slice instance across RAN and core network domains, while the latter deals with data and control in the core network domain. The MDAF then provides analytics results to drive the decisions related to the NSS life cycle management and the NS orchestration, including RAN and core network domain-related decisions. In particular, the NWDAF not only produces analytics related to the core network domain and network functions and feeds them to the MDAF, but it also leverages the MDA reports to control the core network. Interestingly, a 5G network can feature more than one instance of NWDAF, each associated with a different service area and possibly specialized in providing a certain type of analytics, and with one of the deployed instances acting as an aggregator [179].

Fig. 9: Network slice subnet management with interface to NFV-MANO [177].

Fig. 10: Example of functional management architecture [177].

### _AI/ML for 5G Radio Access Networks_

Data driven techniques such as AI/ML-based solutions can be a powerful tool to address the challenges of 5G RANs. 5G RANs need to address complex system design and large network optimization problems to meet a wide range of performance requirements, including coverage, data rate, latency, reliability, and user experience. Although a vast number of AI/ML applications in the RAN can be left to proprietary implementations and solutions, investigating the needed standards support is essential to accelerate the use of AI/ML techniques in 5G RAN and beyond [180, 181]. In Release 17, 3GPP conducted a study to investigate the functional framework for RAN intelligence enabled by further enhancement of data collection [182]. The study identified a set of high-level principles for AI-enabled RAN intelligence, such as focusing on AI/ML functionality and the corresponding types of inputs and outputs while leaving the detailed AI/ML algorithms and models to implementation. Leaving AI/ML algorithms and models to implementation can incentivize vendor competitiveness. The study also introduced a functional framework for RAN intelligence and investigated a set of use cases, including:

* **Network energy saving:** AI/ML algorithms can predict traffic load and energy consumption by leveraging the data collected in the RAN. The prediction can be used to help decide cell activation and deactivation strategies to improve network energy efficiency [183] (see the illustrative sketch below).
* **Load balancing:** AI/ML algorithms can predict traffic load and automate the optimization of mobility management parameters to improve user experience and system capacity [184].
* **Mobility optimization:** AI/ML algorithms can enhance mobility performance by reducing the probability of unintended events (e.g., handover failure, radio link failure). AI/ML algorithms can also predict UE location and mobility trajectory, which can serve as valuable inputs for RRM and traffic steering.

After completing the Release-17 study on AI-enabled RAN, 3GPP continues to conduct the corresponding normative work in Release 18 [185]. The Release-18 work item aims to specify data collection enhancements and signaling support for AI/ML-based network energy saving, load balancing, and mobility optimization. The enhancements will be introduced within the existing 5G RAN interfaces and architecture.
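As a concrete (and deliberately naive) stand-in for the network energy saving use case above, the sketch below replaces a trained AI/ML predictor with a moving average and applies a hysteresis rule for cell activation; the thresholds and the toy load trace are assumptions for illustration only, not standardized behavior:

```python
import numpy as np

def predict_next_load(history, window=8):
    """Naive moving-average predictor standing in for a trained AI/ML model."""
    return float(np.mean(history[-window:]))

def cell_action(predicted_load, sleep_thr=0.15, wake_thr=0.30):
    """Hysteresis between the two thresholds avoids ping-ponging the cell."""
    if predicted_load < sleep_thr:
        return "deactivate cell (energy saving)"
    if predicted_load > wake_thr:
        return "activate cell"
    return "keep current state"

# Toy normalized load trace for one cell over 48 measurement periods
load = np.clip(0.4 * np.sin(np.linspace(0, 3, 48)) + 0.2, 0.0, 1.0)
p = predict_next_load(load)
print(f"predicted load {p:.2f} -> {cell_action(p)}")
```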
Once the normative part is completed, additional use cases are expected to be taken under study as part of a new study item for AI-enabled RAN. To reap the full benefits of AI/ML for 5G, exploring the potential of augmenting the NR air interface with AI/ML-based features is indispensable. To this end, 3GPP Release 18 includes a study item that investigates AI/ML for the NR air interface to improve performance [90]. The study aims to set up a general framework for AI/ML related enhancements for the air interface, such as characterizing the defining stages of AI/ML related algorithms and the life cycle management of AI/ML models, exploring different levels of collaboration between UE and gNB, and investigating the datasets needed for AI/ML related enhancements for the NR air interface. The study takes a use case-centric approach, focusing on the following three use cases:

* **CSI feedback enhancements:** AI/ML algorithms can be used for CSI compression in the frequency domain and CSI prediction in the time domain to reduce overhead and improve accuracy.
* **Beam management:** AI/ML algorithms can be used for beam prediction in the spatial and time domains to reduce overhead and latency and improve beam selection accuracy.
* **Positioning accuracy enhancements:** AI/ML algorithms can be used for direct AI/ML positioning (e.g., fingerprinting) and AI/ML-assisted positioning (e.g., where the output of the AI/ML model inference is a new measurement or an enhancement of an existing measurement) to improve positioning accuracy.

For each use case, the study aims to establish evaluation methodologies and determine KPIs to thoroughly evaluate the performance benefits of AI/ML algorithms for the NR air interface. Last but not least, the study assesses the potential specification impact of AI/ML-related enhancements for the NR air interface, including physical layer aspects, protocol aspects, and interoperability and testability aspects. Normative work is foreseen for Release 19.

### _AI-native Air Interface_

Moving beyond 3GPP Release-18 5G-Advanced towards 6G, it is important to consider an AI-native air interface, where AI/ML models and tools are relied upon to design individual network components or a purely end-to-end communication system. This paradigm shift in the approach to network design represents a potential disruption to the established cellular system design procedure, extending to the deployment and operation stages as well.

#### VIII-C1 AI-native end-to-end design for the physical layer

The receiver block in the physical layer is the focus of most research efforts in infusing AI/ML-based solutions to overcome issues such as model mismatch and complexity in traditional model-based processing [180]. For example, AI/ML-based techniques have been used to efficiently combat the non-linear distortion created by the power amplifier [186] and low-resolution ADCs [187]. They have also been applied to transmitter blocks with the intent of optimizing performance compared to, and in addition to, existing closed-loop feedback methods. For example, [188] uses deep learning for combined channel estimation and hybrid precoder design. The AI-native approach, however, requires that ML methods are used fundamentally to design the entire chain, which includes the transmitter, channel, and receiver. There exists work on end-to-end physical layer design which aims to jointly train the transmitter and the receiver, while catering to simpler additive white Gaussian noise (AWGN) [189] or Rayleigh block fading channel [190] scenarios.
Building on this, [191] has developed a jointly trained neural network based transmitter and receiver that achieves state-of-the-art bit error rates without needing pilot transmissions in practical wireless channel models. Although promising, the AI/ML-based receiver still needs to be trained offline over a significantly large number of frames, which may be an impediment to practical adoption, since online operation is critical for cellular networks, especially when the adaptation is on-the-fly.

#### VIII-C2 AI-native design for the MAC layer

Given the complex nature of the tasks that the MAC layer is responsible for (including, but not limited to, user selection, random access, and resource allocation), the solutions to these problems are mostly heuristic in nature. Since it is extremely challenging to identify optimal solutions for these problems in real time, there is a large potential in leveraging AI/ML-based solutions for MAC layer problems. In this context, reinforcement learning usually finds itself best placed as the ideal tool to handle the time-varying network conditions that may be attributable to the users or the channel. There are examples of ML-based solutions applied to a wide gamut of MAC layer operations, such as reinforcement learning-based approaches for random access [192] and for spectrum access/sharing [193], federated learning for minimizing breaks in presence in wireless VR [194], and predictive resource allocation in IoT networks [195]. However, it is fair to say that a truly "AI-native" MAC design that learns and evolves over time still needs significant work. In some ways, this involves learning an entire protocol from scratch and would need to utilize tools from deep multi-agent reinforcement learning [196]. Furthermore, the sample efficiency and convergence rate of the corresponding reinforcement learning tools need to be significantly improved for them to be relevant for practical networks, so that 6G and beyond can realize an AI-native design.

## IX Duplex Evolution

In this section, we will first provide an overview of duplex evolution in 3GPP, starting from the standardization of traditional frequency division duplex (FDD) and semi-static TDD operations, followed by dynamic operations for better traffic adaptation, and the most recent study on simultaneous transmission and reception at a gNB. We will then explore the related research outside 3GPP.

### _Duplex Evolution in 3GPP Standards_

#### IX-A1 FDD vs. TDD

FDD and TDD have been the two duplex technologies adopted in the 3G, 4G, and 5G standards of 3GPP. In FDD, a paired spectrum is allocated for DL, where a base station transmits and a device receives signals, and UL, where a device transmits and a base station receives signals. TDD does not require a paired spectrum and partitions the spectrum into DL and UL in the time domain. It is natural to consider FDD for the case of symmetric traffic between DL and UL, e.g., when voice calls dominate the wireless traffic. Furthermore, the simultaneous existence of DL and UL using a paired spectrum enables simple and efficient system operations. An example is that a device can send its base station an acknowledgement of success or failure of DL data reception while keeping a fixed time interval between DL reception and UL transmission, as specified in the 3G high speed downlink packet access (HSDPA), 4G LTE, and 5G NR standards [197, 198, 199].
TDD has the benefit of offering flexible deployments with an unpaired spectrum and hence becomes more attractive when it is difficult to find a pair of frequency bands. This is a major reason for choosing TDD to deploy 5G systems using new spectrum such as the 3.5 GHz or 28 GHz bands, where it is difficult to find a paired spectrum. Furthermore, the channel reciprocity required for operating massive MIMO without excessive feedback overhead is an important advocate for adopting TDD. Provisioning different configurations of DL and UL resources in the time domain depending on the traffic demand of each link has been understood as an important benefit of TDD. In practical TDD cellular systems with macro cell deployments, on the contrary, it has been typical that a pre-defined DL-UL resource partitioning in the time domain and the same transmission direction (UL or DL) are maintained in all cells of the network of a mobile network operator (MNO) to prevent cross-link interference between the DL and UL of neighboring cells. A base station's DL transmission could cause serious interference to the UL reception of a neighboring base station. The UL transmission from a UE of one cell could become strong interference to the DL reception of another UE of a neighboring cell, especially when they are located close to each other in a cell boundary area. Furthermore, MNOs using adjacent bands typically align their DL-UL configurations to prevent interference between their networks. With the increasing amount of mobile traffic generated in hotspots and indoor environments, there is increasing interest in heterogeneous networks consisting of macro and small cells. In such a scenario, it could become more useful to have dynamic adaptation of the DL and UL allocation in the TDD operation of a small cell depending on its own traffic situation.

#### IX-A2 LTE enhanced interference mitigation and traffic adaptation (eIMTA)

3GPP made standards enabling the adaptation of the configuration of DL and UL subframes per 10 ms frame duration in LTE Release 12, which was known as eIMTA [200]. Additional measures for handling the cross-link interference (CLI) between DL and UL were specified in the LTE standards [198]. The eIMTA operation could cause severe variations in SINR between consecutive transmission time intervals even though the radio channel does not change. To provide proper awareness of the SINR fluctuation due to eIMTA, separate CSI measurements were introduced for the DL subframes and for the flexible subframes that can be either DL or UL. In order to avoid severe performance loss due to the CLI, UL power control was enhanced to allow different transmit power levels between the fixed UL subframes and the flexible subframes.

#### IX-A3 NR dynamic TDD

As a further evolution of TDD operation in 5G, 3GPP specified the dynamic TDD feature in the Release 15 NR standard, where L1 signaling of the slot format indication designates each symbol of a slot to be a DL ('D') symbol, UL ('U') symbol, or flexible ('F') symbol according to Table 11.1.1-1 in [199]. An 'F' symbol can be either DL or UL depending on the gNB's scheduling decision, and the transmission direction is indicated to the UE via L1 control signaling. A slot consists of 14 symbols, and 56 slot formats are defined to support various combinations of 'D', 'U', and 'F' symbols in a slot. It can be seen that the dynamic TDD feature of NR supports more frequent and flexible switching between the DL and UL directions in time than LTE eIMTA.
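To make the cross-link interference problem of dynamic TDD concrete, the following sketch compares two hypothetical 14-symbol slot formats (made up for illustration; actual formats come from Table 11.1.1-1 in [199]) and flags the symbols where one cell is in 'D' while the neighbor is in 'U', i.e., where gNB-to-gNB and UE-to-UE CLI can arise:

```python
# Hypothetical per-symbol slot formats for two neighbouring cells:
# 'D' = downlink, 'U' = uplink, 'F' = flexible; 14 symbols per slot.
cell_a = "DDDDDDDDDDDDDU"
cell_b = "DDDDUUUUUUUUUU"

def cli_symbols(fmt_a, fmt_b):
    """Symbol indices where one cell transmits DL while the other receives UL."""
    return [i for i, (a, b) in enumerate(zip(fmt_a, fmt_b))
            if {a, b} == {"D", "U"}]

print(cli_symbols(cell_a, cell_b))  # symbols 4..12 are potential CLI symbols
```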
For CLI handling, two UE measurements were specified in Release 16 [201]: (1) SRS reference signal received power (SRS-RSRP), which can be utilized by the gNB to estimate the interference from one UE's uplink to another UE's downlink; and (2) the CLI received signal strength indicator (CLI-RSSI), which can be utilized by the gNB to estimate the total amount of CLI experienced by a UE.

#### IX-A4 Duplex evolution in 5G-Advanced

To further enhance the NR TDD operation in 5G-Advanced, 3GPP is performing a study on the evolution of NR duplex operation in Release 18 [202], which consists of two major subjects. The first subject is enhancement of dynamic TDD to address, e.g., the gNB-to-gNB CLI, which may be caused by either adjacent-channel CLI or co-channel CLI, or both, depending on the deployment scenario. The second subject is the feasibility of allowing the simultaneous existence of DL and UL, a.k.a. full duplex. Full duplex can create the CLI observed with dynamic TDD as well as self-interference from Tx to Rx. The significance of the self-interference depends on the frequency allocation for Tx and Rx, as illustrated in Fig. 11, where the left case shows transmission and reception using non-overlapping frequencies while the right one shows transmission and reception using overlapping frequencies. Considering the expected complexity of mitigating the self-interference from Tx to Rx and the CLI, the study assumes full duplex with non-overlapping allocation of DL and UL subbands at the gNB side, while the UE performs conventional TDD operation. Such an approach could make continuous UL transmission possible in the TDD system that is widely used in commercial NR deployments, which would be beneficial to improve the coverage, latency, and capacity of the NR TDD system. The non-ideal spectrum shaping and the nonlinearity of the power amplifier at the Tx create self-interference and CLI at the Rx even though DL and UL use different subbands. Separation of the transmission and reception antennas, possibly with a special structure for signal isolation between them, and the use of proper beamforming at Tx and Rx can achieve a significant reduction of the self-interference. The remaining self-interference could be further suppressed by the use of RF filters and digital signal processing at the Rx. An Rx RF filter with a sharp spectrum shape can also help reduce the CLI from adjacent carriers. On the other hand, the CLI caused by transmissions in the same carrier, especially DL transmission with high transmit power from a neighbouring base station site interfering with UL reception, would be much harder to mitigate. This property of CLI may make it easier to apply full duplex in small cell deployments, where small cells (or small cell groups) are isolated so that the same-carrier CLI can be easily avoided. For wide-area macro cell deployments, proper coordination of the transmission direction or UL-DL frequency separation between neighbouring base station sites would be necessary.

### _Full Duplex Outside 3GPP_

Full duplex, also known as in-band full duplex (IBFD), where transmission and reception occur on the same frequency band, was first utilized as early as the 1940s in the context of full duplex radars [203]. Different from FDD or TDD, where transmission and reception are sufficiently separated, full duplex imposes daunting challenges on transceiver implementations due to self-interference (i.e., a receiver is interfered by its own transmitter due to leakage) and potential CLI, as discussed earlier.
There are numerous comprehensive reviews on IBFD in the literature [204, 205, 206, 207, 208, 209]. Reference [204] focuses on full-duplex radars and IBFD wireless communications, presenting the corresponding opportunities and techniques for self-interference reduction. Reference [205] provides a comprehensive tutorial on IBFD from the perspective of the physical and MAC layers. It summarizes the benefits, different topologies (i.e., bidirectional full duplex, full duplex relay, and full duplex cellular in a multiple access setting), challenges in self-interference cancellation, and IBFD for other purposes. Reference [206] compares more than 50 demonstrated IBFD communication systems with more than 80 different measurement scenarios, in terms of the corresponding isolation performance with respect to the center frequency, instantaneous bandwidth, and transmit power. A set of self-interference cancellation techniques is also summarized therein. Reference [207] presents a survey on IBFD in relaying scenarios, which covers the enabling technologies, key design issues, basics of IBFD, challenges and broader perspectives, and performance analysis. The tutorial in [208] concentrates on wireless communications: it lists the potential benefits offered by full duplex techniques, surveys the critical issues related to full duplex transmissions from a physical-layer perspective relying on self-interference suppression/cancellation while giving cognizance to the MAC-layer protocols, investigates the main hardware imperfections, discusses the advantages, drawbacks, and design challenges of practical full duplex systems, and identifies new directions and applications. Reference [209] contains a comprehensive survey on self-interference cancellation techniques in the antenna/propagation domain. It also discusses the opportunities and challenges of employing IBFD antennas in future wireless communication networks. Besides radar and wireless communications, particularly for 5G NR and 6G, full duplex also finds applications in many other areas, e.g.:

* Cognitive radio networks, where full duplex operations enable simultaneous information sharing and sensing for improved spectrum sharing [210, 211];
* Physical layer secrecy, where full duplex enables more simultaneous transmissions or the introduction of jamming noise signals, thus creating additional interference to eavesdroppers and increasing the so-called secrecy capacity (i.e., the highest possible data rate at which a secret transmission can be reliably conveyed without being eavesdropped) [212];
* Wireless power transfer, where simultaneous wireless energy transfer (e.g., via DL) and wireless information transfer (e.g., via UL) can be supported, which is important for energy-constrained wireless communication systems (e.g., wireless sensor networks) [213].

The challenges and potential solutions for IBFD have been well studied in the literature [204, 205, 206, 207, 208, 209]. Effective and efficient self-interference and/or inter-node interference suppression/cancellation are critical and challenging for practical IBFD operations.
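Before turning to the practical factors involved, a minimal digital-domain example helps fix ideas: since the transmitter knows its own signal, the leakage channel can be estimated by least squares and the reconstructed self-interference subtracted. This numpy sketch (the channel length, signal powers, and linear-leakage assumption are all simplifications) is one instance of the digital-domain techniques surveyed above, not a production canceller:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4000, 4  # number of samples, leakage-channel taps (assumed values)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)             # known Tx signal
h_si = 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))  # unknown leakage
s = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))    # weak desired signal
y = np.convolve(x, h_si)[:N] + s        # Rx = self-interference + desired signal

# Regress the received signal on delayed copies of the known transmit signal
X = np.column_stack([np.concatenate([np.zeros(k, complex), x[:N - k]])
                     for k in range(L)])
h_hat = np.linalg.lstsq(X[L:], y[L:], rcond=None)[0]  # skip the initial transient
y_clean = y - X @ h_hat                               # subtract reconstructed SI

suppression = np.mean(np.abs(y_clean - s) ** 2) / np.mean(np.abs(y - s) ** 2)
print(f"residual self-interference: {10 * np.log10(suppression):.1f} dB")
```

In practice, nonlinear power amplifier distortion and phase noise break the linear model assumed here, which is why propagation- and analog-domain suppression stages are combined with the digital stage.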
There are many factors that need to be taken into account, e.g.:

* Time-varying channels;
* Single or multiple streams (i.e., MIMO);
* Potential presence of multiple transmitters and receivers (e.g., in cellular networks, where the issue is more pronounced in the case of heterogeneous networks);
* Hardware imperfections (e.g., phase noise, non-flat frequency response of circuits, power amplifier nonlinearity, transmit I/Q imbalance);
* Reasonable power consumption (especially for battery-powered devices) and complexity (e.g., to ensure small-size circuits).

The enabling techniques for interference suppression/cancellation can be generally categorized into the following domains [206]:

* Propagation domain, which may comprise passive, active, and antenna interfaces;
* Analog domain, where the approaches may be time-domain, frequency-domain, or digitally assisted;
* Digital domain, where various models may be assumed (linear/nonlinear, reference-based by utilizing an auxiliary receive channel, receiver beamforming, etc.).

The study on the evolution of NR duplex operation in Release 18 in 3GPP marks an important step towards practical IBFD operations. It is expected that the study will result in the successful identification of the issues, benefits, and realistic solutions for IBFD, making commercial deployments possible in the future.

## X Energy Efficiency Evolution towards Green Networks

In this section, we focus on power consumption in wireless systems, which is one critical design aspect. Realizing green networks requires considering both sides: UEs and base stations (the network side). We first discuss UE side power consumption, which has a direct impact on end-user experience and thus has always been actively pursued and standardized in 3GPP. We then consider the power consumption at the network side, which is crucial for network sustainability and vital going forward.

### _UE Side Power Consumption_

Power consumption for battery-powered end devices is an important performance metric. 3GPP standardization pays close attention to UE power consumption and has standardized various power saving techniques to improve it, especially considering different UE types and operation conditions, even when there are conflicting goals. 5G NR is designed to support various UE types. Smart phone users are of course one primary target, which is often linked with the continued advancements of eMBB services [214]. These devices are generally highly capable with advanced processing power. MTC often aims to support low cost and low complexity devices, covering a wide range of applications such as metering, road security, and consumer electronic devices [1]. Different from smart phones, MTC devices generally require very long battery life, e.g., as long as 10 years [1], since these devices may not be easily replaced due to the associated cost and operation environments. In order to more efficiently support services for connected industries, smart city innovations, and wearables (e.g., smart watches), a new type of device called RedCap devices [215], with requirements lower than eMBB but higher than traditional MTC, was recently standardized. Power savings for these devices are supported in different operation states, in particular, the idle and connected states. An idle state UE is a UE that camps on a cell without an established radio resource control (RRC) connection.
Power saving techniques may include idle discontinuous reception (I-DRX) [1], advanced paging techniques (e.g., a group-based wake-up signal (WUS) for power savings [1]), relaxed measurements, and reduced mobility requirements [215]. An RRC connected UE may benefit from connected state DRX (C-DRX), a physical downlink control channel (PDCCH) based WUS, cross-slot scheduling (where a UE may perform so-called "micro-sleep" if not scheduled [216]), restriction of low-rank MIMO operation when appropriate, reduced or skipped control channel monitoring, and fast activation and deactivation of a carrier or a configured scheduling. PDCCH monitoring can be skipped or dynamically switched among a few configured search space set groups. A secondary cell may also be placed into a so-called dormancy mode as part of the activated state, where a UE is not required to monitor PDCCH for the secondary cell, but can be quickly indicated to transition from the dormancy mode to the non-dormant mode and vice versa.

Fig. 11: Self-interference from transmission to reception.

A UE may also provide assistance information for power savings, such as a set of preferred power saving parameters including DRX configurations, a maximum aggregated bandwidth, and a number of carriers. For power savings, a UE can also have relaxed RRM, radio link monitoring, or beam failure detection requirements, especially if its mobility level is low. The goal of UE power savings may be in conflict with other design goals, e.g., capacity or coverage. As an example, an MTC user may need coverage enhancements as high as 20 dB [1], which may require transmissions of the same information over an extended duration. Such repeated transmission due to the increased coverage need is not friendly to UE power consumption. Similarly, a UE with restricted MIMO operation is not capacity-friendly. More sustainable UE operation can benefit from energy harvesting, especially for use cases such as health and fitness tracking, environment monitoring, and transportation [217, 218, 219, 220, 221, 222, 223, 224]. Energy harvesting may be particularly beneficial for passive IoT devices such as RF identification (RFID) tags [222]. The mechanisms may be solar-based, wind-based, vibrational, electromagnetic, thermoelectric, or based on ambient RF energy [224, 217]. The challenges for RF energy harvesting include the overall RF power conversion efficiency, form factor, operational bandwidth, and compactness [219]. Reference [220] provides a summary of RF energy harvesting over the past six years.

### _Network Side Power Consumption_

With the need to reduce carbon dioxide (CO2) emissions and, on the other hand, to reduce the network operating cost, while facing a continuous increase in the data volume to be carried by the networks, improvements in network energy efficiency are a must. Unlike the handset's total energy consumption, the major part of the network energy consumption occurs during the lifetime of the network infrastructure rather than in the actual manufacturing phase. The development of 5G has already reduced especially the static power consumption compared to 4G or 3G, where networks had to transmit radio signals frequently or even continuously even if no active data transmission was taking place. With 5G, the network side already took a major step towards energy efficiency, away from the continuous transmission of 3G or the transmission every millisecond of 4G.
This was also a requirement for the International Mobile Telecommunications-2020 (IMT-2020) technology [225]. The key design choice in 5G was to move away from common reference signals sent so frequently that the power-hungry RF parts could not be powered down even in the absence of traffic; 3G networks were even worse in this respect, with the common pilot channel being transmitted continuously. The next steps in power efficiency on the network side are related to avoiding all transmissions when there is no actual user data to be transmitted, and using power efficient waveforms when a transmission is needed. This, together with other energy saving features, allows 5G to be 10X more energy efficient than 4G in terms of energy consumed per traffic unit, but more remains to be done as part of 5G-Advanced. As part of the 3GPP 5G-Advanced studies [226], a network energy consumption model is to be established, which should enable vendor-independent assessment of the network power consumption impact when adding support for a new channel or signal to be transmitted, or for a new procedure between network elements. With the help of the energy consumption model, one is able to assess, for example, the network side price to pay for a special signal to drive a low power wake-up radio at the UE side. This should lead to sustainable designs for new features, so that a feature that saves energy on one side does not cause major energy efficiency impacts in other dimensions. The best energy saving is achieved when one can shut down an entire base station, especially by first shutting down and replacing the earlier generations that consume lots of power regardless of the traffic level. Ideally, with only 5G remaining, one should optimize the number of frequencies active for the coverage and traffic support needed at a given time in the area of interest, as well as the number of antenna ports active at the network side. Especially under low traffic load, the network could afford to use only a subset of antenna ports in the case of an antenna array, reducing the number of ports from, e.g., 64 to 16. AI/ML based control is expected to be the key enabler for this, allowing for dynamically coordinated energy saving operations across different network layers, so as to minimize the network energy consumption in 5G [226]. As indicated in [226], 70% of operator energy consumption occurs at the base station site. Under low traffic load, the possibilities are limited, as the current solutions already achieve most of the gains, and reducing the base station activity further would mostly make it even more challenging for the UE to find and measure signals in a timely manner, with diminishing gains as highlighted in [227]. The detailed solutions now part of the Release 18 scope for the network side aim to add operation without synchronization signal block (SSB), i.e., SSB-less, for secondary cell (SCell) operations in the inter-band CA case, as well as enhancing the cell discontinuous transmission (DTX)/DRX mechanism. In the spatial domain, adaptation of the number of spatial elements is to be enhanced. In the power domain, the power offset between the physical downlink shared channel (PDSCH) and CSI-RS is to be adapted for lower energy consumption, based on improved feedback that allows a more optimized PDSCH power setting at the gNB side.
For the protocol side, conditional handover is to be enhanced to cover the case when either the source or the target cell is in the network energy saving mode. Furthermore, there are various research directions on the improvement of network energy consumption. One of the major factors worth investigating is the impact of network scheduling solutions. Besides the classical network performance, network power consumption is also impacted by the network scheduling approach, especially in 5G. In the case of low traffic load, ideally the scheduler would send data only very sparsely; however, in 5G it is essential to pay attention to the resulting latency impact as well. The scheduling impact on network power consumption has been studied, for example, in [228] and, for a given QoS level, in [229]. Another dimension to consider is the use of heterogeneous networks with small and pico cells for improved energy efficiency while achieving the needed total capacity [230]. As small cells operate with lower power levels, they can offer a capacity boost with good energy efficiency. In general, the use of self-organizing networks (SONs) or, lately, AI/ML to obtain improvements in network energy efficiency has been discussed, for example, in [231], and also in actual deployments, as there are many network operation elements one can deploy without impact on the standards [226]. It should be noted that when considering energy efficiency improvements, the energy cost of the enablers themselves, such as AI/ML training and inference, needs to be taken into account as well.

## XI Additional 6G Aspects

In the previous sections, we have discussed several 6G related aspects when introducing the key technology evolution directions in 5G-Advanced. The 6G discussions we have provided are by no means exhaustive. In this section, we point out some additional 6G aspects, including spectrum evolution, JCS for 6G, and the provision of hyper-distributed, innovative 6G services.

### _Spectrum Evolution_

To alleviate the scarcity of spectrum as a precious resource, the concept of full spectrum is being proposed for 6G, covering sub-7 GHz, the centimetric range from 7-20 GHz, mmWave, THz, and optical frequency bands. The sub-7 GHz bands have been the dominant frequency spectrum in 4G and 5G, and will continue playing an important role in 6G, since they are naturally well suited to provide ubiquitous coverage and reliable wireless connections [232]. Nevertheless, the sub-7 GHz spectrum has become crowded and thus new spectrum is needed for future radio communications. The pioneering spectrum blocks for 6G are expected to be in the centimetric range from 7-20 GHz for both higher data rates (as opposed to the sub-7 GHz band) and acceptable coverage, and in the mmWave bands for peak data rates exceeding 100 Gbps. For example, the Federal Communications Commission (FCC) expressed the need to support 6G service in the mid-band spectrum such as the 7-16 GHz range and started to explore repurposing spectrum in the 12.7-13.25 GHz band for next-generation wireless technologies in the U.S. [233]. Meanwhile, extensive research and trials have been carried out for the mmWave bands, which have verified their great potential in cellular communications when combined with beamforming techniques. Thanks to its large bandwidths and small wavelengths (compared with lower frequency bands), mmWave is able to provide high temporal and spatial resolution, facilitating high-precision object localization and tracking.
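To make the coverage-versus-bandwidth tradeoff behind this band plan concrete, a quick back-of-the-envelope computation of free-space path loss (Friis) across the candidate bands can help. The sketch below is purely illustrative: the link distance and the representative carrier frequencies are our assumptions, not values from any 3GPP evaluation.

```python
# Illustrative free-space path loss across candidate 6G bands (Friis formula).
import numpy as np

c = 3e8          # speed of light, m/s
d = 100.0        # assumed link distance, m

bands_ghz = {"sub-7 GHz": 3.5, "7-20 GHz": 13.0, "mmWave": 28.0, "THz": 300.0}

for name, f_ghz in bands_ghz.items():
    f_hz = f_ghz * 1e9
    fspl_db = 20 * np.log10(4 * np.pi * d * f_hz / c)  # free-space loss in dB
    print(f"{name:>9} ({f_ghz:6.1f} GHz): FSPL ~ {fspl_db:5.1f} dB")
```

Each decade in carrier frequency costs about 20 dB of free-space loss at a fixed distance, which is one reason the higher bands must rely on large antenna arrays and beamforming gain to recover acceptable coverage.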
In addition, with the explosive growth of traffic globally, the THz spectrum is considered a promising new frequency range for 6G to provide even larger bandwidth and more abundant spectrum resources. In 6G wireless networks, typical application scenarios of THz communication may include indoor communication, hotspot downloading, wireless data centers, fixed wireless access, wireless cellular fronthaul/backhaul, and security communication scenarios. However, given the current state of technology, some critical challenges still need to be solved for THz communication to be applicable in 6G, including superheterodyne transmission, modulators, channel models, channel estimation, beamforming and beam tracking, signal generation and detection, and antenna array manufacturing and efficiency, among others. Moreover, as a key driving force of the global Internet, the optical fiber network connects all continents to form a modern communication backbone, providing high-speed data access for metropolises, cities, and towns. In some scenarios, it might be a promising option to extend optical fiber transmission directly to wireless interfaces to realize last-mile connectivity and mobile access. On the other hand, the visible light spectrum can be employed in LiDAR or other similar forms to sense the environment so as to assist radio communications, which is a popular use case of JCS.

### _Joint Communication and Sensing_

JCS is a prospective field for 6G, which aims to realize the coordination of communication and sensing via software and hardware resource sharing or information sharing [234]. On one hand, in the beyond-5G era, the high frequency bands used by wireless communication networks and the spectrum used for radio sensing gradually approach or even overlap with each other. On the other hand, communication systems and sensing systems have similar features in terms of RF transceivers, channel characteristics, and signal processing, which hastens the research and development of JCS network architectures and related technologies. Moreover, the development of communication technologies, such as extremely large-scale arrays, large bandwidths, RISs, and AI, will further promote the growth of sensing technologies. In the narrow sense, sensing refers to target positioning (including ranging, velocity measurement, and angle measurement), imaging, detection, tracking, and recognition. In the broad sense, the sensing target may be services, networks, terminals, as well as attributes and states of environmental objects. JCS can be based upon the same set of equipment and/or the same spectrum, and is able to lower the equipment cost, size, and power consumption, and improve spectral efficiency, hardware resource utilization, and information processing efficiency. The development of JCS may face multi-perspective, multi-level technical challenges, among which a few major ones are listed below.

* **Fundamental theory for JCS**: Different from the classical Shannon information theory, sensing will give rise to new performance metrics and limits for the system, based on which a new JCS information theory needs to be established to investigate the optimal performance limits and tradeoffs of the two functionalities.
* **Signal processing for JCS**: This mainly embodies aspects such as joint waveform design, joint transmission beamforming, and joint signal reception.
From the viewpoint of functionality priority, joint signal processing can be divided into sensing-centric design, communication-centric design, and jointly weighted design.
* **Protocol and system architecture design for JCS**: Communication and sensing may have different working mechanisms. Taking radar sensing as an example, radar is generally divided into pulse and continuous-wave types, while communication adopts TDD or FDD. Thus, new transmission protocols and system architectures are needed to realize the coordinated operation of communication and sensing.

The development of JCS is likely to undergo three stages, namely, coexistence, mutual assistance, and mutual benefit. For coexistence, the originally independent communication system and sensing system are integrated onto the same physical platform, where communication and sensing coexist as two service forms with the main focus on interference management, resource allocation, and spectrum sharing. In the mutual assistance stage, communication capability and sensing capability cooperate and aid each other based on shared information, so as to realize sensing-assisted communication or communication-assisted sensing, where the main focus lies in air interface design, waveform design, along with transmission and reception processing algorithms. Ultimately, in the most advanced stage, communication and sensing will achieve comprehensive and multi-level integration in spectrum resources, hardware equipment, waveform design, signal processing, protocol interfaces, and networking collaboration, among other aspects. Besides the evolution of the existing techniques in the previous two stages, AI-enabled approaches and multi-cell coordinated sensing methods will be explored, in order to build the endogenous sensing ability of 6G.

### _Hyper-distributed, Innovative Services_

Mission- and time-critical services will be essential to the further development of our societies and economy. Such services, like connected healthcare, autonomous transportation, Industry 4.0, smart grids, and smart cities and homes, have strict QoS requirements, which has led to new decentralized multi-tier network-computing platforms where processing capabilities are brought closer to the data sources and/or to the end consumers. This multi-tier continuum, usually referred to as the device-edge-cloud continuum, thus describes integrated network-computing platforms that consist of interconnected IoT/mobile devices, edge nodes, fog nodes, and cloud back-end servers. In these platforms, computing resources and services are seamlessly combined along the data path from the IoT/mobile devices to the back-end clouds, passing through intermediary edge and fog nodes. Most of the aforementioned mission- and time-critical services leverage ML for data analytics or decision making, which makes it mandatory to support ML models, and the data pipelines that feed them, in a sustainable manner. This concerns both the training of such models, whose computing (and energy) resource requirements may be very high [235], and their execution for inference purposes, especially when capability-constrained IoT/mobile devices are used. Additionally, it is worth remarking that among the most relevant services are media applications, which represent over 80% of the total Internet traffic. This suggests that human users' quality of experience should play a central role in the design not only of media applications, but also of the communication and computing networks supporting them.
It is thus of paramount importance to jointly address the definition of new media and network solutions, in order to properly account for users' interaction and emotions, as well as the use of human digital twins to create an immersive experience. In this context, 6G is expected to face some important challenges, as set forth below.

* **Design and deployment of hyper-distributed services**: According to the NFV concept, services are composed of several atomic virtual functions chained together. This fact, combined with a multi-tier network and computing continuum and with the need to effectively support users' mobility, calls for novel strategies for service design and migration, where the split into functions, their deployment, and their replacement upon users' movement can be dynamically performed across the nodes in the continuum, so that the resources available therein are used to the best effect and the service disruption time is minimized.
* **Intelligent services**: To make ML sustainable in spite of its pervasiveness, it is fundamental to enable collaborative ML model building [236] by leveraging, e.g., transfer learning techniques to reduce computing and memory consumption. An ML model locally trained in one domain can be made available to other entities for further training and updates, leading to continuous learning advancements, refinement and transfer across domains. Furthermore, knowledge distillation or pruning techniques can be smartly used to reduce the complexity of an ML model without harming the quality of the decision-making process, thus greatly reducing the amount of consumed resources.
* **Semantic approaches**: To be able to transfer media traffic enhanced with information related to human behavior, emotions, and personality without increasing the bandwidth demand excessively, it is critical to develop synergic approaches that let services and networks closely interact [237]. More generally, being aware of the application and traffic semantics can substantially help the network to identify, hence transfer and process, only those data that are essential and sufficient to create a context and provide a service that matches the user's preferences and needs, or the QoS required by an application. Initial solutions and future research directions in the area of semantic communications can be found in, e.g., [238, 239, 240].

## XII Conclusion

Over the past several years, we have witnessed a rapid deployment of commercial 5G networks that provide high-speed, low-latency connectivity for a wide range of use cases. New services with higher performance requirements will continue to emerge, calling for the continuous evolution of cellular technology. As this article has highlighted, 3GPP Release 18, the start of 5G-Advanced, includes a diverse set of evolutions that will significantly boost 5G performance and address a wide variety of new use cases. While we are just embarking on the 5G-Advanced journey, 6G research is already under way, and 6G standardization is expected to start within 3GPP around 2025. The innovative technology components investigated in the 5G-Advanced evolution are essential precursors to key 6G building blocks, making 5G-Advanced a vital step in developing cellular technology towards 6G.
6G will require even more data processing at the edge of the network-computing continuum for critical services, which can be achieved only with the tight cooperation of a programmable network infrastructure supporting, primarily, end-to-end network slicing across the multi-tier continuum with assured QoS. Another aspect that raises unprecedented challenges for system design is the presence, in many real-world vertical domains, of interactive services based on distributed intelligence to support decision making. This requires that an increasing amount of data, generated by massively deployed ubiquitous devices at the edge, be moved throughout the continuum to promptly build knowledge. On the other hand, a close interworking between the application and network transport layers offers the opportunity to develop semantic approaches that enable the system to reconfigure according to the service to be supported, thus dramatically increasing efficiency, reducing energy consumption, and improving users' QoS. It will be exciting to see how the evolution of 5G-Advanced towards 6G will improve lives, foster industries, and transform society over the coming decade.

## XIII Acknowledgment

The authors thank Sai Sree Rayala (Virginia Tech) and Nima Mohammadi (Virginia Tech) for their help in typesetting the manuscript. This work was partially supported by NPRP-S 13th Cycle Grant No. NPRP13S-0205-200265 from the Qatar National Research Fund (a member of Qatar Foundation) and by the U.S. National Science Foundation under grants CNS-2148212, ECCS-2128594, CNS-2003059, and CCF-1937487. The work of S. Sun is supported in part by the National Natural Science Foundation of China under Grant 62271310, and in part by the Fundamental Research Funds for the Central Universities of China. The findings herein reflect the work, and are solely the responsibility, of the authors.
2301.09202
On the stability properties of power networks with time-varying inertia
A major transition in modern power systems is the replacement of conventional generation units with renewable sources of energy. The latter results in lower rotational inertia which compromises the stability of the power system, as testified by the growing number of frequency incidents. To resolve this problem, numerous studies have proposed the use of virtual inertia to improve the stability properties of the power grid. In this study, we consider how inertia variations, resulting from the application of control action associated with virtual inertia and fluctuations in renewable generation, may affect the stability properties of the power network within the primary frequency control timeframe. We consider the interaction between the frequency dynamics and a broad class of power supply dynamics in the presence of time-varying inertia and provide locally verifiable conditions, that enable scalable designs, such that stability is guaranteed. To complement the presented stability analysis and highlight the dangers arising from varying inertia, we provide analytic conditions that enable to deduce instability from single-bus inertia fluctuations. Our analytical results are validated with simulations on the Northeast Power Coordinating Council (NPCC) 140-bus system, where we demonstrate how inertia variations may induce large frequency oscillations and show that the application of the proposed conditions yields a stable response.
Andreas Kasis, Stelios Timotheou, Marios Polycarpou
2023-01-22T21:07:51Z
http://arxiv.org/abs/2301.09202v1
# On the stability properties of power networks with time-varying inertia

###### Abstract

A major transition in modern power systems is the replacement of conventional generation units with renewable sources of energy. The latter results in lower rotational inertia which compromises the stability of the power system, as testified by the growing number of frequency incidents. To resolve this problem, numerous studies have proposed the use of virtual inertia to improve the stability properties of the power grid. In this study, we consider how inertia variations, resulting from the application of control action associated with virtual inertia and fluctuations in renewable generation, may affect the stability properties of the power network within the primary frequency control timeframe. We consider the interaction between the frequency dynamics and a broad class of power supply dynamics in the presence of time-varying inertia and provide locally verifiable conditions, that enable scalable designs, such that stability is guaranteed. To complement the presented stability analysis and highlight the dangers arising from varying inertia, we provide analytic conditions that enable to deduce instability from single-bus inertia fluctuations. Our analytical results are validated with simulations on the Northeast Power Coordinating Council (NPCC) 140-bus system, where we demonstrate how inertia variations may induce large frequency oscillations and show that the application of the proposed conditions yields a stable response.

## I Introduction

**Motivation and literature review:** The electric power grid is currently undergoing a major transformation due to the growing penetration of renewable sources of energy [2, 3]. As a result, conventional bulk generation is expected to be slowly replaced by renewable generation. However, retiring synchronous generation lowers the rotational inertia of the power system, which has been a key reason for the power grid's stability over the years [4]. In addition, renewable generation is intermittent, causing more frequent generation-demand imbalances that may harm the power quality and even cause blackouts [5]. Hence, novel challenges are introduced towards enabling a stable and robust operation of the power grid. The inertia of the power system represents its capability to store and inject kinetic energy, serving as an energy buffer that can slow down the frequency dynamics. The latter helps to avoid undesirable incidents, such as excessive load-shedding or large-scale blackouts. A low power system inertia is associated with larger frequency deviations following a disturbance event, such as loss of generation or tie line faults [6]. The degrading effects of low inertia levels on power system stability have already been reported by system operators [7]. To mitigate these effects, several studies proposed the introduction of virtual inertia in the power grid, i.e. schemes that aim to emulate the inertial response of machines by injecting power in proportion to the rate of change of frequency. In particular, [8] proposed the use of the energy stored in the DC-link capacitors of grid-connected power converters to emulate the inertia of the power system. Control schemes of varying complexity that aim to make converters behave similarly to synchronous machines and provide virtual inertia to the power grid have been investigated in [9, 10, 11, 12, 13].
These schemes require some form of energy storage, such as batteries, to provide the required energy. In addition, [14] proposed the integration of DC microgrids into the power network as virtual synchronous machines (VSMs). Moreover, [15] designed a self-tuning mechanism to optimize the operation of VSMs. The scheme proposed in [16] uses the kinetic energy stored in the rotating mass of wind turbine blades to emulate inertia and provide frequency support. Furthermore, [17, 18] demonstrated how power converters can mimic the inertial response of a synchronous machine for wind turbines interconnected to the grid via doubly fed induction generators. The optimal placement of virtual inertia is considered in [19]. Comprehensive reviews on virtual inertia and virtual synchronous machine schemes are available in [20] and [21] respectively. An open research question that requires further attention concerns the effect of varying inertia on the stability properties of the power network. Variations in inertia may arise due to fluctuations in renewable generation and control action on virtual inertia. Controllable virtual inertia, possibly coupled with the power grid dynamics, may induce additional challenges for the stability of the power grid. In addition, it may pose security threats if inertia is maliciously controlled to destabilize the power grid. The time-varying nature of inertia has been pointed out and studied in [6]. In addition, [22] considered the robustness properties of power networks with time-varying inertia and frequency damping, while [23] considered a hybrid model of the power network with time-varying inertia and applied model predictive control approaches to optimize the power inputs. In [24], the authors considered the effect of inertia variations on the frequency response. Stabilizing controllers under various inertia modes are designed in [25] using a linear quadratic regulator approach. Furthermore, [26] and [27] proposed time-varying control gains associated with the inertia and damping of wind turbines and the inertia of virtual synchronous generators respectively. However, a systematic analysis of how time-varying inertia affects the stability properties of power networks is currently missing from the literature. Such an analysis may lead to suitable guidelines for the design of virtual inertia schemes that offer improved stability and security properties.

**Contribution:** This study investigates the impact of time-varying inertia on the behaviour and stability properties of the power network within the primary frequency control timeframe. In particular, we consider the interaction between the frequency dynamics and non-linear power supply dynamics in the presence of time-varying inertia. We study the solutions of this time-varying system and analytically deduce local asymptotic stability under two proposed conditions. The first condition, inspired by [28], requires that the aggregate power supply dynamics at each bus satisfy a passivity related property. The second condition sets a bound on the maximum rate of growth of inertia that depends on the local power supply dynamics. Both conditions are locally verifiable, applicable to general network topologies, and enable practical guidelines for the design of virtual inertia control schemes that enhance the reliability and security of the power network.
To demonstrate the applicability and improve the intuition of the presented analysis, we provide examples of power supply dynamics that fit within the presented framework and explain how the maximum allowable rate of inertia variation may be deduced for these cases. In addition, when linear power supply dynamics are considered, we show how the conditions can be efficiently verified by solving a suitable linear matrix inequality optimization problem. Our stability results are complemented by analytical results that demonstrate how single-bus inertia variations may induce unstable behaviour. In particular, we provide conditions such that for any, arbitrarily small, deviation from the equilibrium frequency at a single bus, there exist local inertia trajectories that result in substantial deviations in the power network frequency. The latter coincides with the definition of instability (e.g. [29, Dfn. 4.1]). Numerical simulations on the NPCC 140-bus system demonstrate the potentially destabilizing effects of varying inertia and validate our analytical results in a realistic setting. In particular, we demonstrate how varying virtual inertia, at a single or multiple buses, may induce large frequency oscillations and compromise the stability of the power network. The latter provides further motivation for the study and regulation of virtual inertia schemes. In addition, we present how the application of the proposed conditions yields a stable response. To the authors' best knowledge, this is the first work that:

1. Analytically studies the behaviour of power networks under varying inertia and proposes decentralized conditions on the local power supply dynamics and inertia trajectories such that stability is guaranteed.
2. Demonstrates, under broadly applicable conditions, that single-bus inertia variations may induce large frequency fluctuations from any, arbitrarily small, frequency deviation from the equilibrium. Such behaviour is characterized as unstable.

**Paper structure:** In Section II we present the power network model, a general description for the power supply dynamics and a statement of the problem that we consider. In Section III we present our proposed conditions on the power supply dynamics. Section IV contains the conditions on virtual inertia trajectories and the main result associated with the stability of the power network. In addition, in Section V we provide additional intuition on the stability result and discuss various application examples. Moreover, in Section VI we present our inertia induced instability analysis. Our analytical results are verified with numerical simulations in Section VII and conclusions are drawn in Section VIII. Proofs of the main results are provided in the appendix.

**Notation:** Real, positive real, integer and positive natural numbers are denoted by \(\mathbb{R},\mathbb{R}_{+},\mathbb{Z}\) and \(\mathbb{N}_{+}\) respectively. The set of n-dimensional vectors with real entries is denoted by \(\mathbb{R}^{n}\). For \(a\in\mathbb{R},b\in\mathbb{R}\setminus\{0\}\), \(a\) modulo \(b\) is denoted by \(\operatorname{mod}(a,b)\) and defined as \(\operatorname{mod}(a,b)=a-b\lfloor\frac{a}{b}\rfloor\), where for \(x\in\mathbb{R}\), \(\lfloor x\rfloor=\sup\{m\in\mathbb{Z}:m\leq x\}\). The \(p\)-norm of a vector \(x\in\mathbb{R}^{n}\) is given by \(\left\|x\right\|_{p}=(|x_{1}|^{p}+\dots+|x_{n}|^{p})^{1/p},1\leq p<\infty\).
A function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) is said to be locally Lipschitz continuous at \(x\) if there exists some neighbourhood \(X\) of \(x\) and some constant \(L\) such that \(\left\|f(x)-f(y)\right\|\leq L\left\|x-y\right\|\) for all \(y\in X\), where \(\left\|.\right\|\) denotes any \(p\)-norm. The Laplace transformation of a signal \(x(t),x:\mathbb{R}\to\mathbb{R}\), is denoted by \(\hat{x}(s)=\int_{0}^{\infty}x(t)e^{-st}dt\). A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is called positive semidefinite if \(f(x)\geq 0\) for all \(x\in\mathbb{R}^{n}\). The function \(\textbf{sin}(x)\) gives the sinusoid of each element in \(x\in\mathbb{R}^{n}\), i.e. for \(x=[x_{1}\;x_{2}\dots x_{n}]^{T}\), \(\textbf{sin}(x)=[\sin(x_{1})\;\sin(x_{2})\dots\sin(x_{n})]^{T}\). For a matrix \(A\in\mathbb{R}^{n\times p}\), \(A_{kl}\) corresponds to the element in the \(k\)th row and \(l\)th column of \(A\). A matrix \(A\in\mathbb{R}^{n\times n}\) is called diagonal if \(A_{ij}=0\) for all \(i\neq j\) and positive (negative) semi-definite, symbolized with \(A\succeq 0\) (respectively \(A\preceq 0\)), if \(x^{T}Ax\geq 0\) (respectively \(x^{T}Ax\leq 0\)) for all \(x\in\mathbb{R}^{n}\). For \(a\in\mathbb{R}^{n},b\in\mathbb{R}_{+}\), the ball \(\mathcal{B}(a,b)\) is defined as \(\mathcal{B}(a,b)=\{x:\|a-x\|\leq b\}\). Finally, for a state \(x\in\mathbb{R}^{n}\), we let \(x^{*}\) denote its equilibrium value.

## II Problem formulation

### _Network model_

We describe the power network by a connected graph \((\mathcal{N},\mathcal{E})\) where \(\mathcal{N}=\{1,2,\dots,|\mathcal{N}|\}\) is the set of buses and \(\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}\) the set of transmission lines connecting the buses. Furthermore, we use \((k,l)\) to denote the link connecting buses \(k\) and \(l\) and assume that the graph \((\mathcal{N},\mathcal{E})\) is directed with an arbitrary orientation, so that if \((k,l)\in\mathcal{E}\) then \((l,k)\notin\mathcal{E}\). For each \(j\in\mathcal{N}\), we define the sets of predecessor and successor buses by \(\mathcal{N}_{j}^{p}=\{k:(k,j)\in\mathcal{E}\}\) and \(\mathcal{N}_{j}^{s}=\{k:(j,k)\in\mathcal{E}\}\) respectively. The structure of the network can be represented by its incidence matrix \(H\in\mathbb{R}^{|\mathcal{N}|\times|\mathcal{E}|}\), defined as

\[H_{kq}=\begin{cases}+1,\;\text{if $k$ is the positive end of edge $q$},\\ -1,\;\text{if $k$ is the negative end of edge $q$},\\ 0,\;\text{otherwise}.\end{cases}\]

It should be noted that any change in the graph ordering does not alter the form of the considered dynamics. In addition, all the results presented in this paper are independent of the choice of graph ordering. We consider the following conditions for the power network: 1) Bus voltage magnitudes are \(|V_{j}|=1\) per unit for all \(j\in\mathcal{N}\). 2) Lines \((k,l)\in\mathcal{E}\) are lossless and characterized by the magnitudes of their susceptances \(B_{kl}=B_{lk}>0\). 3) Reactive power flows do not affect bus voltage phase angles and frequencies. These conditions have been widely used in the literature in studies associated with frequency regulation, e.g. [23, 28, 30, 31]. In practice, they are valid in medium to high voltage transmission systems since transmission lines are dominantly inductive and voltage variations are small.
It should be noted that all results presented in this paper are verified with numerical simulations, presented in Section VII, on a more detailed model than our analytical one, which includes voltage dynamics, line resistances and reactive power flows. We use the swing equations to describe the rate of change of frequency at each bus. This motivates the following system dynamics (e.g. [32]),

\[\dot{\eta}_{kl}=\omega_{k}-\omega_{l},\;(k,l)\in\mathcal{E}, \tag{1a}\]
\[M_{j}^{0}\dot{\omega}_{j}=-p_{j}^{L}+p_{j}^{M}-d_{j}^{c}-d_{j}^{u}+p_{j}^{v}-\sum_{k\in\mathcal{N}_{j}^{s}}p_{jk}+\sum_{l\in\mathcal{N}_{j}^{p}}p_{lj},\;j\in\mathcal{N}, \tag{1b}\]
\[p_{kl}=B_{kl}\sin\eta_{kl},\;(k,l)\in\mathcal{E}. \tag{1c}\]

In system (1), variables \(p_{j}^{M}\) and \(\omega_{j}\) represent, respectively, the mechanical power injection and the deviation from the nominal value1 of the frequency at bus \(j\). Variables \(d_{j}^{c}\) and \(d_{j}^{u}\) represent the controllable demand and the uncontrollable frequency-dependent load and generation damping present at bus \(j\) respectively. Furthermore, variables \(\eta_{kl}\) and \(p_{kl}\) represent, respectively, the power angle difference and the power transmitted from bus \(k\) to bus \(l\). The positive constant \(M_{j}^{0}\) denotes the physical inertia at bus \(j\). Moreover, the constant \(p_{j}^{L}\) denotes the frequency-independent load at bus \(j\). Finally, the variable \(p_{j}^{v}\) denotes the power injection at bus \(j\) associated with time-varying inertia at bus \(j\). Its dynamics follow the virtual inertia schemes presented in e.g. [15, 21, 26],

\[p_{j}^{v}=-M_{j}^{v}\dot{\omega}_{j}-D_{j}^{v}\omega_{j},\;j\in\mathcal{N}, \tag{2}\]

but with a time-varying value of the virtual inertia \(M_{j}^{v}\). In particular, in (2), \(M_{j}^{v}\) is a non-negative time-dependent variable describing the time-varying virtual inertia at bus \(j\) and the constant \(D_{j}^{v}\geq 0\) corresponds to the frequency damping coefficient associated with \(p_{j}^{v}\).

Footnote 1: We define the nominal value as an equilibrium of (1) with frequency equal to 50 Hz (or 60 Hz).

**Remark 1**: _We have opted to consider a constant rather than a time-varying damping coefficient \(D_{j}^{v}\) in (2). This choice is made for simplicity and to keep the focus of the paper on the time-varying inertia. Including time-varying damping coefficients would result in non-existence of equilibria in primary frequency control, since the damping coefficients characterize the equilibrium frequency. Jointly considering both time-varying inertia and time-varying damping coefficients is an interesting research problem for future work._

It will be convenient to define the time-varying parameters \(M_{j}=M_{j}^{0}+M_{j}^{v}\) describing the aggregate inertia at bus \(j\). In addition, we consider the net supply variables \(s_{j}\), defined as the aggregation of the mechanical power supply, the controllable demand, the uncontrollable frequency-dependent load and the generation and virtual inertia damping present at bus \(j\), as given below

\[s_{j}=p_{j}^{M}-d_{j}^{c}-d_{j}^{u}-D_{j}^{v}\omega_{j},\;j\in\mathcal{N}. \tag{3}\]
The above enable a compact representation of (1)-(3):

\[\dot{\eta}=H^{T}\omega, \tag{4a}\]
\[M\dot{\omega}=-p^{L}+s-Hp, \tag{4b}\]
\[p=B\ \textbf{sin}(\eta), \tag{4c}\]

where \(\eta,p\in\mathbb{R}^{|\mathcal{E}|}\) and \(\omega,s,p^{L}\in\mathbb{R}^{|\mathcal{N}|}\) are vectors associated with variables \(\eta_{kl},p_{kl},(k,l)\in\mathcal{E}\) and \(\omega_{j},s_{j},p_{j}^{L},j\in\mathcal{N}\) respectively. Furthermore, \(M\in\mathbb{R}^{|\mathcal{N}|\times|\mathcal{N}|}\) and \(B\in\mathbb{R}^{|\mathcal{E}|\times|\mathcal{E}|}\) are diagonal matrices containing the variables \(M_{j},j\in\mathcal{N}\) and parameters \(B_{kl},(k,l)\in\mathcal{E}\).

### _Power supply dynamics_

To investigate a broad class of power supply dynamics, we will consider the following general dynamic description

\[\dot{x}_{j}^{s}=f_{j}(x_{j}^{s},-\omega_{j}),\;\;\;j\in\mathcal{N}, \tag{5}\]
\[s_{j}=g_{j}(x_{j}^{s},-\omega_{j}),\;\;\;j\in\mathcal{N},\]

where \(x_{j}^{s}\in\mathbb{R}^{n_{j}}\) denotes the internal states of the power supply variables, used to update the outputs \(s_{j},j\in\mathcal{N}\). In addition, we assume that the maps \(f_{j}:\mathbb{R}^{n_{j}}\times\mathbb{R}\rightarrow\mathbb{R}^{n_{j}}\) and \(g_{j}:\mathbb{R}^{n_{j}}\times\mathbb{R}\rightarrow\mathbb{R}\) for all \(j\in\mathcal{N}\) are locally Lipschitz continuous. Moreover, we assume that in (5), for any constant input \(\omega_{j}(t)=\bar{\omega}_{j}\), there exists a unique locally asymptotically stable equilibrium point \(\bar{x}_{j}^{s}\in\mathbb{R}^{n_{j}}\), i.e. satisfying \(f_{j}(\bar{x}_{j}^{s},-\bar{\omega}_{j})=0\). The region of attraction of \(\bar{x}_{j}^{s}\) is denoted by \(\Psi_{j}\). To facilitate the characterization of the equilibria, we also define the static input-state characteristic map \(k_{x,j}:\mathbb{R}\rightarrow\mathbb{R}^{n_{j}}\) as \(k_{x,j}(-\bar{\omega}_{j}):=\bar{x}_{j}^{s},j\in\mathcal{N},\) such that \(f_{j}(k_{x,j}(-\bar{\omega}_{j}),-\bar{\omega}_{j})=0\). It should be noted that the dynamics in (5) are decentralized, depending only on the local frequency \(\omega_{j}\) for each \(j\in\mathcal{N}\). For notational convenience, we collect the variables in (5) into the vector \(x^{s}=[x_{j}^{s}]_{j\in\mathcal{N}}\).

**Remark 2**: _The power supply variables represent the aggregation of the mechanical generation, controllable demand and uncontrollable frequency dependent demand and frequency damping, as follows from (3). Each of these quantities includes its own dynamics and could be represented in analogy to (5). We opted to consider a combined representation of these quantities for simplicity in presentation. However, the results presented in the paper can be trivially extended to the case where these quantities are described as individual dynamical systems._

### _Problem statement_

This study aims to provide local analytic conditions that relate the inertia variations and power supply dynamics such that stability is guaranteed. The problem is stated as follows.

**Problem 1**: _Provide conditions on the time-varying inertia and power supply dynamics associated with (4)-(5) that:_

1. _Enable asymptotic stability guarantees._
2. _Are locally verifiable._
3. _Apply to high order and nonlinear power supply dynamics._
4. _Are independent of the (connected) network topology._

The first aim requires conditions that enable asymptotic stability guarantees for the power system. The second objective requires conditions that can be verified using local information, enabling plug-and-play designs.
In addition, to enhance the practicality of our results, it is desired that they cover a broad range of power supply dynamics, including high order and nonlinear dynamics. Lastly, we aim for conditions that are applicable to general network topologies, i.e. that do not rely on the power network structure, so as to enable scalable designs.

## III Conditions on power supply dynamics

In this section we study the equilibria of (4)-(5) and provide analytic conditions on the power supply dynamics that are subsequently used to solve Problem 1.

### _Equilibrium analysis_

We now define the equilibria of the system (4)-(5).

**Definition 1**: _The constants \((\eta^{*},\omega^{*},x^{s,*})\) define an equilibrium of the system (4)-(5) if the following hold_

\[0=H^{T}\omega^{*}, \tag{6a}\]
\[0=-p^{L}+s^{*}-Hp^{*}, \tag{6b}\]
\[x_{j}^{s,*}=k_{x,j}(-\omega_{j}^{*}),\;j\in\mathcal{N}, \tag{6c}\]

_where \(p^{*}\) and \(s^{*}\) in (6b) are given by_

\[p^{*}=B\ \textbf{sin}(\eta^{*}), \tag{6d}\]
\[s_{j}^{*}=g_{j}(x_{j}^{s,*},-\omega_{j}^{*}),\;j\in\mathcal{N}. \tag{6e}\]

For compactness in presentation, we let \(\beta=(\eta,\omega,x^{s})\), where \(\beta\in\mathbb{R}^{m}\), \(m=|\mathcal{E}|+|\mathcal{N}|+\sum_{j\in\mathcal{N}}n_{j}\).

**Remark 3**: _The equilibrium frequency \(\omega^{*}\) uniquely defines the values of \(x^{s,*}\) and \(s^{*}\) due to the uniqueness property of the static input-state maps \(k_{x,j},j\in\mathcal{N}\) described in Section II-B. By contrast, the equilibrium values of \(\eta^{*}\) and correspondingly \(p^{*}\) are not, in general, unique. However, these are unique under specific network configurations, such as in tree networks._

**Remark 4**: _It should be noted that the time-dependent inertia \(M_{j},j\in\mathcal{N}\), does not appear in the equilibrium conditions. The latter follows directly from (4b), i.e. the inertia affects the rate of change of frequency but not its equilibrium value. However, as shall be discussed in the following sections, the inertia trajectories have significant impact on whether the system will converge to an equilibrium; i.e., they determine the stability properties of the equilibria._

It should be noted that it trivially follows from (6a) that the equilibrium frequencies of (4)-(5) synchronize, i.e. they satisfy \(\omega_{i}^{*}=\omega_{j}^{*}=\omega^{s,*},\forall i,j\in\mathcal{N}\), where \(\omega^{s,*}\in\mathbb{R}\) denotes their common value. For the remainder of the paper we assume the existence of some equilibrium of (4)-(5) in the sense of Definition 1. Any such equilibrium is described by \(\beta^{*}=(\eta^{*},\omega^{*},x^{s,*})\). Furthermore, we make the following assumption on the equilibrium power angle differences.

**Assumption 1**: \(|\eta_{ij}^{*}|<\frac{\pi}{2}\) _for all \((i,j)\in\mathcal{E}\)._

The condition imposed by Assumption 1 can be interpreted as a security constraint that enables us to deduce local convergence. In addition, it is associated with the existence of a synchronizing frequency (see [33]).

### _Passivity conditions on power supply dynamics_

In this section we impose conditions on the power supply dynamics which will be used to prove our main convergence result in Section IV. In particular, we introduce the following passivity notion for dynamics described by (5).
**Definition 2**: _System (5) is said to be locally input strictly passive with strictness constant \(\rho_{j}\) about the constant input value \(-\bar{\omega}_{j}\) and the constant state values \(\bar{x}_{j}^{s}\), if there exist open neighbourhoods \(\Omega_{j}\) of \(\bar{\omega}_{j}\) and \(X_{j}\) of \(\bar{x}_{j}^{s}\) and a continuously differentiable, positive semidefinite function \(V_{j}(x_{j}^{s})\) (the storage function), with a strict local minimum at \(x_{j}^{s}=\bar{x}_{j}^{s}\), such that for all \(\omega_{j}\in\Omega_{j}\) and all \(x_{j}^{s}\in X_{j}\),_

\[\dot{V}_{j}\leq(-\omega_{j}-(-\bar{\omega}_{j}))(s_{j}-\bar{s}_{j})-\rho_{j}(-\omega_{j}-(-\bar{\omega}_{j}))^{2},\]

_where \(\rho_{j}>0\) and \(\bar{s}_{j}=g_{j}(k_{x,j}(-\bar{\omega}_{j}),-\bar{\omega}_{j})\)._

Definition 2 introduces an adapted notion of passivity that is suitable for the subsequent analysis2. Passivity is a tool that has been extensively used in the literature to deduce network stability, see e.g. [28, 35, 36]. This property is easily verifiable for a wide range of systems. In particular, for linear systems it can be verified using the KYP Lemma [34] by means of a linear matrix inequality (LMI), which allows one to form a convex optimization problem that can be efficiently solved. An additional approach to verify the passivity property for linear systems is to test that the corresponding Laplace transfer functions are positive real. For stable linear systems, positive realness is equivalent to the frequency response lying in the right half complex plane. These concepts extend to the case of passivity with a given strictness constant. To further demonstrate this, in Section V we provide two examples of linear systems that satisfy the properties presented in Definition 2. In addition, we form a suitable optimization problem that allows us to deduce the storage function and the corresponding strictness constant.

Footnote 2: Definitions for several notions of passivity are available in [34, Ch. 6].

Below, we assume that the power supply dynamics at each bus are locally input strictly passive with some strictness constant \(\rho_{j}\), following Definition 2. This is a decentralized condition and hence locally verifiable, involving only the local power supply dynamics at each bus.

**Assumption 2**: _Each of the systems defined in (5) with input \(-\omega_{j}\) and output \(s_{j}\) is locally input strictly passive with strictness constant \(\rho_{j}\), about its equilibrium values \(-\omega_{j}^{*}\) and \(x_{j}^{s,*}\), in the sense described in Definition 2._

Assumption 2 is a key condition that allows us to deduce the stability of power networks with constant inertia, as shown in [28]. This condition is satisfied by a wide class of power supply dynamics, including high order and nonlinear dynamics. Several examples of dynamics that satisfy the proposed condition are presented in Section V. Note that, since the power supply dynamics comprise the aggregation of the generation, controllable demand and uncontrollable frequency-dependent demand dynamics and frequency damping, Assumption 2 allows the inclusion of dynamics that are not individually passive.

## IV Stability analysis

In this section we provide analytic conditions on the inertia trajectories that allow us to solve Problem 1. In addition, we provide our main stability result. Note that in the analysis below we study (4)-(5) as a time-varying system with states \((\eta,\omega,x^{s})\) and time-dependent parameters \(M_{j}(t),j\in\mathcal{N}\).
### _Conditions on time-varying inertia_

Varying inertia may compromise the stability of the power network, as demonstrated analytically in Section VI and with simulations in Section VII. In this section, we present conditions on the inertia trajectories that enable the study of solutions to (4)-(5) and, as demonstrated in the following section, the provision of analytic stability guarantees. The first condition on the inertia trajectories is presented below.

**Assumption 3**: _The inertia trajectories \(M_{j}(t),j\in\mathcal{N}\) are locally Lipschitz in \(t\) for all \(t\geq 0\)._

Assumption 3 requires the Lipschitz continuity of the inertia time-trajectories. This is a technical condition that enables the study of the solutions to (4)-(5) as a time-varying system. In particular, Assumption 3 allows us to deduce the existence and uniqueness of solutions to (4)-(5), as demonstrated in Lemma 1 below, proven in the appendix.

**Lemma 1**: _For any trajectory \(M_{j}(t),j\in\mathcal{N},t\geq 0\) that satisfies Assumption 3 and any initial condition \(\beta(0)\in\mathbb{R}^{m}\), there exists a unique solution \(\beta(t),t\geq 0\) to (4)-(5)._

The following assumption restricts the rate at which inertia trajectories may grow.

**Assumption 4**: _The inertia trajectories \(M_{j}(t),j\in\mathcal{N}\), satisfy \(\dot{M}_{j}(t)<2\rho_{j},j\in\mathcal{N}\) for all \(t\geq 0\), where \(\rho_{j}\) is the strictness constant associated with bus \(j\), in the sense described in Definition 2._

Assumption 4 restricts the rate of growth of the inertia trajectories to be less than twice the local strictness constant \(\rho_{j}\) associated with Assumption 2. Hence, the condition relates the power supply dynamics at each bus with the rate at which inertia is allowed to grow. Assumption 4 provides a guideline for local control designs on the virtual inertia variations and could also be used on the operator's side as a means to avoid inertia induced instability. It should be noted that Assumption 4 restricts the rate of growth of inertia but not the rate at which inertia may be removed from the network.

### _Stability theorem_

In this section we present our main stability result concerning the system (4)-(5). In particular, the following theorem, proven in the appendix, shows the local asymptotic convergence of solutions to (4)-(5).

**Theorem 1**: _Let Assumptions 3 and 4 hold and consider an equilibrium of (4)-(5) where Assumptions 1 and 2 hold. Then, there exists an open neighbourhood \(\Xi\) containing that equilibrium such that solutions \(\beta(t),t\geq 0\) to (4)-(5) asymptotically converge to the set of equilibria within \(\Xi\)._

Theorem 1 provides analytic guarantees for the stability of the power network in the presence of time-varying inertia. Note that the main conditions on the power supply dynamics and the varying inertia trajectories, described by Assumptions 2 and 3-4 respectively, are locally verifiable. These conditions may be used for the design of prototypes and guidelines for virtual inertia schemes and may motivate practical control designs that enhance the stability properties of the power grid. Furthermore, Assumption 2 includes a wide range of power supply dynamics, as demonstrated in the following section. In addition, Theorem 1 applies to any connected power network topology. Hence, all objectives of Problem 1 are satisfied.

## V Application examples

In this section we provide two examples of power supply dynamics that fit within the presented framework.
In addition, we present a systematic approach that allows us to obtain the strictness constant \(\rho_{j}\) associated with Assumption 2 for linear systems, which provides the local bound on the maximum rate of inertia growth presented in Assumption 4. In particular, we consider general linear power supply dynamics with minimal state space realization of the form

\[\dot{x}_{j}^{s}=A_{j}x_{j}^{s}+B_{j}(-\omega_{j}), \tag{7}\]
\[s_{j}=C_{j}x_{j}^{s}+D_{j}(-\omega_{j}),\]

where \(A_{j}\in\mathbb{R}^{n_{j}\times n_{j}},B_{j}\in\mathbb{R}^{n_{j}},C_{j}\in\mathbb{R}^{1\times n_{j}}\) and \(D_{j}\in\mathbb{R}\) are matrices describing the power supply dynamics at bus \(j\). For the dynamics described in (7) it can be deduced, by suitably adapting [37, Thm. 3], that the strictness constant \(\rho_{j}\) may be efficiently obtained as the solution of the following optimization problem:

\[\begin{split}&\max_{\hat{D},P}\;D_{j}-\hat{D}\\ s.t.&\quad P=P^{T}\succeq 0,\\ &\left[\begin{matrix}A_{j}^{T}P+PA_{j}&PB_{j}-C_{j}^{T}\\ B_{j}^{T}P-C_{j}&-2\hat{D}\end{matrix}\right]\preceq 0,\end{split} \tag{8}\]

i.e., when (8) is maximized at \(\hat{D}=\hat{D}^{*}\) and some \(P\), then \(\rho_{j}=D_{j}-\hat{D}^{*}\). The above problem can be solved in a computationally efficient manner, using standard semidefinite programming tools. In addition, the matrix \(P\) can be used to obtain the storage function associated with \(s_{j}\), as \(V(x_{j}^{s})=\frac{1}{2}(x_{j}^{s})^{T}Px_{j}^{s}\). Furthermore, it is intuitive to note that a larger value of \(D_{j}\), which describes the local damping, yields a larger strictness constant \(\rho_{j}\), as follows from (8). Below, we present two examples of power supply dynamics and explain how the strictness constant may be obtained in each case.

As a first example, we consider the first order generation dynamics considered e.g. in [38], which describe the time lag between changes in frequency and the response of generation units. The dynamics are given by

\[\begin{split}\tau_{j}\dot{x}_{j}&=-x_{j}-K_{j}\omega_{j},\\ s_{j}&=x_{j}-\lambda_{j}\omega_{j},\end{split} \tag{9}\]

where \(\tau_{j}>0,K_{j}>0\) and \(\lambda_{j}>0\) are the time, droop and damping constants respectively. The solution to (8) for system (9) is given by \(\hat{D}^{*}=0\) for any \(\tau_{j},K_{j}\), which results in \(\rho_{j}=\lambda_{j}\). The latter suggests, by Assumption 4, that \(\dot{M}_{j}<2\lambda_{j}\) should hold.

A more involved example that demonstrates the applicability of the presented analysis concerns the fifth order turbine governor dynamics provided by the Power System Toolbox [39]. This model is described in the Laplace domain by the following transfer function

\[G_{j}(s)=K_{j}\frac{1}{(1+sT_{s,j})}\frac{(1+sT_{3,j})}{(1+sT_{c,j})}\frac{(1+sT_{4,j})}{(1+sT_{5,j})}+\lambda_{j}, \tag{10}\]

relating the power supply output \(\hat{s}_{j}\) to the frequency deviation input \(-\hat{\omega}_{j}\), where \(K_{j}\) and \(T_{s,j},T_{3,j},T_{c,j},T_{4,j},T_{5,j}\) are the droop coefficient and time-constants respectively, and \(\lambda_{j}\) denotes the frequency damping. Realistic values for the coefficients in (10) are provided in [39]. To provide a numerical example of how the strictness constant \(\rho_{j}\) may be obtained, we consider the turbine governor dynamics at bus \(36\) of the NPCC network, where the above coefficients take the values \((K_{j},T_{s,j},T_{3,j},T_{c,j},T_{4,j},T_{5,j},\lambda_{j})=(110.1,0.45,0.1,0,13.25,54,30.3)\).
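As an illustration of how (8) can be posed with off-the-shelf semidefinite programming tools, the following sketch solves it in Python with cvxpy (the paper itself reports results obtained with the MATLAB CVX toolbox) for the first-order dynamics (9) written in the form (7). The parameter values are illustrative assumptions; the expected outcome is \(\hat{D}^{*}=0\), i.e. \(\rho_{j}=\lambda_{j}\).

```python
# Sketch: solving the LMI problem (8) with cvxpy for system (9) in form (7).
# Parameter values below are illustrative assumptions, not from the paper.
import numpy as np
import cvxpy as cp

tau, K, lam = 2.0, 5.0, 0.8        # assumed tau_j, K_j, lambda_j
A = np.array([[-1.0 / tau]])       # from tau*dx/dt = -x + K*(-omega)
B = np.array([[K / tau]])
C = np.array([[1.0]])
D = lam                            # since s = x + lam*(-omega)

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
Dhat = cp.Variable((1, 1))

lmi = cp.bmat([[A.T @ P + P @ A, P @ B - C.T],
               [B.T @ P - C,     -2 * Dhat]])
lmi = (lmi + lmi.T) / 2            # symmetric by construction; made explicit

prob = cp.Problem(cp.Maximize(D - Dhat[0, 0]), [P >> 0, lmi << 0])
prob.solve()

rho = D - Dhat.value[0, 0]         # strictness constant rho_j = D_j - Dhat*
print(f"rho_j ~= {rho:.4f}  (analytically, rho_j = lambda_j = {lam})")
# Assumption 4 then bounds the local inertia growth rate: dM_j/dt < 2*rho_j.
```

The same construction applies verbatim to a state-space realization of the fifth-order model (10); only the matrices \(A_{j},B_{j},C_{j},D_{j}\) change.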
By solving (8) using the CVX toolbox [40], we obtain a strictness coefficient \(\rho_{j}\) of approximately \(28.0\). A graphical approach can also be used to obtain the strictness constant \(\rho_{j}\) for systems described by (7). In particular, an approach to verify passivity3 for stable linear systems is to test that their transfer function is positive real, which is equivalent to the frequency response lying in the right half complex plane. The strictness constant \(\rho_{j}\) can then be obtained as the horizontal distance between the Nyquist plot and the imaginary axis. This is demonstrated in Fig. 1, which depicts the Nyquist plot for (10) with the coefficients given above. The distance between the plot and the imaginary axis matches exactly the value obtained by solving (8).

Fig. 1: Nyquist plot for (10), with coefficients associated with the turbine governor dynamics at bus \(36\) within the NPCC network. The strictness constant \(\rho_{j}\) corresponds to the horizontal distance between the Nyquist plot and the imaginary axis.

Footnote 3: Passive systems satisfy Definition 2 with strictness constant \(\rho_{j}=0\).

## VI Inertia induced instability

A key aspect highlighted in this paper is that varying inertia may cause unstable behaviour. In this section, we provide sufficient conditions that allow us to deduce, for any arbitrarily small deviation from the equilibrium frequency, the existence of single-bus inertia variations that cause substantial deviations in system trajectories. The latter describes unstable behaviour, see e.g. [29]. Hence, we aim to demonstrate how systematic unstable behaviour may result from inertia variations. For simplicity, we shall consider linearized power flow equations for the dynamics in (4). The dynamics of such a system are described below:

\[\dot{\eta}=H^{T}\omega, \tag{11a}\]
\[M\dot{\omega}=-p^{L}+s-Hp, \tag{11b}\]
\[p=B\eta. \tag{11c}\]

The equilibria of (5), (11) are described in analogy to Definition 1 as follows.

**Definition 3**: _The constants \((\eta^{*},\omega^{*},x^{s,*})\) define an equilibrium of the system (5), (11) if (6a)-(6c) hold, where in (6b) \(s^{*}\) is given by (6e) and \(p^{*}=B\eta^{*}\)._

Below, we demonstrate that the stability results presented in Section IV extend to (5), (11). In particular, the following lemma, proven in the appendix, demonstrates that Assumptions 2, 3 and 4 suffice to deduce the local asymptotic convergence of solutions to (5), (11). Note that Assumption 1 is redundant in this case due to the presence of linearized power flow equations.

**Lemma 2**: _Let Assumptions 3 and 4 hold and consider an equilibrium of (5), (11) where Assumption 2 holds. Then, there exists an open neighbourhood \(\Xi\) containing that equilibrium such that solutions \(\beta(t),t\geq 0\) to (5), (11) asymptotically converge to the set of equilibria within \(\Xi\)._

### _Conditions for instability_

To facilitate the analysis, we define the notion of \(\gamma\)-points. Such points result from considering the equilibria of (5), (11) when some bus \(k\) has a fixed frequency at all times. The notion of \(\gamma\)-points will be used to provide conditions that characterize the solutions to (5), (11).

**Definition 4**: _The constants \((\hat{\eta},\hat{\omega},\hat{x}^{s})\) define a \(\gamma\)-point of (5), (11) associated with fixed frequency \(\bar{\omega}\in\mathbb{R}\) at some bus \(k\) if the following hold_

\[\hat{\omega}_{k}=\bar{\omega}, \tag{12a}\]
\[0=H^{T}\hat{\omega}, \tag{12b}\]
\[0=-p_{j}^{L}+\hat{s}_{j}-\sum_{k\in\mathcal{N}_{j}^{s}}\hat{p}_{jk}+\sum_{l\in\mathcal{N}_{j}^{p}}\hat{p}_{lj},\;j\in\mathcal{N}\setminus\{k\}, \tag{12c}\]
\[\hat{x}_{j}^{s}=k_{x,j}(-\hat{\omega}_{j}),\;j\in\mathcal{N}, \tag{12d}\]
\[\hat{p}=B\hat{\eta}, \tag{12e}\]
\[\hat{s}_{j}=g_{j}(\hat{x}_{j}^{s},-\hat{\omega}_{j}),\;j\in\mathcal{N}. \tag{12f}\] Fig. 1: Nyquist plot for (10), with coefficients associated with the turbine governor dynamics at bus \(36\) within the NPCC network. The strictness constant \(\rho_{j}\) corresponds to the horizontal distance between the Nyquist plot and the imaginary axis. The set of points \((\hat{\eta},\hat{\omega},\hat{x}^{s})\) that satisfy (12) associated with fixed frequency \(\bar{\omega}\) is denoted by \(\Gamma(\bar{\omega})\). It is intuitive to note that the notion of \(\gamma\)-points follows by considering the theoretical case where the inertia at a single bus \(k\) within system (5), (11) is infinite. The equilibrium points of such a system when the initial conditions satisfy \(\omega_{k}(0)=\bar{\omega}\) are described by (12). Alternatively, \(\gamma\)-points may describe the equilibria of (5), (11) for some \(p^{L}\). It should be noted that for any set \(\Gamma(\bar{\omega})\) the values of \((\hat{\omega},\hat{x}^{s})\) are unique and satisfy \((\hat{\omega}_{j},\hat{x}_{j}^{s})=(\bar{\omega},k_{x,j}(-\bar{\omega})),j\in\mathcal{N}\), and that the set \(\Gamma(\bar{\omega})\) does not depend on the choice of bus \(k\) in (12a). Finally, note that \(\Gamma(\omega^{s,*})\) describes the set of equilibria of (5), (11), since \(\omega^{s,*}\in\mathbb{R}\) denotes the equilibrium frequency value. The following assumption is the main condition imposed to deduce inertia induced instability. **Assumption 5**: _The following hold for (5), (11), some bus \(k\) and some positive constants \(\hat{\epsilon},\bar{\epsilon},\Phi\) and \(\bar{\Phi}\) satisfying \(\hat{\epsilon}<\bar{\epsilon}<\Phi\) and \(\bar{\Phi}=\Phi+\bar{\epsilon}\):_ 1. _For any_ \(\bar{\omega}\in\mathcal{B}(\omega^{s,*},\Phi)\)_, there exist_ \(\epsilon>0,\hat{\tau}>0\) _such that when_ \(\omega_{j}(t)\in\mathcal{B}(\bar{\omega},\epsilon),j\in\mathcal{N},\) _for all_ \(t\in[0,\hat{\tau}]\)_, then_ \(\beta(\hat{\tau})\in\mathcal{B}(\gamma,\hat{\epsilon}),\gamma\in\Gamma(\bar{ \omega})\)_._ 2. _When_ \(M_{k}^{v}(t)=0,t\geq 0\) _and Assumption_ 4 _holds then, for any_ \(\bar{\omega}\in\mathcal{B}(\omega^{s,*},\bar{\Phi})\setminus\{\omega^{s,*}\}\) _and any solution to (_5_), (_11_) there exists_ \(\tau\) _such that_ \(\beta(0)\in\mathcal{B}(\gamma,\hat{\epsilon})\setminus\Gamma(\omega^{s,*}), \gamma\in\Gamma(\bar{\omega})\) _implies that_ \(|\omega_{k}(\tau)-\omega^{s,*}|>\bar{\omega}+\bar{\epsilon}\)_._ 3. _Assumption_ 2 _holds for all points in_ \(\Upsilon=\{(\bar{\omega},\tilde{x}^{s}):(\bar{\omega},\tilde{x}^{s})\in\Gamma( \bar{\omega}),\bar{\omega}\in\mathcal{B}(\omega^{s,*},\bar{\Phi})\}\)_. In addition, for all points in_ \(\Upsilon\)_, all neighbourhoods_ \(\Omega_{j}\) _of_ \(\tilde{\omega}_{j}\) _and_ \(X_{j}\) _of_ \(\tilde{x}_{j}^{s}\) _associated with Definition_ 2_, and regions of attraction_ \(\Psi_{j}\) _associated with_ \(\tilde{x}_{j}^{s}\) _in the description of (_5_), satisfy_ \(\Omega_{j}\times X_{j}\supseteq\bar{\Omega}_{j}\times\tilde{X}_{j},\Psi_{j}\supseteq \tilde{X}_{j}\)_, where_ \(\bar{\Omega}_{j}:=\{p:p\in\mathcal{B}(\omega^{s,*},\bar{\Phi})\}\) _and_ \(\tilde{X}_{j}:=\{p:p\in\mathcal{B}(\tilde{x}_{j}^{s},\bar{\Phi}),\tilde{x}_{j }^{s}\in\Gamma(\omega^{s,*})\},j\in\mathcal{N}\)_._ Assumption 5 is split into three parts.
Part (i) requires that when all frequencies lie in a ball of size \(\epsilon\) around \(\bar{\omega}\), then the solutions of the system converge to a ball of size \(\hat{\epsilon}\) around a \(\gamma\)-point in \(\Gamma(\bar{\omega})\) within some finite time \(\hat{\tau}\). This assumption is associated with Lemma 2, which states the convergence of the solutions to (5), (11). Assumption 5(i) is a mild condition that is expected to hold for almost all practical power systems. Assumption 5(ii) is the most important condition imposed, requiring that solutions initiated at any non-equilibrium point within a ball of size \(\hat{\epsilon}\) from a point within \(\Gamma(\bar{\omega})\) will be such that the frequency deviation from equilibrium at some given bus \(k\) is larger in magnitude than \(\bar{\omega}+\bar{\epsilon}\), with \(\bar{\epsilon}>\hat{\epsilon}\), at some time \(\tau\). This condition is important as it enables the main arguments in our instability analysis. We demonstrate with simulations in Section VII that Assumption 5(ii) applies to two realistic networks. Finally, Assumption 5(iii) requires that the local passivity and asymptotic stability properties on the power supply dynamics associated with Assumption 2 and the description below (5) hold for a broad range of points, i.e. for all points in \(\Upsilon\). In addition, it requires sufficiently large regions where these local conditions hold. Assumption 5(iii) could be replaced by the simpler, but more conservative, condition that Assumption 2 and the asymptotic stability properties on power supply dynamics hold globally for all points in \(\Upsilon\). In addition, it could further be relaxed by letting \(\Upsilon=\mathbb{R}^{|\mathcal{N}|+\sum_{j\in\mathcal{N}}n_{j}}\). ### _Instability theorem_ In this section we present our main instability results. In particular, the following theorem, proven in the appendix, demonstrates the existence of single-bus inertia trajectories that cause substantial frequency deviations from any non-zero initial frequency deviation at bus \(k\). The latter suggests unstable behaviour, as shown in Corollary 1 below. **Theorem 2**: _Let Assumption 3 hold, Assumption 4 hold for all \(j\in\mathcal{N}\setminus\{k\}\) and consider an equilibrium of (5), (11) where Assumption 5 holds for some bus \(k\). Then, for any \(\delta>0\) there exists a finite trajectory \(M_{k}(t)\geq M_{k}^{0},t\geq 0\) such that \(|\omega_{k}(0)-\omega^{s,*}|\geq\delta\) implies the existence of some finite time \(\hat{t}\) such that \(\beta(\hat{t})\notin\mathcal{B}(\gamma,\Phi)\) for any \(\gamma\in\Gamma(\omega^{s,*})\), where \(\Phi\) follows from Assumption 5._ Theorem 2 demonstrates the existence of single-bus inertia trajectories such that an arbitrarily small frequency deviation may result in substantial deviations in system trajectories. Hence, the stability properties of the power network may be compromised when suitable control is applied to the local inertia. The latter may motivate attacks on the inertia of the power network, which may cause large frequency oscillations. In addition, Theorem 2 demonstrates the importance of restricting inertia trajectories at all buses, as follows from Assumption 4, to deduce stability. It should also be noted that a result that trivially follows from Theorem 2 is the existence of inertia trajectories on multiple buses that cause instability.
The following result, proven in the appendix, is a corollary of Theorem 2 which deduces the existence of single-bus inertia trajectories that render an equilibrium point unstable. **Corollary 1**: _Let Assumption 3 hold, Assumption 4 hold for all \(j\in\mathcal{N}\setminus\{k\}\) and consider an equilibrium of (5), (11) where Assumption 5 holds for some bus \(k\). Then, there exists a finite trajectory \(M_{k}(t)\geq M_{k}^{0},t\geq 0\) such that the considered equilibrium is unstable._ ## VII Simulations In this section we present numerical simulations using the Northeast Power Coordinating Council (NPCC) \(140\)-bus system and the IEEE New York / New England \(68\)-bus system that further motivate and validate the main findings of this paper. In particular, we first demonstrate how varying virtual inertia may induce large oscillations in the power network. We then verify our analytic stability results by showing that the main imposed conditions, described by Assumptions 2 and 4, yield a stable behaviour. Finally, we demonstrate how single-bus inertia variations may yield large frequency oscillations, verifying Theorem 2. For our simulations, we used the Power System Toolbox [39] on Matlab. The model used by the toolbox is more detailed than our analytic one, including voltage dynamics, line resistances and a transient reactance generator model. ### _Simulations on the NPCC network_ The NPCC network consists of 47 generation and 93 load buses and has a total real power of \(28.55\) GW. For our simulation, we considered a step increase in demand of magnitude \(2\) p.u. (base \(100\) MVA) at load buses \(2\) and \(3\) at \(t=1\) second. The simulation precision was set at \(10\) ms. #### VII-A1 Inertia induced oscillations To demonstrate that varying inertia may induce large frequency oscillations and hence compromise the stability of the power network, we considered two cases: 1. no presence of virtual inertia, 2. the presence of virtual inertia at \(10\) generation buses (buses \(23,48,50,54,56,57,72,80,82\) and \(133\)) of magnitude \(M_{a}\), where \(M_{a}\) was equal to \(50\%\) of the physical inertia at bus \(133\). The trajectories of the virtual inertia associated with case (ii) were coupled with the frequency dynamics as follows: \[M_{j}^{v}(t)=\begin{cases}M_{a},\ \text{if}\ \omega_{m}(t)>0.02\ \text{Hz},\\ 0,\ \text{otherwise},\end{cases} \tag{13}\] where \(\omega_{m}(t)=\max_{j\in\mathcal{N}}|\omega_{j}(t)|\). The scheme in (13) adds inertia to the power system when a noticeable frequency deviation is experienced and removes it when the system returns close to the nominal frequency. The frequency response at bus \(70\) for the two considered cases is presented in Fig. 2. From Fig. 2, it can be seen that the addition of fast varying inertia yields large oscillations in the power network. The oscillations follow from the coupling between the frequency and generation dynamics. In particular, generators respond to frequency signals by appropriately adapting the generated power. However, when the inertia abruptly increases under a noticeable frequency deviation, following (13), it takes longer for the frequency to reach its steady state. The latter causes excess generation to be produced, which induces frequency overshoots when the inertia suddenly drops. The above process results in frequency oscillations, as verified in Fig. 2.
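To make this mechanism concrete, the following toy single-bus loop (a minimal sketch with made-up parameters; it is not the NPCC model, which additionally includes voltage dynamics, line resistances and a detailed generator model) wires the switching rule (13) into a swing equation driven by the first-order generation dynamics (9):

```python
import numpy as np

# Toy single-bus model: M(t) w' = -pL + s - lam*w (swing equation with a
# load step pL and damping lam), tau s' = -s - K*w (generation (9)), and
# the bang-bang virtual inertia rule (13) with a 0.02 threshold.
# All parameter values are illustrative assumptions, not system data.
dt, T = 1e-3, 20.0
M0, Ma, thr = 2.0, 40.0, 0.02
tau, K, lam, pL = 0.5, 15.0, 1.0, 0.3
w, s, traj = 0.0, 0.0, []
for _ in range(int(T / dt)):
    M = M0 + (Ma if abs(w) > thr else 0.0)   # rule (13) at a single bus
    w += dt * (-pL + s - lam * w) / M
    s += dt * (-s - K * w) / tau
    traj.append(w)
# With Ma = 0 the frequency settles near -pL/(K + lam); with Ma > 0 each
# crossing of the 0.02 threshold changes M abruptly, so generation keeps
# over- and under-shooting, which can sustain ringing of the kind
# illustrated in Fig. 2.
```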
Therefore, care needs to be taken when varying virtual inertia is introduced in the power network, particularly when its dynamics are coupled with those of the network. It should be noted that the possibly destabilising effects of varying inertia are visible even when a large and robust system, such as the NPCC network, is considered. The latter highlights the potential impact of virtual inertia schemes and provides further motivation for their proper regulation. In addition, note that we opted not to present the (widely acknowledged) benefits of (constant) virtual inertia for compactness and to keep the focus on the impact of varying virtual inertia. #### VII-A2 Stability preserving varying inertia To demonstrate the validity and applicability of the proposed conditions, we repeated the above simulation with the virtual inertia satisfying Assumption 4. In particular, we introduced varying virtual inertia of maximum magnitude \(M_{a}\) in the same set of generation buses as in case (ii). To comply with Assumption 4, the coupling between the virtual inertia dynamics and the frequency dynamics was given by: \[\dot{M}_{j}^{v}=\min(\tau(-M_{j}^{v}+u_{j}),2\rho_{j}-\epsilon), \tag{14a}\] \[u_{j}(t)=\begin{cases}M_{a},\ \text{if}\ \omega_{m}(t)>0.02\ \text{Hz},\\ 0,\ \text{otherwise},\end{cases} \tag{14b}\] where \(\tau=100\text{s}^{-1}\) was the time constant of the virtual inertia dynamics, selected such that a fast inertia variation is allowed, and \(u_{j}\) an input set point. In addition, \(\rho_{j}\) corresponded to the strictness constant at each bus, calculated following the approach presented in Section V, and \(\epsilon=10^{-4}\) was a small constant introduced to ensure that the inequality in Assumption 4 was satisfied. The scheme in (14) enabled fast variations in virtual inertia by setting a large value for \(\tau\) and simultaneously restricted its rate of growth in accordance with Assumption 4. To couple the frequency and inertia dynamics, the input \(u_{j}\) was set to \(M_{a}\) when the frequency deviation exceeded \(0.02\) Hz and to \(0\) otherwise, as follows from (14b). This case will be referred to as case (iii). The frequency response at bus \(70\) resulting from implementing (14) is depicted in Fig. 3. From Fig. 3, it follows that the proposed scheme yields a stable response for the power system, which validates the main analytical results of the paper. To demonstrate that Assumption 4 enables fast changes in inertia, the virtual inertia at bus \(133\) for cases (ii) and (iii) is depicted in Fig. 4. From Fig. 4, it follows that for case (iii), the maximum virtual inertia support is provided within \(0.3\) seconds from the time the frequency exceeds \(0.02\) Hz and is removed completely after \(3.5\) seconds. By contrast, in case (ii) the virtual inertia instantly reaches \(M_{a}\) at \(1\) second but fluctuates between \(0\) and \(M_{a}\). Its fast fluctuations create frequency oscillations which lead to more inertia fluctuations due to (13), yielding the oscillatory frequency response depicted in Fig. 2. Fig. 2: Frequency response at bus \(70\) when: (i) no virtual inertia is present and (ii) virtual inertia described by (13) is included at \(10\) buses. Fig. 3: Frequency response at bus \(70\) when: (i) case (iii) is implemented, with virtual inertia satisfying (14), (ii) the randomized scheme described by (14a), (15) is implemented, where green and red lines correspond to the maximum and minimum frequencies obtained after \(500\) trials respectively. Fig. 4: Virtual inertia at bus \(133\) for cases (ii) and (iii), concerning inertia described by (13) and (14) and leading to oscillatory and stable responses respectively. After \(3.5\) seconds, the virtual inertia is \(M_{a}\) at almost all times, i.e. with fast fluctuations, for case (ii) and zero at all times for case (iii).
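A corresponding sketch of the rate-limited law (14) only changes the inertia update in the toy single-bus loop given earlier (again with assumed parameters; \(\rho\) is the local strictness constant computed as in Section V, equal to \(\lambda\) for the dynamics (9)):

```python
import numpy as np

# Same toy single-bus loop as before, with the bang-bang rule replaced
# by the rate-limited dynamics (14). Parameters remain illustrative.
dt, T = 1e-3, 20.0
M0, Ma, thr = 2.0, 40.0, 0.02
tau, K, lam, pL = 0.5, 15.0, 1.0, 0.3
tau_M, eps, rho = 100.0, 1e-4, lam   # inertia time constant, margin, rho_j
w, s, Mv = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    u = Ma if abs(w) > thr else 0.0              # set point (14b)
    # Explicit Euler step of (14a): the min caps only the rate of
    # *growth* of the inertia, which is what Assumption 4 requires;
    # decreases through tau_M remain fast.
    Mv += dt * min(tau_M * (u - Mv), 2.0 * rho - eps)
    w += dt * (-pL + s - lam * w) / (M0 + Mv)
    s += dt * (-s - K * w) / tau
```

This asymmetry — fast withdrawal but growth bounded by \(2\rho_{j}-\epsilon\) — is the same property that produces the fast-but-bounded inertia build-up of case (iii) in Fig. 4.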
To further demonstrate that the proposed conditions yield a stable response, we considered randomly changing inertia input set points described by: \[u_{j}(t^{+})\!=\!\begin{cases}u_{j}(t),\text{ if }\mathrm{mod}(t,0.5)\neq 0,\\ u_{j}(t)\!+\!0.5M_{a},\text{ if }\mathrm{mod}(t,0.5)=0,r_{j}(t)\geq 0.5,\\ \max(u_{j}(t)\!-\!0.5M_{a},0),\text{ otherwise,}\end{cases} \tag{15}\] where \(t^{+}=\lim_{\epsilon\to 0}(t+\epsilon)\) and \(r_{j}(t)\) is randomly selected from the uniform distribution on \([0,1]\) at each \(t\) that satisfies \(\mathrm{mod}(t,0.5)=0\). The scheme in (15) updates the set points \(u_{j}\) every \(0.5\) seconds by equiprobably increasing or decreasing their values by \(0.5M_{a}\). Simultaneously, it ensures that the set points take non-negative values. The dynamics (14a), (15) were implemented and simulated \(500\) times on the same setting as cases (ii) and (iii), to show that stability is preserved for a wide range of varying virtual inertia profiles that satisfy Assumption 4. The latter is demonstrated in Fig. 3, which shows that the maximum and minimum frequencies obtained with the presented randomized scheme are very close to the response associated with case (iii). #### VII-A3 Instability inducing single-bus varying inertia To demonstrate that local inertia variations may result in unstable behaviour, we performed simulations on the above described setting and considered a very small disturbance of magnitude \(0.05\) p.u. (base 100MVA) at load buses \(2\) and \(3\) at \(t=1\) second. In addition, we considered no virtual inertia at all buses except bus \(23\). The design of the virtual inertia trajectory at bus \(23\) followed the arguments in the proof of Theorem 2. In particular, the virtual inertia shifted between a set of large values and zero when the frequency deviations from the nominal value were large and small respectively. The virtual inertia trajectory at bus \(23\) is depicted in Fig. 5. As follows from Fig. 5, the virtual inertia is piecewise-constant and takes zero values for short durations of time. The frequency response for the considered case is presented in Fig. 6, which depicts frequency oscillations that grow over time and eventually lead to instability after approximately 35 seconds. The presence of increasing frequency deviations from equilibrium is in agreement with Assumption 5(ii), which is a main condition to deduce instability. These results verify the analysis presented in Section VI. Fig. 5: Virtual inertia at bus \(23\) that resulted in unstable behaviour under constant inertia in the remaining power network. Fig. 6: Frequency response at bus \(23\) when varying virtual inertia is present at bus \(23\) and no virtual inertia is present in the remaining buses. ### _Simulations on the IEEE New York / New England 68-bus system_ The IEEE New York / New England system contains \(52\) load buses serving different types of loads, including constant active and reactive loads, and \(16\) generation buses. The overall system has a total real power of \(16.41\) GW. For our simulation, we considered a step increase in demand of magnitude \(2\) p.u. at load buses \(2\) and \(3\) at \(t=1\) second. The time precision of the simulation was \(0.01\) seconds. In analogy to Section VII-A, we aimed to demonstrate how varying inertia may induce large frequency oscillations and that the application of the proposed conditions enables a stable response. We considered the following three cases: 1. The presence of no virtual inertia. 2. The presence of virtual inertia in all generation buses, with dynamics described by (13), where \(M_{a}=20\) seconds.
3. The presence of virtual inertia in all generation buses with dynamics described by (14), where \(M_{a}=20\) seconds and \(\tau=100\mathrm{s}^{-1}\). Case (ii) includes fast varying, frequency dependent inertia trajectories, which do not abide by the conditions presented in this paper. On the other hand, case (iii) includes inertia with variations bounded by the local strictness constant \(\rho_{j}\). The frequency response at bus \(23\) for the three considered cases is depicted in Fig. 7. From Fig. 7, it follows that case (ii) yields an oscillatory frequency response, demonstrating how fast inertia oscillations may cause stability issues. On the other hand, a stable response was observed when case (iii) was implemented. The latter demonstrates how imposing a suitable bound on the rate of change of inertia may enable a stable response and verifies the analysis presented in Section IV. Fig. 7: Frequency response at bus \(23\) when: (i) no virtual inertia is present, (ii) virtual inertia with dynamics described by (13) is included in all generation buses and (iii) virtual inertia with dynamics described by (14) is included in all generation buses. To demonstrate how local inertia variations may result in unstable behaviour, we simulated the above described setting with a very small load disturbance of magnitude \(0.01\) p.u. at load buses \(2\) and \(3\) at \(t=1\) second. In addition, we considered varying inertia at bus \(57\) and constant inertia at all remaining buses. Similar to Section VII-A, the design of the virtual inertia trajectory at bus \(57\) followed the arguments in the proof of Theorem 2, i.e. large inertia values were employed during large frequency deviations and no virtual inertia was considered otherwise. The frequency response at a randomly selected bus (bus \(20\)) is presented in Fig. 8. Figure 8 depicts growing oscillations, caused by the inertia variations at bus \(57\). The fact that such a response follows from a normally negligible disturbance (of \(2\) MW) demonstrates the capability of local inertia variations to cause instability in power networks and verifies the analysis presented in Section VI. Fig. 8: Frequency response at bus \(20\) when varying virtual inertia is present at bus \(57\) and no virtual inertia is present at the remaining buses. ## VIII Conclusion We have investigated the stability properties of power networks with time-varying inertia within the primary frequency control timeframe. In particular, we considered the interaction between the frequency dynamics and a wide class of non-linear power supply dynamics in the presence of time-varying inertia. For the considered system, we provided asymptotic stability guarantees under two proposed conditions. The first condition required that the aggregate power supply dynamics at each bus satisfy a passivity related property. The second condition set a constraint on the maximum rate of growth of inertia that was associated with the local power supply dynamics. The proposed conditions are decentralized and applicable to arbitrary network topologies and may be used for the design of practical guidelines for virtual inertia schemes that will improve the reliability and enhance the stability properties of the power grid. In addition, to demonstrate their applicability, we explained how these conditions can be efficiently verified for linear power supply dynamics by solving a suitable linear matrix inequality optimization problem.
Our stability analysis is complemented with further analytic results that demonstrate how single-bus inertia variations may lead to instability. Numerical simulations on the NPCC 140-bus and New York / New England 68-bus systems offered additional motivation and validated our analytic results. In particular, the simulation results demonstrated how varying virtual inertia may induce large frequency oscillations and verified that the application of the proposed conditions resulted in a stable response. In addition, they illustrated how single-bus inertia variations may lead to unstable power system behaviour. ## Appendix This appendix includes the proofs of Lemmas 1 and 2, Theorems 1 and 2 and Corollary 1. Additionally, it includes Propositions 1, 2 and 3 that facilitate the proof of Theorem 2. _Proof of Lemma 1:_ Existence of a unique solution \(\beta(t)\) to (4)-(5) for all \(t\geq 0\) requires [29, Ch. 4.3] that (i) (4)-(5) is continuous in \(t\) and \(\beta\), and (ii) (4)-(5) is locally Lipschitz in \(\beta\) uniformly in \(t\in[0,\infty)\). Condition (i) is satisfied from Assumption 3 and the continuity of (4)-(5) in \(\beta\). Condition (ii) is satisfied since (4)-(5) is locally Lipschitz in \(\beta\) for given \(t\) and since \(M_{j}(t)\geq M_{j}^{0}>0\) at all times, which allows a uniform local Lipschitz constant to be obtained for all \(t\geq 0\). Hence, for any initial condition \(\beta(0)\in\mathbb{R}^{m}\) there exists a unique solution \(\beta(t),t\geq 0\) to (4)-(5). \(\blacksquare\) _Proof of Theorem 1:_ We will use Lyapunov arguments to prove Theorem 1 by treating (4)-(5) as a time-varying system with time-dependent parameter \(M(t)\). First, we consider the function \[V_{F}(M,\omega)=\frac{1}{2}\sum_{j\in\mathcal{N}}M_{j}(t)(\omega_{j}-\omega_{j}^{*})^{2},\] with time derivative along trajectories of (1b) given by \[\dot{V}_{F}=\sum_{j\in\mathcal{N}}\Big{[}\frac{\dot{M}_{j}}{2}(\omega_{j}-\omega_{j}^{*})^{2}+(\omega_{j}-\omega_{j}^{*})\big{(}-p_{j}^{L}+s_{j}-\sum_{k\in\mathcal{N}_{j}^{*}}p_{jk}+\sum_{l\in\mathcal{N}_{j}^{*}}p_{lj}\big{)}\Big{]}. \tag{16}\] In addition, we consider the function \[V_{P}(\eta)=\sum_{(k,l)\in\mathcal{E}}\int_{\eta_{kl}^{*}}^{\eta_{kl}}B_{kl}(\sin\phi-\sin\eta_{kl}^{*})d\phi.\] From (1a) and (1c), the derivative of \(V_{P}\) is given by \[\dot{V}_{P}=\sum_{(k,l)\in\mathcal{E}}B_{kl}(\sin\eta_{kl}-\sin\eta_{kl}^{*})(\omega_{k}-\omega_{l})=\sum_{(k,l)\in\mathcal{E}}(p_{kl}-p_{kl}^{*})(\omega_{k}-\omega_{l}). \tag{17}\] Furthermore, from Assumption 2 and Definition 2, it follows that there exist open neighbourhoods \(\Omega_{j}\) of \(\omega_{j}^{*}\) and \(X_{j}\) of \(x_{j}^{s,*}\) and continuously differentiable, positive semidefinite functions \(V_{j}(x_{j}^{s})\) such that \[\dot{V}_{j}\leq((-\omega_{j})-(-\omega_{j}^{*}))(s_{j}-s_{j}^{*})-\rho_{j}((-\omega_{j})-(-\omega_{j}^{*}))^{2}, \tag{18}\] where \(\rho_{j}>0\), for all \(\omega_{j}\in\Omega_{j},x_{j}^{s}\in X_{j}\) and for each \(j\in\mathcal{N}\).
We now consider the following Lyapunov candidate function \[V(M,\beta)=V_{F}(M,\omega)+V_{P}(\eta)+\sum_{j\in\mathcal{N}}V_{j}(x_{j}^{s}), \tag{19}\] recalling that \(\beta=(\omega,\eta,x^{s})\). Using (16)-(18) it follows that when \(\omega_{j}\in\Omega_{j},x_{j}^{s}\in X_{j},j\in\mathcal{N}\), then \[\dot{V}\leq\sum_{j\in\mathcal{N}}(\dot{M}_{j}/2-\rho_{j})(\omega_{j}-\omega_{j}^{*})^{2}\leq 0, \tag{20}\] where the first part follows by applying the equilibrium relation (6b) on (16), so that the supply terms of (16) cancel against the first term of (18) and the power flow terms cancel against (17), and the second part follows from Assumption 4. Function \(V_{F}\) has a global minimum at \(\omega=\omega^{*}\). In addition, Assumption 1 guarantees the existence of a neighbourhood of \(\eta^{*}\) where \(V_{P}\) is increasing, which suggests that \(V_{P}\) has a strict local minimum at \(\eta^{*}\). Moreover, from Assumption 2 and Definition 2 it follows that each \(V_{j},j\in\mathcal{N}\) has a strict local minimum at \(x_{j}^{s,*}\). Hence, \(V\) has a local minimum at \(\beta^{*}=(\omega^{*},\eta^{*},x^{s,*})\) that is independent of the value of \(M(t)\), since \(M_{j}(t)\geq M_{j}^{0}>0,t\geq 0,j\in\mathcal{N}\). In addition, \(\beta^{*}\) is a strict minimum associated with the states, i.e. locally the set \(\{\bar{\beta}:\bar{\beta}=\arg\min_{\beta}V(M,\beta),\forall M\in S:=\prod_{j\in\mathcal{N}}[M_{j}^{0},\infty)\}\) contains \(\beta^{*}\) only. We can now choose a neighbourhood of \(\beta^{*}\), denoted by \(B\), such that (i) \(\omega_{j}\in\Omega_{j},j\in\mathcal{N}\), (ii) \(x_{j}^{s}\in X_{j},j\in\mathcal{N}\), and (iii) all \(x_{j}^{s},j\in\mathcal{N}\) lie in their respective neighbourhoods \(\Psi_{j}\), as defined in Section II-B. Hence, it follows that for all \((M,\beta)\in S\times B\), \(V\) is a non-increasing function with a strict local minimum associated with the states at \(\beta^{*}\). Therefore, the connected component containing \(\beta^{*}\) of the set \(\Xi=\{\beta:\exists M\in S\text{ such that }V(M,\beta)\leq\epsilon\}\) is both compact and positively invariant5 with respect to (4)-(5) when \(\epsilon>0\) is sufficiently small. Footnote 5: Note that the compactness of the set \(\Xi\) follows from \(M_{j}\geq M_{j}^{0}>0,j\in\mathcal{N}\) at all times. Hence, \(\Xi\) is uniquely defined, i.e. it contains the values of \(\beta\) that guarantee \(V\leq\epsilon\) for some \(M_{j}\geq M_{j}^{0},j\in\mathcal{N}\). This set is obtained at \(M_{j}=M_{j}^{0},j\in\mathcal{N}\), since increasing \(M_{j}\) results in a smaller set of values for \(\beta\) when \(\omega\neq\omega^{*}\). In addition, although \(V\) depends on \(M\), (20) guarantees that if at some \(\tau\), \(\beta(\tau)\in\Xi\), then \(\beta(t)\in\Xi\) for all \(t\geq\tau\). We can now apply [29, Theorem 4.2] with the Lyapunov function \(V\) and the invariant set \(\Xi\) for the solutions \(\beta(t),t\geq 0\) of (4)-(5). From this result, we deduce that for any \((M(0),\beta(0))\in S\times\Xi\) it follows that \(\beta\to Q\) as \(t\rightarrow\infty\), where \(Q\) is the largest invariant set within \(\Xi\cap\{\beta:\dot{V}=0\}\). Within this set, it holds that \(\omega_{j}=\omega_{j}^{*},j\in\mathcal{N}\) from (20), which implies that (6a), (6b) and (6d) hold. It hence follows that \(\eta\) converges to a constant value \(\bar{\eta}\). In addition, the definitions in Section II-B suggest that \(\omega=\omega^{*}\) implies the convergence of \(x^{s}\) to \(x^{s,*}\). Therefore, we conclude that all solutions \(\beta(t),t\geq 0\) of (4)-(5) initiated in \(\Xi\) converge to the set of equilibrium points within \(\Xi\). The latter completes the proof.
\(\blacksquare\) _Proof of Lemma 2:_ The proof follows directly from the proof of Theorem 1 by letting \(V_{P}(\eta)=\sum_{(k,l)\in\mathcal{E}}B_{kl}(\eta_{kl}-\eta_{kl}^{*})^{2}/2\). In particular, it follows that locally around the considered equilibrium, solutions to (5), (11) satisfy (20). The rest of the arguments follow in analogy to the proof of Theorem 1. \(\blacksquare\) To facilitate the analysis associated with the instability results presented in Section VI, we consider the effect of having some bus \(k\) with a fixed frequency \(\bar{\omega}\) within system (5), (11). The dynamics of such a system are described below: \[\dot{\eta}=H^{T}\omega, \tag{21a}\] \[M_{j}\dot{\omega}_{j}=-p_{j}^{L}+s_{j}-\sum_{k\in\mathcal{N}_{j}^{*}}p_{jk}+\sum_{l\in\mathcal{N}_{j}^{*}}p_{lj},\ j\in\mathcal{N}\setminus\{k\}, \tag{21b}\] \[\omega_{k}=\bar{\omega}, \tag{21c}\] \[p=B\eta. \tag{21d}\] The system represented by (21) aims to describe the behaviour of (11) when the inertia of bus \(k\) is infinite and \(\omega_{k}(0)=\bar{\omega}\). Although the assumption of infinite inertia is unrealistic, the trajectories of (5), (21) approximate those of (5), (11) for a long time interval when the inertia at bus \(k\) is sufficiently large. The latter makes it possible to explore several properties of (5), (11) and facilitates our instability analysis. It should be noted that there is a direct relation between (21) and the analysis presented in Section VI, since it can be trivially shown that \(\gamma\)-points, defined in Definition 4, coincide with the equilibria of (5), (21). The following proposition demonstrates the convergence of solutions to (5), (21) to the set of its equilibria, similarly to Theorem 1. In addition, it provides conditions that allow us to deduce convergence to an equilibrium point of (5), (21). It should be clarified that within Proposition 1, and also Proposition 2 below, Assumptions 3, 4 refer to all buses besides bus \(k\). The latter follows since no inertia is defined for bus \(k\). **Proposition 1**: _Let Assumptions 3 and 4 hold and consider an equilibrium of (5), (21) where Assumption 2 holds. Then, there exists an open neighbourhood \(\Xi\) containing that equilibrium such that_ 1. _solutions_ \(\beta(t),t\geq 0\) _to (_5_), (_21_) initiated in_ \(\Xi\) _asymptotically converge to the set of equilibria within_ \(\Xi\)_,_ 2. _if Assumption_ 2 _holds for all equilibria within_ \(\Xi\)_, then solutions_ \(\beta(t),t\geq 0\) _to (_5_), (_21_) initiated in_ \(\Xi\) _converge to an equilibrium point within_ \(\Xi\)_._ _Proof of Proposition 1:_ The proof is split in two parts, regarding each statement in Proposition 1. _Part (i):_ The proof follows by using Lyapunov arguments, similar to the proof of Theorem 1, by treating (5), (21) as a time-varying system. First, we consider the function \[V_{G}(M,\omega)=\frac{1}{2}\sum_{j\in\mathcal{N}\setminus\{k\}}M_{j}(t)(\omega_{j}-\omega_{j}^{*})^{2},\] with time derivative along trajectories of (21b) given by \[\dot{V}_{G}=\sum_{j\in\mathcal{N}\setminus\{k\}}\Big{[}\frac{\dot{M}_{j}}{2}(\omega_{j}-\omega_{j}^{*})^{2}+(\omega_{j}-\omega_{j}^{*})\big{(}-p_{j}^{L}+s_{j}-\sum_{k\in\mathcal{N}_{j}^{*}}p_{jk}+\sum_{l\in\mathcal{N}_{j}^{*}}p_{lj}\big{)}\Big{]}. \tag{22}\] In addition, we consider the function \(V_{H}(\eta)=\sum_{(k,l)\in\mathcal{E}}B_{kl}(\eta_{kl}-\eta_{kl}^{*})^{2}/2\), with derivative given by (21a) and (21d) as \[\dot{V}_{H}=\sum_{(m,l)\in\mathcal{E}}(p_{ml}-p_{ml}^{*})(\omega_{m}-\omega_{l}). \tag{23}\]
Furthermore, from Assumption 2 and Definition 2, it follows that there exist open neighbourhoods \(\Omega_{j}\) of \(\omega_{j}^{*}\) and \(X_{j}\) of \(x_{j}^{s,*}\) and continuously differentiable, positive semidefinite functions \(V_{j}(x_{j}^{s})\) such that (18) holds. We now consider the following Lyapunov candidate \[V(M,\beta)=V_{G}(M,\omega)+V_{H}(\eta)+\sum_{j\in\mathcal{N}}V_{j}(x_{j}^{s}),\] recalling that \(\beta=(\omega,\eta,x^{s})\). Using (18), (22), (23) it follows that \[\dot{V}\leq\sum_{j\in\mathcal{N}\setminus\{k\}}(\dot{M}_{j}/2-\rho_{j})(\omega_{j}-\omega_{j}^{*})^{2}\leq 0, \tag{24}\] by applying (6b) on (22) and using Assumption 4. We then define a compact set \(\Xi\) that includes \(\beta^{*}\) in analogy to the proof of Theorem 1. Using [29, Theorem 4.2] and similar arguments as in the proof of Theorem 1, it follows that solutions initiated in \(\Xi\) converge to the set of equilibria within \(\Xi\) as \(t\to\infty\). _Part (ii):_ Part (i) proved that solutions initiated in \(\Xi\) converge to the set of equilibria within \(\Xi\) and that \(\Xi\) is a compact invariant set of (5), (21). Hence, the considered equilibrium point is Lyapunov stable [29, Definition 3.1]. Now if Assumption 2 holds for all equilibria within \(\Xi\), it follows that these equilibria are also Lyapunov stable. The latter allows us to use [29, Th. 4.20] to deduce that all solutions starting in \(\Xi\) converge to an equilibrium point within \(\Xi\). \(\blacksquare\) The following proposition shows that system (5), (21) has arbitrarily long periods of time where all bus frequencies lie within a ball of size \(\epsilon\) from \(\bar{\omega}\), for any positive value of \(\epsilon\). **Proposition 2**: _Let Assumptions 3 and 4 hold and consider an equilibrium of (5), (21) where Assumption 2 holds. Then, there exists an open neighbourhood \(\Xi\) containing that equilibrium such that for any solution \(\beta(t),t\geq 0\) to (5), (21) initiated in \(\Xi\) and any \(\tau,\epsilon\in\mathbb{R}_{+}\) there exists some \(\hat{\tau}\in\mathbb{R}_{+}\) such that \(\omega_{j}(t)\in\mathcal{B}(\bar{\omega},\epsilon),t\in[\hat{\tau},\hat{\tau}+\tau],j\in\mathcal{N}\)._ _Proof of Proposition 2:_ First note that the results presented in Proposition 1 hold, since all associated assumptions are satisfied. Then, from (24) and the Lipschitz continuity of (5), (21), it follows that for any \(\tau,\epsilon\in\mathbb{R}_{+}\) there exists some \(\delta>0\) such that for any \(\bar{\tau}\), if \(\omega_{j}(t)\notin\mathcal{B}(\bar{\omega},\epsilon)\) for some \(j\in\mathcal{N}\) and some \(t\in[\bar{\tau},\bar{\tau}+\tau]\) then \(V(\bar{\tau}+\tau)\leq V(\bar{\tau})-\delta\). Noting that \(V(0)\) is bounded and \(V(t)\geq 0,t\geq 0\) allows us to deduce the above result by contradiction. \(\blacksquare\) The following proposition states the existence of a virtual inertia trajectory \(M_{k}^{v}(t),t\geq 0\) such that solutions to (5), (11) and (5), (21) are arbitrarily close for arbitrarily long, but finite, time intervals. For convenience in presentation, we consider \(\omega_{k}\) as a state of system (5), (21) that keeps a constant value at all times. The latter suggests that the statement regarding the initial conditions of (5), (11) and (5), (21) implies that \(\omega_{k}(0)=\bar{\omega}\). In addition, by slightly abusing notation, we denote trajectories of (5), (11) and (5), (21) by \(\beta(t)\) and \(\hat{\beta}(t)\) respectively. **Proposition 3**: _Let Assumption 3 hold._
_Then, for any \(\tau,\epsilon\in\mathbb{R}_{+}\) and any bounded trajectory for \(M_{j}^{v}(t),j\in\mathcal{N}\setminus\{k\},t\geq 0\), there exists a trajectory for \(M_{k}^{v}(t),t\geq 0\) such that solutions to (5), (11) and (5), (21) with \(\beta(0)=\hat{\beta}(0)\) satisfy \(\left\|\beta(t)-\hat{\beta}(t)\right\|\leq\epsilon,t\in[0,\tau]\)._ _Proof of Proposition 3:_ First note that both (5), (11) and (5), (21) are locally Lipschitz due to Assumption 3 and the conditions on (5). The proof follows by considering (11) and noting that when \(M_{k}^{v}\to\infty\) the frequency derivative at bus \(k\) tends to zero and hence the frequency at bus \(k\) is constant. Hence, for any bounded trajectory for \(M_{j}^{v}(t),j\in\mathcal{N}\setminus\{k\},t\geq 0\), when \(\omega_{k}(0)=\bar{\omega}\) and \(M_{k}^{v}=\infty\) the dynamics described by (11) and (21) are identical. The latter suggests the existence of a sufficiently large, but finite, value \(\bar{M}\) such that for any finite time \(\tau\) and any \(\epsilon>0\), \(M_{k}^{v}(t)\geq\bar{M},t\in[0,\tau]\) implies that \(\left\|\beta(t)-\hat{\beta}(t)\right\|\leq\epsilon,t\leq\tau\). The latter completes the proof. \(\blacksquare\) _Proof of Theorem 2:_ To prove Theorem 2, we will define an iterative process and provide properties for the trajectory of \(M_{k}\) such that there exist sequences of time instants \(\hat{t}_{i},i\in\mathbb{N}_{+}\) and positive values \(\phi_{i},i\in\mathbb{N}_{+}\), satisfying \(\hat{t}_{i}>\hat{t}_{j}\) and \(\phi_{i}>\phi_{j}\) for \(i>j\), such that \(\beta(\hat{t}_{i})\notin\mathcal{B}(\gamma,\phi_{i}),\gamma\in\Gamma(\omega^{s,*}),i\in\mathbb{N}_{+}\). In addition, we will demonstrate the existence of some finite iteration \(n\) such that \(\phi_{n}\geq\Phi\), where \(\Phi>0\) is defined in Assumption 5. From the theorem statement, it is assumed that \(|\omega_{k}(0)-\omega^{s,*}|=\delta\). Then consider Proposition 3. The latter claims that for any finite \(\epsilon_{1},\tau\), there exists a trajectory for \(M_{k}^{v}(t)\) such that solutions to (5), (11) and (5), (21) with the same initial conditions have a distance of at most \(\epsilon_{1}\), for \(t\leq\tau\). In addition, from Proposition 2 it follows that for any \(\tau_{2},\epsilon_{2}\in\mathbb{R}_{+}\) there exists some \(\hat{\tau}_{2}\in\mathbb{R}_{+}\) such that solutions to (5), (21) satisfy \(\omega_{j}(t)\in\mathcal{B}(\bar{\omega},\epsilon_{2}),t\in[\hat{\tau}_{2},\hat{\tau}_{2}+\tau_{2}],j\in\mathcal{N}\). Using the previous argument and letting \(\bar{\omega}=\omega_{k}(0)\) suggests that solutions to (5), (11) satisfy \(\omega_{j}(t)\in\mathcal{B}(\bar{\omega},\epsilon),t\in[\hat{\tau}_{2},\hat{\tau}_{2}+\tau_{2}],j\in\mathcal{N}\), where \(\epsilon=\epsilon_{1}+\epsilon_{2}\). Assumption 5(i) suggests the existence of some finite \(\hat{\tau}\) such that the previous statement implies that \(\beta(\hat{\tau})\in\mathcal{B}(\gamma,\hat{\epsilon}),\gamma\in\Gamma(\bar{\omega}),\hat{\epsilon}>0\). The latter additionally requires that \(\tau\geq\hat{\tau}\), which can be achieved by suitably selecting \(M_{k}^{v}\) following Proposition 3. We now define \(\hat{t}_{1}=\tau\) and \(\bar{\omega}^{1}=\delta\). From Assumption 5(ii), there exists some \(\tau^{1}\) such that \(M_{k}(t)=M_{k}^{0},t\in[\hat{t}_{1},\hat{t}_{1}+\tau^{1}]\) implies that solutions to (5), (11) satisfy \(|\omega_{k}(\hat{t}_{1}+\tau^{1})-\omega^{s,*}|>\bar{\omega}^{1}+\bar{\epsilon},\bar{\epsilon}>\hat{\epsilon}\).
If for some \(\bar{t}\in[\hat{t}_{1},\hat{t}_{1}+\tau^{1}]\) it holds that \(\beta(\bar{t})\notin\mathcal{B}(\gamma,\Phi),\gamma\in\Gamma(\omega^{s,*})\), then the proof is complete. Otherwise, we let \(\phi_{1}=\bar{\omega}^{1}+\bar{\epsilon}\), set a sufficiently large value for \(M_{k}^{v}\) at \(t=\hat{t}_{1}+\tau^{1}\), as follows from Proposition 3, and repeat the above process iteratively6, using Assumption 5(iii) to deduce the convergence arguments7 associated with Proposition 1. This process creates a sequence of time instants \(\hat{t}_{l}\), associated with each iteration \(l\in\mathbb{N}_{+}\), satisfying \(\hat{t}_{l+1}>\hat{t}_{l}\), such that \(|\omega_{k}(\hat{t}_{l+1})-\omega^{s,*}|\geq|\omega_{k}(\hat{t}_{l})-\omega^{s,*}|+\bar{\epsilon}-\hat{\epsilon}\), which implies that \(\phi_{l+1}>\phi_{l}+\bar{\epsilon}-\hat{\epsilon}\). Footnote 6: It should be noted that Assumption 3 requires that inertia trajectories are locally Lipschitz in time. Assumption 3 is satisfied by considering trajectories where the virtual inertia changes linearly between zero and the considered sufficiently large values and vice versa within some time duration \(\bar{\delta}\) prior to \(\hat{t}_{i}\) and \(\hat{t}_{i}+\tau^{i}\) respectively, for some sufficiently small value of \(\bar{\delta}\). Footnote 7: In particular, Assumption 5(iii) suggests that the regions where the asymptotic stability and passivity arguments used to define the set \(\Xi\) in the proof of Proposition 1 hold are supersets of the regions where the associated trajectories considered in the proof arguments lie. Since the value of \(\Phi\) is bounded, it is exceeded by \(\phi_{n}\) after a finite number of iterations (no more than \(\Phi/(\bar{\epsilon}-\hat{\epsilon})\)). Since the time required for each iteration is finite, there exists some finite time \(\bar{t}\) such that \(\beta(\bar{t})\notin\mathcal{B}(\gamma,\Phi),\gamma\in\Gamma(\omega^{s,*})\). Noting that no assumption is made for the magnitude of \(\delta\), and hence that the above arguments hold for any \(\delta>0\), completes the proof. \(\blacksquare\) _Proof of Corollary 1:_ The proof follows directly from Theorem 2, which makes the same assumptions. In particular, an unstable equilibrium is defined as an equilibrium that is not stable, e.g. [29, Def. 4.1]. For a system described by \(\dot{x}=f(x)\), a stable equilibrium satisfies the property that for any \(\epsilon>0\), there exists some \(\delta>0\) such that \(\|x(0)\|<\delta\) implies \(\|x(t)\|<\epsilon,t\geq 0\). Theorem 2 states that for any \(\delta>0\), there exists an inertia trajectory at bus \(k\) such that \(|\omega_{k}-\omega^{s,*}|\geq\Phi\) at some time. The latter suggests that under specific inertia trajectories, there exists some \(\epsilon\) (i.e. any \(\epsilon<\Phi\)) such that there does not exist any \(\delta>0\) such that the resulting trajectories for (5), (11) are bounded by \(\epsilon\). The latter completes the proof. \(\blacksquare\)
2301.06131
Volume Product
Our purpose here is to give an overview of known results and open questions concerning the volume product ${\mathcal P}(K)=\min_{z\in K}{\rm vol}(K){\rm vol}((K-z)^*)$ of a convex body $K$ in ${\mathbb R}^n$. We present a number of upper and lower bounds for ${\mathcal P}(K)$, in particular, we discuss the Mahler's conjecture on the lower bound of ${\mathcal P}(K)$, which is still open. We also show connections of ${\mathcal P}(K)$ with different parts of modern mathematics, including Geometric Number Theory, Convex Geometry, Analysis, Harmonic Analysis as well as Systolic and Symplectic Geometries and Probability.
Matthieu Fradelizi, Mathieu Meyer, Artem Zvavitch
2023-01-15T16:16:01Z
http://arxiv.org/abs/2301.06131v1
# Volume product ###### Abstract. Our purpose here is to give an overview of known results and open questions concerning the volume product \(\mathcal{P}(K)=\min_{z\in K}\operatorname{vol}(K)\operatorname{vol}((K-z)^{*})\) of a convex body \(K\) in \(\mathbb{R}^{n}\). We present a number of upper and lower bounds for \(\mathcal{P}(K)\); in particular, we discuss Mahler's conjecture on the lower bound of \(\mathcal{P}(K)\), which is still open. We also show connections of \(\mathcal{P}(K)\) with different parts of modern mathematics, including Geometric Number Theory, Convex Geometry, Analysis, Harmonic Analysis as well as Systolic and Symplectic Geometries and Probability. Key words and phrases:convex bodies, polar bodies, volume product, Mahler's conjecture, Blaschke-Santalo inequality, Equipartitions 2010 Mathematics Subject Classification: 52A20, 52A40, 53A15, 52B10 A.Z. is supported in part by the U.S. National Science Foundation Grant DMS-1101636, CNRS and U.S. National Science Foundation under Grant No. DMS-1929284 while A.Z. was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Harmonic Analysis and Convexity semester program. ## 1. Introduction More or less attached to the name of Kurt Mahler (1903-1988), there are at least two celebrated unsolved problems: * Lehmer's problem on algebraic numbers * The lower bound for the volume product of convex bodies. The celebrity of these two problems comes from the fact that they are both very natural and easy to state, but still unsolved, and that for almost one century a lot of mathematicians gave partial results, equivalent statements or many generalizations. Interesting attempts to resolve those problems still appear every now and then, producing connections of those questions to many areas of modern mathematics. Although we shall be interested here in the second one, for the curiosity of the reader we summarize the first one. Let \(\alpha\) be an algebraic integer and \(P\) be the minimal polynomial of \(\alpha\), that is, the polynomial with integer coefficients of smallest degree \(d\) such that the coefficient of \(x^{d}\) is \(1\) and \(P(\alpha)=0\). Let \(\alpha_{1}=\alpha\) and \(\alpha_{2},\dots,\alpha_{d}\in\mathbb{C}\) be the other roots of \(P\). The _Mahler measure_ of \(\alpha\) is the number \(M(\alpha):=\prod_{k=1}^{d}\max(|\alpha_{k}|,1)\). By a classical result of Kronecker, if \(M(\alpha)=1\), then \(\alpha\) is a root of unity. But how close can \(M(\alpha)\) be to \(1\) when \(\alpha\) is not a root of unity? Is there a constant \(c>1\), independent of the degree of \(\alpha\), such that \(M(\alpha)>1\) implies that \(M(\alpha)>c\)? Derrick Henry Lehmer conjectured in 1933 [Leh] that the answer to this question is positive (we note that a lot of estimates were given for \(c\) depending on the degree of \(\alpha\)) and Mahler contributed to it, at least, by defining the measure to which his name was given [Sm, VG]. We shall be mainly concerned here with the second problem: Let \(K\) be a convex symmetric body in \(\mathbb{R}^{n}\), which is the unit ball of an \(n\)-dimensional normed space \(E\). Let \(K^{*}\) be the polar body of \(K\), which is the unit ball of \(E^{*}\), the dual of \(E\). What are the bounds for the _volume product_ \(\mathcal{P}(E)=\mathcal{P}(K):=\operatorname{vol}(K)\operatorname{vol}(K^{*})\)?
It appears that the best upper bounds have been known for a long time (Blaschke 1917 for \(n=2,3\) [Bl1], Santalo 1949 for \(n\geq 4\) [San2]), but finding the exact lower bounds is still an open conjecture, although the asymptotic behavior of \(\min\{\mathcal{P}(E);E\ n\text{-dimensional normed space}\}\) was discovered almost 40 years ago by Bourgain and V. Milman [BM]. This problem has a lot of generalizations and specializations. One can ask a series of very natural questions, including: What happens if \(K\) is no longer centrally symmetric? What happens for special classes of convex bodies? Is there a functional version of the volume product? What are the possible applications and connections inside and outside convex geometry? We must also note that a lot of new methods were used, in particular from functional analysis, harmonic analysis, topology, differential geometry and probability, to prove properties of the volume product and to attack different cases of this question. We shall try here to explain just some of them and summarize the others. The paper is structured in the following way. We introduce the volume product and prove its basic properties in Section 1.1. In Section 1.2, we describe the method of shadow systems, which turns out to be essential in the study of the bounds for the volume product. In Section 2, we discuss the upper bound for the volume product - the Blaschke-Santalo inequality. We present different proofs, including a proof via Steiner symmetrizations and a harmonic analysis approach; we also discuss a number of extensions of this inequality. In Section 3, we discuss the conjectured lower bound - the Mahler conjecture. We present a solution in a number of partial cases, including the case of zonoids, of unconditional bodies and of dimension 2, and a very recent solution for symmetric 3-dimensional bodies. We also present here an approach to stability results for both the upper and lower bounds. Section 4 is dedicated to the asymptotic lower estimates for the volume product, in particular to the Bourgain-Milman inequality. Here, we extend our presentation to the harmonic analytic and complex analytic approaches to the volume product. Section 5 is dedicated to the functional inequalities related to the volume product, with a special connection to transport inequalities. In Section 6, we discuss the generalization of the volume product to the case of many functions and bodies. Finally, in Section 7, we present a sample of connections of the bounds for the volume product to other inequalities, including the slicing conjecture, Viterbo's conjecture, applications to the geometry of numbers and isosystolic inequalities. We refer the reader to [1, 2, 3, 4, 5, 6, 7] for much additional information on convex bodies, volume and mixed volume, duality and other core objects in analysis, geometry and convexity used in this survey. ### Acknowledgments We are grateful to Richard Gardner, Dmitry Faifman, Dylan Langharst, Erwin Lutwak and Shlomo Reisner for many corrections, valuable discussions and suggestions. ### Notations and results before 1980 A convex body \(K\) in \(\mathbb{R}^{n}\) is a convex compact subset of \(\mathbb{R}^{n}\) with nonempty interior, denoted \(\operatorname{int}(K)\). We say that \(L\subset\mathbb{R}^{n}\) is centrally symmetric if \(L=-L\).
Let \(K\) be a convex body in \(\mathbb{R}^{n}\) such that \(0\in\operatorname{int}(K)\); for \(x\in\mathbb{R}^{n}\), we define \[\|x\|_{K}=\inf\{t>0;\ x\in tK\}\] to be the _gauge_ of \(K\); in particular, when \(K\) is a convex symmetric body, \(x\mapsto\|x\|_{K}\) is the norm on \(\mathbb{R}^{n}\) for which \(K\) is the closed unit ball. We endow \(\mathbb{R}^{n}\) with its natural scalar product, denoted \(\langle\cdot,\cdot\rangle\), and the associated Euclidean norm, denoted \(|\cdot|\); the Euclidean ball of radius one is denoted \(B_{2}^{n}\). The canonical Lebesgue measure of a Borel set \(A\subset\mathbb{R}^{n}\) is denoted by \(\operatorname{vol}(A)\). Let \(K\) be a convex body. If \(0\in\operatorname{int}(K)\), the _polar body_ \(K^{*}\) is defined by \[K^{*}=\{y\in\mathbb{R}^{n};\langle x,y\rangle\leq 1\text{ for all }x\in K\}. \tag{1}\] Then \(K^{*}\) is also a convex body such that \(0\in\operatorname{int}(K^{*})\) and if \(T:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a linear isomorphism, one has \[\big{(}T(K)\big{)}^{*}=(T^{*})^{-1}(K^{*}),\] where \(T^{*}\) is the adjoint of \(T\). More generally, for a convex body \(K\) and \(z\in\operatorname{int}(K)\), one defines _the polar body \(K^{z}\) of \(K\) with respect to \(z\)_ by \[K^{z}=(K-z)^{*}+z.\] The celebrated _bipolar theorem_ asserts that if \(0\in\operatorname{int}(K)\), then \[(K^{*})^{*}=K\text{ and consequently that }(K^{z})^{z}=K\] for any convex body \(K\) and any \(z\in\operatorname{int}(K)\). Let \(h_{K}(y)=\max_{x\in K}\langle x,y\rangle\) be the _support function_ of \(K\). Note that \(K^{*}=\{h_{K}\leq 1\}\), i.e. \(h_{K}(x)=\|x\|_{K^{*}}\), when \(0\in\operatorname{int}(K).\) Moreover, if \(z\in\operatorname{int}(K)\), \[K^{z}=z+\{y\in\mathbb{R}^{n};h_{K}(y)-\langle z,y\rangle\leq 1\}\] and thus \[\operatorname{vol}(K^{z})=\ \int_{K^{*}}\frac{1}{(1-\langle z,y\rangle)^{n+1}}dy.\] It follows that the map \(z\mapsto\operatorname{vol}(K^{z})\) is a strictly convex positive \(C^{\infty}\) function on \(\operatorname{int}(K)\). A small effort is enough to prove that \(\operatorname{vol}(K^{z})\to+\infty\) when \(z\) approaches the boundary of \(K\). Consider \(\theta\in S^{n-1}\). By Brunn's theorem, the function \(f_{\theta}:[-h_{K^{*}}(-\theta),h_{K^{*}}(\theta)]\to[0,\infty)\) defined by \(f_{\theta}(t):=\operatorname{vol}_{n-1}(\{y\in K^{*};\langle y,\theta\rangle=t\})\) satisfies that \(f_{\theta}^{1/(n-1)}\) is concave. Hence, one has \(f_{\theta}(t)\geq f_{\theta}(0)(1-t/h_{K^{*}}(\theta))^{n-1}\) for \(0\leq t\leq h_{K^{*}}(\theta)\). Let \(r_{K}(\theta)=\max\{a\geq 0:a\theta\in K\}\) be the radial function of \(K\). Then \(r_{K}(\theta)=1/h_{K^{*}}(\theta)\) and letting \(z=s\theta\) for \(0\leq s<r_{K}(\theta)\), we get \[\operatorname{vol}(K^{z})=\int_{K^{*}}\frac{1}{(1-\langle z,y\rangle)^{n+1}}dy=\int_{-h_{K^{*}}(-\theta)}^{h_{K^{*}}(\theta)}\frac{f_{\theta}(t)}{(1-st)^{n+1}}dt\] \[\geq f_{\theta}(0)\int_{0}^{h_{K^{*}}(\theta)}\frac{(1-t/h_{K^{*}}(\theta))^{n-1}}{(1-st)^{n+1}}dt=\frac{f_{\theta}(0)}{n(r_{K}(\theta)-s)}\geq\frac{\min_{\theta\in S^{n-1}}\ f_{\theta}(0)}{n(r_{K}(\theta)-s)}\to+\infty,\] when \(s\to r_{K}(\theta)\), that is, when \(z\to\partial K\). Consequently, the function \(z\mapsto\operatorname{vol}(K^{z})\) reaches its minimum on \(\operatorname{int}(K)\) at a unique point \(s(K)\), called the _Santalo point_ of \(K\). Computing its differential, we see that \(s(K)\) is characterized by the fact that the centroid (center of mass) of \(K^{s(K)}\) is \(s(K)\) (see [2]).
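As a small numerical illustration of these objects in the plane (a sketch only; the polygon routines and test bodies below are ours), consider centrally symmetric polygons: there the Santalo point is the origin, so the quantity \(\operatorname{vol}(K)\operatorname{vol}(K^{*})\) — the volume product defined just below — can be computed from the vertex-facet duality of polygons.

```python
import numpy as np

def polar_polygon(V):
    # Polar of a convex polygon with vertices V in counter-clockwise
    # order and 0 in its interior: the edge through a, b, supported by
    # {x : <x, n> = c} with outward normal n, gives the vertex n / c.
    W = []
    for i in range(len(V)):
        a, b = V[i], V[(i + 1) % len(V)]
        e = b - a
        n = np.array([e[1], -e[0]])          # outward normal (ccw order)
        W.append(n / (n @ a))
    return np.array(W)

def area(V):
    # Shoelace formula for the area of a polygon.
    x, y = V[:, 0], V[:, 1]
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))

square = np.array([[1., 1.], [-1., 1.], [-1., -1.], [1., -1.]])
print(area(square) * area(polar_polygon(square)))  # 8 = 4^n/n! for n = 2

t = 2 * np.pi * np.arange(200) / 200               # regular 200-gon ~ disc
ngon = np.stack([np.cos(t), np.sin(t)], axis=1)
print(area(ngon) * area(polar_polygon(ngon)))      # close to pi^2
```

The two outputs, \(8\) and (approximately) \(\pi^{2}\), are respectively the minimal value of the volume product of a centrally symmetric planar body (Mahler's two-dimensional lower bound, recalled below) and the maximal one (the Blaschke-Santalo inequality).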
For \(t>0\) big enough, the sets \(\{z\in\operatorname{int}(K);\operatorname{vol}(K^{z})\leq t\}\), called _Santalo regions of \(K\)_, were studied in [2] (see also [2]). **Definition 1**.: _The Santalo point of a convex body \(K\), denoted \(s(K)\), is the unique point in \(\operatorname{int}(K)\) such that_ \[\operatorname{vol}(K^{s(K)})=\min_{z\in K}\operatorname{vol}(K^{z}).\] _The volume product of \(K\) is_ \[\mathcal{P}(K):=\operatorname{vol}(K)\operatorname{vol}(K^{s(K)}).\] We mention the following facts: * If \(K\) is centrally symmetric, then so is \(K^{*}\), and one has \(s(K)=0=s(K^{*})\) and \(\mathcal{P}(K)=\mathcal{P}(K^{*})\). One always has \(\mathcal{P}(K^{s(K)})\leq\mathcal{P}(K)\) and when \(K\) is not centrally symmetric, it may happen that \(\mathcal{P}(K^{s(K)})<\mathcal{P}(K)\). * It is easy to see that \(s(K)\) is the unique point of \(\operatorname{int}(K)\) such that \(0\) is the center of mass of \(\big{(}K-s(K)\big{)}^{*}\), or equivalently such that \(s(K)\) is the center of mass of \(K^{s(K)}\). * The map \(K\mapsto\mathcal{P}(K)\) is affine invariant, that is, if \(A:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a one-to-one affine transform, then \(\mathcal{P}(AK)=\mathcal{P}(K)\). This indicates that if \(E\) is an \(n\)-dimensional normed space with a closed unit ball \(B_{E}\) and if \(\phi:E\to\mathbb{R}^{n}\) is a one-to-one linear mapping, then \(\mathcal{P}(E):=\mathcal{P}(\phi(B_{E}))\) does not depend on \(\phi\). In particular, this property makes \(\mathcal{P}(E)\) an important tool in the local theory of normed spaces (see [10, LMi, Tom]). * Let \(K\) be a convex body such that \(0\in\operatorname{int}(K)\) and let \(E\) be a linear subspace of \(\mathbb{R}^{n}\). Then, \(K\cap E\) is a convex body in \(E\), endowed with the scalar product inherited from the Euclidean structure of \(\mathbb{R}^{n}\), and \((K\cap E)^{*}\) (with polarity inside \(E\)) can be identified with \(P_{E}(K^{*})\), where \(P_{E}\) is the orthogonal projection from \(\mathbb{R}^{n}\) onto \(E\). Consequently, when \(K\) is centrally symmetric, \(\mathcal{P}(K\cap E)=\operatorname{vol}_{E}(K\cap E)\operatorname{vol}_{E}(P_{E}(K^{*}))\), where \(\operatorname{vol}_{E}\) denotes the Lebesgue measure on \(E\). In view of these facts, a natural question is to compute, for fixed \(n\), an upper and a lower bound of \(\mathcal{P}(K)\) for all convex bodies \(K\) in \(\mathbb{R}^{n}\). The existence of these bounds follows from the affine invariance, which allows one to consider the bounds of \(K\mapsto\mathcal{P}(K)\) on a compact subset of the set of all convex bodies endowed with the Hausdorff metric. Indeed, if \(B_{2}^{n}\) is the Euclidean ball, by John's theorem one may restrict to studying \(\mathcal{P}(K)\) for \(B_{2}^{n}\subset K\subset nB_{2}^{n}\) in the general case, or \(B_{2}^{n}\subset K\subset\sqrt{n}B_{2}^{n}\) when \(K\) is centrally symmetric, which already gives rough but concrete estimates for these bounds. It seems that the first one who dealt with this problem was Wilhelm Blaschke (1885-1962), who proved that for \(n=2\) and \(n=3\), \(\mathcal{P}(K)\leq\mathcal{P}(\mathcal{E})\), where \(\mathcal{E}\) is any ellipsoid [11], [12]. Then, Mahler gave exact lower bounds for \(\mathcal{P}(K)\) for \(n=2\), both in the general case and in the centrally symmetric case [13, 14]. In 1949, Luis Santalo (1911-2001) [15] extended the results of Blaschke to all \(n\) with the same tools as him.
The case of equality for the upper bound was first proved much later, in 1978, by Petty [16] (the argument given in [15] was not quite valid). **Theorem 1**.: **(Blaschke-Santalo-Petty)** _If \(K\subset\mathbb{R}^{n}\) is a convex body, then_ \[\mathcal{P}(K)\leq\mathcal{P}(B_{2}^{n}), \tag{2}\] _with equality if and only if \(K\) is an ellipsoid._ Bambah [B] gave rough lower bounds for \(\mathcal{P}(K)\). Guggenheimer [17, 18] believed at some moment that he had a complete proof of the exact lower bound \(\mathcal{P}(K)\geq\mathcal{P}([-1,1]^{n})=\frac{4^{n}}{n!}\) for \(K\) centrally symmetric, but it appeared that his proof was incorrect. This was the situation in the 80's, when new insights on the problem were given by Saint-Raymond [19], Reisner [14, 15], Gordon and Reisner [16] and Bourgain-Milman [17]. We conclude this section with an important tool in this theory: ### Shadow systems **Definition 2**.: _A **shadow system** along a direction \(\theta\in S^{n-1}\) is a family of convex sets \(K_{t}\subset\mathbb{R}^{n}\) which are defined by \(K_{t}=\operatorname{conv}(\{x+ta(x)\theta;x\in M\})\), where \(M\) is a bounded subset of \(\mathbb{R}^{n}\), \(a:M\to\mathbb{R}\) is a bounded function and \(t\in I\), an interval of \(\mathbb{R}\) (and where \(\operatorname{conv}(A)\) denotes the closed convex hull of a set \(A\subset\mathbb{R}^{n}\))._ It may be observed that the classical Steiner symmetrization can be seen as a shadow system such that the volume of \(K_{t}\) remains constant (see Remark 1 below). The notion of shadow system was introduced by Rogers and Shephard [RS, Sh] and can be explained via an idea of Shephard in [Sh], who pointed out that a shadow system of convex bodies in \(\mathbb{R}^{n}\) can be seen as a family of projections of an \((n+1)\)-dimensional convex set onto some \(n\)-dimensional subspace of \(\mathbb{R}^{n+1}\). Namely, let \(e_{1},e_{2},...,e_{n+1}\) be an orthonormal basis of \(\mathbb{R}^{n+1}\). Let \(M\) be, as before, a bounded subset of \(\mathbb{R}^{n}\) (i.e. \(M\) is contained in the linear span of \(e_{1},e_{2},...,e_{n}\)), let \(a:M\to\mathbb{R}\) be a bounded function, \(\theta\in S^{n-1}\) and \(I\) an interval of \(\mathbb{R}\). We define an \((n+1)\)-dimensional convex set \(\tilde{K}\subset\mathbb{R}^{n+1}\) by \[\tilde{K}=\operatorname{conv}\{x+a(x)e_{n+1}:x\in M\}.\] For each \(t\in I\), let \(P_{t}:\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) be the projection from \(\mathbb{R}^{n+1}\) onto \(\mathbb{R}^{n}\) along the direction \(e_{n+1}-t\theta\). Then, \[P_{t}(\tilde{K})=\operatorname{conv}(\{x+ta(x)\theta;x\in M\})=K_{t}.\] This interpretation makes it possible to see that \(t\mapsto\operatorname{vol}(K_{t})\) is a convex function [RS]. This result is a powerful tool for obtaining geometric inequalities of isoperimetric type. The following theorem connects shadow systems with the volume product. It was proved by Campi and Gronchi [CG1] when the bodies \(K_{t}\) are centrally symmetric and by Meyer and Reisner in the general case [MR3] (see also [FMZ]). **Theorem 2**.: _Let \((K_{t})_{t\in(a,b)}\) be a shadow system of convex bodies in \(\mathbb{R}^{n}\)._
Then the function \(t\mapsto\operatorname{vol}\left((K_{t})^{s(K_{t})}\right)^{-1}\) is convex on \((a,b)\)._

With the previous notations, if the \(K_{t}\) are centrally symmetric, then \(s(K_{t})=0\) and thus

\[(K_{t})^{s(K_{t})}=K_{t}^{*}=(P_{t}(\tilde{K}))^{*}.\]

As discussed above, the polar of the orthogonal projection of a convex body on a subspace \(E\) is the section of its polar by \(E\). But here \(P_{t}\) is not an orthogonal projection, and we get

\[(P_{t}(\tilde{K}))^{*}=P_{e_{n+1}^{\perp}}(\tilde{K}^{*}\cap(e_{n+1}-t\theta)^{\perp}),\]

where \(P_{e_{n+1}^{\perp}}\) is the orthogonal projection from \(\mathbb{R}^{n+1}\) onto \(\mathbb{R}^{n}=e_{n+1}^{\perp}\). In that context, Campi-Gronchi's theorem appears as another formulation, with a new proof, of Busemann's theorem [Bu] about the central hyperplane sections of a centrally symmetric body (see also [MR5]). This point of view was put forward in [CFPP], where such properties were also generalized to measures more general than the Lebesgue measure. An important complement to Theorem 2, proved in [MR3, Proposition 7] (see also [MR4]), is the case when both \(\operatorname{vol}(K_{t})\) and \(\operatorname{vol}((K_{t})^{s(K_{t})})^{-1}\) are affine functions of \(t\in(a,b)\). In this case, all the bodies \(K_{t}\) are affine images of each other by special affine transformations. This has been useful in identifying the cases of equality in inequalities involving volume products, as well as in the proofs of the results of [MR4] and [FMZ]. The proof of this complement, which involves some ODE, was extended in [MY, Proposition 6.1] to generalize this result.

## 2. The Blaschke-Santalo inequality

The original proofs of the Blaschke-Santalo inequality [Bl1, San1, San2, Leic1] were based on the affine isoperimetric inequality. We show now how this inequality implies the Blaschke-Santalo inequality and how, conversely, the Blaschke-Santalo inequality implies it. We refer to Section 10 in [Sc] and to [SW, Leic2] for details about the tools used in this section. If \(M\) is a smooth convex body with positive curvature everywhere, its affine surface area \(\mathcal{A}(M)\) is defined by

\[\mathcal{A}(M)=\int_{S^{n-1}}f_{M}(u)^{\frac{n}{n+1}}du,\]

where \(f_{M}:S^{n-1}\to\mathbb{R}_{+}\) is the curvature function, i.e. the density of the measure \(\sigma_{M}\) on \(S^{n-1}\) with respect to the Haar measure on \(S^{n-1}\), where for a Borel subset \(A\) of \(S^{n-1}\), \(\sigma_{M}(A)\) is the \((n-1)\)-Hausdorff measure of the set of all points in \(\partial M\) whose unit normal to \(M\) is in \(A\). The _affine isoperimetric inequality_ says that, at fixed volume, ellipsoids have the largest affine surface area among all convex bodies with positive continuous curvature. In other words,

\[\mathcal{A}(M)^{n+1}\leq n^{n+1}v_{n}^{2}\operatorname{vol}(M)^{n-1}, \tag{3}\]

where \(v_{n}=\operatorname{vol}(B_{2}^{n})\). If \(L\) is another convex body, then by Holder's inequality one has

\[\mathcal{A}(M)^{n+1}\leq\left(\int_{S^{n-1}}h_{L}(u)f_{M}(u)du\right)^{n}\int_{S^{n-1}}h_{L}^{-n}(u)du=n^{n+1}V(M[n-1],L)^{n}\operatorname{vol}(L^{*}), \tag{4}\]

where \(V(M[n-1],L)=V(M[n-1],L[1])=\frac{1}{n}\int_{S^{n-1}}h_{L}(u)f_{M}(u)du\) is a mixed volume of \(M\) and \(L\), which can also be defined by the formula, for \(t\geq 0\),

\[\operatorname{vol}(M+tL)=\sum_{k=0}^{n}\binom{n}{k}V(M[n-k],L[k])t^{k}.\]

We refer to [Sc] for precise definitions and properties of mixed volumes.
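As a simple illustration of this formula (a standard computation, recorded here for the reader's convenience), take \(n=2\), \(M=[0,1]^{2}\) and \(L=B_{2}^{2}\). Then \(M+tL\) is the square enlarged by a border of width \(t\) with rounded corners, so

\[\operatorname{vol}(M+tL)=1+4t+\pi t^{2},\]

and comparing with the expansion above, whose coefficients are \(\operatorname{vol}(M)\), \(2V(M[1],L[1])\) and \(\operatorname{vol}(L)\), gives \(V(M[1],L[1])=2\), half the perimeter of the square; this is the planar case of the classical Steiner formula.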
Using inequality (3) and Minkowski's first inequality

\[V(M[n-1],L)^{n}\geq\operatorname{vol}(M)^{n-1}\operatorname{vol}(L),\]

one gets

\[\mathcal{A}(M)^{n+1}\leq n^{n+1}v_{n}^{2}\operatorname{vol}(M)^{n-1}\leq n^{n+1}v_{n}^{2}V(M[n-1],L)^{n}\operatorname{vol}(L)^{-1}. \tag{5}\]

Now, given a convex body \(K\), let \(s=s(K)\) be its Santalo point; then \(0\) is the centroid of \((K-s)^{*}\), so that

\[\int_{S^{n-1}}uh_{K-s}(u)^{-(n+1)}du=0,\]

and thus, by Minkowski's existence theorem (see Section 8.2 of [Sc]), there exists a convex body \(M\) such that \(f_{M}=h_{K-s}^{-(n+1)}\). Set \(L=K-s\); then there is equality in (4), so that by (5)

\[n^{n+1}V(M[n-1],K-s)^{n}\operatorname{vol}((K-s)^{*})=\mathcal{A}(M)^{n+1}\leq n^{n+1}v_{n}^{2}V(M[n-1],K-s)^{n}\operatorname{vol}(K-s)^{-1},\]

which gives the Blaschke-Santalo inequality (2). Conversely, let \(M\) be a convex body with positive curvature, and suppose that \(0\) is the Santalo point of \(M\) and that the Blaschke-Santalo inequality holds for \(M\). By (4) with \(L=M\), we get

\[\mathcal{A}(M)^{n+1}\leq n^{n+1}V(M[n-1],M)^{n}\operatorname{vol}(M^{*})\leq n^{n+1}v_{n}^{2}\operatorname{vol}(M)^{n-1},\]

which is the affine isoperimetric inequality. For examples of other results of this type and relations between the affine surface area and the volume product, see Petty [Pe3, Pe4], Lutwak [Lu2], Li, Schutt and Werner [LSW], Naszodi, Nazarov and Ryabogin [NNR] and Hug [Hu], who gave a proof of the affine isoperimetric inequality using Steiner symmetrization and studied the cases of equality.

### A proof of the Blaschke-Santalo inequality in the centrally symmetric case

In [10, 11], Meyer and Pajor used the classical Steiner symmetrization to give a proof, which we sketch in this section. Various symmetrizations of sections appeared also in [13] and [14].

_Step 1:_ We prove first that if \(H\) is a linear hyperplane of \(\mathbb{R}^{n}\) and \(S_{H}K\) is the _Steiner symmetral of \(K\) with respect to \(H\)_, as defined below, then

\[\operatorname{vol}(K^{*})\leq\operatorname{vol}\big((S_{H}K)^{*}\big).\]

To simplify notations, suppose that \(H=\mathbb{R}^{n-1}\times\{0\}\); as before, let \(P_{H}:\mathbb{R}^{n}\to H\) be the orthogonal projection onto \(H\). Then \(K\) may be described as follows:

\[K=\{x+se_{n};\ x\in P_{H}K,s\in I(x)\}\]

where for \(x\in P_{H}K\), \(I(x)=\{s\in\mathbb{R};x+se_{n}\in K\}\) is a closed interval. The Steiner symmetral of \(K\) with respect to \(H\), defined as

\[S_{H}K=\left\{x+se_{n};x\in P_{H}K,s\in\frac{I(x)-I(x)}{2}\right\}\]

is a convex body symmetric with respect to \(H\), such that

\[\operatorname{vol}(K)=\operatorname{vol}(S_{H}K).\]

For \(t\in\mathbb{R}\), let \(K^{*}(t):=\{y\in H;y+te_{n}\in K^{*}\}\) be the section of \(K^{*}\) by the hyperplane \(\{x_{n}=t\}\) and \(J=\{t\in\mathbb{R};K^{*}(t)\neq\emptyset\}\). Then

\[K^{*}=\{y+te_{n};t\in J,y\in K^{*}(t)\}.\]

By the symmetry of \(K\), one has \(I(-x)=-I(x)\) for every \(x\in P_{H}K\), and for every \(t\in J\), \(y_{1}\in K^{*}(t)\), \(y_{2}\in K^{*}(-t)\) and \(s_{1},s_{2}\in I(x)\), one has:

\[\langle x,y_{1}\rangle+s_{1}t\leq 1\text{ and }\langle x,y_{2}\rangle-s_{2}t\leq 1.\]

Adding these two inequalities, we get that for every \(x\in P_{H}K\), \(s=\frac{s_{1}-s_{2}}{2}\in\frac{1}{2}(I(x)-I(x))\), \(t\in J\) and \(y=\frac{y_{1}+y_{2}}{2}\in\frac{1}{2}\big(K^{*}(t)+K^{*}(-t)\big)\), one has

\[\langle x,y\rangle+st\leq 1.\]

Thus for every \(t\in J\),

\[\frac{1}{2}\big(K^{*}(t)+K^{*}(-t)\big)\subset(S_{H}K)^{*}(t). \tag{6}\]
By the symmetry of \(K\), one has \(K^{*}(-t)=-K^{*}(t)\). It follows from the Brunn-Minkowski theorem that \(\operatorname{vol}_{n-1}\big((S_{H}K)^{*}(t)\big)\geq\operatorname{vol}_{n-1}\big(K^{*}(t)\big)\) and, integrating in \(t\), we get that

\[\operatorname{vol}\big((S_{H}K)^{*}\big)=\int\operatorname{vol}_{n-1}\big((S_{H}K)^{*}(t)\big)dt\geq\int\operatorname{vol}_{n-1}(K^{*}(t))dt=\operatorname{vol}(K^{*}).\]

One thus gets that \(\mathcal{P}(S_{H}K)\geq\mathcal{P}(K)\).

_Step 2:_ It is well known that there exists a sequence of hyperplanes \((H_{m})_{m}\) such that if \(K_{0}=K\) and, for \(m\geq 1\), \(K_{m}:=S_{H_{m}}K_{m-1}\), then the sequence \((K_{m})_{m}\) converges to the Euclidean ball \(R_{K}B_{2}^{n}\) with the same volume as \(K\) (see for example Section 10.3 in [12]). Thus \(\mathcal{P}(K)\leq\mathcal{P}(K_{m-1})\leq\mathcal{P}(K_{m})\) for every \(m\geq 1\), and by continuity of the volume product,

\[\mathcal{P}(K)\leq\mathcal{P}(R_{K}B_{2}^{n})=\mathcal{P}(B_{2}^{n}).\]

The case of equality in the Blaschke-Santalo inequality was first proved by Petty [15], using sharp differential geometry arguments. When \(K\) is centrally symmetric, a new elementary proof was given by Saint Raymond [11], using Minkowski symmetrization of the hyperplane sections (see also [1]). These ideas were generalized by Meyer-Pajor [12] to give an elementary proof for the general case, and a somewhat stronger result, also based on Steiner symmetrizations:

**Theorem 3**.: **(Meyer-Pajor [12])** _For every convex body \(K\) and every hyperplane \(H\) separating \(\mathbb{R}^{n}\) into two closed half-spaces \(H_{+}\) and \(H_{-}\), denoting \(\lambda=\frac{\operatorname{vol}(H_{+}\cap K)}{\operatorname{vol}(K)}\), there exists \(z\in H\) such that \(\operatorname{vol}(K)\operatorname{vol}(K^{z})\leq\frac{\mathcal{P}(\mathcal{E})}{4\lambda(1-\lambda)}\)._

**Remark 1**.: Notice that the Steiner symmetral of a convex body \(K\) with respect to a direction \(u\in S^{n-1}\) can be written as a part of a shadow system in the following way: for \(y\in P_{u^{\perp}}K\), let \(I(y)=\{s\in\mathbb{R};y+su\in K\}\). For \(t\in[-1,1]\), let

\[K_{t}=\left\{y+su;s\in\frac{1+t}{2}I(y)-\frac{1-t}{2}I(y)\right\}.\]

Then \(K_{1}=K\), \(K_{0}=S_{u^{\perp}}K\) and, for every \(t\in[-1,1]\), \(K_{-t}\) is the reflection of \(K_{t}\) with respect to the hyperplane \(u^{\perp}\). This implies that the function \(t\mapsto\mathcal{P}(K_{t})\) is even. Moreover, since \(\operatorname{vol}(K_{t})=\operatorname{vol}(K)\) is constant for \(t\in[-1,1]\), using Theorem 2, the function \(t\mapsto\mathcal{P}(K_{t})^{-1}\) is convex. It follows that \(\mathcal{P}(K_{t})\) is maximized at \(0\), which proves that the volume product is non-decreasing under Steiner symmetrization for any convex body, recovering Meyer-Pajor's result [12]. Using again an appropriate sequence of Steiner symmetrizations, this gives the general Blaschke-Santalo inequality. The preceding observation is due to [13].

### A harmonic analysis proof of the Blaschke-Santalo inequality

We follow the work of Bianchi and Kelly [1]. Harmonic analysis plays an essential role in the study of duality and the volume product. The main idea was discovered by Nazarov [14], who used it to provide a proof of the Bourgain-Milman inequality [2], and it was adapted by Bianchi and Kelly to give a very elegant proof of the Blaschke-Santalo inequality. We refer to [11] and [12] for basic facts in harmonic analysis.
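As a one-dimensional warm-up (a standard example, included here for orientation), take \(K=[-1,1]\subset\mathbb{R}\). The function

\[F(z)=\frac{1}{\operatorname{vol}(K)}\widehat{\mathbf{1}}_{K}(z)=\frac{\sin(2\pi z)}{2\pi z}\]

is entire, satisfies \(F(0)=1\), and \(|F(iy)|=\frac{\sinh(2\pi|y|)}{2\pi|y|}\leq e^{2\pi|y|}=e^{2\pi\|y\|_{K^{*}}}\) (here \(K^{*}=[-1,1]\)). This illustrates the correspondence, exploited below, between the support of the Fourier transform and the exponential type of the extension.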
Let \(K\) be a convex symmetric body in \(\mathbb{R}^{n}\) and let \(F\in L_{2}(\mathbb{R}^{n})\) be such that its Fourier transform satisfies \(\widehat{F}=0\) a.e. on \(\mathbb{R}^{n}\setminus K\). Then \(F\) is the restriction to \(\mathbb{R}^{n}\) of the entire function, still denoted \(F\), defined by \(F(z)=\int_{K}e^{2\pi i\langle z,\xi\rangle}\widehat{F}(\xi)d\xi\) for \(z\in\mathbb{C}^{n}\), which satisfies the following inequality, giving a first hint of why the theory is so useful:

\[|F(iy)|=\left|\int_{K}e^{-2\pi\langle y,\xi\rangle}\widehat{F}(\xi)d\xi\right|\leq\int_{K}e^{2\pi\sup\limits_{\xi\in K}|\langle\xi,y\rangle|}|\widehat{F}(\xi)|d\xi=e^{2\pi\|y\|_{K^{*}}}\int_{K}|\widehat{F}(\xi)|d\xi.\]

Thus for some \(C>0\), one has \(|F(iy)|\leq Ce^{2\pi\|y\|_{K^{*}}}\) for all \(y\in\mathbb{R}^{n}\), i.e. \(F\) is _of exponential type \(K^{*}\)_. This fact is the elementary part of the following classical theorem:

**Theorem 4**.: _(Paley-Wiener) Let \(F\in L_{2}(\mathbb{R}^{n})\) and \(K\) be a convex symmetric body. Then the following are equivalent:_

* \(F\) _is the restriction to_ \(\mathbb{R}^{n}\) _of an entire function of exponential type_ \(K^{*}\)_._
* _The support of_ \(\widehat{F}\) _is contained in_ \(K\)_._

Now we are ready to present the **Proof of the Blaschke-Santalo inequality adapted from Bianchi and Kelly [BK].** Let \(F=\frac{1}{\operatorname{vol}(K)}\widehat{\mathbf{1}}_{K}\), where \(\mathbf{1}_{K}\) is the characteristic function of \(K\):

\[\mathbf{1}_{K}(x)=\begin{cases}1,\text{ for }x\in K\\ 0,\text{ for }x\not\in K.\end{cases}\]

Then \(F(0)=1\), \(F\in L_{2}(\mathbb{R}^{n})\), \(F\) is continuous and even, and can be extended as an entire function on \(\mathbb{C}^{n}\), still denoted \(F\), as

\[F(z_{1},\dots,z_{n})=\frac{1}{\operatorname{vol}(K)}\int_{K}e^{2i\pi(\sum_{k=1}^{n}z_{k}x_{k})}dx_{1}\dots dx_{n}.\]

For \(\theta\in S^{n-1}\) and \(z\in\mathbb{C}\), let \(F_{\theta}(z)=F(z\theta)\). Then, by the easy part of the Paley-Wiener theorem, \(F_{\theta}\) is an even entire function of exponential type \([-\|\theta\|_{K^{*}}^{-1},\|\theta\|_{K^{*}}^{-1}]\). Moreover, since \(F_{\theta}\) is entire and even, there exists an entire function \(H_{\theta}:\mathbb{C}\to\mathbb{C}\) such that \(H_{\theta}(z^{2})=F_{\theta}(z)=F(z\theta)\). Finally, we define \(R_{\theta}:\mathbb{C}^{n}\to\mathbb{C}\) as a radial extension of \(F_{\theta}\) by

\[R_{\theta}(z_{1},\dots,z_{n})=H_{\theta}(z_{1}^{2}+\dots+z_{n}^{2}).\]

Note that \(z\mapsto R_{\theta}(z)\) is an entire function which satisfies \(R_{\theta}(0)=F(0)=1\). Moreover, since \(F_{\theta}\) is of exponential type \([-\|\theta\|_{K^{*}}^{-1},\|\theta\|_{K^{*}}^{-1}]\), \(R_{\theta}\) is of exponential type \(\|\theta\|_{K^{*}}^{-1}B_{2}^{n}\). It follows from the Paley-Wiener theorem that the support of the restriction to \(\mathbb{R}^{n}\) of \(\widehat{R_{\theta}}\) is contained in \((\|\theta\|_{K^{*}}^{-1}B_{2}^{n})^{*}=\|\theta\|_{K^{*}}B_{2}^{n}.\) Since \(R_{\theta}\in L_{2}(\mathbb{R}^{n})\), one may write, using the Plancherel equality and the Cauchy-Schwarz inequality, with \(v_{n}=\operatorname{vol}(B_{2}^{n})\),

\[\int_{\mathbb{R}^{n}}|R_{\theta}(x)|^{2}dx=\int_{\|\theta\|_{K^{*}}B_{2}^{n}}|\widehat{R_{\theta}}(x)|^{2}dx\geq\frac{|\int_{\|\theta\|_{K^{*}}B_{2}^{n}}\widehat{R_{\theta}}(x)dx|^{2}}{v_{n}\|\theta\|_{K^{*}}^{n}}=\frac{|\int_{\mathbb{R}^{n}}\widehat{R_{\theta}}(x)dx|^{2}}{v_{n}\|\theta\|_{K^{*}}^{n}}=\frac{R_{\theta}(0)^{2}}{v_{n}\|\theta\|_{K^{*}}^{n}}=\frac{1}{v_{n}\|\theta\|_{K^{*}}^{n}}. \tag{7}\]
If \(|x|=r=|re_{1}|\), one has

\[R_{\theta}(x)=H_{\theta}(|x|^{2})=F(|x|\theta)=F(r\theta)=R_{\theta}(re_{1}), \tag{8}\]

so that \(R_{\theta}\) is rotation invariant on \(\mathbb{R}^{n}\). Thus

\[\frac{1}{\operatorname{vol}(K)}=\int_{\mathbb{R}^{n}}|\widehat{F}(x)|^{2}dx=\int_{\mathbb{R}^{n}}|F(x)|^{2}dx=\int_{S^{n-1}}\int_{0}^{+\infty}|F(r\theta)|^{2}r^{n-1}drd\theta=\int_{S^{n-1}}\int_{0}^{+\infty}|R_{\theta}(re_{1})|^{2}r^{n-1}drd\theta,\]

and, for each fixed \(\theta\), the rotation invariance of \(R_{\theta}\) gives \(\int_{0}^{+\infty}|R_{\theta}(re_{1})|^{2}r^{n-1}dr=\frac{1}{nv_{n}}\int_{\mathbb{R}^{n}}|R_{\theta}(x)|^{2}dx\). It follows that

\[\frac{1}{\operatorname{vol}(K)}=\frac{1}{nv_{n}}\int_{S^{n-1}}\int_{\mathbb{R}^{n}}|R_{\theta}(x)|^{2}dxd\theta\geq\frac{1}{nv_{n}^{2}}\int_{S^{n-1}}\|\theta\|_{K^{*}}^{-n}d\theta=\frac{\operatorname{vol}(K^{*})}{\operatorname{vol}(B_{2}^{n})^{2}},\]

which is the Blaschke-Santalo inequality. Bianchi and Kelly [BK] also provided a proof of the equality case. Indeed, assume that we have equality in the Blaschke-Santalo inequality; then we must have equality in (7). Thus, for every \(\theta\in S^{n-1}\) and some \(c_{\theta}\in\mathbb{R}\), one has \(\widehat{R_{\theta}}=c_{\theta}\mathbf{1}_{\|\theta\|_{K^{*}}B_{2}^{n}}\) on \(\mathbb{R}^{n}\) and

\[R_{\theta}(x)=\int_{\mathbb{R}^{n}}e^{-2i\pi\langle x,y\rangle}\widehat{R_{\theta}}(y)dy=c_{\theta}\int_{\|\theta\|_{K^{*}}B_{2}^{n}}e^{-2i\pi\langle x,y\rangle}dy.\]

Moreover, since \(R_{\theta}(0)=1\), one gets \(c_{\theta}\operatorname{vol}(\|\theta\|_{K^{*}}B_{2}^{n})=1\). Next, by (8), we get

\[\frac{1}{\operatorname{vol}(K)}\int_{K}e^{-2i\pi r\langle\theta,y\rangle}dy=F(r\theta)=\frac{1}{\operatorname{vol}(\|\theta\|_{K^{*}}B_{2}^{n})}\int_{\|\theta\|_{K^{*}}B_{2}^{n}}e^{-2i\pi r\langle\theta,y\rangle}dy. \tag{9}\]

If \(M\) is a convex body, \(\theta\in S^{n-1}\) and \(t\in\mathbb{R}\), let \(A_{M,\theta}(t)=\operatorname{vol}_{n-1}\big(M\cap(\theta^{\perp}+t\theta)\big)\). One has

\[\widehat{A_{K,\theta}}(r)=\int_{\mathbb{R}}e^{-2i\pi rt}A_{K,\theta}(t)dt=\int_{K}e^{-2i\pi r\langle\theta,y\rangle}dy.\]

Inverting the Fourier transform, it follows from (9) that for all \(t\in\mathbb{R}\) and \(\theta\in S^{n-1}\),

\[\frac{1}{\operatorname{vol}(K)}A_{K,\theta}(t)=\frac{A_{\|\theta\|_{K^{*}}B_{2}^{n},\theta}(t)}{\operatorname{vol}(\|\theta\|_{K^{*}}B_{2}^{n})}=\frac{A_{B_{2}^{n},\theta}\left(t\|\theta\|_{K^{*}}^{-1}\right)}{\|\theta\|_{K^{*}}\operatorname{vol}(B_{2}^{n})}. \tag{10}\]

Now, for \(\theta\in S^{n-1}\), one has by (10)

\[\int_{K}\langle x,\theta\rangle^{2}dx=\int_{\mathbb{R}}t^{2}A_{K,\theta}(t)dt=\frac{\operatorname{vol}(K)}{\|\theta\|_{K^{*}}\operatorname{vol}(B_{2}^{n})}\int_{\mathbb{R}}t^{2}A_{B_{2}^{n},\theta}\left(t\|\theta\|_{K^{*}}^{-1}\right)dt=\frac{\operatorname{vol}(K)}{\operatorname{vol}(B_{2}^{n})}\|\theta\|_{K^{*}}^{2}\int_{\mathbb{R}}u^{2}A_{B_{2}^{n},\theta}(u)du,\]

and since, by rotation invariance, \(A_{B_{2}^{n},\theta}(u)\) does not depend on \(\theta\in S^{n-1}\), one gets for some \(c>0\) and all \(\theta\in S^{n-1}\),

\[\|\theta\|_{K^{*}}=c\left(\int_{K}\langle x,\theta\rangle^{2}dx\right)^{1/2},\]

which proves that \(K^{*}\), and thus \(K\), is an ellipsoid (the last arguments are inspired by [10]).

### Further results and generalizations

Let us present a few results which may be considered as offspring of the Blaschke-Santalo inequality.

#### 2.3.1. Stability

K. Boroczky [Bor] established a stability version of the Blaschke-Santalo inequality, later improved by K. Ball and K. Boroczky in [BB].
Let \(d_{BM}(K,L)\) be the Banach-Mazur distance between two convex bodies \(K\) and \(L\) in \(\mathbb{R}^{n}\):

\[d_{BM}(K,L)=\inf\{d>0:K-x\subseteq T(L-y)\subseteq d(K-x),\text{ for some }T\in GL(n)\text{ and }x,y\in\mathbb{R}^{n}\}.\]

The following stability theorem was proved in [BB]:

**Theorem 5**.: _If \(K\) is a convex body in \(\mathbb{R}^{n},\,n\geq 3\), such that for some \(\varepsilon>0\) one has_

\[(1+\varepsilon)\mathcal{P}(K)\geq\mathcal{P}(B_{2}^{n}),\]

_then_

\[\log\big(d_{BM}(K,B_{2}^{n})\big)\leq c_{n}\varepsilon^{\frac{1}{3(n+1)}}|\log\varepsilon|^{\frac{2}{3(n+1)}},\]

_where \(c_{n}\) is a constant depending on \(n\) only._

If in the above theorem we assume that \(K\) is symmetric, then the exponent of \(\varepsilon\) can be improved to \(\frac{2}{3(n+1)}\).

#### 2.3.2. Local and restricted maxima

After having proved that convex bodies with maximal volume product are ellipsoids, one may ask about the local maxima of the volume product, in the sense of the Hausdorff distance. Using Theorem 2, it was proved in [14] that any local maximum is an ellipsoid, which gives another proof of the Blaschke-Santalo inequality. One may also investigate maxima among certain classes of bodies not containing ellipsoids. For instance, in \(\mathbb{R}^{2}\), among polygons with at most \(m\) vertices, \(m\geq 4\), the maxima are the affine images of regular polygons with \(m\) vertices [13]. For \(n\geq 3\), the much more complicated situation was investigated in [1] using shadow systems. In particular, it was proved in [1] that a polytope with maximal volume product among polytopes with at most \(m\) vertices is simplicial (all its facets are simplices) and has exactly \(m\) vertices. It was also proved that, among polytopes with at most \(n+2\) vertices, the volume product is maximized by \(\operatorname{conv}(\Delta_{\lceil\frac{n}{2}\rceil},\Delta_{\lfloor\frac{n}{2}\rfloor})\), where \(\Delta_{\lceil\frac{n}{2}\rceil}\) and \(\Delta_{\lfloor\frac{n}{2}\rfloor}\) are simplices living in complementary subspaces of dimensions \(\lceil\frac{n}{2}\rceil\) and \(\lfloor\frac{n}{2}\rfloor\) respectively (by definition, for \(\alpha\not\in\mathbb{Z}\), \(\lfloor\alpha\rfloor\) is the integer part of \(\alpha\) and \(\lceil\alpha\rceil=\lfloor\alpha\rfloor+1\); for \(\alpha\in\mathbb{Z}\), \(\lceil\alpha\rceil=\lfloor\alpha\rfloor=\alpha\)). It is conjectured in [1] that, for \(1\leq k\leq n\), among polytopes with at most \(n+k\) vertices, the convex hulls of \(k\) simplices living in complementary subspaces of dimensions \(\lceil\frac{n}{k}\rceil\) or \(\lfloor\frac{n}{k}\rfloor\) have maximal volume product. Among unit balls of finite dimensional Lipschitz-free spaces, which are polytopes with at most \((n+1)^{2}\) extreme points, some preliminary results were established in [1] and it was shown that the maximizers of the volume product are simplicial polytopes.

#### 2.3.3. \(L_{p}\)-centroid inequalities

In a series of works by Lutwak, Yang and Zhang [15, 16, 17], the Blaschke-Santalo inequality appears as a special case of a family of isoperimetric inequalities involving the so-called \(L_{p}\)-centroid bodies and \(L_{p}\)-projection bodies.
More precisely, consider a compact star-shaped body \(K\) in \(\mathbb{R}^{n}\) and \(p\in[1,\infty]\); the polar \(L_{p}\)-centroid body \(\Gamma_{p}^{*}K\) is defined via its norm:

\[\|x\|_{\Gamma_{p}^{*}K}^{p}=\frac{1}{c_{n,p}\operatorname{vol}(K)}\int_{K}|\langle x,y\rangle|^{p}dy,\]

where the normalization constant \(c_{n,p}\) is chosen so that \(\Gamma_{p}^{*}B_{2}^{n}=B_{2}^{n}.\) It was proved in [15] that for all \(p\in[1,\infty]\)

\[\operatorname{vol}(K)\operatorname{vol}(\Gamma_{p}^{*}K)\leq\operatorname{vol}(B_{2}^{n})^{2}, \tag{11}\]

with equality if and only if \(K\) is an ellipsoid centered at the origin. It turns out that if \(K\) is a centrally symmetric convex body then \(\Gamma_{\infty}^{*}K=K^{*}\), and thus the symmetric case of the Blaschke-Santalo inequality follows from (11) when \(p=\infty\). A stronger version of (11) was proved in [16]:

\[\operatorname{vol}(\Gamma_{p}K)\geq\operatorname{vol}(K),\]

for any star body in \(\mathbb{R}^{n}\) and \(p\in[1,\infty]\). This inequality, for \(p=1\), links the theory to the Busemann-Petty centroid inequality [12]; see also [11, 12]. If \(K\) and \(L\) are compact subsets of \(\mathbb{R}^{n}\), then for \(p\geq 1\) it was proved in [16, Corollary 6.3] that for some \(c(p,n)>0\), one has

\[\int_{K\times L}|\langle x,y\rangle|^{p}dxdy\geq c(p,n)\big(\operatorname{vol}(K)\operatorname{vol}(L)\big)^{\frac{n+p}{n}},\]

with equality if and only if \(K\) and \(L\) are, up to sets of measure \(0\), dilates of polar-reciprocal, origin-centered ellipsoids. When \(p\to+\infty\), one gets the following version of the symmetric Blaschke-Santalo inequality from [10]: if \(K,L\) are compact subsets of \(\mathbb{R}^{n}\), then

\[\operatorname{vol}(B_{2}^{n})^{2}\max_{x\in K,y\in L}|\langle x,y\rangle|^{n}\geq\operatorname{vol}(K)\operatorname{vol}(L).\]

#### 2.3.4. Connection to affine quermassintegrals

Affine quermassintegrals were defined by Lutwak [10]. For \(1\leq k\leq n\), the \(k\)-th affine quermassintegral of a convex body \(K\) is:

\[\Phi_{k}(K)=\frac{v_{n}}{v_{k}}\left(\int_{Gr(k,n)}\operatorname{vol}_{k}(P_{F}K)^{-n}\sigma_{n,k}(dF)\right)^{-1/n},\]

where \(Gr(k,n)\) is the Grassmann manifold of \(k\)-dimensional linear subspaces \(F\) of \(\mathbb{R}^{n}\), \(\sigma_{n,k}\) is the Haar probability measure on \(Gr(k,n)\) and \(P_{F}\) is the orthogonal projection onto \(F\). It was proved by Grinberg [11] that \(\Phi_{k}(K)\) is invariant under volume preserving affine transformations. Let \(R_{K}>0\) satisfy \(\operatorname{vol}(R_{K}B_{2}^{n})=\operatorname{vol}(K)\). Lutwak [10] conjectured that for any convex body \(K\) in \(\mathbb{R}^{n}\) and any \(k=1,\dots,n-1\), one has

\[\Phi_{k}(K)\geq\Phi_{k}(R_{K}B_{2}^{n}) \tag{12}\]

with equality if and only if \(K\) is an ellipsoid. This conjecture was open for quite a long time. Lutwak proved that, for \(k=1\), it follows directly from the Blaschke-Santalo inequality (and that the case \(k=n-1\) is connected to an inequality of Petty [14, 15]). Recently, E. Milman and Yehudayoff [16] proved that this conjecture is true. As one of the steps in the proof, they showed that \(\Phi_{k}(K)\geq\Phi_{k}(S_{H}K)\), generalizing the previous result of [16]. In addition, a simplified proof of the Petty projection inequality was presented in [16]. These interesting results suggest that (12) can be viewed as a generalization of the Blaschke-Santalo inequality.

#### 2.3.5. A conjecture of K. Ball
Keith Ball [1] conjectured that if \(K\) is a convex symmetric body in \(\mathbb{R}^{n}\) then

\[\int_{K}\int_{K^{*}}\langle x,y\rangle^{2}dxdy\leq\int_{B_{2}^{n}}\int_{B_{2}^{n}}\langle x,y\rangle^{2}dxdy=\frac{n}{(n+2)^{2}}\operatorname{vol}(B_{2}^{n})^{2}, \tag{13}\]

and he proved a kind of reverse inequality:

\[\frac{n\big(\operatorname{vol}(K)\operatorname{vol}(K^{*})\big)^{\frac{n+2}{n}}}{(n+2)^{2}\operatorname{vol}(B_{2}^{n})^{\frac{4}{n}}}\leq\int_{K}\int_{K^{*}}\langle x,y\rangle^{2}dxdy,\]

which shows that inequality (13) is stronger than the Blaschke-Santalo inequality. In [11, 12], (13) was proved for unconditional bodies. Generalizations are considered in [11] and [12] (see Section 7.3 for the latter).

#### 2.3.6. Stochastic and log-concave measure extensions

Following the ideas initiated in [13], the authors of [17] pursued a probabilistic approach to the Blaschke-Santalo inequality for symmetric bodies and established the following result.

**Theorem 6**.: _For \(N,n\geq 1\), let \((\Omega,\mathcal{B},P)\) be a probability space and_

* \(X_{1},\dots,X_{N}:\Omega\to\mathbb{R}^{n}\) _be independent random vectors, whose laws have densities with respect to the Lebesgue measure which are bounded by one;_
* \(Z_{1},\dots,Z_{N}:\Omega\to\mathbb{R}^{n}\) _be independent random vectors uniformly distributed in_ \(rB_{2}^{n}\) _with_ \(\operatorname{vol}(rB_{2}^{n})=1\)_;_
* \(\mu\) _be the rotation invariant measure on_ \(\mathbb{R}^{n}\) _with density_ \(e^{\varphi(|x|)}\)_,_ \(x\in\mathbb{R}^{n}\)_, with respect to the Lebesgue measure, where_ \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) _is a non-increasing function;_
* \(C_{X,N}(\omega)=\operatorname{conv}(\pm X_{1}(\omega),\dots,\pm X_{N}(\omega))\) _and_ \(C_{Z,N}(\omega)=\operatorname{conv}(\pm Z_{1}(\omega),\dots,\pm Z_{N}(\omega))\) _for_ \(\omega\in\Omega\)_._

_Then for all \(t\geq 0\), one has \(P(\{\omega\in\Omega;\mu(C_{X,N}(\omega)^{*})\geq t\})\leq P(\{\omega\in\Omega;\mu(C_{Z,N}(\omega)^{*})\geq t\})\)._

It follows of course that the same comparison holds in expectation. The tools used there are shadow systems, as in the work of Campi and Gronchi [CG1], together with the rearrangement inequalities of Rogers [R] and Brascamp-Lieb-Luttinger [BLL]. Applying Theorem 6 to \(X_{1},\dots,X_{N}\) uniformly distributed on a convex body \(K\) and using that, when \(N\to+\infty\), the sequence of random polytopes \(P_{K,N}:=\operatorname{conv}(\pm X_{1},\dots,\pm X_{N})\) converges almost surely to \(K\) in the Hausdorff metric, we deduce that for measures \(\mu\) as in Theorem 6, one has

\[\mu(K^{*})\leq\mu((R_{K}B_{2}^{n})^{*})=\mu\left(\frac{B_{2}^{n}}{R_{K}}\right),\quad\text{where }R_{K}=\left(\tfrac{\operatorname{vol}(K)}{\operatorname{vol}(B_{2}^{n})}\right)^{\frac{1}{n}}.\]

Since clearly \(\mu(K)\leq\mu(R_{K}B_{2}^{n})\), we deduce that \(\mu(K)\mu(K^{*})\leq\mu(R_{K}B_{2}^{n})\mu(B_{2}^{n}/R_{K})\). If, moreover, \(t\mapsto\varphi(e^{t})\) is concave, then \(t\mapsto\mu(e^{t}B_{2}^{n})\) is also log-concave (see [CFM]). Thus, it follows that for such measures \(\mu\) and for any symmetric convex body \(K\), one has

\[\mu(K)\mu(K^{*})\leq\mu(B_{2}^{n})^{2}. \tag{14}\]

It was proved in [CR] that under those hypotheses, \(t\mapsto\mu(e^{t}K)\) is log-concave (extending the same property for Gaussian measures established in [CFM]). It was asked in [Co] whether (14) holds for all symmetric log-concave measures \(\mu\).
We shall prove (14) when, moreover, \(\mu\) has an unconditional density \(f\) with respect to the Lebesgue measure (a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is said to be _unconditional_ if for some basis \(e_{1},\dots,e_{n}\) of \(\mathbb{R}^{n}\) one has, for all \((\varepsilon_{1},\dots,\varepsilon_{n})\in\{-1;1\}^{n}\) and \((x_{1},\dots,x_{n})\in\mathbb{R}^{n}\), \(f(\sum_{i=1}^{n}x_{i}e_{i})=f(\sum_{i=1}^{n}\varepsilon_{i}x_{i}e_{i})\)).

**Theorem 7**.: _If \(\mu\) is a measure on \(\mathbb{R}^{n}\) with an unconditional and log-concave density with respect to the Lebesgue measure and \(K\) is a symmetric convex body in \(\mathbb{R}^{n}\), then \(\mu(K)\mu(K^{*})\leq\mu(B_{2}^{n})^{2}\)._

Proof.: We first apply a linear transform making the density of \(\mu\) unconditional with respect to the canonical basis of \(\mathbb{R}^{n}\). Let \(H\) be a coordinate hyperplane and let \(S_{H}K\) be the Steiner symmetral of \(K\) with respect to \(H\). Using (6) as in the proof of Meyer-Pajor [MP1] (see Section 2.1 above), we get \(\mu(K^{*})\leq\mu((S_{H}K)^{*})\). Moreover, it is easy to see that \(\mu(K)\leq\mu(S_{H}K)\). Thus, denoting by \(L\) the convex body obtained from \(K\) after \(n\) successive Steiner symmetrizations with respect to the coordinate hyperplanes, we get \(\mu(K)\mu(K^{*})\leq\mu(L)\mu(L^{*})\). We are now reduced to the case when \(\mu\) and \(L\) are unconditional. Using the classical Prekopa-Leindler inequality (see for example [Pi, page 3]), it was shown in [FM1] that then \(\mu(L)\mu(L^{*})\leq\mu(B_{2}^{n})^{2}\).

#### 2.3.7. Blaschke-Santalo type inequality on the sphere

Another inequality of Blaschke-Santalo type was established by Gao, Hug and Schneider [GHS] on the sphere. We define the polar of \(A\subset S^{n-1}\) by

\[A^{\circ}:=\{y\in S^{n-1};\langle x,y\rangle\leq 0,\text{ for all }x\in A\}.\]

If \(\operatorname{pos}(A):=\{tx;x\in A,\ t\geq 0\}\), then \(A^{\circ}=(\operatorname{pos}(A))^{*}\cap S^{n-1}\). Let \(\sigma\) be the Haar probability measure on \(S^{n-1}\). A _spherical cap_ is the non-empty intersection of \(S^{n-1}\) with a halfspace.

**Theorem 8**.: _[GHS] Let \(A\) be a non-empty measurable subset of \(S^{n-1}\) and \(C\) be a spherical cap such that \(\sigma(A)=\sigma(C)\). Then \(\sigma(A^{\circ})\leq\sigma(C^{\circ})\). If moreover \(A\) is closed and \(\sigma(A)<1/2\), there is equality if and only if \(A\) is a spherical cap._

Two proofs were given in [GHS]. One of them uses a special type of symmetrization called the two-point symmetrization and, for the equality case, the results of [1]. This work was further generalized by Hu and Li [HuLi], who proved a number of Blaschke-Santalo type inequalities in the sphere and in hyperbolic space. Hack and Pivovarov [2] gave a stochastic extension of Theorem 8 in the spirit of Theorem 6.

## 3. Mahler conjecture. Special cases

The problem of the lower bound of \(\mathcal{P}(K)\) is not yet solved, although significant progress has been made in recent years. The first results are due to Mahler for \(n=2\), who proved that \(\mathcal{P}(K)\geq\mathcal{P}(\Delta_{2})=\frac{27}{4}\), where \(\Delta_{2}\) is a triangle, and, in the centrally symmetric case, that \(\mathcal{P}(K)\geq\mathcal{P}([-1,1]^{2})=8\) (see also [13]). For the proofs, he used polygons and thus could not settle the case of equality. Observe that he continued to be interested in this problem [14, 15].
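Both planar values are easy to check by hand; we record the computation as an illustration of Definition 1. For the square, \(([-1,1]^{2})^{*}=B_{1}^{2}=\operatorname{conv}\{\pm e_{1},\pm e_{2}\}\), so \(\mathcal{P}([-1,1]^{2})=4\cdot 2=8\). For the triangle, take \(\Delta_{2}=\operatorname{conv}\{(1,0),(0,1),(-1,-1)\}\), whose centroid is \(0\); its polar is

\[\Delta_{2}^{0}=\{y:y_{1}\leq 1,\ y_{2}\leq 1,\ -y_{1}-y_{2}\leq 1\}=\operatorname{conv}\{(1,1),(1,-2),(-2,1)\},\]

whose centroid is again \(0\), so that \(s(\Delta_{2})=0\) and

\[\mathcal{P}(\Delta_{2})=\operatorname{vol}(\Delta_{2})\operatorname{vol}(\Delta_{2}^{0})=\frac{3}{2}\cdot\frac{9}{2}=\frac{27}{4}.\]

Note that \(\frac{27}{4}<8<\pi^{2}=\mathcal{P}(B_{2}^{2})\), in accordance with Theorem 1 and the two conjectures below.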
The case of equality in dimension 2 was obtained by Meyer [10] for general bodies and by Reisner [11] (see also [12, 13, 14]) for centrally symmetric bodies. What happens in dimension \(n\geq 3\)? There are two conjectures, the first one formulated explicitly by Mahler [15], but not the second one.

**Conjecture 1**.: _For every convex body \(K\) in \(\mathbb{R}^{n}\), one has_

\[\mathcal{P}(K)\geq\mathcal{P}(\Delta_{n})=\frac{(n+1)^{n+1}}{(n!)^{2}},\]

_where \(\Delta_{n}\) is a simplex in \(\mathbb{R}^{n}\), with equality if and only if \(K=\Delta_{n}\)._

**Conjecture 2**.: _For every centrally symmetric convex body \(K\) in dimension \(n\), one has_

\[\mathcal{P}(K)\geq\mathcal{P}(B_{\infty}^{n})=\frac{4^{n}}{n!},\]

_where \(B_{\infty}^{n}=[-1,1]^{n}\) is a cube, with equality if and only if \(K\) is a Hanner polytope (see Definition 4 below)._

### The conjectured minimum in the symmetric case is not unique

To understand Conjecture 2 and the different phenomena related to it, we define Hanner polytopes [10], and first the \(\ell_{1}\)-sum \(E\oplus_{1}F\) and \(\ell_{\infty}\)-sum \(E\oplus_{\infty}F\) of two normed spaces \(E\) and \(F\).

**Definition 3**.: _Let \((E,\|\cdot\|_{E})\) and \((F,\|\cdot\|_{F})\) be two normed spaces. Then on \(E\times F\), we define two norms: the norm of the \(\ell_{\infty}\)-sum \(E\oplus_{\infty}F\) of \(E\) and \(F\) and of their \(\ell_{1}\)-sum \(E\oplus_{1}F\) by_

* \(\|(x,y)\|_{\infty}=\max(\|x\|_{E},\|y\|_{F})\)_;_
* \(\|(x,y)\|_{1}=\|x\|_{E}+\|y\|_{F}\)_._

We note that if \(E\) and \(F\) are normed spaces, then the unit ball of their \(\ell_{\infty}\)-sum is the Minkowski sum of the unit balls of \(E\) and \(F\) in \(E\times F\), and the unit ball of their \(\ell_{1}\)-sum is their convex hull. Analogously, if we consider two convex bodies \(K\subset\mathbb{R}^{n_{1}}\) and \(L\subset\mathbb{R}^{n_{2}}\), we define two convex bodies in \(\mathbb{R}^{n_{1}+n_{2}}\):

* \(K\oplus_{\infty}L=K\times\{0\}+\{0\}\times L=\{x_{1}+x_{2}:x_{1}\in K,x_{2}\in L\}\), their \(\ell_{\infty}\)-sum;
* \(K\oplus_{1}L=\operatorname{conv}(K\times\{0\},\{0\}\times L)\), their \(\ell_{1}\)-sum.

One major property of \(\ell_{1}\)- and \(\ell_{\infty}\)-sums is that

\[(K\oplus_{\infty}L)^{*}=K^{*}\oplus_{1}L^{*}. \tag{15}\]

Now we are ready to define Hanner polytopes.

**Definition 4**.: _In dimension \(1\), Hanner polytopes are symmetric segments. Suppose that Hanner polytopes have been defined in all dimensions \(m\leq n-1\). A Hanner polytope in dimension \(n\) is the unit ball of an \(n\)-dimensional normed space \(H\) which, for some \(k\) with \(1\leq k\leq n-1\), is the \(\ell_{\infty}\)-sum or the \(\ell_{1}\)-sum of a \(k\)-dimensional subspace \(E\) and an \((n-k)\)-dimensional subspace \(F\) of \(H\) whose unit balls are Hanner polytopes._

Let us now discuss the basic properties of Hanner polytopes:

* In \(\mathbb{R}^{2}\), there is a unique (up to isomorphism) Hanner polytope, which is the square.
* In \(\mathbb{R}^{3}\), there are exactly \(2\) (up to isomorphism) Hanner polytopes, which are the cube and the centrally symmetric octahedron.
* In \(\mathbb{R}^{4}\), there are, up to isomorphism, \(4\) different classes of Hanner polytopes, including two which are not isomorphic to the cube or the cross-polytope. And in \(\mathbb{R}^{n}\), their number increases quickly with \(n\).
* The normed spaces whose unit balls \(K\) are Hanner polytopes are, up to isometry, exactly those which satisfy the \(3\)-\(2\)-intersection property: for any three vectors \(u_{1},u_{2}\) and \(u_{3}\), if \((K+u_{i})\cap(K+u_{j})\neq\emptyset\) for all \(1\leq i<j\leq 3\), then the intersection of all three translates is not empty [HL].
* A Hanner polytope is unconditional (see Definition 5 below).
* If \(K\) is a Hanner polytope, then so is \(K^{*}\). This follows from (15).
* If \(K\subset\mathbb{R}^{n_{1}}\) and \(L\subset\mathbb{R}^{n_{2}}\) are two convex bodies, then

\[\mathcal{P}(K\oplus_{\infty}L)=\mathcal{P}(K\oplus_{1}L)=\frac{n_{1}!n_{2}!}{(n_{1}+n_{2})!}\mathcal{P}(K)\mathcal{P}(L).\]

* Using induction, it follows that the volume product of a Hanner polytope in \(\mathbb{R}^{n}\) is \(\frac{4^{n}}{n!}\).

In some sense, Conjecture 1 seems easier than Conjecture 2 because, up to an isomorphism, there is only one proposed minimum. But polarity is taken with respect to the Santalo point of a convex body \(K\), which is not always well located, so that one has to prove that for every \(z\in\operatorname{int}(K)\), \(\operatorname{vol}(K)\operatorname{vol}(K^{z})\geq\mathcal{P}(\Delta_{n})\). Observe however that if \(K\) has minimal volume product among all convex bodies, then its Santalo point is also its center of gravity.

### The planar case

First, note that the conjectures hold, with the case of equality, for \(n=2\) (Mahler [Ma1]; Meyer [Me2] for another proof and the case of equality). Let us sketch a proof of the planar case and use this opportunity to give an example of how the method of shadow systems, as well as Theorem 2, can be used; note that the method in this case can be traced back to the original proof from [Ma1] and is almost identical for the general and the symmetric case. We concentrate on the general case.

Proof.: (Lower bound in \(\mathbb{R}^{2}\)) It is enough to show that \(\mathcal{P}(T)\geq\mathcal{P}(\Delta_{2})\) for all convex polygons \(T\subset\mathbb{R}^{2}\). The main idea is to remove vertices of \(T\). We use induction on the number \(k\) of vertices. Let \(T\) be a polygon with \(k\geq 4\) vertices. Suppose that \(T=\operatorname{conv}(v_{1},v_{2},v_{3},\ldots,v_{k})\), with \(v_{1},v_{2},v_{3},...,v_{k}\) written in clockwise order. We shall prove that \(\mathcal{P}(T)\geq\mathcal{P}(Q)\) for a polygon \(Q\) with only \(k-1\) vertices; iterating, one reaches a triangle, and since all triangles are affine images of \(\Delta_{2}\), this proves the claim. For \(i\neq j\), let \(\ell_{i,j}\) be the line through \(v_{i}\) and \(v_{j}\). Let \(\theta\in S^{1}\) be parallel to the line \(\ell_{1,k-1}\), and define \(T_{t}=\operatorname{conv}(v_{1},v_{2},\ldots,v_{k-1},v_{k}+t\theta)\) (i.e. we move \(v_{k}\) on a line parallel to \(\ell_{1,k-1}\)). The line \(\{v_{k}+t\theta;t\in\mathbb{R}\}\) meets \(\ell_{k-2,k-1}\) at \(v_{k}^{\prime}\) when \(t=a\) and \(\ell_{1,2}\) at \(v_{1}^{\prime}\) when \(t=b\). Since \(T_{0}=T\), one may assume that \(a<0<b\). It is easy to see that, for \(t\in[a,b]\), \(t\mapsto T_{t}\) is a shadow system with \(\operatorname{vol}(T_{t})=\operatorname{vol}(T)\). By Theorem 2, \(t\mapsto\mathcal{P}(T_{t})^{-1}\) is convex on the interval \([a,b]\) and is thus maximal at one of its endpoints. Thus \(\mathcal{P}(T)\geq\min(\mathcal{P}(T_{a}),\mathcal{P}(T_{b}))\), where \(T_{a}=\operatorname{conv}(v_{1},\dots,v_{k-2},v_{k}^{\prime})\) and \(T_{b}=\operatorname{conv}(v_{1}^{\prime},v_{2},\dots,v_{k-1})\) are polygons with only \(k-1\) vertices.
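To make the argument concrete, here is the smallest instance (our own worked example, on an arbitrarily chosen square): let \(T=\operatorname{conv}(v_{1},v_{2},v_{3},v_{4})\) with \(v_{1}=(0,0)\), \(v_{2}=(0,1)\), \(v_{3}=(1,1)\), \(v_{4}=(1,0)\), and move \(v_{4}\) parallel to \(\ell_{1,3}\), i.e. along \(\theta=\frac{1}{\sqrt{2}}(1,1)\). Writing \(v_{4}+t\theta=(1+s,s)\), the endpoint configurations are

\[T_{a}=\operatorname{conv}\big((0,0),(0,1),(2,1)\big)\ \ (s=1,\ v_{4}+t\theta\in\ell_{2,3}),\qquad T_{b}=\operatorname{conv}\big((0,-1),(0,1),(1,1)\big)\ \ (s=-1,\ v_{4}+t\theta\in\ell_{1,2}),\]

both triangles of area \(1=\operatorname{vol}(T)\): the shadow system preserves the area of the triangle \(v_{1}v_{3}v_{4}\), since \(v_{4}\) moves parallel to \([v_{1},v_{3}]\). The proof then gives \(\mathcal{P}(T)\geq\min(\mathcal{P}(T_{a}),\mathcal{P}(T_{b}))=\mathcal{P}(\Delta_{2})\).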
**Remark 2**.: The above method was used to prove a number of partial cases of Mahler's conjectures (see [MR2, FMZ, AFZ, Sar]). Unfortunately, there seems to be no way to generalize this approach to dimension 3 and higher. One of the reasons is that a vertex \(v\) of a polytope \(P\) may be a vertex of many non-simplicial faces, and it is unclear how to "move" \(v\) without breaking the combinatorial structure of \(P\); and when the combinatorial structure of \(P\) is broken, it is difficult to compute volumes.

**Remark 3**.: In [Reb], Rebollo Bueno also established stochastic versions of the planar case of Mahler's conjectures. With the notations of Section 2.3.6, he proved that for any centrally symmetric convex body \(K\) in the plane and any \(r\geq 1\),

\[\mathbb{E}(\operatorname{vol}(P_{K,N}^{*})^{-r})\leq\mathbb{E}(\operatorname{vol}(P_{Q,N}^{*})^{-r}),\]

where \(Q\) is a square with \(\operatorname{vol}(Q)=\operatorname{vol}(K)\). For \(r=1\) and \(N\to+\infty\), this gives back the planar case of Mahler's conjecture. The same type of result is also established in [Reb] for general convex bodies in the plane.

### The case of zonoids

The conjecture holds for zonoids and polars of zonoids, with the case of equality for cubes (Reisner [Re1, Re2]; Gordon, Meyer and Reisner [GMR] for a second proof). We recall that a _zonoid_ in \(\mathbb{R}^{n}\) is a Hausdorff limit of _zonotopes_, that is, of finite sums of segments. Since a segment is symmetric with respect to its midpoint, any zonotope, and thus any zonoid, is centrally symmetric. From now on, when speaking of a zonoid \(Z\), we shall suppose that \(Z=-Z\). Also, the polar bodies of zonoids can be seen as the unit balls of finite dimensional subspaces of \(L_{1}([0,1],dx)\). Observe that every convex centrally symmetric body in \(\mathbb{R}^{2}\) is a zonoid. We refer to [Bo, GW, Sc] for basic properties of zonoids.

Proof.: (The lower bound of the volume product for zonoids [GMR]) For a zonoid \(Z\subset\mathbb{R}^{n}\), there exists a measure \(\mu\) on \(S^{n-1}\) such that \(h_{Z}(x)=\frac{1}{2}\int_{S^{n-1}}|\langle x,u\rangle|d\mu(u)\) for all \(x\in\mathbb{R}^{n}\). Since \(\operatorname{vol}(Z)=\frac{1}{n}\int_{S^{n-1}}\operatorname{vol}_{n-1}(P_{u^{\perp}}Z)d\mu(u)\), one has

\[\operatorname{vol}(Z^{*})\int_{S^{n-1}}\operatorname{vol}_{n-1}(P_{u^{\perp}}Z)d\mu(u)=n\operatorname{vol}(Z)\operatorname{vol}(Z^{*})=(n+1)\operatorname{vol}(Z)\int_{Z^{*}}h_{Z}(x)dx=\frac{n+1}{2}\operatorname{vol}(Z)\int_{S^{n-1}}\left(\int_{Z^{*}}|\langle x,u\rangle|dx\right)d\mu(u).\]

It follows that for some \(u\in S^{n-1}\), one has

\[\operatorname{vol}(Z^{*})\operatorname{vol}_{n-1}(P_{u^{\perp}}Z)\leq\frac{n+1}{2}\operatorname{vol}(Z)\int_{Z^{*}}|\langle x,u\rangle|dx.\]

Now \(\int_{Z^{*}}|\langle x,u\rangle|dx=2\int_{0}^{\infty}tf(t)dt\), where \(f(t)=\operatorname{vol}\left(Z^{*}\cap(u^{\perp}+tu)\right)\) is the volume in \(u^{\perp}\) of the sections of \(Z^{*}\) by hyperplanes parallel to \(u^{\perp}\). Note that \(f(0)=\operatorname{vol}(Z^{*}\cap u^{\perp})\) and \(2\int_{0}^{\infty}f(t)dt=\operatorname{vol}(Z^{*})\). By the Brunn-Minkowski theorem, the function \(f^{\frac{1}{n-1}}\) is concave on its support. By a classical estimate (see for instance [MiP]),

\[\int_{0}^{\infty}tf(t)dt\leq\frac{n}{n+1}\frac{(\int_{0}^{\infty}f(t)dt)^{2}}{f(0)},\]

with equality if and only if \(f(t)=f(0)(1-ct)_{+}^{n-1}\), for some \(c>0\) and all \(t\geq 0\).
This gives

\[\int_{Z^{*}}|\langle x,u\rangle|dx\leq 2\frac{n}{n+1}\frac{4^{-1}\operatorname{vol}(Z^{*})^{2}}{\operatorname{vol}_{n-1}(Z^{*}\cap u^{\perp})}=\frac{n}{2(n+1)}\frac{\operatorname{vol}(Z^{*})^{2}}{\operatorname{vol}_{n-1}(Z^{*}\cap u^{\perp})},\]

and thus

\[\operatorname{vol}(Z^{*})\operatorname{vol}_{n-1}(P_{u^{\perp}}Z)\leq\frac{n+1}{2}\operatorname{vol}(Z)\frac{n}{2(n+1)}\frac{\operatorname{vol}(Z^{*})^{2}}{\operatorname{vol}_{n-1}(Z^{*}\cap u^{\perp})},\]

so that

\[\operatorname{vol}(Z)\operatorname{vol}(Z^{*})\geq\frac{4}{n}\operatorname{vol}_{n-1}(P_{u^{\perp}}Z)\operatorname{vol}_{n-1}(Z^{*}\cap u^{\perp}),\]

which allows one to conclude by induction, with the case of equality, since \(P_{u^{\perp}}Z\) is a zonoid in dimension \(n-1\) and \((P_{u^{\perp}}Z)^{*}=Z^{*}\cap u^{\perp}\).

**Remark 4**.: Campi and Gronchi [10] presented a very interesting inequality on the volume of \(L_{p}\)-zonotopes, which gives, in particular, another proof of the above result. It is interesting to note that the proof in [10] is based on the shadow systems technique. Another proof using shadow systems was presented by Saroglou in [11].

**Remark 5**.: Marc Meckes [12] gave another proof of Mahler's conjecture for zonoids, based on the notion of _magnitude_ introduced by Leinster [13], which is a numerical isometric invariant of metric spaces. He studied the magnitude of a convex body in hypermetric normed spaces (which include \(\ell_{p}^{n}\), \(p\in[1,2]\)) and proved a new upper bound for the magnitude on such spaces using the Holmes-Thompson intrinsic volumes of their unit balls.

### The case of unconditional bodies

**Definition 5**.: _Let \(K\) in \(\mathbb{R}^{n}\) be a convex body. We say that \(K\) is unconditional if for some basis \(e_{1},\dots,e_{n}\) of \(\mathbb{R}^{n}\) one has \(x_{1}e_{1}+\dots+x_{n}e_{n}\in K\) if and only if \(|x_{1}|e_{1}+\dots+|x_{n}|e_{n}\in K.\) We say that \(K\) is almost unconditional if for some basis \(e_{1},\dots,e_{n}\) of \(\mathbb{R}^{n}\) one has, for every \(1\leq i\leq n\), \(P_{i}K=K\cap H_{i}\), where \(H_{i}\) is the linear span of \(\{e_{j},j\neq i\}\) and \(P_{i}\) is the linear projection from \(\mathbb{R}^{n}\) onto \(H_{i}\) parallel to \(e_{i}\)._

If \(K\) is unconditional, after a linear transformation which does not change \(\mathcal{P}(K)\), we may suppose that \((e_{1},\dots,e_{n})\) is the canonical basis of \(\mathbb{R}^{n}\). Unconditional bodies are almost unconditional and centrally symmetric. Observe also that if \(K\) is unconditional (resp. almost unconditional) with respect to some basis, then \(K^{*}\) is also unconditional (resp. almost unconditional) with respect to the dual basis. We follow the proof of [13] of the inequality \(\mathcal{P}(K)\geq\mathcal{P}(B_{\infty}^{n})\) (the first proof was given in [14]). We don't prove the case of equality (Hanner polytopes), which is more involved.

Proof.: We use induction on \(n\). The case \(n=1\) is trivial. We suppose that \(e_{1},\dots,e_{n}\) is the canonical basis of \(\mathbb{R}^{n}\). Let \(K_{+}=K\cap\mathbb{R}^{n}_{+}\) and \(K^{*}_{+}=K^{*}\cap\mathbb{R}^{n}_{+}\). Then \(\mathcal{P}(K)=4^{n}\operatorname{vol}(K_{+})\operatorname{vol}(K^{*}_{+})\).
For \(x\in\mathbb{R}^{n}_{+}\), one has

\[x\in K_{+}\text{ if and only if }\langle x,y\rangle\leq 1\text{ for any }y\in K^{*}_{+},\]
\[y\in K^{*}_{+}\text{ if and only if }\langle x,y\rangle\leq 1\text{ for any }x\in K_{+}.\]

For \(1\leq i\leq n\), \(K_{i}:=K\cap\{x_{i}=0\}\) is an unconditional body in \(\mathbb{R}^{n-1}\) and \((K_{i})^{*}=(K^{*})_{i}\). Let \((K_{i})_{+}=K_{i}\cap(\mathbb{R}^{+})^{n}\). For \(x=(x_{1},\dots,x_{n})\in K_{+}\), let \(C_{i}(x)\) be the convex hull of \(\{x\}\) with \((K_{i})_{+}\). Since \(C_{i}(x)\) is a cone with apex \(x\) and basis \((K_{i})_{+}\), one has

\[\operatorname{vol}\big(C_{i}(x)\big)=\frac{x_{i}}{n}\operatorname{vol}_{n-1}\big((K_{i})_{+}\big).\]

Thus

\[\operatorname{vol}(K_{+})\geq\operatorname{vol}\big(\cup_{i=1}^{n}C_{i}(x)\big)=\sum_{i=1}^{n}\operatorname{vol}\big(C_{i}(x)\big)=\frac{1}{n}\sum_{i=1}^{n}x_{i}\operatorname{vol}_{n-1}\big((K_{i})_{+}\big). \tag{16}\]

Let \(a:=\frac{1}{n\operatorname{vol}(K_{+})}\Big(\operatorname{vol}_{n-1}\big((K_{1})_{+}\big),\ldots,\operatorname{vol}_{n-1}\big((K_{n})_{+}\big)\Big)\in\mathbb{R}^{n}\). By (16) one has \(\langle a,x\rangle\leq 1\) for all \(x\in K_{+}\), that is, \(a\in K_{+}^{*}\). Also, \(a^{*}:=\frac{1}{n\operatorname{vol}(K_{+}^{*})}\Big(\operatorname{vol}_{n-1}\big((K_{1}^{*})_{+}\big),\ldots,\operatorname{vol}_{n-1}\big((K_{n}^{*})_{+}\big)\Big)\in K_{+}\). Thus \(\langle a,a^{*}\rangle\leq 1\), that is,

\[\frac{\sum_{i=1}^{n}\operatorname{vol}_{n-1}\big((K_{i})_{+}\big)\operatorname{vol}_{n-1}\big((K_{i}^{*})_{+}\big)}{n^{2}\operatorname{vol}(K_{+})\operatorname{vol}(K_{+}^{*})}\leq 1,\]

so that

\[\mathcal{P}(K)=4^{n}\operatorname{vol}(K_{+})\operatorname{vol}(K_{+}^{*})\geq\frac{4^{n}}{n^{2}}\sum_{i=1}^{n}\operatorname{vol}_{n-1}\big((K_{i})_{+}\big)\operatorname{vol}_{n-1}\big((K_{i}^{*})_{+}\big).\]

For \(1\leq i\leq n\), one has \(\operatorname{vol}_{n-1}(K_{i})=2^{n-1}\operatorname{vol}_{n-1}\big((K_{i})_{+}\big)\) and \(\operatorname{vol}_{n-1}(K_{i}^{*})=2^{n-1}\operatorname{vol}_{n-1}\big((K_{i}^{*})_{+}\big)\). Since the \(K_{i}\) are also unconditional, the induction hypothesis gives \(\mathcal{P}(K_{i})\geq\frac{4^{n-1}}{(n-1)!}\), \(1\leq i\leq n\). Thus

\[\mathcal{P}(K)\geq\frac{4}{n^{2}}\sum_{i=1}^{n}\operatorname{vol}_{n-1}(K_{i})\operatorname{vol}_{n-1}(K_{i}^{*})\geq\frac{4}{n^{2}}\cdot n\cdot\frac{4^{n-1}}{(n-1)!}=\frac{4^{n}}{n!}.\]

**Remark 6**.: A small modification of this proof allows one to treat the case of almost unconditional centrally symmetric bodies. Note that every centrally symmetric body in \(\mathbb{R}^{2}\) is almost unconditional.

### The 3-dimensional symmetric case

The symmetric case in \(\mathbb{R}^{3}\) was solved by Iriyeh and Shibata [11] in 2017, with a quite involved proof of about sixty pages. We would like here to highlight the main ideas and to connect it with the unconditional case presented above. We will use the shorter proof given in [12]. A symmetric body \(K\subset\mathbb{R}^{n}\), \(n\geq 3\), is in general not almost unconditional, and thus not unconditional. However, every planar convex body has an almost unconditional basis. For \(n=3\), the goal is to show that a 3-dimensional convex symmetric body \(K\) may still have core properties of an unconditional body. This is done with the help of the following equipartition result:

**Theorem 9**.: _Let \(K\subset\mathbb{R}^{3}\) be a symmetric convex body.
Then there exist 3 planes \(H_{1},H_{2},H_{3}\) passing through the origin such that:_

* _they split_ \(K\) _into_ \(8\) _pieces of equal volume, and_
* _for each_ \(i=1,2,3\)_, the section_ \(K\cap H_{i}\) _is split into_ \(4\) _parts of equal area by the other two planes._

Note that Theorem 9 belongs to the very rich theory of equipartitions. For example, a celebrated result of Hadwiger [13], answering a question of Grunbaum [13], shows that for any absolutely continuous finite measure in \(\mathbb{R}^{3}\), there exist three planes for which any octant has \(1/8\) of the total mass. For proving Theorem 9, one can use a result of Klartag (Theorem 2.1 of [11]); we refer to [12] for details. Our goal is to create an analog of formula (16). Consider a sufficiently regular oriented hypersurface \(A\subset\mathbb{R}^{n}\) and define the vector

\[\overrightarrow{V}(A)=\int_{A}\overrightarrow{n_{A}}(x)dx,\]

where \(\overrightarrow{n_{A}}(x)\) is the unit normal to \(A\) at \(x\) defined by its orientation. Next, for a convex body \(K\subset\mathbb{R}^{n}\) with \(0\in\operatorname{int}(K)\), the orientation of a subset \(A\subset\partial K\) is given by the outer normal \(\overrightarrow{n_{K}}\) to \(K\). If \(\mathcal{C}(A):=\{rx;\ 0\leq r\leq 1,x\in A\}\), then

\[\operatorname{vol}(\mathcal{C}(A))=\frac{1}{n}\int_{A}\langle x,\overrightarrow{n_{K}}(x)\rangle dx.\]

The following is a key proposition for our proof.

**Proposition 1**.: _Let \(K\subset\mathbb{R}^{n}\) be a convex body, with \(0\in\operatorname{int}(K)\), and let \(A\) be a Borel subset of \(\partial K\) with \(\operatorname{vol}(\mathcal{C}(A))\neq 0\). Then for all \(x\in K\),_

\[\frac{1}{n}\langle x,\overrightarrow{V}(A)\rangle\leq\operatorname{vol}(\mathcal{C}(A))\text{ and thus }\frac{\overrightarrow{V}(A)}{n\operatorname{vol}(\mathcal{C}(A))}\in K^{*}.\]

Proof.: For all \(x\in K\), we have \(\langle x,\overrightarrow{n_{K}}(z)\rangle\leq\langle z,\overrightarrow{n_{K}}(z)\rangle\) for every \(z\in\partial K\). Thus for all \(x\in K\),

\[\langle x,\overrightarrow{V}(A)\rangle=\int_{A}\langle x,\overrightarrow{n_{K}}(z)\rangle dz\leq\int_{A}\langle z,\overrightarrow{n_{K}}(z)\rangle dz=n\operatorname{vol}(\mathcal{C}(A)).\]

**Corollary 1**.: _Let \(K\) be a convex body in \(\mathbb{R}^{n}\) with \(0\in\operatorname{int}(K)\). If \(A\subset\partial K\) and \(B\subset\partial K^{*}\) are Borel subsets such that \(\operatorname{vol}(\mathcal{C}(A))>0\) and \(\operatorname{vol}(\mathcal{C}(B))>0\), then_

\[\langle\overrightarrow{V}(A),\overrightarrow{V}(B)\rangle\leq n^{2}\operatorname{vol}(\mathcal{C}(A))\operatorname{vol}(\mathcal{C}(B)).\]

Proof.: We use Proposition 1 to get \(\frac{\overrightarrow{V}(A)}{n\operatorname{vol}(\mathcal{C}(A))}\in K^{*}\) and \(\frac{\overrightarrow{V}(B)}{n\operatorname{vol}(\mathcal{C}(B))}\in K\).

**Proof of Conjecture 2 for \(n=3\):** Since the volume product is continuous, it is enough to prove the conjecture for a centrally symmetric, smooth, strictly convex body \(K\) (see [Sc], Section 3.4). From the linear invariance of the volume product, we may assume that the equipartition property obtained in Theorem 9 is satisfied by the coordinate planes given by the canonical orthonormal basis \((e_{1},e_{2},e_{3})\). As in the unconditional case, we divide \(\mathbb{R}^{3}\) and the body \(K\) into the octants defined by this basis, which define cones as in Corollary 1.
The main issue is that, in sharp contrast with the unconditional case, the cone dual to the cone defined as the intersection of \(K\) with an octant is not the intersection of \(K^{*}\) with this octant. We will need a bit of combinatorics to work around this issue. For \(\varepsilon\in\{-1;1\}^{3}\), let the \(\varepsilon\)-octant be \(\{x\in\mathbb{R}^{3};\varepsilon_{i}x_{i}\geq 0\text{ for }i=1,2,3\}\) and for \(L\subset\mathbb{R}^{3}\), let \(L_{\varepsilon}\) be the intersection of \(L\) with the \(\varepsilon\)-octant: \(L_{\varepsilon}=\{x\in L;\varepsilon_{i}x_{i}\geq 0;\ i=1,2,3\}.\) Let \(N(\varepsilon):=\{\varepsilon^{\prime}\in\{-1,1\}^{3}:\sum_{i=1}^{3}|\varepsilon_{i}-\varepsilon_{i}^{\prime}|=2\}\). Then \(\varepsilon^{\prime}\in N(\varepsilon)\) if and only if \([\varepsilon,\varepsilon^{\prime}]\) is an edge of \([-1,1]^{3}\). If \(K_{\varepsilon}\cap K_{\varepsilon^{\prime}}\) is a hypersurface, we define \(K_{\varepsilon}\overrightarrow{\cap}K_{\varepsilon^{\prime}}\) to be oriented according to the outer normals of \(\partial K_{\varepsilon}\). Here and below, \(\partial K_{\varepsilon}\) denotes the curved part \(K_{\varepsilon}\cap\partial K\) of the boundary of \(K_{\varepsilon}\). Since the integral of the outer unit normal over the whole (closed) boundary of \(K_{\varepsilon}\) vanishes by Stokes' theorem, we obtain

\[\overrightarrow{V}(\partial K_{\varepsilon})=-\sum_{\varepsilon^{\prime}\in N(\varepsilon)}\overrightarrow{V}(K_{\varepsilon}\overrightarrow{\cap}K_{\varepsilon^{\prime}}),\]

and, using the equipartition of the areas of \(K\cap e_{i}^{\perp}\), we get

\[\overrightarrow{V}(\partial K_{\varepsilon})=-\sum_{\varepsilon^{\prime}\in N(\varepsilon)}\overrightarrow{V}(K_{\varepsilon}\overrightarrow{\cap}K_{\varepsilon^{\prime}})=\sum_{i=1}^{3}\frac{\operatorname{vol}(K\cap e_{i}^{\perp})}{4}\varepsilon_{i}\overrightarrow{e_{i}}.\]

Let us look at the dual. Since \(K\) is strictly convex and smooth, there exists a diffeomorphism \(\varphi:\partial K\to\partial K^{*}\) such that \(\langle\varphi(x),x\rangle=1\) for all \(x\in\partial K\). We extend \(\varphi\) to \(\mathbb{R}^{3}\) by homogeneity of degree one: \(\varphi(\lambda x)=\lambda\varphi(x)\) for \(\lambda\geq 0\).
Then

\[K^{*}=\bigcup_{\varepsilon}\varphi(K_{\varepsilon})\text{ and }\operatorname{vol}(K^{*})=\sum_{\varepsilon}\operatorname{vol}\big(\varphi(K_{\varepsilon})\big).\]

From the equipartition of volumes, one has

\[\operatorname{vol}(K)\operatorname{vol}(K^{*})=\sum_{\varepsilon}\operatorname{vol}(K)\operatorname{vol}(\varphi(K_{\varepsilon}))=8\sum_{\varepsilon}\operatorname{vol}(K_{\varepsilon})\operatorname{vol}\big(\varphi(K_{\varepsilon})\big).\]

From Corollary 1, we deduce that for \(\varepsilon\in\{-1,1\}^{3}\),

\[\operatorname{vol}(K_{\varepsilon})\operatorname{vol}\big(\varphi(K_{\varepsilon})\big)\geq\frac{1}{9}\langle\overrightarrow{V}(\partial K_{\varepsilon}),\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)\rangle.\]

Thus

\[\operatorname{vol}(K)\operatorname{vol}(K^{*})\geq\frac{8}{9}\sum_{\varepsilon}\langle\overrightarrow{V}(\partial K_{\varepsilon}),\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)\rangle=\frac{8}{9}\sum_{\varepsilon}\langle\sum_{i=1}^{3}\frac{\operatorname{vol}(K\cap e_{i}^{\perp})}{4}\varepsilon_{i}\overrightarrow{e_{i}},\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)\rangle=\frac{8}{9}\sum_{i=1}^{3}\frac{\operatorname{vol}(K\cap e_{i}^{\perp})}{4}\langle\overrightarrow{e_{i}},\sum_{\varepsilon}\varepsilon_{i}\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)\rangle.\]

Now we use Stokes' theorem for \(\varphi(\partial K)\) to get

\[\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)=-\sum_{\varepsilon^{\prime}\in N(\varepsilon)}\overrightarrow{V}\big(\varphi(K_{\varepsilon}\overrightarrow{\cap}K_{\varepsilon^{\prime}})\big).\]

The next step requires a careful computation of the sums, following the orientations of all the surfaces, which gives many cancellations. One then combines the correct parts of \(K\) and \(\varphi(K)\) to get

\[\operatorname{vol}(K)\operatorname{vol}(K^{*})\geq\frac{4}{9}\sum_{i=1}^{3}\operatorname{vol}_{n-1}(K\cap e_{i}^{\perp})\langle\overrightarrow{e_{i}},\overrightarrow{V}\big(\varphi(K\cap e_{i}^{\perp})\big)\rangle\]

(see [10] for the precise computations). Let \(P_{i}\) be the orthogonal projection onto \(e_{i}^{\perp}\). Then \(P_{i}:\varphi(K\cap e_{i}^{\perp})\to P_{i}(K^{*})\) is a bijection.
Using Cauchy's formula for the volume of projections, we get

\[\langle\overrightarrow{e_{i}},\overrightarrow{V}\big(\varphi(K\cap e_{i}^{\perp})\big)\rangle=\int\limits_{\varphi(K\cap e_{i}^{\perp})}\langle\overrightarrow{n_{\varphi(K\cap e_{i}^{\perp})}}(x),\overrightarrow{e_{i}}\rangle dx=\operatorname{vol}_{n-1}\big(P_{i}(\varphi(K\cap e_{i}^{\perp}))\big)=\operatorname{vol}_{n-1}\big(P_{i}(K^{*})\big).\]

Finally,

\[\operatorname{vol}(K)\operatorname{vol}(K^{*})\geq\frac{4}{9}\sum_{i=1}^{3}\operatorname{vol}_{n-1}(K\cap e_{i}^{\perp})\operatorname{vol}_{n-1}\big(P_{i}(K^{*})\big)=\frac{4}{9}\sum_{i=1}^{3}\operatorname{vol}_{n-1}(K\cap e_{i}^{\perp})\operatorname{vol}_{n-1}\big((K\cap e_{i}^{\perp})^{*}\big)\geq\frac{4}{9}\times 3\times\frac{4^{2}}{2!}=\frac{4^{3}}{3!}.\]

### Further special cases where the conjectures hold

Let us list here a number of other special cases in which the conjectured inequality was proved:

* Symmetric polytopes in \(\mathbb{R}^{n}\) with \(2n+2\) vertices, for \(n\leq 9\) (Lopez and Reisner [LR]) and for any \(n\) (Karasev [Ka]).
* For \(p\geq 1\), hyperplane sections through \(0\) of \(B_{p}^{n}=\{(x_{1},\dots,x_{n})\in\mathbb{R}^{n};\sum_{i=1}^{n}|x_{i}|^{p}\leq 1\}\) (Karasev [Ka]). Karasev's proof of those results is, so far, one of the few concrete applications of symplectic geometry, through the billiard approach, to proving special cases of Mahler's conjecture.
* Bodies of revolution [MR1].
* Some bodies with many symmetries: Barthe and Fradelizi [BF] established that a convex body \(K\) which is symmetric with respect to a family of hyperplanes whose intersection is reduced to one point satisfies Conjecture 1. More generally, it is proved in [BF] that if \(K\) is invariant under the reflections fixing \(P_{1}\times\dots\times P_{k}\), where for \(1\leq i\leq k\) the \(P_{i}\) are regular polytopes or Euclidean balls in subspaces \(E_{i}\) with \(\mathbb{R}^{n}=E_{1}\oplus\dots\oplus E_{k}\), then \(\mathcal{P}(K)\geq\mathcal{P}(P_{1}\times\dots\times P_{k})\).
* Iriyeh and Shibata established similar results in [IS2, IS3]. They determined the exact lower bound of the volume product of convex bodies invariant by some group of symmetries (many classical symmetry groups in dimension \(3\) [IS2] and the special orthogonal groups of the simplex and of the cube [IS3]).
* Polytopes in \(\mathbb{R}^{n}\) with not more than \(n+3\) vertices [MR2].
* Almost unconditional symmetric bodies (Saint Raymond [SR1]), with the case of equality for Hanner polytopes (Meyer [Me1], Reisner [Re3]). Also in [SR1] is proved a result for unconditional sums of convex bodies, described next: for \(1\leq i\leq m\), let \(K_{i}\subset\mathbb{R}^{d_{i}}\) be convex symmetric bodies and let \(L\subset\mathbb{R}^{m}\) be an unconditional body with respect to the canonical basis \(e_{1},\dots,e_{m}\).
We define _the unconditional sum of \(K_{1},\dots,K_{m}\) with respect to \(L\)_ by \[K_{1}\oplus_{L}\dots\oplus_{L}K_{m}=\{(x_{1},\dots,x_{m})\in\mathbb{R}^{d_{1}}\times\dots\times\mathbb{R}^{d_{m}};\|x_{1}\|_{K_{1}}e_{1}+\dots+\|x_{m}\|_{K_{m}}e_{m}\in L\}.\] Clearly \(K_{1}\oplus_{L}\dots\oplus_{L}K_{m}\) is a symmetric convex body in \(\mathbb{R}^{d_{1}+\dots+d_{m}}\). Moreover, setting \(L_{+}=L\cap\mathbb{R}_{+}^{m}\) and \(L_{+}^{*}=L^{*}\cap\mathbb{R}_{+}^{m}\), it is easy to see that one has \[\mathcal{P}(K_{1}\oplus_{L}\cdots\oplus_{L}K_{m})=\Big(\int_{(t_{1},\dots,t_{m})\in L_{+}}\prod_{i=1}^{m}t_{i}^{d_{i}-1}dt_{1}\dots dt_{m}\Big)\times\Big(\int_{(t_{1},\dots,t_{m})\in L_{+}^{*}}\prod_{i=1}^{m}t_{i}^{d_{i}-1}dt_{1}\dots dt_{m}\Big)\prod_{i=1}^{m}\mathcal{P}(K_{i})\] and \[\Big(\int_{(t_{1},\dots,t_{m})\in L_{+}}\prod_{i=1}^{m}t_{i}^{d_{i}-1}dt_{1}\dots dt_{m}\Big)\Big(\int_{(t_{1},\dots,t_{m})\in L_{+}^{*}}\prod_{i=1}^{m}t_{i}^{d_{i}-1}dt_{1}\dots dt_{m}\Big)\geq\frac{d_{1}!\times\cdots\times d_{m}!}{(d_{1}+\cdots+d_{m})!}\;.\] Observe that it follows from [Me1] or [Re3] that there is equality in the last inequality if and only if \(L\) is a Hanner polytope. Finally, if \(\mathcal{P}(K_{i})\geq 4^{d_{i}}/d_{i}!\) for \(1\leq i\leq m\), then \[\mathcal{P}(K_{1}\oplus_{L}\cdots\oplus_{L}K_{m})\geq\frac{4^{d_{1}+\cdots+d_{m}}}{(d_{1}+\cdots+d_{m})!}.\]

* Although their volumes have been computed (see [SR2]), it is not known whether the unit balls of the classical ideals of operators satisfy Conjecture 2.
* An interpretation of Conjecture 2 in terms of wavelets was given in [Ba4].
* Connections of Mahler's conjecture and the Blaschke-Santalo inequality to the maximum and the minimum of \(\lambda_{1}(K)\lambda_{1}(K^{*})\), where \(K\) is a convex body and \(\lambda_{1}(K)\) is the first eigenvalue of the Laplacian on the relative interior of \(K\) with Dirichlet condition \(u=0\) on \(\partial K\), were given in [BuF].

### Local minimizers and stability results

One may investigate the properties of the local minimizers of \(\mathcal{P}(K)\). A natural open question is whether such a minimizer must be a polytope. A number of results in this direction were proved by studying convex bodies with positive curvature. Stancu [St] proved that if \(K\) is a convex body which is smooth enough and has a strictly positive Gauss curvature everywhere, then the volume product of \(K\) cannot be a local minimum. She showed it as a consequence of the fact that, for some \(\delta(K)>0\), one has \[\operatorname{vol}(K_{\delta})\operatorname{vol}((K_{\delta})^{*})\geq\operatorname{vol}(K)\operatorname{vol}(K^{*})\geq\operatorname{vol}(K^{\delta})\operatorname{vol}((K^{\delta})^{*}),\] for any \(\delta\in(0,\delta(K))\), where \(K_{\delta}\) and \(K^{\delta}\) stand for the convex floating body and the illumination body associated to \(K\) with parameter \(\delta\). A stronger result for local minimizers was proved in [RSW]: if \(K\) is a convex body which is a local minimizer of the volume product, then \(K\) has no positive curvature at any point of its boundary. The study of local minimizers was continued in [HHL], where the authors computed the first and the second derivative of the volume product in terms of the support function. Those results may be seen as a hint toward the conjecture that a minimizer must be a polytope. We also note that [GM] extended it to the functional case (see Section 5 below).
It is known that the conjectured global minimizers, that is, Hanner polytopes in the centrally symmetric case and simplices in the general case, are actually local minimizers. This line of questions originates from the blog of Tao [T1, T2], where a number of ideas that may lead to a better understanding of the volume product were discussed. Nazarov, Petrov, Ryabogin and Zvavitch [NPRZ] were able to show that the cube and the cross-polytope are local minimizers. Kim and Reisner [KiR] generalized this result to the case of non-symmetric bodies, proving that the simplex is a local minimizer. The most general result in the symmetric case was obtained by Kim [Ki], who considered the case of Hanner polytopes. More precisely, let \[d_{BM}(K,L)=\inf\{d:\ d>0,\ \text{there exists }T\in GL(n)\text{ such that }K\subseteq TL\subseteq dK\}\] be the Banach-Mazur multiplicative distance between two symmetric convex bodies \(K,L\subset\mathbb{R}^{n}\). Then

**Theorem 10**.: _There exist constants \(\delta(n),c(n)>0\) depending only on \(n\) such that if \(K\) is a symmetric convex body in \(\mathbb{R}^{n}\) with_ \[\min\{d_{BM}(K,H):H\text{ is a Hanner polytope in }\mathbb{R}^{n}\}=1+\delta,\] _for some \(0<\delta\leq\delta(n)\), then_ \[\mathcal{P}(K)\geq(1+c(n)\delta)\cdot\mathcal{P}(B_{\infty}^{n}).\]

The above theorem was used in [KiZ] to show the stability of the volume product around the class of unconditional bodies. The question of stability for minima and maxima was also treated in various cases [BMMR, BH, KiZ, Bor, FHMRZ]. A general approach to global stability of the volume product was considered in [FHMRZ], where the following natural lemma was proved:

**Lemma 1**.: _Let \((\mathcal{A}_{1},d_{1})\) be a compact metric space, \((\mathcal{A}_{2},d_{2})\) be a metric space, \(f:\mathcal{A}_{1}\to\mathcal{A}_{2}\) be a continuous function and \(D\) be a closed subset of \(\mathcal{A}_{2}\). Then, (1) For any \(\beta>0\), there exists \(\alpha>0\) such that \(d_{1}(x,f^{-1}(D))\geq\beta\) implies \(d_{2}(f(x),D)\geq\alpha\). (2) If for some \(c_{1},c_{2}>0\), \(d_{1}(x,f^{-1}(D))<c_{1}\) implies \(d_{2}(f(x),D)\geq c_{2}d_{1}(x,f^{-1}(D))\), then for some \(C>0\), one has \(d_{1}(x,f^{-1}(D))\leq Cd_{2}(f(x),D)\) for every \(x\in\mathcal{A}_{1}\)._

Together with a local minimality result (for example Theorem 10), Lemma 1 gives almost immediately a stability result for known bounds of the volume product. Let us illustrate this technique in the case of symmetric convex bodies in \(\mathbb{R}^{3}\).

**Theorem 11**.: _There exists an absolute constant \(C>0\) such that for every symmetric convex body \(K\subset\mathbb{R}^{3}\) and \(\delta>0\) satisfying \(\mathcal{P}(K)\leq(1+\delta)\mathcal{P}(B_{\infty}^{3})\), one has_ \[\min\{d_{BM}(K,B_{\infty}^{3}),d_{BM}(K,B_{1}^{3})\}\leq 1+C\delta.\]

Proof.: Using the linear invariance of the volume product and John's theorem, we reduce to the case \(B_{2}^{3}\subseteq K\subseteq\sqrt{3}B_{2}^{3}\). Our metric space \(\mathcal{A}_{1}\) will be the set of such bodies with the Hausdorff metric \(d_{H}\). Let \(\mathcal{A}_{2}=\mathbb{R}\). Then \(f:\mathcal{A}_{1}\to\mathcal{A}_{2}\), defined by \(f(K)=\mathcal{P}(K)\), is continuous on \(\mathcal{A}_{1}\) (see for example [FMZ]). Finally, let \(D=\{\mathcal{P}(B_{\infty}^{3})\}\). From the description of the equality cases (i.e.
that \(K\) or \(K^{*}\) must be a parallelepiped) proved in [IS1, FHMRZ], we get \[f^{-1}(D)=\{K\in\mathcal{A}_{1};\mathcal{P}(K)=\mathcal{P}(B_{\infty}^{3})\}=\{K\in\mathcal{A}_{1};K=SB_{\infty}^{3}\ \text{or}\ K=\sqrt{3}SB_{1}^{3},\ \text{for some}\ S\in\mathrm{SO}(3)\}.\] Note that \(B_{\infty}^{3}\) is in John position (see for example [AGM1]) and thus if \(B_{2}^{3}\subset TB_{\infty}^{3}\subset\sqrt{3}B_{2}^{3}\) for some \(T\in GL(3)\), then \(T\in SO(3)\). Next, we show that the assumptions in the second part of Lemma 1 are satisfied. Since \(d_{BM}(K^{*},L^{*})=d_{BM}(K,L)\), we may restate the \(\mathbb{R}^{3}\) version of Theorem 10 in the following form: there are absolute constants \(c_{1},c_{2}>0\) such that for every symmetric convex body \(K\) in \(\mathbb{R}^{3}\) satisfying \(\min\{d_{BM}(K,B_{\infty}^{3}),d_{BM}(K,B_{1}^{3})\}:=1+d\leq 1+c_{1}\), one has \(\mathcal{P}(K)\geq\mathcal{P}(B_{\infty}^{3})+c_{2}d\). To finish checking the assumption, note that for all convex bodies \(K,L\) such that \(B_{2}^{3}\subseteq K,L\subseteq\sqrt{3}B_{2}^{3}\), one has: \[d_{BM}(K,L)-1\leq\min_{T\in GL(3)}d_{H}(TK,L)\leq\sqrt{3}(d_{BM}(K,L)-1). \tag{17}\] Applying Lemma 1, we deduce that there exists \(c>0\) such that if \(B_{2}^{3}\subseteq K\subseteq\sqrt{3}B_{2}^{3}\), then \[\min_{S\in SO(3)}\min\big(d_{H}(K,SB_{\infty}^{3}),d_{H}(K,S\sqrt{3}B_{1}^{3})\big)\leq c|\mathcal{P}(K)-\mathcal{P}(B_{\infty}^{3})|.\] Using (17) we conclude the proof.

## 4. Asymptotic estimates and Bourgain-Milman's theorem

If Conjecture 2 holds true for centrally symmetric bodies \(K\), then one has \[\frac{4}{n!^{\frac{1}{n}}}\leq\mathcal{P}(K)^{\frac{1}{n}}\leq\frac{\pi}{\Gamma(1+\frac{n}{2})^{\frac{2}{n}}},\] so that \[\frac{4e+o(1)}{n}\leq\mathcal{P}(K)^{\frac{1}{n}}\leq\frac{2e\pi+o(1)}{n}. \tag{18}\] Similarly, the truth of Conjecture 1 would imply that for any convex body \(K\), one has \[\mathcal{P}(K)^{\frac{1}{n}}\geq\mathcal{P}(\Delta_{n})^{\frac{1}{n}}\geq\frac{e^{2}+o(1)}{n}.\] Thus the function \(K\mapsto n\mathcal{P}(K)^{\frac{1}{n}}\) would be bounded above and below by positive absolute constants. This last fact was actually proved by Bourgain and Milman [BM] in 1986. Indeed, the upper bound is ensured by the Blaschke-Santalo inequality. For the lower bound, the first important step was taken by Gordon and Reisner [GR], who proved that \[\mathcal{P}(K)^{\frac{1}{n}}\geq\frac{c}{n\log(n)}.\] Then, Bourgain and Milman [BM] proved that \[\mathcal{P}(K)^{\frac{1}{n}}\geq\frac{c}{n}. \tag{19}\] For the original proof of (19) and other proofs of the same type, see [BM, LMi, Pi]. The constant \(c\) obtained in those proofs was not at all explicit, and even when it was, it was quite small. After having given a low-technology proof of the Gordon-Reisner result [Ku1], G. Kuperberg [Ku2] gave another proof of (19) based on differential geometry, and obtained the explicit constant \(c=\pi e\) in (19) in the symmetric case, which is not far from the conjectured optimal value \(4e\) and is the best constant known for now. The best constant in the general (i.e. not necessarily symmetric) case may be obtained using the Rogers-Shephard inequality, see the end of this section. Using Fourier transform techniques, other proofs were given by Nazarov [Na] (see also Blocki [Blo1, Blo2], Berndtsson [Be2, Be3] and Mastroianis and Rubinstein [MaR]). Giannopoulos, Paouris and Vritsiou also gave a proof using classical techniques of the local theory of Banach spaces [GPV].
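These asymptotics are easy to probe numerically. The following Python sketch (an illustration only; it evaluates the exact closed forms \(\mathcal{P}(B_{\infty}^{n})=4^{n}/n!\), \(\mathcal{P}(\Delta_{n})=(n+1)^{n+1}/(n!)^{2}\) and \(\mathcal{P}(B_{2}^{n})=\operatorname{vol}(B_{2}^{n})^{2}\) in log-space to avoid overflow) prints \(n\,\mathcal{P}(K)^{1/n}\) for the three model bodies and exhibits the limits \(4e\approx 10.87\), \(e^{2}\approx 7.39\) and \(2\pi e\approx 17.08\) appearing in (18) and below it.

```python
import math

def log_vp_cube(n):     # log P(B_inf^n) = n log 4 - log n!
    return n * math.log(4) - math.lgamma(n + 1)

def log_vp_simplex(n):  # log P(Delta_n) = (n+1) log(n+1) - 2 log n!
    return (n + 1) * math.log(n + 1) - 2 * math.lgamma(n + 1)

def log_vp_ball(n):     # log P(B_2^n) = 2 log vol(B_2^n)
    return 2 * ((n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1))

for n in (2, 5, 10, 50, 200):
    vals = [n * math.exp(lv / n) for lv in
            (log_vp_cube(n), log_vp_simplex(n), log_vp_ball(n))]
    print(n, ["%.3f" % v for v in vals])
# The three columns approach 4e ~ 10.873, e^2 ~ 7.389 and 2*pi*e ~ 17.079 as n grows.
```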
The isomorphic version of the lower bound in (18) is "the best possible step" one can make before actually proving (or disproving) the Mahler conjecture. Indeed, assume we can achieve an asymptotic behavior better than \(\mathcal{P}(K)\geq c^{n}\mathcal{P}(B^{n}_{\infty})\) for every \(0<c<1\), i.e. we have \[\alpha(n)\mathcal{P}(B^{n}_{\infty})\leq\mathcal{P}(K),\text{ and }\lim_{n\to\infty}\alpha(n)/c^{n}=\infty\text{ for every }0<c<1, \tag{20}\] but there is a dimension, say \(l\), such that the Mahler conjecture is false in \(\mathbb{R}^{l}\), i.e. there exists a convex symmetric body \(K\subset\mathbb{R}^{l}\) such that \(\mathcal{P}(K)<\mathcal{P}(B^{l}_{\infty})\), that is, \[\mathcal{P}(K)\leq c_{2}\mathcal{P}(B^{l}_{\infty}),\text{ for some }0<c_{2}<1.\] Let \(K^{\prime}\) be the direct sum of \(m\) copies of \(K\), \(K^{\prime}=K\oplus\cdots\oplus K\subset\mathbb{R}^{n}\), \(n=ml\). Using the direct sum formula and inequality (20), we get \[\alpha(lm)\mathcal{P}(B^{lm}_{\infty})\leq\mathcal{P}(K^{\prime})=\mathcal{P}(K\oplus\cdots\oplus K)\leq c_{2}^{m}\mathcal{P}(B^{lm}_{\infty})=(c_{2}^{1/l})^{lm}\mathcal{P}(B^{lm}_{\infty}).\] This yields \(\alpha(n)\leq c^{n}\) for \(n=ml\) and \(c=c_{2}^{1/l}\), contradicting \(\lim\limits_{n\to\infty}\alpha(n)/c^{n}=\infty\) for \(m\) big enough.

We note that (19) for general convex bodies follows (with a constant divided by two) from the symmetric case. Indeed, let \(L\) be a convex body in \(\mathbb{R}^{n}\) and let \(z\in\operatorname{int}(L)\). Let \(K=L-z\). Then by the Rogers-Shephard inequality [RS], \(\operatorname{vol}(\frac{K-K}{2})\leq 2^{-n}\binom{2n}{n}\operatorname{vol}(K)\leq 2^{n}\operatorname{vol}(K)\) and \[\operatorname{vol}\left(\left(\frac{K-K}{2}\right)^{*}\right)=\frac{1}{n}\int_{S^{n-1}}\left(\frac{h_{K}(u)+h_{-K}(u)}{2}\right)^{-n}d\sigma(u)\leq\frac{1}{n}\int_{S^{n-1}}h_{K}(u)^{-n}d\sigma(u)=\operatorname{vol}(K^{*}).\] It follows that \[\operatorname{vol}(K)\operatorname{vol}(K^{*})\geq 2^{-n}\mathcal{P}\left(\frac{K-K}{2}\right).\] Since this holds for every \(z\in\operatorname{int}(L)\), it follows that \(\mathcal{P}(L)\geq 2^{-n}\mathcal{P}\left(\frac{L-L}{2}\right)\). From this relation and Kuperberg's best bound \(c=\pi e\) in (19) for symmetric bodies, it follows that for general convex bodies, (19) holds with \(c=\pi e/2\).

### Approach via Milman's quotient of subspace theorem

The next lemma is a consequence of the Rogers-Shephard inequality [RS].

**Lemma 2**.: _Let \(K\) be a convex symmetric body in \(\mathbb{R}^{n}\), let \(E\) be an \(m\)-dimensional subspace of \(\mathbb{R}^{n}\) and \(E^{\perp}\) be its orthogonal complement. Then_ \[\binom{n}{m}^{-2}\mathcal{P}(K\cap E)\mathcal{P}(K\cap E^{\perp})\leq\mathcal{P}(K)\leq\mathcal{P}(K\cap E)\mathcal{P}(K\cap E^{\perp}).\]

The following result is the _quotient of subspace theorem_ of V. Milman ([Mi], see [Gor] for a simple proof).

**Theorem 12**.: _Let \(K\) be a convex symmetric body in \(\mathbb{R}^{n}\), with \(n\) a multiple of \(4\). Then there exist a constant \(c>0\) independent of \(n\), an \(\frac{n}{2}\)-dimensional subspace \(E\) of \(\mathbb{R}^{n}\), an \(\frac{n}{4}\)-dimensional subspace \(F\) of \(E\) and an ellipsoid \(\mathcal{E}\subset F\) such that_ \[\mathcal{E}\subset P_{F}(E\cap K)\subset c\mathcal{E}\] _where, as before, \(P_{F}\) is the orthogonal projection onto \(F\)._

_The proof of Bourgain-Milman's theorem by Pisier_ [Pi].
For a convex symmetric body \(K\subset\mathbb{R}^{n}\), with \(n\) a multiple of \(4\), let \(a_{n}(K)=n\mathcal{P}(K)^{\frac{1}{n}}\). Let \(E\) and \(F\) be the subspaces of \(\mathbb{R}^{n}\) chosen in Theorem 12. By Lemma 2, for some constant \(d>0\) independent of \(n\), one has \[d\sqrt{a_{\frac{n}{2}}(K\cap E)a_{\frac{n}{2}}(K\cap E^{\perp})}\leq a_{n}(K)\leq\sqrt{a_{\frac{n}{2}}(K\cap E)a_{\frac{n}{2}}(K\cap E^{\perp})}\] and \[d\sqrt{a_{\frac{n}{4}}(P_{F}(K\cap E))a_{\frac{n}{4}}(K\cap E\cap F^{\perp})}\leq a_{\frac{n}{2}}(K\cap E)\leq\sqrt{a_{\frac{n}{4}}(P_{F}(K\cap E))a_{\frac{n}{4}}(K\cap E\cap F^{\perp})}.\] Next, from Theorem 12, for some absolute constants \(c^{\prime},d^{\prime}>0\), one has \[c^{\prime}\leq a_{\frac{n}{4}}(P_{F}(K\cap E))\leq d^{\prime}.\] It follows that for some universal constant \(c>0\), one has \[a_{n}(K)\geq c\big(a_{\frac{n}{2}}(K\cap E^{\perp})\big)^{\frac{1}{2}}\big(a_{\frac{n}{4}}(K\cap E\cap F^{\perp})\big)^{\frac{1}{4}}. \tag{21}\] Define now, for every \(n\geq 1\), \[a_{n}=\min\{a_{m}(L);1\leq m\leq n,\ L\ \text{convex symmetric body in }\mathbb{R}^{m}\}.\] Since \(a_{n}>0\), one gets from (21) that \[a_{n}\geq c\big(a_{n}\big)^{\frac{1}{2}}\big(a_{n}\big)^{\frac{1}{4}}, \tag{22}\] and thus \(a_{n}\geq c^{4}\).

### Complex analysis approach

Let us very briefly discuss an approach via complex and harmonic analysis which was initiated by Nazarov [Na]. We will follow here a work of Berndtsson [Be3], which proceeds via functional inequalities (central for the next section). We will consider a special case of Bergman spaces. Let \(\psi:\mathbb{C}^{n}\to\mathbb{R}\cup\{+\infty\}\) be a convex function and \(\Omega=\{(x,y)\in\mathbb{R}^{2n}:\psi(x+iy)<\infty\}\). The _Bergman space_ \(A^{2}(e^{-\psi})\) is the Hilbert space of holomorphic functions \(f\) on \(\Omega\) such that \[\|f\|^{2}=\int_{\Omega}|f(x+iy)|^{2}e^{-\psi(x+iy)}dxdy<\infty.\] The (diagonal) Bergman kernel \(B\) for \(A^{2}(e^{-\psi})\) is defined as \[B(z)=\sup_{f\in A^{2}(e^{-\psi})}\frac{|f(z)|^{2}}{\|f\|^{2}}.\] Next, consider an even convex function \(\phi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) such that \(e^{-\phi(x)}\) is integrable over \(\mathbb{R}^{n}\). For \(\alpha\in\mathbb{C}\), consider the Bergman kernel \(B_{\alpha}(z)\) corresponding to the function \(\psi(z)=\phi(\operatorname{Re}(z))+\phi(\operatorname{Re}(\alpha z))\). The main theorem in [Be3] is the claim that \[B_{i}(0)\leq c^{n}B_{1}(0), \tag{23}\] where \(c\) is an absolute constant (precisely computed in [Be3]). We note that \(B_{1}\) is the Bergman kernel for \(\psi(x+iy)=2\phi(x)\), i.e. independent of \(\operatorname{Im}(z)\), and that \(B_{i}\) is the Bergman kernel for \(\psi(x+iy)=\phi(x)+\phi(y)\). It is essential to understand that the Bergman spaces corresponding to those densities are different, so the connection is not immediate. For example, the function \(f=1\) belongs to the second space but does not belong to the space corresponding to \(\psi(x+iy)=2\phi(x)\). Using \(f=1\) we get \[B_{i}(0)\geq\frac{1}{\int_{\mathbb{R}^{n}}e^{-\phi(x)}dx\int_{\mathbb{R}^{n}}e^{-\phi(y)}dy}.\] Together with (23) this gives \[B_{1}(0)\geq\frac{c^{-n}}{\left(\int_{\mathbb{R}^{n}}e^{-\phi(x)}dx\right)^{2}}, \tag{24}\] which is an essential estimate for proving the Bourgain-Milman inequality. The proof of (23) in [Be3] is based on a very nice and tricky approach of "linking" \(B_{i}\) and \(B_{1}\) via \(B_{\alpha}\). Indeed,
it turns out that \(b(\alpha):=\log B_{\alpha}(0)\) is subharmonic in \(\mathbb{C}\) (see [Be3]), and moreover \(b(\alpha)\leq C+n\log|\alpha|^{2}\), which can be seen from the change of variables \[\|f\|_{\alpha}^{2}=\int_{\mathbb{C}^{n}}|f(z)|^{2}e^{-(\phi(\operatorname{Re}(z))+\phi(\operatorname{Re}(\alpha z)))}dz=|\alpha|^{-2n}\int_{\mathbb{C}^{n}}|f(z/\alpha)|^{2}e^{-(\phi(\operatorname{Re}(z/\alpha))+\phi(\operatorname{Re}(z)))}dz.\] Thus \(B_{\alpha}(0)=|\alpha|^{2n}B_{1/\alpha}(0)\). Moreover, \(B_{1/\alpha}(0)\) is bounded as \(\alpha\to\infty\). Thus one can apply the Poisson representation formula in the upper half plane to the function \(b(\alpha)-n\log|\alpha|^{2}\) to get \[\log B_{i}(0)=b(i)\leq\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{b(s)-n\log(s^{2})}{1+s^{2}}ds=\frac{2}{\pi}\int_{0}^{\infty}\frac{b(s)-n\log(s^{2})}{1+s^{2}}ds.\] Using that \(s\mapsto\phi(sx)\) is non-decreasing on \((0,1]\), one has, for \(s\in(0,1]\), \[\|f\|_{s}^{2}=\int_{\mathbb{C}^{n}}|f(z)|^{2}e^{-(\phi(\operatorname{Re}(z))+\phi(\operatorname{Re}(sz)))}dz\geq\|f\|_{1}^{2},\] and hence \(b(s)\leq b(1)\). If \(s\geq 1\), \[\|f\|_{s}^{2}=|s|^{-2n}\int_{\mathbb{C}^{n}}|f(z/s)|^{2}e^{-(\phi(\operatorname{Re}(z/s))+\phi(\operatorname{Re}(z)))}dz\geq s^{-2n}\|f\|_{1}^{2},\] thus \(b(s)\leq b(1)+n\log s^{2}\). Putting those estimates together completes the proof of (23).

The next step is to adapt the Paley-Wiener space associated to a convex body (discussed in Theorem 4) to the case of a convex function. For a convex function \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\), we denote by \(PW(e^{\varphi})\) the space of holomorphic functions \(f\) of the form \[f(z)=\int_{\mathbb{R}^{n}}e^{\langle z,\xi\rangle}\tilde{f}(\xi)d\xi,\ \text{where}\ z\in\mathbb{C}^{n},\] for which \[\|f\|_{PW}^{2}=\int_{\mathbb{R}^{n}}|\tilde{f}(t)|^{2}e^{\varphi(t)}dt<\infty,\] for some function \(\tilde{f}\) such that the two formulas above make sense. The classical Paley-Wiener space discussed in Theorem 4 then corresponds to the case when \(\varphi(x)=0\) for \(x\in K\) and \(\varphi(x)=+\infty\) for \(x\not\in K\). For a convex function \(\psi\) on \(\mathbb{R}^{n}\), let us consider its logarithmic Laplace transform given by \[\Lambda\psi(\xi)=\log\int_{\mathbb{R}^{n}}e^{2\langle x,\xi\rangle}e^{-\psi(x)}dx.\] The second key ingredient in Berndtsson's proof is the fact that the spaces \(PW(e^{\Lambda\psi})\) and \(A^{2}(e^{-\psi})\) coincide and that \[\|f\|_{A^{2}}^{2}=(2\pi)^{n}\|f\|_{PW(e^{\Lambda\psi})}^{2}. \tag{25}\] This fact originates from the observation that any \(f\in PW(e^{\Lambda\psi})\) is the Fourier-Laplace transform of \(\tilde{f}\) and \(e^{\langle x,t\rangle}\tilde{f}(t)\) belongs to \(L_{2}(\mathbb{R}^{n})\) for all \(x\) such that \(\psi(x)<\infty\). Then, we apply Parseval's formula to get \[\int_{\mathbb{R}^{n}}|f(x+iy)|^{2}dy=(2\pi)^{n}\int_{\mathbb{R}^{n}}e^{2\langle x,t\rangle}|\tilde{f}(t)|^{2}dt.\] Multiplying the above equality by \(e^{-\psi(x)}\) and integrating with respect to \(x\), we get \[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}|f(x+iy)|^{2}e^{-\psi(x)}dxdy=(2\pi)^{n}\int_{\mathbb{R}^{n}}|\tilde{f}(t)|^{2}e^{\Lambda\psi(t)}dt.\] Thus \(f\in A^{2}(e^{-\psi})\) and the \(A^{2}\) norm coincides with a multiple of the norm in \(PW(e^{\Lambda\psi})\). This confirms that the Paley-Wiener space is (up to the constant in (25)) isometrically embedded into the corresponding Bergman space, and the rest follows from the observation that it is dense.
One can compute the Bergman kernel for \(PW(e^{\Lambda\psi})\) and use (25) to show that the Bergman kernel for \(A^{2}(e^{-\psi})\) is equal to \[(2\pi)^{-n}\int_{\mathbb{R}^{n}}e^{2\langle x,t\rangle-\Lambda\psi(t)}dt. \tag{26}\] We will use (26) to give an estimate from above of the value of the Bergman kernel at zero. The Legendre transform \(\mathcal{L}\psi\) of a function \(\psi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) is defined by \[\mathcal{L}\psi(y)=\sup_{x\in\mathbb{R}^{n}}(\langle x,y\rangle-\psi(x)),\quad\text{ for }y\in\mathbb{R}^{n}. \tag{27}\] Consider the Bergman space \(A^{2}(e^{-2\phi(x)})\), where \(\phi:\mathbb{R}^{n}\to\mathbb{R}\cup\{\infty\}\) is convex and even (as in (24)). Then \[B(0)\leq\pi^{-n}\frac{\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\phi(y)}dy}{\int_{\mathbb{R}^{n}}e^{-\phi(x)}dx}. \tag{28}\] Indeed, using (26) we get \[B(0)\leq(2\pi)^{-n}\int_{\mathbb{R}^{n}}e^{-\Lambda(2\phi)(t)}dt. \tag{29}\] Note that, for any \(y\in\mathbb{R}^{n}\), one has \[e^{\Lambda(2\phi)(t)}=2^{-n}\int_{\mathbb{R}^{n}}e^{\langle t,u\rangle-2\phi(u/2)}du=2^{-n}e^{\langle t,y\rangle}\int_{\mathbb{R}^{n}}e^{\langle t,v\rangle-2\phi(v/2+y/2)}dv\geq 2^{-n}e^{\langle t,y\rangle-\phi(y)}\int_{\mathbb{R}^{n}}e^{\langle t,v\rangle-\phi(v)}dv,\] where in the last inequality we used the convexity of \(\phi\). Using that \(\phi\) is even, we get that \[\int_{\mathbb{R}^{n}}e^{\langle t,v\rangle-\phi(v)}dv\geq\int_{\mathbb{R}^{n}}e^{-\phi(v)}dv\] and \[e^{\Lambda(2\phi)(t)}\geq 2^{-n}e^{\langle t,y\rangle-\phi(y)}\int_{\mathbb{R}^{n}}e^{-\phi(v)}dv.\] Taking the supremum over all \(y\in\mathbb{R}^{n}\), we get \[e^{\Lambda(2\phi)(t)}\geq 2^{-n}e^{\mathcal{L}\phi(t)}\int_{\mathbb{R}^{n}}e^{-\phi(v)}dv.\] Together with (29), this gives (28). Combining (28) with (24), we get the following theorem.

**Theorem 13**.: **(Functional version of the Bourgain-Milman inequality)** _Let \(\phi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) be even and convex; then for some \(c>0\) independent of \(n\), one has_ \[\int_{\mathbb{R}^{n}}e^{-\phi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\phi(x)}dx\geq c^{n}.\]

**Remark 7**.: Theorem 13 was first proved, via the Bourgain-Milman inequality for symmetric convex bodies, in [1] and then generalized to non-even functions in [13]. It implies the classical Bourgain-Milman inequality for convex bodies as we shall see in the next section (Remark 11 below).

## 5. Functional inequalities and link with transport inequalities

We dedicate this section to the study of functional inequalities related to the volume product.

### Upper bounds

The following general form of the functional Blaschke-Santalo inequality was proved by Ball [1] for \(f\) even, by Fradelizi and Meyer [FM4] for \(f\) log-concave and by Lehec [11] in the general case.

**Theorem 14**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}_{+}\) be Lebesgue integrable. There exists \(z\in\mathbb{R}^{n}\) such that for any \(\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) and any \(g:\mathbb{R}^{n}\to\mathbb{R}_{+}\) measurable satisfying_ \[f(x+z)g(y)\leq\rho(\langle x,y\rangle)^{2}\text{ for all }x,y\in\mathbb{R}^{n}\text{ satisfying }\langle x,y\rangle>0,\] _one has_ \[\int f(x)\,dx\int g(y)\,dy\leq\left(\int\rho(|x|^{2})\,dx\right)^{2}.\] _If \(f\) is even, one can take \(z=0\)._

Applying this result to \(\rho=\mathbf{1}_{[0,1]}\) and \(f=\mathbf{1}_{K}\), one recovers the Blaschke-Santalo inequality for convex sets.
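In dimension \(1\), both the upper bound \((2\pi)^{n}\) of Theorem 15 below and the lower bound of Theorem 13 are easy to probe numerically. The sketch below (an illustration only, under the obvious grid and truncation assumptions; it is not part of any of the quoted proofs) computes the Legendre transform (27) by brute force on a grid and evaluates \(\int e^{-\varphi}\int e^{-\mathcal{L}\varphi}\) for two even functions: the Gaussian \(\varphi(x)=x^{2}/2\), where the product is \(2\pi\), and \(\varphi(x)=|x|\), where it equals \(4\).

```python
import numpy as np

x, dx = np.linspace(-200.0, 200.0, 4001, retstep=True)  # grid for phi
y, dy = np.linspace(-3.0, 3.0, 1201, retstep=True)      # grid for L(phi)

def legendre(phi_vals):
    # L(phi)(y) = sup_x (x*y - phi(x)), by brute force over the x-grid
    return np.max(np.outer(y, x) - phi_vals[None, :], axis=1)

def santalo_product(phi_vals):
    lphi = legendre(phi_vals)
    return np.exp(-phi_vals).sum() * dx * np.exp(-lphi).sum() * dy

print(santalo_product(x ** 2 / 2))  # ~ 2*pi = 6.283...: the Gaussian equality case
print(santalo_product(np.abs(x)))   # ~ 4: L(|.|) = 0 on [-1,1], the conjectured minimum 4^n
```

The small deviations from \(2\pi\) and \(4\) come only from the finite grid and the truncation of the integration ranges.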
Applying Theorem 14 to \(\rho(t)=e^{-t/2}\), one obtains a proof of the following functional Blaschke-Santalo inequality for the Legendre transform, due to Artstein, Klartag and Milman [AKM] (see [Le2] for another proof).

**Theorem 15**.: _Let \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) satisfy \(0<\int e^{-\varphi}<+\infty\), and denote \(\varphi_{y}(x):=\varphi(x+y)\) for \(x,y\in\mathbb{R}^{n}\). Then there exists \(z\in\mathbb{R}^{n}\) such that_ \[\int_{\mathbb{R}^{n}}e^{-\varphi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}(\varphi_{z})(y)}dy\leq\left(\int_{\mathbb{R}^{n}}e^{-\frac{|x|^{2}}{2}}dx\right)^{2}=(2\pi)^{n},\] _with equality if and only if \(\varphi_{z}(x)=|Ax|^{2}\) for some invertible linear map \(A\) and some \(z\in\mathbb{R}^{n}\)._

**Remark 8**.: In [Le2], Lehec deduced from Theorem 15 that if the "barycenter" \(b(\varphi):=\int xe^{-\varphi(x)}dx/\int e^{-\varphi}\) satisfies \(b(\varphi)=0\), then \[\int e^{-\varphi}\int e^{-\mathcal{L}\varphi}\leq(2\pi)^{n}.\] Indeed, for any \(z\), one has \(\mathcal{L}(\varphi_{z})(y)=\mathcal{L}\varphi(y)-\langle y,z\rangle\). It follows that \(\mathcal{L}((\mathcal{L}\varphi)_{z})(y)=\mathcal{L}\mathcal{L}\varphi(y)-\langle y,z\rangle\leq\varphi(y)-\langle y,z\rangle\). Using Jensen's inequality and \(b(\varphi)=0\), we get \[\int e^{-\mathcal{L}((\mathcal{L}\varphi)_{z})}\geq\int e^{-\varphi(y)+\langle y,z\rangle}dy\geq e^{\langle b(\varphi),z\rangle}\int e^{-\varphi}=\int e^{-\varphi}.\] Applying Theorem 15 to \(\mathcal{L}\varphi\), there thus exists a \(z\) such that \[\int e^{-\varphi}\int e^{-\mathcal{L}\varphi}\leq\int e^{-\mathcal{L}((\mathcal{L}\varphi)_{z})}\int e^{-\mathcal{L}\varphi}\leq(2\pi)^{n}.\] As Lehec also observed, this gives a new proof of the following result of Lutwak [10]:

**Proposition 2**.: _For a star-shaped body \(K\subset\mathbb{R}^{n}\) (i.e. \(tx\in K\) for all \((x,t)\in K\times[0,1]\)) with barycenter at \(0\), one has_ \[\operatorname{vol}(K)\operatorname{vol}(K^{*})\leq\operatorname{vol}(B_{2}^{n})^{2}.\]

Proof.: Let \(\varphi(x)=\frac{\|x\|_{K}^{2}}{2}\). Then, since \[\int_{\mathbb{R}^{n}}xe^{-\frac{\|x\|_{K}^{2}}{2}}dx=\int_{\mathbb{R}^{n}}x\int_{\|x\|_{K}}^{+\infty}te^{-\frac{t^{2}}{2}}dtdx=\int_{0}^{+\infty}t^{n+2}e^{-\frac{t^{2}}{2}}dt\int_{K}xdx=0,\] one has \(b(\varphi)=0\). Moreover, for any \(y\in\mathbb{R}^{n}\), one has \(\mathcal{L}\varphi(y)=\sup_{x}\langle x,y\rangle-\frac{\|x\|_{K}^{2}}{2}=\frac{\|y\|_{K^{*}}^{2}}{2}\) and \(\int_{\mathbb{R}^{n}}e^{-\frac{\|x\|_{K}^{2}}{2}}dx=2^{\frac{n}{2}}\Gamma(\frac{n}{2}+1)\operatorname{vol}(K)\). Applying Remark 8 to \(\varphi\) then gives \(\big(2^{\frac{n}{2}}\Gamma(\tfrac{n}{2}+1)\big)^{2}\operatorname{vol}(K)\operatorname{vol}(K^{*})\leq(2\pi)^{n}\), that is, \(\operatorname{vol}(K)\operatorname{vol}(K^{*})\leq\pi^{n}\Gamma(\tfrac{n}{2}+1)^{-2}=\operatorname{vol}(B_{2}^{n})^{2}\).

Before giving sketches of various proofs of Theorems 14 and 15, we need a lemma:

**Lemma 3**.: _Let \(\alpha,\beta,\gamma:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be measurable functions such that for every \(s,t>0\) one has \(\alpha(s)\beta(t)\leq\gamma(\sqrt{st})^{2}\). Then \(\int_{\mathbb{R}_{+}}\alpha(t)dt\int_{\mathbb{R}_{+}}\beta(t)dt\leq\left(\int_{\mathbb{R}_{+}}\gamma(t)dt\right)^{2}.\)_

Proof.: Define \(f,g,h:\mathbb{R}\to\mathbb{R}\) by \(f(x)=\alpha(e^{x})e^{x}\), \(g(x)=\beta(e^{x})e^{x}\) and \(h(x)=\gamma(e^{x})e^{x}\). Then \(f(x)g(y)\leq h(\frac{x+y}{2})^{2}\) for all \(x,y\in\mathbb{R}\). By the Prekopa-Leindler inequality (see [11], p.3) we get \(\int_{\mathbb{R}}f(x)dx\int_{\mathbb{R}}g(x)dx\leq\left(\int_{\mathbb{R}}h(x)dx\right)^{2}.\) We conclude with a change of variables.

**Proofs of Theorem 14:** 1) In the case when \(f\) is even and \(\rho\) is decreasing, this proof is due to Ball [1].
For \(s,t\in\mathbb{R}_{+}\), let \(K_{s}=\{f\geq s\}\) and \(L_{t}=\{g\geq t\}\). The hypothesis on \(f\) and \(g\) implies that \(L_{t}\subset\rho^{-1}(\sqrt{st})K_{s}^{*}\). Since \(f\) is even, \(K_{s}\) is symmetric. We deduce from the Blaschke-Santalo inequality that for every \(s,t\in\mathbb{R}_{+}\), if \(\alpha(s)=\operatorname{vol}(K_{s})\) and \(\beta(t)=\operatorname{vol}(L_{t})\), one has \[\alpha(s)\beta(t)=\operatorname{vol}(K_{s})\operatorname{vol}(L_{t})\leq(\rho^{-1}(\sqrt{st}))^{n}\operatorname{vol}(K_{s})\operatorname{vol}(K_{s}^{*})\leq(\rho^{-1}(\sqrt{st}))^{n}\operatorname{vol}(B_{2}^{n})^{2}.\] Denoting \(\gamma(t)=(\rho^{-1}(t))^{n/2}\operatorname{vol}(B_{2}^{n})\), we apply Lemma 3 and use the layer-cake formula \(\int_{\mathbb{R}^{n}}f(x)dx=\int_{0}^{+\infty}\alpha(s)ds\) (and its analogue for \(g\)) to conclude.

2) In the case when \(f\) is not supposed to be even, but is log-concave, the proof of Theorem 14 given in [FM4] uses the so-called Ball's body \(K_{f}(z)\) associated to a log-concave function \(f\), which is defined by \[K_{f}(z)=\left\{x\in\mathbb{R}^{n};\int_{0}^{+\infty}r^{n-1}f(z+rx)dr\geq 1\right\}.\] It follows from Ball's results [Ba2] that \(K_{f}(z)\) is convex and that its radial function is \(r_{K_{f}(z)}(x)=\left(\int_{0}^{+\infty}r^{n-1}f(z+rx)dr\right)^{\frac{1}{n}}\) for \(x\in\mathbb{R}^{n}\setminus\{0\}\). If \(x,y\in\mathbb{R}^{n}\) satisfy \(\langle x,y\rangle>0\), define for \(r\geq 0\), \(\alpha(r)=r^{n-1}f(z+rx)\), \(\beta(r)=r^{n-1}g(ry)\) and \(\gamma(r)=r^{n-1}\rho(r^{2}\langle x,y\rangle)\). It follows from Lemma 3 that \[\int_{0}^{+\infty}r^{n-1}f(z+rx)dr\int_{0}^{+\infty}r^{n-1}g(ry)dr\leq\left(\int_{0}^{+\infty}r^{n-1}\rho(r^{2}\langle x,y\rangle)dr\right)^{2}.\] This means that \[\langle x,y\rangle\leq\frac{c_{n}(\rho)}{r_{K_{f}(z)}(x)r_{K_{g}(0)}(y)},\ \text{where}\ c_{n}(\rho):=\left(\int_{0}^{+\infty}r^{n-1}\rho(r^{2})dr\right)^{2/n},\] or in other words \(K_{g}(0)\subset c_{n}(\rho)K_{f}(z)^{*}\). Moreover, one has \[\int_{\mathbb{R}^{n}}f(x)dx=n\operatorname{vol}\left(K_{f}(z)\right)\text{ for every }z\in\operatorname{supp}(f).\] Using Brouwer's fixed point theorem, it was proved in [FM4] that for some \(z\in\mathbb{R}^{n}\), the center of mass of \(K_{f}(z)\) is at the origin. The result then follows from the Blaschke-Santalo inequality. This method was also used in [BBF] to prove stability versions of the functional forms of the Blaschke-Santalo inequality.
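Lemma 3 is the engine of both arguments above, and it is sharp: taking \(\alpha(t)=e^{-at}\), \(\beta(t)=e^{-bt}\) and \(\gamma(t)=e^{-\sqrt{ab}\,t}\), the hypothesis \(\alpha(s)\beta(t)\leq\gamma(\sqrt{st})^{2}\) is exactly the arithmetic-geometric mean inequality \(as+bt\geq 2\sqrt{abst}\), and both sides of the conclusion equal \(\frac{1}{ab}\). A quick numerical sanity check of this equality case (an illustration only, with arbitrarily chosen parameters):

```python
import numpy as np

a, b = 0.7, 2.3
t, dt = np.linspace(0.0, 60.0, 60001, retstep=True)

int_alpha = np.exp(-a * t).sum() * dt                 # int_0^inf e^{-a t} dt = 1/a
int_beta = np.exp(-b * t).sum() * dt                  # int_0^inf e^{-b t} dt = 1/b
int_gamma = np.exp(-np.sqrt(a * b) * t).sum() * dt    # int_0^inf e^{-sqrt(ab) t} dt = 1/sqrt(ab)

print(int_alpha * int_beta, int_gamma ** 2, 1 / (a * b))  # all ~ 0.6211
```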
**Proofs of Theorem 15:** 1) The proof given in [AKM] attaches to \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\), _supposed here to be even_, the functions \(f_{m}(x)=\left(1-\frac{\varphi(x)}{m}\right)_{+}^{m}\), for \(m\geq 1\), and the convex bodies \[K_{m}(f_{m}):=\{(x,y)\in\mathbb{R}^{n+m};|y|\leq f_{m}(\sqrt{m}x)^{1/m}\}.\] When \(m\to+\infty\), \(f_{m}\to e^{-\varphi}\) and \[m^{\frac{n}{2}}\frac{\operatorname{vol}\left(K_{m}(f_{m})\right)}{\operatorname{vol}(B_{2}^{m})}=\int_{\mathbb{R}^{n}}f_{m}(x)dx\to\int_{\mathbb{R}^{n}}e^{-\varphi(x)}dx.\] Moreover \(K_{m}(f_{m})^{*}=K_{m}(\mathcal{L}_{m}f_{m})\), where \(\mathcal{L}_{m}(f_{m})(y)=\inf_{x}\frac{\left(1-\frac{\langle x,y\rangle}{m}\right)_{+}^{m}}{f_{m}(x)}.\) Also, when \(m\to+\infty\), \[\mathcal{L}_{m}(f_{m})(y)\to e^{-\mathcal{L}\varphi(y)}\text{ and }m^{\frac{n}{2}}\frac{\operatorname{vol}\left(K_{m}(\mathcal{L}_{m}f_{m})\right)}{\operatorname{vol}(B_{2}^{m})}=\int_{\mathbb{R}^{n}}\mathcal{L}_{m}f_{m}(x)dx\to\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\varphi(x)}dx.\] One then applies the Blaschke-Santalo inequality to the bodies \(K_{m}(f_{m})\).

2) Lehec's proof [Le2] of Theorem 15 uses induction on the dimension. For \(n=1\), choose \(z\in\mathbb{R}\) such that \(\int_{z}^{+\infty}e^{-\varphi(t)}dt=\int_{\mathbb{R}}e^{-\varphi(t)}dt/2\). For all \(s,t\geq 0\), one has \(\varphi_{z}(s)+\mathcal{L}(\varphi_{z})(t)\geq st\). Thus the functions \(\alpha(s)=e^{-\varphi_{z}(s)}\), \(\beta(t)=e^{-\mathcal{L}(\varphi_{z})(t)}\) and \(\gamma(u)=e^{-u^{2}/2}\) satisfy \(\alpha(s)\beta(t)\leq\gamma(\sqrt{st})^{2}\) for every \(s,t\geq 0\). It follows from Lemma 3 that \[\int_{0}^{+\infty}e^{-\varphi_{z}(t)}dt\int_{0}^{+\infty}e^{-\mathcal{L}(\varphi_{z})(t)}dt=\int_{\mathbb{R}_{+}}\alpha(t)dt\int_{\mathbb{R}_{+}}\beta(t)dt\leq\left(\int_{\mathbb{R}_{+}}\gamma(u)du\right)^{2}=\frac{\pi}{2}. \tag{30}\] This inequality also holds on \(\mathbb{R}_{-}\); adding the two inequalities and using the choice of \(z\), we get the result. Now suppose that the result holds for \(n\), and let us do the induction step. Let \(\varphi:\mathbb{R}^{n+1}\to\mathbb{R}\cup\{+\infty\}\). If \(X\in\mathbb{R}^{n+1}\), we denote \(X=(x,s)\in\mathbb{R}^{n}\times\mathbb{R}\). Let \[\mathcal{P}(\varphi):=\min_{z}\int_{\mathbb{R}^{n+1}}e^{-\varphi(X)}dX\int_{\mathbb{R}^{n+1}}e^{-\mathcal{L}(\varphi_{z})(X)}dX.\] For any invertible affine map \(A\), one has \(\mathcal{P}(\varphi\circ A)=\mathcal{P}(\varphi)\). Translating \(\varphi\) in the \(e_{n+1}\) direction, we may assume that \[\int_{s>0}\int e^{-\varphi(x,s)}dxds=\int_{s<0}\int e^{-\varphi(x,s)}dxds.\] Define \(b_{+}(\varphi)\) and \(b_{-}(\varphi)\) in \(\mathbb{R}^{n+1}\) by \[b_{+}(\varphi)=\frac{\int_{s>0}\int(x,s)e^{-\varphi(x,s)}dxds}{\int_{s>0}\int e^{-\varphi}dxds}\quad\text{and}\quad b_{-}(\varphi)=\frac{\int_{s<0}\int(x,s)e^{-\varphi(x,s)}dxds}{\int_{s<0}\int e^{-\varphi}dxds}.\] Since \(\langle b_{+}(\varphi),e_{n+1}\rangle>0\) and \(\langle b_{-}(\varphi),e_{n+1}\rangle<0\), the point \(\{z\}:=[b_{-}(\varphi),b_{+}(\varphi)]\cap e_{n+1}^{\perp}\) is well defined. By translating \(\varphi\) in the remaining directions, we may assume that \(z=0\). Let \(A\) be the linear invertible map defined by \(Ax=x\) for \(x\in\mathbb{R}^{n}\) and \(Ae_{n+1}=b_{+}(\varphi)\). Then \(b_{+}(\varphi\circ A)=A^{-1}b_{+}(\varphi)=e_{n+1}\). Changing \(\varphi\) into \(\varphi\circ A\), we may assume that \(b_{+}(\varphi)=e_{n+1}\).
We define \(\Phi,\Psi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) by \[e^{-\Phi(x)}=\int_{0}^{+\infty}e^{-\varphi(x,t)}dt\quad\text{and}\quad e^{-\Psi(x)}=\int_{0}^{+\infty}e^{-\mathcal{L}\varphi(x,t)}dt.\] Since \(b_{+}(\varphi)=e_{n+1}\), we get \(\int_{\mathbb{R}^{n}}xe^{-\Phi(x)}dx=\int_{t>0}\int_{\mathbb{R}^{n}}xe^{-\varphi(x,t)}dxdt=0\), hence \(b(\Phi)=0\). From the induction hypothesis and Remark 8 after Theorem 15, it follows that \[\int_{\mathbb{R}^{n}}e^{-\Phi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\Phi(y)}dy\leq(2\pi)^{n}. \tag{31}\] For every \(x,y\in\mathbb{R}^{n}\) and \(s,t\in\mathbb{R}\), let \(\varphi^{x}(s)=\varphi(x,s)\) and \((\mathcal{L}\varphi)^{y}(t)=\mathcal{L}\varphi(y,t)\). Applying again Lemma 3 as in (30), we get \[\int_{0}^{+\infty}e^{-\varphi^{x}(s)}ds\int_{0}^{+\infty}e^{-\mathcal{L}(\varphi^{x})(t)}dt\leq\frac{\pi}{2}.\] Since \(\varphi^{x}(s)+(\mathcal{L}\varphi)^{y}(t)\geq\langle x,y\rangle+st\), one has \((\mathcal{L}\varphi)^{y}(t)-\langle x,y\rangle\geq\mathcal{L}(\varphi^{x})(t)\). Thus for \(x,y\in\mathbb{R}^{n}\), \[e^{-\Phi(x)-\Psi(y)}=\int_{0}^{+\infty}e^{-\varphi^{x}(s)}ds\int_{0}^{+\infty}e^{-(\mathcal{L}\varphi)^{y}(t)}dt\leq\frac{\pi}{2}e^{-\langle x,y\rangle}.\] This implies that \(e^{-\Psi(y)}\leq\frac{\pi}{2}e^{-\mathcal{L}\Phi(y)}\). Using (31), we get \[\int_{\mathbb{R}^{n}}e^{-\Phi(x)}dx\int_{\mathbb{R}^{n}}e^{-\Psi(y)}dy\leq\frac{\pi}{2}(2\pi)^{n},\] that is, \[\int_{0}^{+\infty}\int_{\mathbb{R}^{n}}e^{-\varphi(x,s)}dxds\int_{0}^{+\infty}\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\varphi(y,t)}dydt\leq\frac{\pi}{2}(2\pi)^{n}.\] Adding this to the analogous bound for \(s<0\) and using \(\int_{s>0}\int e^{-\varphi(x,s)}dxds=\int_{s<0}\int e^{-\varphi(x,s)}dxds\), we conclude.

**Remark 9**.: Various \(L_{p}\)-versions of the functional Blaschke-Santalo inequality have been given (see for instance [HJM]). Blaschke-Santalo-type inequalities were also established in the study of extremal general affine surface areas [GHSW, Y, Hoe]. A consequence of the Blaschke-Santalo inequality was recently given in [VY].

### Lower bounds of the volume product of log-concave functions

Let \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) be convex. The _domain of \(\varphi\)_ is \(\mathrm{dom}(\varphi):=\{x\in\mathbb{R}^{n};\varphi(x)<+\infty\}\). If \(0<\int e^{-\varphi}<+\infty\), we define the _functional volume product of \(\varphi\)_ as \[\mathcal{P}(\varphi)=\min_{z}\int_{\mathbb{R}^{n}}e^{-\varphi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}(\varphi_{z})(y)}dy.\] If \(\varphi\) is even, this minimum is reached at \(0\). The following conjectures were proposed in [FM3].

**Conjecture 3**.: _If \(n\geq 1\) and \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) is a convex function such that \(0<\int e^{-\varphi}<+\infty\). Then_ \[\int_{\mathbb{R}^{n}}e^{-\varphi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\varphi(y)}dy\geq e^{n},\] _with equality if and only if there is a constant \(c>0\) and an invertible linear map \(T\) such that_ \[e^{-\varphi(Tx)}=c\prod_{i=1}^{n}e^{-x_{i}}\mathbf{1}_{[-1,+\infty)}(x_{i}).\]

**Conjecture 4**.: _If \(n\geq 1\) and \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) is an even convex function such that \(0<\int e^{-\varphi}<+\infty\).
Then_ \[\int_{\mathbb{R}^{n}}e^{-\varphi(x)}dx\int_{\mathbb{R}^{n}}e^{-\mathcal{L}\varphi(y)}dy\geq 4^{n},\] _with equality if and only if there exist a constant \(c>0\), two complementary subspaces \(F_{1}\) and \(F_{2}\) and two Hanner polytopes \(K_{1}\subset F_{1}\) and \(K_{2}\subset F_{2}\) such that for all \((x_{1},x_{2})\in F_{1}\times F_{2}\),_ \[e^{-\varphi(x_{1}+x_{2})}=ce^{-\|x_{1}\|_{K_{1}}}\mathbf{1}_{K_{2}}(x_{2}).\]

**Remark 10**.: With a different duality for a convex function \(\varphi\), another Blaschke-Santalo and inverse Santalo inequality were obtained in [AS, FS]. Another extension of the Blaschke-Santalo inequality and of its functional form was considered in [HoS], where duality is combined with the study of inequalities related to monotone non-trivial Minkowski endomorphisms. Partial results toward the proofs of these conjectures are gathered in the following theorem.

**Theorem 16**.: _Let \(n\geq 1\) and \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) be a convex function such that \(0<\int e^{-\varphi}<+\infty\). Then_

1. _Conjecture_ 3 _holds for_ \(n=1\)_. It holds also for all_ \(n\geq 1\) _if there exists an invertible affine map_ \(T\) _such that_ \(\mathrm{dom}(\varphi\circ T)=\mathbb{R}^{n}_{+}\) _and_ \(\varphi\circ T\) _is non-decreasing on_ \(\mathbb{R}^{n}_{+}\)_, in the sense that if_ \(x_{i}\leq y_{i}\) _for all_ \(1\leq i\leq n\)_, then_ \((\varphi\circ T)(x_{1},\ldots,x_{n})\leq(\varphi\circ T)(y_{1},\ldots,y_{n})\)_._
2. _Conjecture_ 4 _holds if_ \(n=1\) _or_ \(n=2\)_. It holds also for all_ \(n\geq 1\) _if_ \(\varphi\) _is unconditional, in the sense that there exists an invertible linear map_ \(T\) _such that_ \((\varphi\circ T)(x_{1},\ldots,x_{n})=(\varphi\circ T)(|x_{1}|,\ldots,|x_{n}|)\) _for all_ \((x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\)_._

(1) For \(n=1\), Conjecture 3 was proved in two different ways in [FM1, FM3]. The case of non-decreasing convex functions on the positive octant was also proved in [FM3]. (2) For unconditional convex functions on \(\mathbb{R}^{n}\), Conjecture 4 was established in two different ways in [FM2, FM3], with the case of equality in [FGMR]. In particular, this settles the general case \(n=1\). For \(n=2\), it was proved in [FN].

**Remark 11**.: There is a strong link between Conjectures 1 and 2 for convex bodies and their functional counterparts, Conjectures 3 and 4. Indeed, as it was observed in [10], given a symmetric convex body \(K\) in \(\mathbb{R}^{n}\), if \(\varphi_{K}(x)=\|x\|_{K}\), we get \(e^{-\mathcal{L}\varphi_{K}}=\mathbf{1}_{K^{*}}\), and, integrating on level sets, \(\mathcal{P}(\varphi_{K})=n!\,\mathcal{P}(K)\). Therefore, if Conjecture 4 holds for \(\varphi_{K}\), then Conjecture 2 holds for \(K\).
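The identity \(\mathcal{P}(\varphi_{K})=n!\,\mathcal{P}(K)\) is easy to check by hand for \(n=1\) and \(K=[-\lambda,\lambda]\): then \(\varphi_{K}(x)=|x|/\lambda\), \(\int e^{-\varphi_{K}}=2\lambda\), \(e^{-\mathcal{L}\varphi_{K}}=\mathbf{1}_{[-1/\lambda,1/\lambda]}\), so \(\mathcal{P}(\varphi_{K})=4=1!\cdot\mathcal{P}(K)\), which is also the equality value in Conjecture 4. The following numerical sketch (an illustration only, reusing a brute-force Legendre transform on a grid, with an arbitrary \(\lambda\)) confirms this:

```python
import numpy as np

lam = 2.5                                        # K = [-lam, lam], so ||x||_K = |x| / lam
x, dx = np.linspace(-400.0, 400.0, 8001, retstep=True)
y, dy = np.linspace(-2.0, 2.0, 801, retstep=True)

phi = np.abs(x) / lam
lphi = np.max(np.outer(y, x) - phi[None, :], axis=1)   # brute-force Legendre transform

p_phi = np.exp(-phi).sum() * dx * np.exp(-lphi).sum() * dy
p_body = (2 * lam) * (2 / lam)                   # vol(K) * vol(K^*) = 4, independent of lam
print(p_phi, 1 * p_body)                         # both ~ 4 = n! * P(K) with n = 1
```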
Conversely, if Conjecture 2 holds in \(\mathbb{R}^{n}\) for every dimension \(n\), then, given an even convex function \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\), we can apply it in dimension \(n+m\) to the convex sets \[K_{m}(\varphi)=\left\{(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m};\|y\|_{\infty}\leq\left(1-\frac{\varphi(mx)}{m}\right)_{+}\right\}.\] Using \[\operatorname{vol}_{n+m}(K_{m}(\varphi))=\frac{2^{m}}{m^{n}}\int_{\mathbb{R}^{n}}\left(1-\frac{\varphi(x)}{m}\right)_{+}^{m}dx\] and \[K_{m}(\varphi)^{*}=\left\{(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{m};\|y\|_{1}\leq\inf_{\varphi(x^{\prime})\leq m}\frac{(1-\langle x,x^{\prime}\rangle)_{+}}{1-\frac{\varphi(x^{\prime})}{m}}\right\},\] it is proved in [10] that when \(m\to+\infty\), the inequality \(\mathcal{P}(K_{m}(\varphi))\geq\frac{4^{n+m}}{(n+m)!}\) gives \(\mathcal{P}(\varphi)\geq 4^{n}\). In a similar way, if Conjecture 3 holds in dimension \(n+1\), given a convex body \(K\) in \(\mathbb{R}^{n}\) with Santalo point at the origin, we apply it to \(\varphi:\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}\cup\{+\infty\}\) defined by \[e^{-\varphi(x,t)}=\mathbf{1}_{[-n-1,+\infty)}(t)\mathbf{1}_{(t+n+1)K}(x)e^{-t}.\] Then the Legendre transform of \(\varphi\) is given by \[e^{-\mathcal{L}\varphi(y,s)}=\mathbf{1}_{(-\infty,1]}(s)\mathbf{1}_{(1-s)K^{*}}(y)e^{(n+1)(s-1)},\] and \[\mathcal{P}(\varphi)=\frac{(n!)^{2}e^{n+1}}{(n+1)^{n+1}}\operatorname{vol}(K)\operatorname{vol}(K^{*}).\] This proves that if Conjecture 3 holds for \(\varphi\), then \[\mathcal{P}(K)\geq\mathcal{P}(\Delta_{n})=\frac{(n+1)^{n+1}}{(n!)^{2}},\] which is Conjecture 1 for \(K\). Lastly, as shown in [10], one can adapt the arguments for even functions to prove that, given a convex function \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\), Conjecture 1 applied to a well-chosen sequence of bodies \(\Delta_{m}(\varphi)\) in dimension \(n+m\) gives Conjecture 3 for \(\varphi\) when \(m\to+\infty\). It was also proved in [GM] that if \(\mathcal{P}(\varphi)\) is minimal, then \(\varphi\) has no positive Hessian at any point. Asymptotic estimates hold too: in the even case, it was proved in [10] that, for some constant \(c>0\), one has \(\mathcal{P}(\varphi)\geq c^{n}\) for all even convex functions \(\varphi\) and all \(n\geq 1\). This was generalized to all convex functions in [10].

### Volume product and transport inequalities

Maurey [Mau] introduced the following property \((\tau)\): let \(\mu\) be a measure on \(\mathbb{R}^{n}\) and \(c:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}_{+}\) be a lower semi-continuous function (called a _cost function_); we say that the couple \((\mu,c)\) satisfies _property_ \((\tau)\) if for any continuous and bounded function \(f:\mathbb{R}^{n}\to\mathbb{R}\), defining \[Q_{c}f(y)=\inf_{x}\left(f(x)+c(x,y)\right)\text{ for }y\in\mathbb{R}^{n},\] one has \[\int_{\mathbb{R}^{n}}e^{-f(x)}d\mu(x)\int_{\mathbb{R}^{n}}e^{Q_{c}f(y)}d\mu(y)\leq 1.\] Maurey [Mau] showed that if \(\gamma_{n}\) is the standard Gaussian probability measure on \(\mathbb{R}^{n}\), with density \((2\pi)^{-n/2}e^{-|x|^{2}/2}\), and \(c_{2}(x,y)=\frac{1}{2}|x-y|^{2}\), then, as a consequence of the Prekopa-Leindler inequality, \((\gamma_{n},\frac{c_{2}}{2})\) satisfies property \((\tau)\).
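Maurey's statement can be checked in closed form on quadratics: for \(f(x)=ax^{2}\) in dimension \(1\) and the cost \(\frac{c_{2}}{2}(x,y)=|x-y|^{2}/4\), one computes \(Q_{c_{2}/2}f(y)=\frac{a}{4a+1}y^{2}\), and the product in property \((\tau)\) equals \(\sqrt{4a+1}/(2a+1)\leq 1\). The sketch below (an illustration only; the grid and test functions are arbitrary choices) does the infimal convolution by brute force and verifies \(\int e^{-f}d\gamma_{1}\int e^{Q_{c}f}d\gamma_{1}\leq 1\).

```python
import numpy as np

x, dx = np.linspace(-12.0, 12.0, 2401, retstep=True)
gauss = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)       # density of gamma_1

def tau_product(f_vals):
    # Q_c f(y) = inf_x ( f(x) + |x - y|^2 / 4 ), brute force on the grid
    qf = np.min(f_vals[None, :] + (x[:, None] - x[None, :]) ** 2 / 4, axis=1)
    return (np.exp(-f_vals) * gauss).sum() * dx * (np.exp(qf) * gauss).sum() * dx

print(tau_product(x ** 2))        # a = 1: exact value sqrt(5)/3 = 0.745... <= 1
print(tau_product(0.1 * x ** 4))  # another even convex test function, also <= 1
```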
In [AKM], it was pointed out that the functional form of the Blaschke-Santalo inequality for the Legendre transform (Theorem 15) is equivalent to an improved property \((\tau)\) for even functions: we say that the pair \((\gamma_{n},c_{2})\) satisfies the _even property_ \((\tau)\) if for any even function \(f\), one has \[\int_{\mathbb{R}^{n}}e^{-f(x)}d\gamma_{n}(x)\int_{\mathbb{R}^{n}}e^{Q_{c_{2}}f(y)}d\gamma_{n}(y)\leq 1. \tag{32}\] This equivalence follows from the change of function \(\varphi(x)=f(x)+\frac{|x|^{2}}{2}\) and the fact that \[-\mathcal{L}\varphi(y)=\inf_{x}\left(f(x)+\frac{|x|^{2}}{2}-\langle x,y\rangle\right)=Q_{c_{2}}f(y)-\frac{|y|^{2}}{2}.\] A direct proof of (32) was then given by Lehec in [Le]. And it follows from Remark 8 above, due to Lehec [Le2], that (32) also holds as soon as \(\int_{\mathbb{R}^{n}}xe^{-f(x)}d\gamma_{n}(x)=0\). Moreover, as shown for example in Proposition 8.2 of [GL], there is a general equivalence between property \((\tau)\) and symmetrized forms of transport-entropy inequalities. These transport-entropy inequalities were introduced by Talagrand [Ta], who showed that, for every probability measure \(\nu\) on \(\mathbb{R}^{n}\), one has \[W_{2}^{2}(\nu,\gamma_{n})\leq 2H(\nu|\gamma_{n}), \tag{33}\] where \(W_{2}\) is the _Kantorovich-Wasserstein distance_ defined by \[W_{2}^{2}(\nu,\gamma_{n})=\inf\left\{\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|x-y|^{2}d\pi(x,y);\pi\in\Pi(\nu,\gamma_{n})\right\},\] where \(\Pi(\nu,\gamma_{n})\) is the set of probability measures on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) whose first marginal is \(\nu\) and second marginal is \(\gamma_{n}\), and \(H\) is the _relative entropy_ defined for \(d\nu=fd\gamma_{n}\) by \[H(\nu|\gamma_{n})=\int_{\mathbb{R}^{n}}f\log f\,d\gamma_{n}.\] Using this type of equivalence between property \((\tau)\) and transport-entropy inequalities, Fathi [Fat] proved the following symmetrized form of Talagrand's transport-entropy inequality: if \(\nu_{1}\) (or \(\nu_{2}\)) is centered, in the sense that \(\int xd\nu_{1}(x)=0\), then \[W_{2}^{2}(\nu_{1},\nu_{2})\leq 2(H(\nu_{1}|\gamma_{n})+H(\nu_{2}|\gamma_{n})). \tag{34}\] He actually showed that (34) is equivalent to the functional form of the Blaschke-Santalo inequality (Theorem 15). Applying (34) to \(\nu_{1}=\gamma_{n}\), one recovers Talagrand's inequality (33). In his proof, Fathi used a reverse logarithmic Sobolev inequality for log-concave functions established in [AKSW] under some regularity assumptions, which were removed later, with a simplified proof, in [CFGLSW]. In a similar way, Gozlan [Go] gave equivalent transport-entropy forms of Conjectures 3 and 4 and of Bourgain-Milman's asymptotic inequality. This work was pursued in [FGZ], where new proofs of the one-dimensional case of Conjectures 3 and 4 are also provided.

## 6. Generalization to many functions and bodies

The following intriguing conjecture was proposed by Kolesnikov and Werner [KoW].
**Conjecture 5**.: _Let \(\rho:\mathbb{R}\to\mathbb{R}^{+}\) be decreasing and for \(m\geq 2\), let \(f_{i}:\mathbb{R}^{n}\to\mathbb{R}\), \(i=1,\dots,m\), be even Lebesgue integrable functions satisfying_ \[\prod_{i=1}^{m}f_{i}(x_{i})\leq\rho\left(\sum_{1\leq i<j\leq m}\langle x_{i},x_{j}\rangle\right)\text{ for all }x_{1},\dots,x_{m}\in\mathbb{R}^{n}.\] _Then_ \[\prod_{i=1}^{m}\int_{\mathbb{R}^{n}}f_{i}(x_{i})dx_{i}\leq\left(\int_{\mathbb{R}^{n}}\rho^{\frac{1}{m}}\left(\frac{m(m-1)}{2}|u|^{2}\right)du\right)^{m}.\]

Conjecture 5 was proved by Kolesnikov and Werner when the functions \(f_{i}\) are unconditional. Observe that Conjecture 5 is a functional form of a new conjectured Blaschke-Santalo inequality involving more than two convex bodies. Indeed, for \(1\leq i\leq m\), let \(K_{i}\) be star bodies, \(f_{i}(x)=v_{n}(2\pi)^{-n/2}e^{-\|x\|_{K_{i}}^{2}/2}\) and \(\rho(t)=e^{-t/(m-1)}\). Since \(\operatorname{vol}(K_{i})=v_{n}(2\pi)^{-n/2}\int_{\mathbb{R}^{n}}e^{-\|x\|_{K_{i}}^{2}/2}dx\), we get from Conjecture 5:

**Conjecture 6**.: _Let \(m\geq 2\) and let \(K_{1},\dots,K_{m}\) be symmetric convex bodies in \(\mathbb{R}^{n}\) such that_ \[\sum_{1\leq i<j\leq m}\langle x_{i},x_{j}\rangle\leq\frac{m-1}{2}\sum_{i=1}^{m}\|x_{i}\|_{K_{i}}^{2},\text{ for all }x_{1},\dots,x_{m}\in\mathbb{R}^{n}, \tag{35}\] _then \(\prod_{i=1}^{m}\operatorname{vol}(K_{i})\leq\operatorname{vol}(B_{2}^{n})^{m}.\)_

Conjecture 6 has been confirmed in [KoW] for unconditional bodies; for \(m\geq 3\), it was shown there that there is equality if and only if \(K_{i}=B_{2}^{n}\) for \(i=1,\dots,m\). This direction was further developed by Kalantzopoulos and Saroglou [KaSa], who generalized the polarity condition (35). For \(2\leq p\leq m\) and \(x_{1},\dots,x_{m}\in\mathbb{R}^{n}\), let \[\mathcal{S}_{p}(x_{1},\dots,x_{m})=\binom{m}{p}^{-1}\sum_{l=1}^{n}s_{p}(x_{1}(l),\dots,x_{m}(l)),\] where \(x_{i}=\sum_{l=1}^{n}x_{i}(l)e_{l}\) and \(s_{p}\) is the elementary symmetric polynomial of degree \(p\) in \(m\) variables. The case \(p=2\) corresponds to the sum of scalar products, i.e. \[\mathcal{S}_{2}(x_{1},\dots,x_{m})=\frac{2}{m(m-1)}\sum_{1\leq i<j\leq m}\langle x_{i},x_{j}\rangle.\] In [KaSa] the following _\(p\)-Santalo conjecture_ was proposed:

**Conjecture 7**.: _Let \(2\leq p\leq m\) be two integers. If \(K_{1},\dots,K_{m}\) are symmetric convex bodies in \(\mathbb{R}^{n}\) such that_ \[\mathcal{S}_{p}(x_{1},\dots,x_{m})\leq 1,\text{ for all }x_{i}\in K_{i},\] _then \(\prod_{i=1}^{m}\operatorname{vol}(K_{i})\leq\operatorname{vol}(B_{p}^{n})^{m},\) where \(B_{p}^{n}\) is the unit ball of the \(\ell_{p}^{n}\)-norm._

Kalantzopoulos and Saroglou [KaSa] were able to confirm Conjecture 7 when \(p=m\), and in the case of unconditional convex bodies for all \(p=2,\dots,m\). Moreover, when \(p=2\), it is enough to assume that only \(K_{3},\dots,K_{m}\) are unconditional. In all of those known cases, the conjectured inequality is actually sharp for \(K_{1}=\dots=K_{m}=B_{p}^{n}\). A functional analog of Conjecture 7 was also proposed in [KaSa].

## 7. Links to other inequalities

In this section we present just a sample of connections of the volume product to other inequalities in convex geometry. As before, we refer to the books [1, 1, 2, 3, 4, 5, 6, 7] and especially to the amazing diagram of connections between different open problems in convex geometry constructed by Richard Gardner in [1, Figure 1].
### Slicing conjecture

Klartag [10] found a connection between a sharp version of Bourgain's slicing conjecture and Mahler's conjecture for general convex bodies (Conjecture 1). The covariance matrix of a convex body \(K\) in \(\mathbb{R}^{n}\) is the \(n\times n\) matrix \(\mathrm{Cov}(K)\) defined by \[\mathrm{Cov}(K)_{i,j}=\frac{\int_{K}x_{i}x_{j}dx}{\operatorname{vol}(K)}-\frac{\int_{K}x_{i}dx}{\operatorname{vol}(K)}\frac{\int_{K}x_{j}dx}{\operatorname{vol}(K)},\] and the isotropic constant \(L_{K}\) is defined by \(L_{K}^{2n}=\det(\mathrm{Cov}(K))\operatorname{vol}(K)^{-2}\). It is well known that \(L_{K}\) is bounded from below by an absolute positive constant, the bound being reached for ellipsoids. Bourgain's slicing problem asks whether, for some universal constant \(C>0\), one has \(L_{K}\leq C\) for every convex body \(K\). The name _slicing conjecture_ comes from the following very interesting equivalent reformulation: is it true that for some universal \(c>0\), every convex body of volume one in \(\mathbb{R}^{n}\) has a hyperplane section with \((n-1)\)-dimensional volume greater than \(c\)? (see [11] for other equivalent statements). The boundedness of \(L_{K}\) by an absolute constant is still an open question. Bourgain [1] proved that \(L_{K}\leq Cn^{1/4}\) up to a logarithmic factor, which was removed by Klartag [10]. Chen [10] proved that \(L_{K}\leq C_{\varepsilon}n^{\varepsilon}\) for every \(\varepsilon>0\). Then, Klartag and Lehec [10] established a polylogarithmic bound \(L_{K}\leq C\log^{5}n\), which was further improved to \(L_{K}\leq C\log^{2.2}n\) by Jambulapati, Lee and Vempala [12]. A _strong version of the slicing conjecture_ asks the following:

**Conjecture 8**.: _For any convex body \(K\) in \(\mathbb{R}^{n}\) one has_ \[L_{K}\leq L_{\Delta_{n}}=\frac{(n!)^{1/n}}{(n+1)^{\frac{n+1}{2n}}\sqrt{n+2}}. \tag{36}\]

If \(K\) is a local minimizer of the volume product among the set of all convex bodies in \(\mathbb{R}^{n}\) endowed with the Hausdorff distance, then Klartag [10] was able to prove that \[\mathrm{Cov}(K^{*})\geq(n+2)^{-2}\mathrm{Cov}(K)^{-1}.\] Taking the determinant and raising to the power \(1/n\), one gets \[\frac{1}{n+2}\leq L_{K}L_{K^{*}}\mathcal{P}(K)^{1/n}. \tag{37}\] Thus, combining (37) and (36), \[\frac{1}{n+2}\leq L_{K}L_{K^{*}}\mathcal{P}(K)^{1/n}\leq\frac{(n!)^{2/n}}{(n+1)^{\frac{n+1}{n}}(n+2)}\mathcal{P}(K)^{1/n},\] that is, \(\mathcal{P}(K)\geq\frac{(n+1)^{n+1}}{(n!)^{2}}=\mathcal{P}(\Delta_{n})\) for any local minimizer \(K\). Thus, we proved the following theorem:

**Theorem 17**.: _(Klartag) The strong version of Bourgain's slicing conjecture given in Conjecture 8 implies Conjecture 1 (Mahler's conjecture) for general convex bodies._

In connection with his proof of the Bourgain-Milman inequality, Kuperberg asked in [10] whether the quantity \[\frac{1}{\operatorname{vol}(K)\operatorname{vol}(K^{*})}\int_{K}\int_{K^{*}}\langle x,y\rangle^{2}dxdy\] is maximized by ellipsoids in the class of symmetric convex bodies \(K\subset\mathbb{R}^{n}\). Alonso-Gutierrez [1] proved that this conjecture implies both the Blaschke-Santalo inequality and the hyperplane conjecture, and that it holds true for \(B_{p}^{n}\), the unit ball of \(\ell_{p}^{n}\), for \(p\geq 1\). The connection to the hyperplane conjecture was also studied in [11]. Kuperberg did not have much hope for his conjecture, and indeed Klartag [12] showed that it is false in high dimension, even in the case of unconditional bodies.

### Symplectic geometry and Viterbo's conjecture

Artstein-Avidan, Karasev and Ostrover [1] discovered an amazing connection between the volume product and symplectic geometry.
Let \((X,\omega)\) be a symplectic manifold: \(X\) is a smooth manifold with a closed non-degenerate two-form \(\omega\). For instance, \((\mathbb{R}^{2n},\omega_{st})\), where \(\mathbb{R}^{2n}=\mathbb{R}_{p}^{n}\times\mathbb{R}_{q}^{n}\) and \(\omega_{st}=\sum dp_{i}\wedge dq_{i}\). A core fact in symplectic geometry states that symplectic manifolds have no local invariants (except the dimension). This, clearly, makes the structure very different from that of Riemannian manifolds. The first example of a global symplectic invariant was introduced by Gromov [12] and is known as Gromov's "non-squeezing theorem". Gromov's work inspired the introduction of global symplectic invariants, called symplectic capacities, which may be seen as a way to measure the symplectic size of sets in \(\mathbb{R}^{2n}\). More precisely, a _symplectic capacity_ \(c\) on \((\mathbb{R}^{2n},\omega_{st})\) is a mapping \(c:\mathcal{S}(\mathbb{R}^{2n})\to\mathbb{R}_{+}\), where \(\mathcal{S}(\mathbb{R}^{2n})\) is the set of all subsets of \(\mathbb{R}^{2n}\), which satisfies the following conditions:

* Monotonicity: \(c(U)\leq c(V)\) for all \(U\subset V\).
* Conformality: \(c(\phi(U))=|\alpha|c(U)\) for every diffeomorphism \(\phi\) such that \(\phi^{*}\omega_{st}=\alpha\omega_{st}\).
* Normalization: \(c(B_{2}^{2n})=c(B_{2}^{2}\times\mathbb{R}^{2(n-1)})=\pi\).

The following is the conjecture of Viterbo [13] for symplectic capacities of convex bodies.

**Conjecture 9**.: _For any symplectic capacity \(c\) and any convex body \(\Sigma\) in \(\mathbb{R}^{2n}\), one has_ \[\frac{c(\Sigma)}{c(B_{2}^{2n})}\leq\left(\frac{\operatorname{vol}_{2n}(\Sigma)}{\operatorname{vol}_{2n}(B_{2}^{2n})}\right)^{\frac{1}{n}}.\]

Conjecture 9 is of isoperimetric type: indeed, it claims that among all convex bodies in \(\mathbb{R}^{2n}\) of a given fixed volume, the Euclidean ball of the same volume has the maximal symplectic capacity. It is open even for \(n=2\), but it holds for certain classes of convex bodies, including ellipsoids [12], and up to a universal multiplicative constant [1]. The following was proved in [1].

**Theorem 18**.: _Conjecture 9 implies Conjecture 2._

More precisely, it was proved in [1] that for any convex symmetric body \(K\subset\mathbb{R}^{n}\), one has \(c_{HZ}(K\times K^{*})=4\), where \(c_{HZ}\) denotes the Hofer-Zehnder capacity, one of the important symplectic capacities. This fact, together with Conjecture 9 and the normalization property of \(c_{HZ}\), immediately gives an affirmative answer to Conjecture 2: \[\frac{4^{n}}{\pi^{n}}=\left(\frac{c_{HZ}(K\times K^{*})}{c_{HZ}(B_{2}^{2n})}\right)^{n}\leq\frac{\operatorname{vol}_{2n}(K\times K^{*})}{\operatorname{vol}_{2n}(B_{2}^{2n})}=\frac{n!\operatorname{vol}_{2n}(K\times K^{*})}{\pi^{n}}.\] We refer to [AKO] and [O] for more details on these connections. The connections of Conjecture 2 with symplectic geometry were further pursued in [ABKaS, BeKa, Ka, KaS]. In [Ru], Viterbo's conjecture was connected with Minkowski versions of worm problems, inspired by the well-known Moser worm problem from geometry. For the special case of Lagrangian products, this relation provides further links to systolic Minkowski billiard inequalities and Mahler's conjecture.

### Funk geometry

A very interesting connection of the volume product with Funk geometry was recently discovered by Faifman [Fa]. We refer to [PT] for a detailed introduction to Finsler manifolds and Funk geometry. We recall a few of the most basic ideas.
A non-reversible Finsler manifold \((M,F)\) is a smooth manifold \(M\) equipped with a smooth function \(F\) on the tangent bundle of \(M\) whose restriction to any tangent space is the gauge of some convex body. The crucial difference with Riemannian geometry is the lack of an inner product. The tangent unit ball at a point \(x\in M\) is denoted by \(B_{x}M\) and consists of all vectors \(v\) in the tangent space \(T_{x}M\) such that \(F(x,v)\leq 1\). For a convex body \(K\) in a fixed affine space, the Funk metric on the interior of \(K\) is given by \(B_{x}K=K\), i.e. at any point \(x\) in the interior of \(K\), the body \(K\) with origin at \(x\) is the unit ball. The associated distance is defined in the following way: consider \(x,y\in\operatorname{int}(K)\) and let \(R(x,y)\) be the ray starting at \(x\) and passing through \(y\). Let \(a(x,y)=R(x,y)\cap\partial K\); then the Funk metric, defined for \(x\neq y\in\operatorname{int}(K)\), is \[d_{K}^{F}(x,y)=\log\frac{|x-a(x,y)|}{|y-a(x,y)|},\] and \(d_{K}^{F}(x,x)=0\). The Funk metric is projective, i.e. straight segments are geodesics. The outward ball of radius \(r>0\) and center \(z\in\operatorname{int}(K)\) is \[B_{K}^{F}(z,r)=\{x\in\operatorname{int}(K):d_{K}^{F}(z,x)\leq r\}=(1-e^{-r})(K-z)+z\] (a one-dimensional sanity check of this formula is sketched at the end of this subsection). The Holmes-Thompson volume of \(A\subset\operatorname{int}(K)\) is defined as \[\operatorname{vol}_{K}^{F}(A)=\frac{1}{v_{n}}\int_{A}\operatorname{vol}(K^{x})dx.\] Asymptotically as \(r\to 0\), the volume of \(B_{K}^{F}(z,r)\) behaves as \(v_{n}^{-1}\operatorname{vol}_{2n}(K\times K^{z})r^{n}\). It was also shown in [BBV] that for a strictly convex and smooth body \(K\), when \(r\to+\infty\), the volume of \(B_{K}^{F}(z,r)\) behaves as \(c_{n}e^{\frac{n-1}{2}r}\mathcal{A}(K,z)\), where \(c_{n}>0\) depends only on \(n\) and \(\mathcal{A}(K,z)\) is the centro-affine surface area of \(K\) defined by \[\mathcal{A}(K,z)=\int_{\partial K}\frac{\kappa_{K}^{1/2}(x)}{\langle x-z,n_{K}(x)\rangle^{(n-1)/2}}dx,\] where \(\kappa_{K}(x)\) is the Gauss curvature of \(\partial K\) at the point \(x\) and \(n_{K}(x)\) is the outer normal vector; note that \(\mathcal{A}(K,0)=\mathcal{A}(K)\). The following duality relation for \(\operatorname{vol}_{K}^{F}\), for centrally symmetric \(K\), is proved in [Fa]: \[\operatorname{vol}_{K}^{F}(B_{K}^{F}(0,r))=\operatorname{vol}_{K^{*}}^{F}(B_{K^{*}}^{F}(0,r)).\] The existence of an analog of the Santalo point \(s(K)\) of a convex body \(K\) in Funk geometry was proved in [FaVVW]: for any \(r>0\), there is a unique point \(s_{r}(K)\in\operatorname{int}(K)\) that minimizes the Funk volume of \(B_{K}^{F}(q,r)\) over \(q\in\operatorname{int}(K)\). One has \(s_{r}(K)=0\) for symmetric \(K\), and \(s_{r}(K)\to s(K)\) as \(r\to 0\). Let \[M_{r}(K)=v_{n}\operatorname{vol}_{K}^{F}\big(B_{K}^{F}(s_{r}(K),r)\big).\] The following conjecture was proposed in [Fa]:

**Conjecture 10**.: _For all \(r>0\), \(M_{r}(K)\) is maximal when \(K\) is an ellipsoid._

The limiting cases of Conjecture 10 are the Blaschke-Santalo inequality as \(r\to 0\) and the centro-affine isoperimetric inequality as \(r\to\infty\). Faifman [Fa] was able to show that Conjecture 10 holds for unconditional bodies \(K\). The idea of the proof includes a generalization of the conjecture of K. Ball (see inequality (13)), namely \[\int_{K}\int_{K^{*}}\langle x,y\rangle^{2j}dxdy\leq\int_{B_{2}^{n}}\int_{B_{2}^{n}}\langle x,y\rangle^{2j}dxdy, \tag{38}\] for all \(j\in\mathbb{N}\), which Faifman was able to confirm for \(K\) unconditional.
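Here is the promised one-dimensional sanity check of the closed formula for \(B_{K}^{F}(z,r)\) (an illustration only), for \(K=[-1,1]\) and \(z=0\), where the ball of Funk radius \(r\) should be the interval \([-(1-e^{-r}),1-e^{-r}]\):

```python
import math

def funk_dist_1d(z, x):
    # d_K^F(z, x) for K = [-1, 1]: a(z, x) is the endpoint hit by the ray from z through x
    a = 1.0 if x > z else -1.0
    return math.log(abs(z - a) / abs(x - a))

r = 0.8
rho = 1 - math.exp(-r)                      # predicted Euclidean radius of B_K^F(0, r)
for x in (0.3, rho - 1e-9, rho + 1e-9, 0.9):
    print(x, funk_dist_1d(0.0, x) <= r)     # True, True, False, False
```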
A lower bound for the quantity \(M_{r}(K)\) was proposed in [FaVW]: **Conjecture 11**.: _For \(r>0\), \(M_{r}(K)\) is minimized by simplices in general and by Hanner polytopes for symmetric bodies \(K\)._ The limiting case of Conjecture 11 as \(r\to 0\) for symmetric \(K\) is Conjecture 2, while the limiting case as \(r\to+\infty\) is a conjecture of Kalai [K] on the minimization of the flag number of \(K\). Conjecture 11 is proved in [FaVW] for unconditional bodies and follows from an interesting new inequality discovered in [FaVW] and proved there for unconditional bodies: \[\int_{H}\int_{H^{*}}\langle x,y\rangle^{2j}dxdy\leq\int_{K}\int_{K^{*}}\langle x,y\rangle^{2j}dxdy, \tag{39}\] where \(H\) is a Hanner polytope in \(\mathbb{R}^{n}\) and \(j\in\mathbb{N}.\) The proof of (39) in [FaVW] is based on the functional inverse Santalo inequality [FM2]. ### Geometry of numbers and isosystolic inequalities The volume product is a standard tool in the geometry of numbers. The connection goes back to the theorem of Mahler [Ma2] (see [BM], [Gru], Chapter 3, or [Ev]) on the bound of the successive minima of a convex body and its dual. Let us here present yet another connection of the volume product with the geometry of numbers and the systolic geometry discovered by Alvarez Paiva, Balacheff and Tzanev [APBT]. Minkowski's first theorem in the geometry of numbers states that if \(K\) is a symmetric convex body in \(\mathbb{R}^{n}\) with \(\operatorname{vol}(K)\geq 2^{n},\) then \(K\) contains at least one non-zero integer point (in \(\mathbb{Z}^{n}\)). The symmetry assumption is needed, as there are convex bodies \(K\) of large volume containing the origin and no other integer point. We know that such bodies must be "flat" [KL], and Alvarez Paiva and Balacheff [APB] conjectured that the volume of their polars \(K^{*}\) is not too small: **Conjecture 12**.: _Let \(K\subset\mathbb{R}^{n}\) be a convex body such that \(\operatorname{int}(K)\cap\mathbb{Z}^{n}=\{0\}\). Then \(\operatorname{vol}(K^{*})\geq(n+1)/n!\), with equality if and only if \(K\) is a simplex with vertices in \(\mathbb{Z}^{n}\) and no other integer points than its vertices and \(0\)._ In [APBT], Conjecture 12 was proved in \(\mathbb{R}^{2}\) and an isomorphic bound for \(\operatorname{vol}(K^{*})\) was given in all dimensions. Namely, for some absolute constant \(c>0\), one has \(\operatorname{vol}(K^{*})\geq c^{n}(n+1)/n!\) for any convex body \(K\) in \(\mathbb{R}^{n}\) such that \(\operatorname{int}(K)\) contains no integer point other than the origin. The proof of this fact in [APBT] uses the Bourgain-Milman inequality, and it is shown that this isomorphic version of Conjecture 12 is actually equivalent to the Bourgain-Milman inequality. Conjecture 12 can be further generalized to a conjecture in systolic geometry. We refer to [APBT] for exact statements and definitions. We mention here a version of the conjecture in the language of Finsler geometry (see Section 7.3). The Holmes-Thompson volume of a Finsler manifold \((M,F)\) is defined as \[\operatorname{vol}_{HT}(M,F)=\frac{1}{v_{n}}\int_{M}\operatorname{vol}((B_{x}M)^ {*})dx.\] **Conjecture 13**.: _For any Finsler metric \(F\) on \(\mathbb{RP}^{n}\), there exists a closed non-contractible geodesic with length bounded by \(\frac{(n!v_{n})^{1/n}}{2}\operatorname{vol}_{HT}(\mathbb{RP}^{n},F)^{1/n}\)._ Recall that a set which can be continuously deformed to one of its points is said to be contractible. For \(n=2\), Conjecture 13 follows from the works of Ivanov [14, 15]. 
The next theorem was proved in [1]: **Theorem 19**.: _Conjecture 13 implies Conjecture 2 for centrally symmetric bodies._ The proof of Theorem 19 uses the Finsler metric on a convex symmetric body \(K\) which coincides at each point with the norm corresponding to \(K\). By identifying the points \(x\) and \(-x\) in \(\partial K\), we obtain a length space (a space in which the intrinsic metric coincides with the original metric) on \(\mathbb{RP}^{n}\). We denote this Finsler space by \((\mathbb{RP}^{n},d_{K})\). It turns out that one has \[\operatorname{vol}_{HT}(\mathbb{RP}^{n},d_{K})=\frac{1}{v_{n}}\mathcal{P}(K)\] and that the length of the systoles (the shortest non-contractible geodesics) in \((\mathbb{RP}^{n},d_{K})\) is equal to \(2\). Combining these computations with Conjecture 13, we obtain a proof of Conjecture 2 for symmetric convex bodies in \(\mathbb{R}^{n}\).
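Spelled out, the last step is the following short computation. Assuming Conjecture 13, the systole of \((\mathbb{RP}^{n},d_{K})\), of length \(2\), is bounded by \[2\leq\frac{(n!v_{n})^{1/n}}{2}\operatorname{vol}_{HT}(\mathbb{RP}^{n},d_{K})^{1/n}=\frac{(n!v_{n})^{1/n}}{2}\left(\frac{\mathcal{P}(K)}{v_{n}}\right)^{1/n}=\frac{\left(n!\,\mathcal{P}(K)\right)^{1/n}}{2},\] and raising to the \(n\)-th power gives \(4^{n}\leq n!\,\mathcal{P}(K)\), i.e. \(\mathcal{P}(K)\geq 4^{n}/n!\), which is exactly Conjecture 2 for symmetric bodies.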
2304.03697
HumanLight: Incentivizing Ridesharing via Human-centric Deep Reinforcement Learning in Traffic Signal Control
Single occupancy vehicles are the most attractive transportation alternative for many commuters, leading to increased traffic congestion and air pollution. Advancements in information technologies create opportunities for smart solutions that incentivize ridesharing and mode shift to higher occupancy vehicles (HOVs) to achieve the car lighter vision of cities. In this study, we present HumanLight, a novel decentralized adaptive traffic signal control algorithm designed to optimize people throughput at intersections. Our proposed controller is founded on reinforcement learning with the reward function embedding the transportation-inspired concept of pressure at the person-level. By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight achieves equitable allocation of green times. Apart from adopting FRAP, a state-of-the-art (SOTA) base model, HumanLight introduces the concept of active vehicles, loosely defined as vehicles in proximity to the intersection within the action interval window. The proposed algorithm showcases significant headroom and scalability in different network configurations considering multimodal vehicle splits at various scenarios of HOV adoption. Improvements in person delays and queues range from 15% to over 55% compared to vehicle-level SOTA controllers. We quantify the impact of incorporating active vehicles in the formulation of our RL model for different network structures. HumanLight also enables regulation of the aggressiveness of the HOV prioritization. The impact of parameter setting on the generated phase profile is investigated as a key component of acyclic signal controllers affecting pedestrian waiting times. HumanLight's scalable, decentralized design can reshape the resolution of traffic management to be more human-centric and empower policies that incentivize ridesharing and public transit systems.
Dimitris M. Vlachogiannis, Hua Wei, Scott Moura, Jane Macfarlane
2023-04-05T17:42:30Z
http://arxiv.org/abs/2304.03697v1
HumanLight: Incentivizing Ridesharing via Human-centric Deep Reinforcement Learning in Traffic Signal Control ###### Abstract Single occupancy vehicles are the most attractive transportation alternative for many commuters, leading to increased traffic congestion and air pollution. Advancements in information technologies create opportunities for smart solutions that incentivize ridesharing and mode shift to higher occupancy vehicles (HOVs) to achieve the car lighter vision of cities. In this study, we present HumanLight, a novel decentralized adaptive traffic signal control algorithm designed to optimize people throughput at intersections. Our proposed controller is founded on reinforcement learning with the reward function embedding the transportation-inspired concept of pressure at the person-level. By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight achieves equitable allocation of green times. Apart from adopting FRAP, a state-of-the-art (SOTA) base model, HumanLight introduces the concept of active vehicles, loosely defined as vehicles in proximity to the intersection within the action interval window. The proposed algorithm showcases significant headroom and scalability in different network configurations considering multimodal vehicle splits at various scenarios of HOV adoption. Improvements in person delays and queues range from 15% to over 55% compared to vehicle-level SOTA controllers. We quantify the impact of incorporating active vehicles in the formulation of our RL model for different network structures. HumanLight also enables regulation of the aggressiveness of the HOV prioritization. The impact of parameter setting on the generated phase profile is investigated as a key component of acyclic signal controllers affecting pedestrian waiting times. HumanLight's scalable, decentralized design can reshape the resolution of traffic management to be more human-centric and empower policies that incentivize ridesharing and public transit systems. ## 1 Introduction Population growth along with urbanization have rendered today's transportation networks more saturated than ever before. Private vehicle ownership grows yearly, particularly since the COVID-19 outbreak [1]. As a result, traffic congestion has become one of the key challenges metropolitan areas are facing. Apart from increased travel times and vehicle miles traveled, the associated environmental consequences have made transportation the number one contributor to the climate crisis in America today. In many urban areas, the inconveniences of unreliable transit are high for commuters [2, 3], which has resulted in diminishing public transit ridership, leaving private vehicles as the most attractive solution for commuters. For policymakers, looking decades ahead, the reduction of trips performed in single occupancy vehicles (SOVs) is of paramount importance. With the COVID-19 pandemic coming to an end, incentivizing public transit use and shared modes of transportation is imperative. Ridesharing or pooling is defined as the act of two or more travellers sharing the same vehicle for a common trip. Shared rides will reduce vehicle miles traveled, energy use, and greenhouse gas emissions [4]. Apart from the system-level benefits, the users individually benefit from shared travel costs [5]. Car pooling originally became popular in metropolitan regions with the establishment of High Occupancy Vehicle (HOV) lanes. 
People casually grouped in one vehicle to save on time, toll fares, and gas [6]. In the last decade, app-based pooling services that reserve, match, and process payments for rides on demand have been developed. By grouping passengers with similar origin and destination locations within a walking radius from the common route, transportation network companies in urban areas provide flexible and affordable ridesharing solutions [7]. Compared to the ride-alone option, users of a pooled ride are quoted a discounted price for a usually longer estimated total travel time, due to the added walking time and the pick-up of additional passengers. Transit and purpose-driven shuttles (e.g. airport drop-off services) have been around for decades as fixed-guideway systems that cannot respond to dynamic passenger demand. On-demand transit-like services, typically comprised of vans, shuttles and buses, have also recently emerged and are commonly referred to as microtransit [6]. Particularly in the post-pandemic era, which reshaped travel patterns to require greater flexibility, the on-demand services offered by microtransit, with more direct routing, reduced transfers, and better coverage, can significantly improve rider experience [8]. As opposed to traditional transit, microtransit generates routes and stops in response to real-time demand. Furthermore, microtransit vehicles are guided by GPS with real-time traffic and can therefore adapt to on-the-ground conditions such as traffic jams and road closures. Microtransit solution experiments have been mostly successful, and microtransit is envisioned as a key component in transportation networks by many forward-thinking transit agencies. For those reasons, this study incorporates microtransit in the formulation of HOV adoption scenarios to represent the multimodal composition of vehicle fleets in the upcoming decades. The dynamic travel behavior and demand fluctuations across on-demand mobility options have led to a decline of carpooling to currently only 7.79% of the commute mode share in the United States for 2021 [1]. The major progress in ride-hailing matching algorithms for mobility on demand has mostly achieved cheaper rides for commuters. However, [7] have indicated that monetary benefits alone do not outweigh the travel time poolers sacrifice to merge into a single ride. For pooling to be competitive, shared rides should be prioritized through smart mobility solutions that democratize travel time benefits. Currently, poolers and non-poolers share the travel time benefits of reduced vehicle counts in the transportation network thanks to poolers' shift towards HOVs. Instead, fairer and more socially equitable mechanisms would reward poolers for their efforts to merge rides by providing reduced travel times. HOV lanes are currently the only case where poolers receive travel time benefits. In urban environments, HOV lanes are mostly reserved for freeways and highways. According to [9], the introduction of HOV lanes did not lead to a significant increase in ridesharing among the examined route's commuters, except in some specific periods. Results suggested that barriers to increased pooling and mode shift are formidable, requiring the development of additional carpooling incentivizing strategies based more on travel time savings. These findings are in alignment with studies suggesting that cost reductions of pooling solutions do not outweigh the potential delays in travel time [10]. 
Information and connected car technologies enabling real time data sharing and high performance computation now make the window of opportunity for the implementation of smart mobility solutions greater than ever. In this study, we present HumanLight, a traffic management solution developed to operate with future vehicle to infrastructure (V2I) communication technologies. A novel decentralized adaptive signal control algorithm that optimizes people throughput at intersections is formulated to support a future modal split among private vehicles, pooled rides and public transit. Our proposed solution sets the foundations for breaking free of today's dependence on cars by prioritizing people as opposed to vehicle movement. By devolving transportation policy to be more human-centric, ridesharing and public transit systems can be re-invigorated and attract the travel demand they truly merit in sustainable and multimodal urban environments. The reinforcement learning-based signal controller HumanLight is designed to reward, with more green times, vehicles carrying more people. We explore the headroom of such prioritization strategies at intersections at different rates of vehicle pooling adoption and demonstrate the higher travel time benefits for HOV adopters. ## 2 Literature Review ### Reinforcement Learning in Traffic Signal Control Traffic controllers are typically classified into fixed-time, actuated, and adaptive controllers. Conventional traffic signal control (TSC) methods are mostly fixed-time and have been developed to heavily rely on pre-defined rules and assumptions on traffic conditions [11, 12]. Traffic splits are typically derived to alleviate traffic congestion under the assumption of a uniform traffic flow distribution. Webster's method [13] is one of the most widely-used methods in the field for a single intersection setting. It determines the optimum cycle length and phase split for a single intersection according to historical traffic data collected at different times during the day. Actuated signal controllers are more responsive to traffic flows, as they use real-time measurements from sensors. In most cases though, the timing plan parameters, such as maximum green and extension time, are optimized offline [14]. Adaptive traffic signal controllers have been shown to outperform fixed-time and actuated controllers because they can adjust traffic phase splits, or skip traffic phases entirely, according to dynamic and unpredictable traffic demand patterns. Reinforcement Learning (RL) based TSC is a model-free and self-learning adaptive strategy. By interacting with the environment, usually a microscopic simulator, the agent learns to adapt to the evolving real-time traffic conditions [15, 16, 17]. Recently, with the advent of deep learning and the use of deep neural networks to approximate key components of reinforcement learning, deep reinforcement learning (DRL) has enabled a continuous state space representation. A key component of the design of an RL-based TSC system is the formulation of the algorithmic setting, comprising the state and action spaces and the reward function. For the state representation, different quantitative descriptions of the environment have been proposed, including queue length [15, 18, 19], waiting time [20, 21], vehicle volume [22, 16], approximations of delay [23], speeds [24, 22, 25] or phase setting [26, 16, 15, 27, 17]. 
As for the design of the action space, two main representations are used in literature: i) the algorithm selects traffic phases in an acyclic manner [28, 16, 29, 17, 30, 31], ii) the algorithm, following an ordered sequence of traffic phases, determines the traffic phase splits by regulating the duration of the current phase, either keeping or switching it [32, 15, 26]. The reward, or objective function of the RL problem, has been modeled with several surrogate metrics, as vehicle travel time can only be measured post trip completion. Metrics that have shown to provide effective formulations of the reward include queue length [29, 20, 19, 26, 33], waiting time [18, 21, 20, 15], speed [15, 20, 24], number of stops [20], throughput [26, 15, 33] or even safety related features such as accident avoidance [20, 34]. In an effort to derive a more theoretically grounded reward function formulation and identify how much neighboring information is necessary in the state space representation, [16] drew inspiration from max pressure (MP), a state-of-the-art (SOTA) method in adaptive traffic signal control, as proposed in [35, 36]. The MP algorithm proved that by minimizing the pressure of the intersection, defined as the difference between the total queue length on incoming approaches and outgoing approaches, the risk of over-saturation is reduced and the throughput of the whole road network is maximized. [16] tested using pressure as part of the long term reward function to achieve a more theoretically justified formulation without deploying combinations of heuristic metrics that also require weight tuning. PressLight is also the first RL model that automatically achieves coordination along arterials without prior knowledge. The framework was tested under uniform unidirectional traffic, for which the optimal solution is known to be the green wave. PressLight learned to regulate consecutive traffic signals' switches with an offset equivalent to the expected vehicle travel time between intersections. Despite directly interacting with a highly dynamic environment and learning to reach a long-term goal for traffic signal control, RL methods are susceptible to the curse of dimensionality [37, 38], an issue whereby the state space becomes too large. This leads to higher computational costs during the exploration of the state-action pairs, resulting in a longer learning time, as well as requiring a larger storage capacity to store the learned Q-values. MPLight [17] was recently designed as a decentralized framework to address those scalability issues. The proposed approach was applied for multi-intersection control, and specifically in a large-scale network of over 1000 intersections in New York City. Thanks to the pressure-based design and the parameter sharing capabilities in model learning, MPLight showcased strong performance and generalization ability. MPLight also adopts the FRAP architecture as its base model [39]. FRAP is a Deep Q-Learning method specifically developed for traffic signal control problems and designed based on the principles of phase competition and invariance. FRAP not only achieves superior performance with fast convergence, but is also resilient in handling complex intersection structures and multi-intersection environments. The vast majority of SOTA studies showcase the potential of reinforcement learning in TSC in a vehicle-level optimization setting under the consideration of uni-modal traffic, neglecting the heterogeneous interests and complex interactions of multimodal traffic. 
Their vehicle-centric design provides optimal movement of vehicles disregarding their occupancy. ### Transit Signal Priority and Person-level Traffic Optimization Human mobility and optimization of people throughput at intersections is essential for establishing efficient, sustainable and equitable transportation operations. Over the last decade, automatic passenger counters (APCs), the foundation of occupancy data, are rapidly growing in their adoption within transit fleets. APCs have already been established as standard practice by several transit agencies, as real-time occupancy is a necessary piece to the complete mobility picture. The American Public Transportation Association reported in 2020 that approximately 40% of all United States transit vehicles have APCs installed, with commuter buses exceeding 58%. This information is already helping agencies improve transit operations by reducing bus bunching and passenger pass-ups. Monitoring passenger counts is invaluable for TSC frameworks aiming to minimize person delays, as it facilitates the implementation of traffic management strategies capable of achieving efficient and reliable public transit. A transit signal priority algorithm was proposed in [40, 41], formulated as a mixed-integer non-linear program, to minimize total passenger delay while assigning priority to transit vehicles based on their passenger occupancy. The developed macroscopic mathematical model of delay at an intersection considers both regular and transit vehicles, and assumes under-saturated traffic conditions and fixed cycle lengths and phase sequences. The work was later extended to the arterial level, where intersection pairs were simultaneously optimized and schedule adherence was incorporated [42]. Another real-time TSC system, minimizing total person delays of cars and transit vehicles while ensuring a priority window for transit vehicles to address the issue of wasted priority due to bus arrival uncertainty, was proposed by [43]. Several other systems handling transit vehicle priority in bi-modal traffic environments have been developed [44, 45, 46, 47]. Only recently have traffic signal control algorithms founded on RL and accounting for person-level optimization appeared in literature [37]. Using available information received from highly detailed traffic sensors, [48] proposed a multimodal Deep RL based traffic signal controller that combines both regular traffic and public transit and minimizes the overall travelers' delay through the intersection. The controller builds on the authors' previous work [14]. The position and average speed of vehicles are used as state inputs, the change of delay is defined as the reward function, and actions are set to flexibly choose the next phase in the pre-defined phase set. [49] also adopted a person-based reward function to propose an extended Dueling Double Deep Q-learning (DDQL) algorithm, eD3QNI, to improve bus operational efficiency and handle conflicting bus priority requests. Performance is evaluated by simulation for a single intersection with two traffic demands and random arrivals, schedule deviations, and occupancies of buses. The authors also performed an exploration around the penetration rate of connected buses, illustrating that it does not affect the convergence speed but does affect performance. Another person-based approach, also restricted to an isolated intersection, was proposed by [50]. The reward function was built around passenger waiting time and queue length and optimized using the DDDQL network. 
The authors even account for non-motorized traffic, including pedestrians, and report a 6.3% decrease in waiting time compared to a baseline vehicle-level approach. To the best of our knowledge, existing literature still lacks a scalable RL-based multi-intersection traffic signal controller capable of handling multimodal traffic and generalizable to large scale networks. ### Contributions and Organization The key contributions of this research work can be summarized as follows. HumanLight is the first human-centric RL-based decentralized adaptive traffic signal controller that is scalable to transportation networks of multiple intersections such as corridors and grids. HumanLight is designed to democratize urban traffic by allocating green times in a socially equitable manner. By providing travel time savings to riders of HOVs, HumanLight can be a powerful tool for policymakers to incentivize a shift from privately owned motorized transportation towards shared and equitable mobility. Our proposed algorithm's effectiveness is evaluated in a variety of road network configurations and mode share scenarios. The different multimodal scenarios include diverse distributions of SOVs, carpools, microtransit and public transportation to quantify HumanLight's potential at different levels of HOV adoption. We introduce some key methodological novelties that enable HumanLight to achieve robust performance in person-level optimization. Inspired by the concept of pressure [35, 16], quantifying the degree of disequilibrium between vehicle density on the incoming and outgoing lanes, HumanLight extends the idea to person pressure minimization to achieve optimal people throughput at intersections. We also introduce the concept of active vehicles, loosely defined as those in proximity to the intersection within the action interval window. Through systematic experiments, the active vehicle consideration is shown to improve the handling of the high variance of vehicle occupancies that arises in multimodal traffic environments during person-level optimization. In addition to enabling reduced passenger travel times and socially equitable green time allocation at signalized intersections, HumanLight enables policymakers and traffic engineers to control the aggressiveness in the prioritization of HOVs. We achieve this via a modification in the state embedding where vehicle occupancies are encoded. This way, the travel time benefits across vehicle types of different occupancies are parameterized for the system operator to adjust rather than being generated in a fixed approach. Experiments are conducted to establish the most effective formulation of the algorithm (state space and reward function), as well as parameter setting and tuning. Experiments include exposure to different traffic demand scenarios and network structures, while the analytics extend beyond traditional traffic metrics (travel times, delays and queues) to traffic signal design aspects such as average phase durations and numbers of phase changes. To examine the potential failure points of HumanLight in network settings, the study also includes analyses on maximum vehicle queues and stopping behaviors of the different vehicle types as a result of the person-level TSC policies. The remainder of this paper is structured as follows. In Section 3, we introduce some preliminary definitions and formulate the decentralized multi-intersection traffic signal control problem. 
Section 4 presents the agent design and provides details on the deployed Deep Q-learning model. In Section 5, the experimental framework is demonstrated along with the methods of comparison and evaluation metrics. Section 6 discusses HumanLight's performance for different network configurations and demand profiles and presents analyses justifying the algorithmic formulation and the policymaking impacts of the proposed solution. Section 7 concludes the study and Section 8 suggests directions for future research. ## 3 Problem Definition ### Preliminaries **Definition 1: Traffic Network** The traffic network is represented as a directed graph, with intersections modeled as nodes and road segments between intersections as edges. Each intersection may have both incoming and outgoing roads serving the upstream and downstream traffic respectively. Each road may be comprised of multiple lanes. Road segment \(i\) of intersection \(I\) is denoted by \(r^{I}_{i}\), while lane \(l\) of \(r^{I}_{i}\) is denoted by \(r^{I}_{(i,l)}\). We represent the sets of incoming and outgoing lanes of intersection \(I\) as \(L^{I}_{in}\) and \(L^{I}_{out}\) respectively. **Definition 2: Traffic Movement** Traffic movement is defined as the traffic traveling across an intersection from an incoming road to an outgoing road. We denote a traffic movement from \(r^{I}_{i}\) to \(r^{I}_{j}\) as \((r^{I}_{i},r^{I}_{j})\), in which \((r^{I}_{i},r^{I}_{j})=(r^{I}_{(i,l)},r^{I}_{(j,m)})\), \(l\in r^{I}_{i}\subset L^{I}_{in},m\in r^{I}_{j}\subset L^{I}_{out}\). **Definition 3: Signal Phase** A traffic signal phase of intersection \(I\), \(\phi^{I}\), is defined as a permissible combination of traffic movements. **Definition 4: Movement, Phase and Intersection Pressure** Vehicle pressure of a traffic movement is defined as the difference of vehicle density between the upstream lane \(r^{I}_{(i,l)}\) and the downstream lane \(r^{I}_{(j,m)}\): \[p_{v}(r^{I}_{(i,l)},r^{I}_{(j,m)})=\frac{C_{v}(r^{I}_{(i,l)})}{C^{max}_{v}(r^ {I}_{(i,l)})}-\frac{C_{v}(r^{I}_{(j,m)})}{C^{max}_{v}(r^{I}_{(j,m)})} \tag{1}\] where \(C_{v}(\cdot)\) is the vehicle count function, and \(C^{max}_{v}(\cdot)\) the maximum permissible vehicle number for a road segment. If all lanes of the transportation network share the same maximum vehicle capacity, vehicle pressure can be simplified to capture the difference in vehicle counts between the upstream and downstream traffic. The vehicle pressure of a traffic phase \(\phi^{I}\) is defined as the sum of pressures over all traffic movements comprising the traffic phase: \[p_{v}(\phi^{I})=\sum_{(r^{I}_{(i,l)},r^{I}_{(j,m)})\in\phi^{I}}p_{v}(r^{I}_{( i,l)},r^{I}_{(j,m)}) \tag{2}\] The intersection vehicle pressure is defined as the sum of pressures over all traffic movements: \[P^{I}_{v}=\sum_{(r^{I}_{(i,l)},r^{I}_{(j,m)})\in I}p_{v}(r^{I}_{i},r^{I}_{j}) \tag{3}\] Accordingly, we derive person intersection pressure (\(P^{I}_{p}\)) by substituting the vehicle count function \(C_{v}(\cdot)\) of Eq. 1 with a person count function \(C_{p}(\cdot)\): \[P^{I}_{p}=\sum_{(r^{I}_{(i,l)},r^{I}_{(j,m)})\in I}p_{p}(r^{I}_{i},r^{I}_{j}) \tag{4}\] In real-world scenarios, person counts can be generated via automated passenger counters (APCs). 
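For concreteness, the pressure quantities of Eqs. 1-4 reduce to simple density differences summed over movements. The following minimal Python sketch (data structures and names are illustrative, not the authors' implementation) computes movement, phase and intersection pressure, at either the vehicle or the person level depending on which counts are supplied; the negative of the person-level intersection pressure later serves as the reward (Section 4):

```python
# Minimal sketch of Eqs. (1)-(4): movement, phase and intersection pressure.
# A lane is summarized by its count (vehicles or persons) and its capacity;
# a movement is an (incoming lane, outgoing lane) pair; a phase is a set of
# movements. All identifiers here are illustrative.

def movement_pressure(count_in, cap_in, count_out, cap_out):
    """Eq. (1): density difference between upstream and downstream lanes."""
    return count_in / cap_in - count_out / cap_out

def phase_pressure(phase, counts, caps):
    """Eq. (2): sum of movement pressures over the movements of one phase."""
    return sum(
        movement_pressure(counts[l_in], caps[l_in], counts[l_out], caps[l_out])
        for (l_in, l_out) in phase
    )

def intersection_pressure(movements, counts, caps):
    """Eq. (3)/(4): sum over all traffic movements of the intersection.
    Passing person counts instead of vehicle counts yields P_p, whose
    negative is the agent reward (Eq. (8) in Section 4)."""
    return sum(
        movement_pressure(counts[l_in], caps[l_in], counts[l_out], caps[l_out])
        for (l_in, l_out) in movements
    )

# Toy usage: two movements, person counts per lane, equal lane capacities.
counts = {"N_in": 12, "S_out": 3, "E_in": 5, "W_out": 9}
caps = {lane: 40 for lane in counts}
movements = [("N_in", "S_out"), ("E_in", "W_out")]
print(intersection_pressure(movements, counts, caps))  # (12-3)/40 + (5-9)/40 = 0.125
```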
**Definition 5: Active Vehicles (AV)** Literature has shown that vehicle-level traffic conditions can be adequately described by lane counts, including all vehicles in the incoming lanes [24, 22, 18, 15], and queue lengths, including all vehicles stopped in queue waiting to traverse the intersection [17, 22, 27, 16]. In a person-level setting though, the distribution of people's locations along a segment may display significant variance, especially the more the world shifts to a multi-modal HOV setting. Figure 1: (a) Standard four-way intersection with 8 incoming lanes. (b) Traffic movement illustration and pressure calculation. (c) Traffic phase illustration and pressure aggregation. The MaxPressure optimal phase is highlighted. We emphasize two key points: i) Accounting for people movement returns a different optimal phase than the traditional vehicle-level pressure maximization strategy. ii) The introduction of active vehicles has no impact on the vehicle-level approach (the same phase is returned as optimal) but radically affects the person-level approach (the returned phases are not only different but share no common movements). The goal is to assure that only persons capable of traversing the intersection are accounted for in our state space and reward function calculation. Inspired by the work of [51], we introduce the concept of active vehicles (\(AV\)) to describe vehicles that are in range of the controlled intersection within the action interval window. As opposed to the predecessor study, the vehicle range (\(L\)) is not calculated under maximum speed but by deploying the position equation of motion, assuming constant acceleration and a maximum vehicle speed threshold, according to Eq. 5: \[L_{(v,t_{a})}(v_{(v,t_{a})},v^{max}_{v},a^{max}_{v})=v^{max}_{v}\Delta t-\frac{1 }{2\;a^{max}_{v}}(v^{max}_{v}-v_{(v,t_{a})})^{2} \tag{5}\] where \(v_{(v,t_{a})}\) is the speed of vehicle \(v\) at timestamp \(t_{a}\) when action \(a\) is to be taken, \(v^{max}_{v},a^{max}_{v}\) are the vehicle's maximum speed and acceleration, and \(\Delta t\) is the action interval. For lanes upstream of the intersection, we compute the vehicles' maximum feasible projected location at the end of the upcoming action \(a\). Our concept is also extended to outgoing lanes to properly inform the algorithm regarding the impact of the applied policies. For vehicles downstream of the intersection, we are interested in whether the controller's last selected action resulted in the vehicle crossing the intersection and joining the outgoing lane. In this case, the vehicle location at the moment of the previously selected action can be either collected from past location data or approximated similarly to the incoming lane case. Eq. 6 summarizes the definition of the active vehicle set for intersection \(I\) at timestep \(t_{a}\), \(AV^{I}_{t_{a}}\): \[AV^{I}_{t_{a}}=\left\{v\in V:d^{I}_{(v,t_{a})}\leq L_{(v,t_{a})}\right\} \tag{6}\] where \(d^{I}_{(v,t_{a})}\) is the distance of vehicle \(v\) from intersection \(I\) at timestamp \(t_{a}\), and \(V\) is the set of all vehicles. In Section 6.4, we evaluate the efficacy of the AV consideration under different topological settings. Figure 1 illustrates the defined concepts in a standard four-way intersection with 8 incoming and outgoing lanes. Traffic movements and phases are identified and the corresponding pressure values are calculated both at the vehicle and person level, as well as with or without the consideration of AV.
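To make Eqs. 5-6 concrete, here is a minimal Python sketch (illustrative, not the authors' code; Eq. 5 is valid when the speed cap is reached within \(\Delta t\), which holds for the parameter values of Table 1). With the maximum speed of 40 km/hr, maximum acceleration of 2 m/s\({}^{2}\) and the 10 s action interval used in the experiments, it reproduces the active ranges quoted in Section 6.4 (80.25 m for a vehicle at rest, 111.11 m at maximum speed):

```python
# Sketch of Eqs. (5)-(6): active range under constant acceleration with a
# speed cap, and the resulting active-vehicle filter. Names are illustrative.

def active_range(v, v_max, a_max, dt):
    """Eq. (5): farthest position reachable within the action interval dt,
    accelerating at a_max from speed v until the cap v_max is reached
    (assumes (v_max - v) / a_max <= dt, as with the paper's parameters)."""
    return v_max * dt - (v_max - v) ** 2 / (2.0 * a_max)

def active_vehicles(vehicles, dt):
    """Eq. (6): vehicles whose distance to the intersection does not exceed
    their active range. Each vehicle is a dict with illustrative keys."""
    return [
        veh for veh in vehicles
        if veh["dist_to_intersection"] <= active_range(
            veh["speed"], veh["v_max"], veh["a_max"], dt)
    ]

v_max = 40 / 3.6  # 40 km/hr ~ 11.11 m/s (Table 1)
print(round(active_range(0.0, v_max, 2.0, 10.0), 2))    # 80.25 m, vehicle at rest
print(round(active_range(v_max, v_max, 2.0, 10.0), 2))  # 111.11 m, at max speed
```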
Figure 1 highlights that accounting for people movement returns a different optimal phase based on the MaxPressure algorithm than the traditional vehicle-level pressure maximization strategy. Another key observation is that the introduction of active vehicles has no impact on the vehicle-level approach (the same phase is returned as optimal) but radically affects the person-level approach, as the returned phases are not only different but share no common movements. ### Formulation: Decentralized Multi-intersection Traffic Signal Control Each traffic light regulated intersection in the network is controlled by a reinforcement learning agent based on the real-time people distribution along the incoming and outgoing lanes. Under the decentralized structure, agent steps are performed individually by each agent at every distinct action interval with duration \(\Delta t\), while all agents of the network engage in the same learning process. Post training, the controllers will have learned to select the appropriate signal phases that maximize passenger throughput through the intersections. The traffic signal control problem is formulated as a Markov Decision Process \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma\rangle\) [52]. \(\mathcal{S}\) is a finite state space, \(\mathcal{A}\) is a finite action space, \(\mathcal{P}(s^{\prime}|s,a)\,:\,\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]\) is the state transition probability from state \(s\) to state \(s^{\prime}\) determined by action \(a\), and \(\mathcal{R}(s,a)\,:\,\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the reward function defined as \(\mathcal{R}(s,a)=\mathbb{E}[R_{t+1}|s_{t}=s,a_{t}=a]\). At timestep \(t\), agent \(I\) aims to learn a policy \(\pi(a_{t}=a|s_{t}=s)\) returning the optimal action \(a\) given the state \(s\) to maximize the discounted reward: \[\mathcal{J}_{t}(\pi)=\sum_{m=0}^{\infty}\gamma^{m}R_{t+m+1} \tag{7}\] The agent calibrates its estimates of the executed action's utility based on environmental feedback and will potentially adjust the value estimates of the actions leading up to the current action. ## 4 Methodology ### Agent Design In this section, we describe the basic constructs of the reinforcement learning algorithm. The state and action spaces and the reward function are defined at the level of a single intersection \(I\). * State space: The state space includes the number of persons drawn from the active vehicles in each incoming lane and in each outgoing lane of the intersection, \(C_{p}^{AV}(r_{(i,l)}^{I})\) and \(C_{p}^{AV}(r_{(j,m)}^{I})\) respectively, with \(l\in L_{in}^{I}\) and \(m\in L_{out}^{I}\). The state space also includes the current traffic signal phase \(\phi^{I}\). * Action space: The traffic phase \(\phi_{t+1}^{I}\) for the next action interval. For most common transportation intersections with up to four entering and exiting roads, the maximum compatible and non-conflicting phase combinations are eight for each isolated intersection (Fig. 1). As per common practice in RL research on traffic signal control [29, 15, 16, 53, 17], we adopt the acyclic phase paradigm which, as opposed to the cyclic paradigm, does not require a predetermined phase sequence to be imposed or all available phases to appear at least once within a cycle. By enabling phase shortening and skipping, acyclic phasing schemes have demonstrated superior performance [54] and are already implemented in urban environments (e.g. Amsterdam). However, our RL representation is still fully transferable to a cyclic phase modelling scheme. 
* Reward function: For each individual agent, the reward is defined as the opposite of person intersection pressure as derived from Eq. 4: \[R^{I}=-P_{p}^{I} \tag{8}\] ### Base Model: FRAP The FRAP architecture [39] is adopted as the base model for our traffic signal control system. FRAP is a Deep Q-Learning method specifically designed for traffic signal control problems. In vehicle-level traffic optimization, FRAP captures the competition relation between different traffic movements, achieving superior performance and improved convergence. The authors also illustrated the model's transferability and adaptability to different traffic signal settings, road structures and unbalanced traffic flows. The model is designed based on the principles of: * Competition: Phases with higher traffic demand need to be prioritized. * Invariance: Flipping or rotation of traffic flows along the intersection should not affect algorithmic performance. The prediction of the Q-values is divided into three stages: 1. Phase demand modeling: Obtains a representation for the demand of each signal phase. The state features for each movement are extracted from the simulator and passed through two fully-connected layers. The outputs for the non-conflicting movements of every phase are added to generate the phase demand for the green signal. 2. Phase pair representation: Establishes the phase pair demand embeddings and applies convolutional layers with 1x1 filters to extract the phase pair representations, enabling the competing phases to interact with each other. 3. Phase pair competition: Predicts Q-values for each phase. The relative priorities of each phase are computed by multiplying the phase pair demand representation with a phase competition mask and applying an additional convolutional layer with a 1x1 filter. The row sum of this pairwise competition matrix returns the array of Q-values, whose maximum value dictates the action to be selected. All agents (traffic signal regulated intersections) of the evaluated network share the same FRAP model. The replay memory stores experiences (observations, actions and rewards) from all intersections, which are used for the model's parameter updating. ## 5 Experiment All of our experiments are conducted in an open-source microscopic traffic simulator called CityFlow [55], whose interface supports RL training for large-scale traffic signal control. Each green signal is followed by a three-second yellow time, during which vehicles always choose to stop if possible, and a two-second all-red time to prepare the signal phase transition. Right-turning vehicles are considered to be allowed to turn during any phase but always yield priority to through-moving vehicles in case of conflict. The action interval is set to 10 s as per common practice in RL research on acyclic TSC [15, 29, 16, 53, 17]. [15] also concluded that the action time interval has minimal influence on performance when in the range of 5 to 25 seconds. The adopted values for all simulation and model parameters are detailed in Table 1. ### Mode Share Scenarios Our experiments are set up by establishing a total people count served by the transportation network when it is operating almost at capacity. This is assessed through a critical movement analysis on the intersections' hourly expected flows [56]. We maintain those people counts constant throughout the mode share scenarios investigating different levels of HOV adoption. 
Note that we assume no latent demand in the network due to the increased level of service, in order to quantify the benefits of mode shift with person-based RL traffic signal control at the intersections. Determining mode share percentages is a complex task and distributions may vary per location, especially after the COVID-19 pandemic. For the lowest HOV adoption scenario, the selected numbers aim to reflect the significant reduction transit ridership took globally during the pandemic. Even in areas such as the Bay Area, where transit was highly adopted, recent studies estimate that between 66% and 78% of commuters drive alone when commuting [57]. \begin{table} \begin{tabular}{l c l l} \hline **Simulation** & & & \\ \hline Simulation time & 1 hour & CityFlow step length & 1 s/step \\ Number of phases & 8 & Type of Signal & Acyclic \\ Yellow and all red period & 5 s & Action interval & 10 s \\ Vehicle maximum speed & 40 km/hr & Vehicle maximum acceleration & 2 m/s\({}^{2}\) \\ Vehicle maximum deceleration & 4.5 m/s\({}^{2}\) & Vehicle minimum gap & 2.5 m \\ Headway & 2 s & SOV and Carpool vehicle length & 5 m \\ Microtransit (occupancy 10) vehicle length & 7 m & Microtransit (occupancy 20) vehicle length & 9.5 m \\ Microtransit (occupancy 30) vehicle length & 12 m & Public transit vehicle length & 15 m \\ \hline **FRAP Model** & & & \\ \hline Discount factor \(\gamma\) & 0.6 & Episodes & 200 \\ Learning rate & 0.001 & Buffer size & 10000 \\ Batch size & 32 & Greedy policy \(\epsilon\) & 1 \(\rightarrow\) 0 \\ Learning start & 0 & Optimizer & Adam \\ Number of layers (phase demand modeling) & 2 & Number of convolution layers w/ 1x1 filters & 20 \\ \hline \end{tabular} \end{table} Table 1: Simulation and model parameters. Figure 2: (a) Adjusted driving commute mode share statistics drawn from the Bureau of Transportation Statistics for the year 2021. (b) Vehicle occupancy profiles for the HOV adoption scenarios (low, light, medium and high). Across each of the four scenarios, 15% of the single occupancy vehicle individuals shift to microtransit vehicles and public transit. Carpool ridership has been preserved at the same levels. As shown in Fig. 2a, commute mode data was drawn from the Bureau of Transportation Statistics for 2021 in California to extract the mode share for persons driving vehicles. People walking, working from home and biking were excluded. The percentage of the taxi, motorcycle, or other class was merged into the drive alone class. The carpooling class is split into two sub-classes of occupancy two and three with ratios 2/3 and 1/3 respectively. Fig. 2b summarizes the defined mode share scenarios. For our experimental setup, commute mode changes across scenarios with approximately 15% of the single occupancy vehicle individuals shifting to microtransit vehicles and public transit, while carpool ridership has been preserved at the same levels. The low HOV adoption scenario has transit set to its lowest levels of the last decade, allowing us to explore the potential of person-level traffic signal control optimization in a wide spectrum of HOV penetration environments. ### Infrastructure Configurations The set of synthetic experiments was designed to test the performance and efficiency of HumanLight across different road network configurations. These are outlined below and displayed in Fig. 3: 1. Single intersection 2. Corridor of six intersections (1x6) with buses operating only through the corridor. 3. Grid of intersections (4x4) with two fixed bus route-lines. 
For the single four-way intersection experiment, each road segment has one through and one left turning lane, while average hourly traffic demands are described in Table 2. We impose high left turning volumes, equal to the through-moving traffic of the secondary direction. This allows us to test the robustness of HumanLight in handling demand of different distributions across directions compared to the following configurations. For the corridor and grid configurations, each road segment has one through, one left turning and one right turning lane. The turning ratios at intersections are on average set as 10% left, 60% straight and 30% right, as statistical analyses on real-world data sets have showcased [17, 58]. Figure 3: Road Network Configurations: (a) Isolated four-way intersection where each road segment has one through and one left turning lane, (b) Six intersection corridor and (c) 4X4 grid of intersections with three lanes per road segment (one through, one left turning, and one right turning). Fixed public transit routelines are illustrated in green. The road networks have segments of 300m in length and connect at four-way intersections. The synthetic data fed into the simulator include bi-directional flows with turning traffic. To assess our traffic signal control models under various traffic demands, the configurations of Table 3 are realized, describing the different mode shift scenarios. \begin{table} \begin{tabular}{|l|c|} \hline **Demand Scenario** & \begin{tabular}{c} **Arrival Rate** \\ **(veh/hr/road)** \\ \end{tabular} \\ \hline Low HOV adoption & 700 \\ \hline Light HOV adoption & 571 \\ \hline Moderate HOV adoption & 469 \\ \hline High HOV adoption & 372 \\ \hline \end{tabular} \end{table} Table 3: Demand configurations for corridor (1x6) and grid (4x4) experiments. The maximum speeds are set equally for all vehicle types, since the imposed speed limits are not that high at 40 \(km/hr\). ### Methods of comparison In this section, we introduce classical TSC methods from the transportation field [12, 53, 11], as well as current RL-based methods. For the purposes of evaluating HumanLight's performance, all methods, including their parameter settings, are tuned. The best performing method will be used as the benchmark for the computation of the evaluation metrics. * **Webster's Formula** [13]: Fixed-time controllers use a predetermined cycle and phase time plan. Webster's method calculates the cycle length and phase split for a single intersection setting. Under the assumption of uniform traffic flow during a certain period of time, using the closed-form solution of Eq. 9, Webster's formula derives the optimal cycle length \(C\), minimizing the travel time of all vehicles passing the intersection. \[C(V_{c})=\frac{N*t_{L}}{1-\frac{V_{c}}{3600/h*PHF*(v/c)}} \tag{9}\] where \(N\) represents the number of phases, \(t_{L}\) the total loss time per phase (used to model the all-red time and the acceleration and deceleration of vehicles), parameter \(h\) the saturation headway time, \(PHF\) the peak hour factor (used to model traffic demand fluctuations within peak hours), and \(v/c\) the desired volume-to-capacity ratio. \(V_{c}\) expresses the sum of all critical lane volumes, with \(V_{c}=\sum_{p_{i}}^{N}V_{c}^{p_{i}}\), where \(V_{c}^{p_{i}}\) is the critical lane volume for phase \(p_{i}\). The critical lane volume is determined from the approaching lane with the highest ratio of traffic flow to saturation flow during a phase. As for the phase split, having established the cycle length, green times are calculated proportionally to the critical lane volumes served by each phase according to Eq. 10: \[\frac{t_{p_{i}}}{t_{p_{j}}}=\frac{V_{c}^{p_{i}}}{V_{c}^{p_{j}}} \tag{10}\] where \(t_{p_{i}}\) and \(t_{p_{j}}\) correspond to the phase durations for phases \(p_{i}\) and \(p_{j}\) respectively. 
* **Self-Organizing Traffic Light (SOTL)** [59, 60]: SOTL is an actuated method that adaptively regulates traffic lights based on a hand-tuned threshold on the number of waiting vehicles. The controller switches the phase of a lane to green if the required minimum green phase duration is met for the current phase and provided that the number of vehicles on the lane exceeds the hand-tuned threshold. SOTL resembles a fully-actuated controller but instead of sending requests for green when a single vehicle is approaching, it does so when the number of vehicles exceeds the threshold. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{North/South (veh/hr/lane)} & \multicolumn{2}{c|}{East/West (veh/hr/lane)} & \multirow{2}{*}{v/c} \\ \cline{1-1} **Demand Scenario** & Through & Left & Through & Left & \\ \hline Low HOV adoption & 220 & 220 & 350 & 220 & 0.99 \\ \hline Light HOV Adoption & 194 & 194 & 307 & 194 & 0.87 \\ \hline Moderate HOV Adoption & 157 & 157 & 249 & 157 & 0.71 \\ \hline High HOV Adoption & 121 & 121 & 190 & 121 & 0.54 \\ \hline \end{tabular} \end{table} Table 2: Average flow per direction per lane in the single intersection experiment across all demand scenarios. * **MaxPressure** [35]: Max Pressure sets the optimization objective as minimizing the vehicle pressure of phases for individual intersections, as defined in Section 3. The method greedily selects the phase with the maximum pressure, activates it and keeps the selected phase for a given period of time \(t_{min}\). A sensitivity analysis on parameter \(t_{min}\) is carried out every time the algorithm is applied to identify the best performing value. * **MPLight** [17]: MPLight is a state-of-the-art deep reinforcement learning method enabling large scale road network control. Aside from parameter sharing, it accommodates a SOTA Q-network model structure (FRAP) as proposed in [39] and can incorporate the concept of vehicle pressure in the reward function as proposed in [16]. ### Evaluation Metrics Both person and vehicle based metrics will be used to evaluate the impact of the proposed traffic signal control strategy. These are: 1. Average Vehicle Travel Time (AVTT) & Average Person Travel Time (APTT) \[AVTT=\sum_{v\in V}TT_{v}\Bigg{/}|V| \tag{11}\] where \(TT_{v}\) denotes the travel time of vehicle \(v\) and \(|V|\) is the cardinality of the vehicle set \(V\), and \[APTT=\sum_{v\in V}TT_{v}\cdot O_{v}\Bigg{/}\sum_{v\in V}O_{v} \tag{12}\] where \(O_{v}\) is the occupancy of vehicle \(v\). 2. Vehicle Queue Length (VQL) & Person Queue Length (PQL) \[VQL(t)=\Bigg{|}\left\{v\in V:v_{(v,t)}\leq v_{stop}\right\}\Bigg{|} \tag{13}\] where \(v_{stop}\) is the speed threshold at or below which a vehicle is considered stopped rather than in motion, set at 0.1 \(m/s\), and \[PQL(t)=\sum_{v\in V:v_{(v,t)}\leq v_{stop}}O_{v} \tag{14}\] 3. Average Vehicle Delay (AVD) & Average Person Delay (APD) \[AVD=\sum_{v\in V}(TT_{v}-FFTT_{v})\Bigg{/}|V| \tag{15}\] where \(FFTT_{v}\) is the free-flow travel time of vehicle \(v\), derived accounting for the maximum speed allowed on 
each road segment along the vehicle's route, and \[APD=\sum_{v\in V}(TT_{v}-FFTT_{v})\cdot O_{v}\bigg{/}\sum_{v\in V}O_{v} \tag{16}\] A short computational sketch of these occupancy-weighted metrics is given below. ## 6 Results This section quantitatively highlights the key contributions of HumanLight. Firstly, the overall performance of the algorithm is discussed compared to SOTA controllers. Then, the socially equitable allocation of green times achieved by HumanLight is illustrated. A vehicle stopping behavior analysis as well as edge cases of vehicle queues are displayed to assure the stability of the framework. The impact of incorporating the concept of active vehicles in the formulation of our RL model is quantified through systematic experiments at different network structures. The capability of HumanLight to regulate the aggressiveness of HOV prioritization is showcased via a modification in the state embedding. Finally, the impact of the discount factor parameter \(\gamma\) on the generated phase profile is investigated. The presented results have been reproduced and validated across all network configurations considered (single intersection, corridor and grid). We selectively include results from specific networks in this section, while adding the rest to the Appendix to avoid overwhelming readers with potentially similar patterns. As per common practice in RL research [49, 61, 62, 63], multiple independent trials of learning (three in our study) are run for each scenario, and the evaluation metrics (travel times, delays, queues) reported in the results are derived from averaging the last 20 episodes across all runs to smooth out episodic oscillations. The tables both in the main body and in the appendix section provide the standard deviations of those values in parentheses. ### Convergence and Overall Performance HumanLight displays strong performance across all considered road network configurations. The algorithm consistently converges after approximately 160 episodes of training across all HOV adoption scenarios (Fig. 4 top). We observe that average person intersection pressure, the negative of our reward function, follows a similar diminishing trend as average person delays (Fig. 4 bottom) and queues (Fig. A1), validating the suitability of person pressure for the task of optimizing people throughput at intersections. Figure 4: Convergence curves of HumanLight's reward function (top) and average person delay (bottom) throughout the learning episodes for all network structures across all HOV adoption scenarios. The episodic evolution of average person pressure closely tracks that of average person delays. Average person queues display similar trends and are provided in Fig. A1. Fig. 5 summarizes the performance of the various controllers in a 4x4 grid setting. Not distinguishing between vehicle types when it comes to determining intersection priorities, vehicle-level optimization controllers render similar average vehicle and person delays. We observe, however, that vehicle delays are consistently lower than person delays in the corridor (Table A2) and grid experiments (Fig. 5, Table A3). This is attributed to transit's routelines never including a right turn in those configurations (Fig. 3). With the most dense, occupancy-wise, vehicle type never making right turns, which are independent from the current phase, person travel times are expected to be slightly higher. Instead, in the single intersection setting where no right turns exist, the trend does not appear (Table A1). 
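As referenced after Eq. 16, the person-level quantities reported throughout this section are the occupancy-weighted metrics of Section 5.4. The following minimal Python sketch (illustrative record layout and names, not the authors' implementation) shows how they reduce to occupancy-weighted averages over completed trips:

```python
# Sketch of the evaluation metrics of Section 5.4 (Eqs. (11)-(16)).
# Each completed trip is a record with travel time "tt", free-flow travel
# time "fftt" and occupancy "occ"; the record layout is illustrative.

def avg_vehicle_travel_time(trips):                      # Eq. (11)
    return sum(t["tt"] for t in trips) / len(trips)

def avg_person_travel_time(trips):                       # Eq. (12)
    total_people = sum(t["occ"] for t in trips)
    return sum(t["tt"] * t["occ"] for t in trips) / total_people

def avg_person_delay(trips):                             # Eq. (16)
    total_people = sum(t["occ"] for t in trips)
    return sum((t["tt"] - t["fftt"]) * t["occ"] for t in trips) / total_people

def person_queue_length(vehicles, v_stop=0.1):           # Eq. (14)
    return sum(v["occ"] for v in vehicles if v["speed"] <= v_stop)

# Toy check: one 30-person bus delayed 60 s versus three undelayed SOVs.
trips = [{"tt": 300, "fftt": 240, "occ": 30}] + \
        [{"tt": 240, "fftt": 240, "occ": 1}] * 3
print(avg_person_delay(trips))  # 60*30/33 ~ 54.5 s, vs vehicle-level 15 s
```

The toy check illustrates why the person-weighted metrics reward HOV prioritization: a single delayed high-occupancy vehicle dominates the person-level average while barely moving the vehicle-level one.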
By rewarding HOV riders with more green times, HumanLight achieves reduced person delays and queues. As the penetration of HOVs increases, the improvements over the SOTA vehicle-level optimization controller in person delays and queues grow, even exceeding 55% in the high HOV adoption scenario. In Fig. 5c&d, we observe that HumanLight achieves improvements even on vehicle-level metrics, attributed to the consideration of active vehicles in the RL problem formulation. Figure 5: Performance evaluation of HumanLight versus the SOTA vehicle-level optimization controllers in the 4x4 grid setting: the person-level benefits (a&b) of HumanLight increase as more people shift towards HOV alternatives. Thanks to the consideration of active vehicles, vehicle-level metrics (c&d) also improve. The corresponding results for the single intersection and corridor experiments are provided in Tables A1, A2 and A3. ### Socially Equitable Allocation of Green Times Apart from superior performance compared to SOTA approaches, HumanLight achieves a fair allocation of green times to vehicles based on the number of passengers they are carrying. Figure 6 demonstrates the distributions of travel times for the different vehicle types over the last episode of training (left) and the linear least-squares regression fitted on the box-plot means over the last 20 episodes of each run to smooth out the effect of fluctuations across episodes (right). All vehicles are considered in the single intersection setting, while only vehicles traversing all six intersections are accounted for in the corridor experiments to assure all vehicles share the same route length and thus have comparable travel times. In Fig. 6c, carpools and microtransit are purposefully not displayed due to data sparsity, as a limited amount of those vehicles fulfilled the entire six-intersection route, especially in the lower HOV adoption scenarios. 
The peaks in the normalized stop frequency for transit can be explained by their routelines. As shown in Fig. 3, bus lines in the grid scenario never take a right turn. In contrast, all other vehicles' routes may contain right turns that use dedicated right lanes, which are independent of the current phase and yield priority only to through-moving vehicles in cases of conflict.

Figure 6: (a,c) Travel time distribution for different vehicle types over the last episode (200). (b,d) HumanLight prioritizes vehicles based on their occupancy as shown by the linear least-squares regression fitted on the box-plot means of all vehicle types over the last 20 episodes of each run.

A crucial aspect of implementing a human-centric approach, such as HumanLight, is the impact on vehicle-level metrics. Prioritizing HOVs can lead to network failure by generating queues in non-prioritized directions of traffic. Spill-back effects are caused when vehicle queues exceed road capacities, significantly deteriorating the operational efficiency of the signalized intersections. To quantify that effect on the 4x4 grid network, we evaluate the maximum queue length observed on the incoming lanes of each controlled intersection at every action interval. Fig. 8a compares the distributions of the time-averaged maximum queues generated by HumanLight versus MPLight, the state-of-the-art vehicle-level approach. Thanks to the consideration of active vehicles in the formulation of the RL problem, HumanLight overall achieves similar or even better maximum vehicle queues while still prioritizing the maximization of people throughput. Only in the low and light HOV adoption scenarios do maximum vehicle queues degrade at some intersections compared to MPLight. Fig. 8b isolates the time-averaged value of the maximum vehicle queues generated by HumanLight for each intersection in the light HOV scenario. In parentheses, the percentage change from MPLight is displayed. We highlight that MPLight is a SOTA vehicle-level controller, not currently in practice. The worst-performing intersections experience the heaviest public transit load by serving conflicting and left-turning bus routelines. To accommodate the arrival of buses, the controller dictates more frequent phase changes, overwhelming the intersection with stopped vehicles in the non-served traffic movements. Even though for some intersections the maximum queues observed degrade compared to MPLight, the absolute values (illustrated in bold) are still low and far below the segments' capacity at jam density. For higher HOV adoption scenarios, the active vehicle consideration in HumanLight renders even better comparative results because of the increased penetration of microtransit and buses (Fig. 8c).

Figure 7: Vehicle stopping behavior: Both in terms of normalized per-segment stopping frequency (a) as well as average stop duration (b), HumanLight achieves fewer and shorter stops for HOVs as opposed to the vehicle-level approach (MPLight), which generates similar stopping patterns for all vehicles.

### Active Vehicles: State Space and Reward Formulation Evaluation

This section evaluates the efficacy of active vehicles (AVs) in the state space and reward function formulation. Intuitively, AVs are conceptualized to provide a more accurate representation of the traffic dynamics. Accounting for vehicles and people not in range of the intersection within the time horizon of the action interval may potentially degrade the performance of control algorithms.
Although vehicle counts across the entirety of lanes are extensively used in the literature to create embeddings describing the traffic conditions of road segments, passenger loads can significantly fluctuate across vehicles, especially the more the world shifts towards a multimodal reality. These variations can make a substantial difference in the controller's decision making process when determining which phase needs to be prioritized on a person-level approach (Fig. 1). To demonstrate the motivation and the benefits of considering the active vehicles in our person-level approach, we explore three different formulations. In all three alternatives, the current traffic signal phase \(\phi^{I}\) is included in the state space, while the way people counts in the incoming and outgoing lanes are encoded varies as described in Table 4. All alternatives are illustrated in a corridor of three intersections with the low HOV adoption scenario, while the demand is fixed nearly at capacity as described in Section 5.

As shown in Eq. 5 and Eq. 6, whether a vehicle is considered active depends on its relative location on the road segment, its current speed, and its maximum speed and acceleration. For the set values of vehicle features (Table 1), the active range for a vehicle leaving the intersection is \(80.25m\) under zero speed and \(111.11m\) under maximum speed. These values imply that approximately \(1/3\) of the typical road segment length (\(300\)m) used in our experiments lies within the active vehicle range. To assess the sensitivity of AVs to the input parameters, we perform the three evaluation alternatives on an additional network structure with longer segments (\(800\)m).

We observe that all formulations converge after approximately \(160\) episodes (Fig. 9). When considering person counts from active vehicles versus all vehicles, decreases of around \(10\%\) in vehicle and person delays and queues are achieved in the \(300\)m segment corridor (Table 5). The percentage improvements of those metrics almost reach or even exceed \(50\%\) in the \(800\)m segment corridor, while the actual values of delays and queues are similar to the \(300\)m segment corridor. Those results are in alignment with the intuitive expectation that the longer the road segments, the greater the impact of considering AVs. Table 5 illustrates how the consideration of both active and all vehicles in the state space achieves similar performance as the active vehicle representation alone, without further benefits to justify the additional complexity in the embedding. The same experiments were conducted in the same \(1\)x\(3\) corridors under lighter traffic with an average intersection \(v/c\approx 0.65\) (Table A4). Results are consistent with Table 5. We do notice, though, that the improvements for the corridor with \(800\)m road segments under light traffic flatten out at approximately \(20\%\), hinting that the benefits of AVs are higher for heavier traffic demand.

Figure 8: (a) Distributions of the maximum queue lengths observed on the incoming lanes for all intersections at every action interval on the 4\(\times\)4 grid network as generated by MPLight and HumanLight. (b),(c) Maximum vehicle queues observed for the light and high HOV adoption scenarios respectively: In the light scenario, maximum queues deteriorate on intersections serving the heaviest public transit load, while in the high scenario they improve throughout. Per intersection maximum queues for the other two scenarios are provided in Fig. A2.
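To make the quoted active ranges concrete, the sketch below recomputes them from kinematics: maximum-rate acceleration up to the maximum speed, then cruising for the remainder of the action interval. The parameter values are assumptions (a maximum speed of 40 km/h, a maximum acceleration of \(2m/s^{2}\), and a \(10s\) action interval), chosen because they reproduce the stated \(80.25m\) and \(111.11m\); the exact values in Table 1 may differ.

```python
# Distance a vehicle can cover within one action interval, starting at
# speed v0, accelerating at its maximum rate up to its maximum speed and
# then cruising. Parameter values below are assumptions chosen to
# reproduce the quoted 80.25 m and 111.11 m active ranges.

V_MAX = 40 / 3.6  # m/s, assumed maximum speed (40 km/h)
A_MAX = 2.0       # m/s^2, assumed maximum acceleration
DT = 10.0         # s, assumed action interval length

def active_range(v0, v_max=V_MAX, a=A_MAX, dt=DT):
    t_accel = min((v_max - v0) / a, dt)              # time spent accelerating
    d_accel = v0 * t_accel + 0.5 * a * t_accel ** 2  # distance while accelerating
    d_cruise = v_max * max(dt - t_accel, 0.0)        # distance while cruising
    return d_accel + d_cruise

print(round(active_range(0.0), 2))    # 80.25 m: leaving the stop line at rest
print(round(active_range(V_MAX), 2))  # 111.11 m: already at maximum speed
```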
### Regulating the aggressiveness of HOV prioritization

Apart from enabling reduced passenger travel times and socially equitable green time allocation at signalized intersections, HumanLight empowers policymakers and traffic engineers to regulate the aggressiveness of the prioritization of the HOV fleet. We achieve this via a modification in the state embedding where vehicle occupancies are encoded. This way, the travel time benefits across vehicle types of different occupancies are fully in the hands of the system operator rather than generated through a black-box approach. This concept is illustrated in Fig. 10a, where five different encodings of person counts are displayed. The dark green representation (\(e=1\)) depicts the person count encoding as defined in Section 3. With \(e\) decreasing, the impact of microtransit and public transit in the reward function compared to SOVs and carpools diminishes. On the other extreme, the red representation (\(e=0\)) completely cancels out the person count function, turning it into the vehicle counting function and treating all vehicles the same.

\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
Formulation & Description & State Space Embedding Size & Person Pressure Computed From \\
\hline
All Vehicles & Person counts drawn from all vehicles & \(|L^{I}_{in}+L^{I}_{out}|+1\) & All Vehicles \\
\hline
Active Vehicles & Person counts drawn from active vehicles & \(|L^{I}_{in}+L^{I}_{out}|+1\) & Active Vehicles \\
\hline
All \& Active Vehicles & Person counts drawn from both all and active vehicles & \(2\cdot|L^{I}_{in}+L^{I}_{out}|+1\) & Active Vehicles \\
\hline
\end{tabular}
\end{table}
Table 4: Formulations explored around the efficacy of the active vehicles consideration in the state space and reward function.

Figure 9: Person delays (top) and queues (bottom) across the three representations of the state space for the consideration of active vehicles. Two network structures are tested, one with segments of length 300m (left) and one with segments of length 800m (right).

We evaluate how HumanLight can regulate the prioritization aggressiveness of vehicles of different occupancies in our high adoption mode share scenario. The high HOV adoption scenario was selected to provide a higher number of samples (vehicles) in the microtransit and public transit categories, making the line fitting more robust by definition. Results were reproduced and remain consistent across all mode share scenarios. The evaluation is performed in the single intersection setting. That way, every vehicle in our system follows a route of the same length, whether it includes a turn or not, thus making their travel times and delays comparable. Additionally, there is no bias originating from pre-specified bus routes. For example, in the corridor case where buses run along the corridor, vehicles riding from one end to the other without exiting are more likely to get a higher allocation of green times compared to turning vehicles.

Figure 10b illustrates how the prioritization of HOVs intensifies with the increase of parameter \(e\). A more detailed snapshot of those distributions on the last episode of training can be found in Fig. A3. From evaluating the fitted lines for \(e=0.75\) and \(e=1.00\) in Figure 10b, we observe that the slope of the travel time vs. vehicle occupancy fitted line does not change much in absolute value. That is because HOVs cannot be further over-prioritized. Thus, the two scenarios return similar handling of inter-vehicle conflicts during arrivals at the intersection.
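To make the role of the exponent \(e\) concrete, the sketch below shows one encoding consistent with the description above, in which each vehicle contributes its occupancy raised to the power \(e\): \(e=1\) recovers true person counts while \(e=0\) degenerates to a plain vehicle count. The exact functional form behind the curves of Fig. 10a is an assumption here, as is the example lane composition.

```python
# One plausible person-count encoding: each vehicle contributes
# occupancy**e, so e = 1 yields person counts and e = 0 yields a plain
# vehicle count. Intermediate e values damp the influence of HOVs.

def encoded_person_count(occupancies, e):
    return sum(occ ** e for occ in occupancies)

lane = [1, 1, 3, 10, 40]  # two SOVs, a carpool, microtransit, a bus (hypothetical)
for e in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"e={e:4.2f}: {encoded_person_count(lane, e):6.2f}")
# e=0.00 -> 5.00 (vehicle count) ... e=1.00 -> 55.00 (person count)
```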
As intuitively expected, the fitted line for \(e=0\) has a slope of \(0.01\), corresponding to a vehicle-level optimization approach. We need to stress that the travel time of \(78s\) corresponds to a delay of \(24s\) from the minimum free-flow travel time, which is approximately \(5s\) less than the vehicle delay of MPLight (Table A1) in the same scenario. That improvement is a result of the active vehicle consideration. Overall, operators of HumanLight should take input from behavioral studies that evaluate the travel time elasticity of commuter mode choice. The aggressiveness of HOV prioritization should be scientifically informed so that it incentivizes riders to shift modes without over-penalizing SOV travel times to the point of discouraging commuters.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{**Delay (s)**} & \multicolumn{4}{c|}{**Queue**} \\
\cline{2-9}
\multicolumn{1}{c|}{} & Vehicle & \% Change & Person & \% Change & Vehicle & \% Change & Person & \% Change \\
\hline
\multicolumn{9}{|c|}{_Corridor 1x3 (300m segments)_} \\
\hline
All Vehicles & 67.99 (2.40) & Baseline & 49.98 (2.08) & Baseline & 31.70 (1.06) & Baseline & 45.01 (2.47) & Baseline \\
\hline
Active Vehicles & 61.10 (4.05) & **-10.13\%** & **45.07 (4.24)** & **-9.82\%** & 28.46 (1.97) & **-10.22\%** & 40.12 (4.87) & **-10.64\%** \\
\hline
All \& Active Vehicles & 61.49 (2.82) & -9.56\% & 45.25 (2.30) & -9.46\% & 28.64 (1.42) & -9.65\% & 40.18 (2.51) & -10.73\% \\
\hline
\multicolumn{9}{|c|}{_Corridor 1x3 (800m segments)_} \\
\hline
All Vehicles & 116.10 (5.69) & Baseline & 93.81 (6.98) & Baseline & 55.23 (2.51) & Baseline & 87.11 (13.64) & Baseline \\
\hline
Active Vehicles & 59.95 (3.27) & **-48.36\%** & **44.35 (2.76)** & **-52.72\%** & 28.00 (1.69) & **-49.30\%** & 38.67 (3.25) & **-55.61\%** \\
\hline
All \& Active Vehicles & 60.35 (2.23) & -48.02\% & 44.69 (1.89) & -52.36\% & 28.05 (1.13) & -49.21\% & 38.16 (2.02) & -56.19\% \\
\hline
\end{tabular}
\end{table}
Table 5: Impact of active vehicle consideration on corridors of different segment lengths. Higher benefits are observed for longer road segments.

Figure 10: (a) Five modified encodings of the person count function for the different vehicle types. (b) Linear least-squares regression fitted on means of travel times for all vehicle types.

### Impact of Discount Factor \(\gamma\) on Phase Profile

Of the hyper-parameters tuned to optimize the performance of our reinforcement learning model, the discount factor \(\gamma\) received the most experimentation. Apart from a significant impact on performance, the generated policies rendered different phase profiles. This section aims to advise system operators on the impact of prioritizing immediate versus long-term rewards. Four values are tested to capture the full spectrum of the effect, ranging from the fully myopic \(\gamma=0\) to \(\gamma=0.90\), which heavily weighs future rewards as described in Eq. 7. Similar to the active vehicle analysis (Section 6.4), we perform our evaluation on a corridor of three intersections with 300m-long segments to capture any network-level effects. We illustrate results under heavy traffic demand (average intersection \(v/c\) exceeding 0.92). In terms of convergence, for the lowest \(\gamma\) value (\(\gamma=0.00\)) we observe significant oscillations even after 200 episodes of training (Fig. 11).
Performance-wise, the best value for hyper-parameter \(\gamma\) is 0.60, minimizing delays and queues both in absolute value and in standard deviation (Table 6). When evaluating the impact of the discount factor \(\gamma\) on the profile of the generated phases (Table 7 and Fig. 12), we notice that the more short-sighted the controller is (smaller \(\gamma\) values), the fewer phase changes occur, translating to longer durations of the imposed phases. This pattern originates from the imposition of the 5-second yellow and all-red period before the initiation of a new (different from the current) phase. A myopic, greedier algorithm will learn to postpone that penalty period, during which no vehicles are allowed to traverse the controlled intersection. Instead, when considering a longer time horizon in the optimization cost function, the controller will be more eager to endure the inevitable penalty period of a phase switch at the appropriate (as dictated by the cost function) point in time. As a benchmark, we note that a typical 8-phase fixed controller with a 120-second signal length results in \(3600/120\cdot 8=240\) changes per hour. Therefore, the increased number of phase changes (Table 7) with the increase of \(\gamma\) should not pose any threat to pedestrian crossing waiting times, but rather the opposite. Although in this paper HumanLight adopts the value \(\gamma=0.60\) thanks to its superior performance and balanced phase profile, system operators may choose to sacrifice travel time gains for a desired phase profile. Although HumanLight is built on the acyclic phase scheme, it is fully compatible with a fixed-sequence (cyclic) phase configuration where the algorithm would tune phase extensions and shortenings.

For consistency, the evaluation is also performed under light traffic (average intersection \(v/c\approx 0.65\)). The results are provided in the appendix (Table A5 and Figs. A4 & A5). Under light traffic, tuning \(\gamma\) seems to be less important, as \(\gamma=0.60\) and \(\gamma=0.90\) perform equally well. Intuitively, lighter traffic implies the controller will require fewer consecutive action intervals to fully accommodate the traffic of each direction. This is validated when looking at the number of phase changes per hour (Table 7), where we observe that reduced vehicle density leads to increased phase changes. The phase duration profile varies even less than in the heavy traffic case, as long-term implications are not as significant.

Figure 11: Evolution of average person delay and queue for the different values of discount factor \(\gamma\) during episodic training for the high traffic demand configuration.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Average Delay**} & \multicolumn{2}{c|}{**Average Queue**} \\
\hline
\(\gamma\) & Vehicle (s) & Person (s) & Vehicle (vehicles) & Person (persons) \\
\hline
0.00 & 92.59 (36.45) & 75.59 (36.18) & 43.02 (15.25) & 61.12 (18.71) \\
\hline
0.30 & 70.52 (11.58) & 53.41 (10.55) & 33.07 (5.31) & 48.15 (9.46) \\
\hline
**0.60** & 61.10 (4.05) & **45.07 (4.24)** & 28.46 (1.97) & **40.12 (4.87)** \\
\hline
0.90 & 65.40 (7.85) & 51.00 (7.70) & 31.37 (5.38) & 44.25 (5.45) \\
\hline
\end{tabular}
\end{table}
Table 6: Impact of discount factor \(\gamma\) on performance metrics.
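A toy computation (hypothetical reward numbers, not simulator output) illustrates the mechanism: switching phases incurs the 5-second yellow/all-red penalty immediately, and only a sufficiently far-sighted controller sees that the post-switch throughput repays it.

```python
# Discounted return G = sum_t gamma**t * r_t for two hypothetical action
# sequences at a single intersection: keep the current phase (queues in
# the non-served direction keep growing) versus switch now (pay the
# yellow / all-red penalty once, then drain the queues).

def discounted_return(rewards, gamma):
    return sum(r * gamma ** t for t, r in enumerate(rewards))

stay   = [-1, -4, -7, -10, -13, -16]  # pressure builds in the non-served direction
switch = [-9, -1, -1, -1, -1, -1]     # immediate switch penalty, then relief

for gamma in (0.0, 0.3, 0.6, 0.9):
    prefers = "switch" if discounted_return(switch, gamma) > discounted_return(stay, gamma) else "stay"
    print(f"gamma={gamma}: prefers to {prefers}")
# gamma=0.0 and 0.3 postpone the switch (fewer phase changes);
# gamma=0.6 and 0.9 accept the penalty (more frequent phase changes).
```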
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|}
\cline{3-6}
\multicolumn{2}{c|}{} & \multicolumn{4}{c|}{**Discount factor \(\gamma\)**} \\
\cline{3-6}
\multicolumn{2}{c|}{} & 0.00 & 0.30 & 0.60 & 0.90 \\
\hline
Phase changes per hour & v/c \(\approx\) 0.92 & 56.0 (12.0) & 94.6 (16.3) & 121.9 (10.2) & 149.2 (9.8) \\
\cline{2-6}
 & v/c \(\approx\) 0.65 & 144.3 (28.0) & 151.00 (9.1) & 151.33 (23.1) & 175.00 (17.3) \\
\hline
\end{tabular}
\end{table}
Table 7: Sensitivity analysis on the number of phase changes per hour for different values of the discount factor \(\gamma\) at two different traffic demand levels on a synthetic 1x3 corridor of intersections.

Figure 12: Phase profile for different values of \(\gamma\) under heavy traffic: A myopic controller, prone to postponing the yellow and all-red period, will result in fewer phase changes and longer phase durations.

## 7 Conclusions

This study presents HumanLight, a decentralized adaptive signal control algorithm, which to the best of our knowledge is the first person-based approach founded on reinforcement learning in the context of network-level control. Our traffic management solution requires fully developed V2I communication technology to optimize people's throughput at intersections. HumanLight is designed to equitably allocate green times at intersections. By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight helps democratize urban traffic. Considering a multimodal vehicle split among private vehicles, pooled rides, microtransit and public transit at various scenarios of HOV adoption, we showcase significant headroom compared to a state-of-the-art RL-based vehicle-level optimization controller. In a 4x4 intersection grid, improvements in person delays and queues exceed 25% for the lowest HOV adoption scenario, which considers no microtransit vehicles and public transit adoption as low as in the COVID-19 pandemic era. For moderate and high HOV adoption, improvements in those same metrics reached over 40% and 55%, considering \(\sim\)26% and \(\sim\)41% shifts from single-occupancy vehicles to HOV alternatives, respectively. As travel time benefits increase along with HOV adoption, HumanLight can invigorate ridesharing and public transit systems to attract the travel demand they truly merit in sustainable and multimodal urban environments.

HumanLight formulates the reward function by extending the transportation-theory-inspired concept of pressure to the person level. To handle the highly variable occupancy profiles in multimodal urban environments, we introduce active vehicles as vehicles in proximity to the controlled intersections within the action interval window. Through systematic experiments on varying traffic flows and network structures, we showcase how generating the state space and computing rewards based on active vehicles enables better decision making for optimizing people's throughput at intersections. Apart from rendering reduced passenger travel times and socially equitable green time allocation, HumanLight allows policymakers and traffic engineers to regulate the aggressiveness of the prioritization of the HOV fleet. We illustrate how, via a modification in the state embedding, the benefits for the different vehicle types can be tuned, ideally with input from behavioral studies on the travel time elasticity of mode choice.
Furthermore, we explore the impact of the discount factor, which determines the importance of future rewards, on performance as well as on the generated phase profile. System operators can thus be informed of the expected phase change and duration patterns, which are critical aspects of acyclic signal controllers as they affect pedestrian waiting times. Finally, to ensure that HOV prioritization does not lead to excessive queues in the non-prioritized directions of traffic, maximum vehicle queues from all incoming lanes are evaluated in the 4x4 grid setting. Across all scenarios, vehicle queues remain far lower than the segments' capacity at jam density, posing no threat to the operational efficiency of the signalized intersections.

## 8 Future Directions

The field of human-centric RL-based traffic signal control is currently under-explored in the literature, prompting us to present several directions for future research. Thanks to its decentralized architecture, HumanLight is a scalable traffic management solution. The algorithmic performance still needs to be evaluated on city-level networks with heterogeneous road and intersection structures. Advancements in high performance computing and simulation [64] have made large-scale algorithmic applications possible. As traffic volumes may significantly vary per location, more elaborate designs for coordination and cooperation among neighboring intersections, or training multiple controllers, may yield better results.

From the demand side, open work includes an exploration at the metropolitan level considering advanced matching algorithms for pooling of trips, and quantifying the gains in travel time for different distributions of individuals' walking-radius flexibility to join a pooled ride, either at its origin or en route. For those different levels of adoption, it would be interesting to assess the benefits of our human-centric controller in terms of emissions. Although results demonstrate potential, as single-occupancy vehicles are taken off the roads and HOVs receive priority and thus perform fewer stops, a follow-up study quantifying emissions would be of great value. The sensitivity of the framework's effectiveness to varying penetration rates of vehicles with connected technology capabilities should also be evaluated.

From a planning perspective, the interaction of person-based traffic signal control with dedicated bus lanes needs to be investigated for policymakers to identify the optimal allocation of resources and whether these two HOV prioritization strategies should be applied simultaneously. With HumanLight establishing near free-flow travel times for transit vehicles, the dedicated bus lanes can be freed and used as dedicated bike lanes or pedestrian safe-spaces, paving the way towards a more sustainable urban environment. As opposed to HOV lanes, our solution also achieves democratization of rides independent of congestion levels, providing people with incentives to pool even in low traffic demand scenarios.

HumanLight allows the flexibility to balance even more complex priorities of multimodal traffic. Bike lanes and pseudo-pedestrian lanes, modeling sidewalks, can be included in the codification of the transportation network if demand data from these travel modes were available. Apart from prioritizing bikes and pedestrians via higher weights, HumanLight can accommodate emergency vehicles and paratransit.
By assigning ambulances and fire trucks extremely high occupancy values in cases of emergency, these vehicles would receive the highest level of prioritization. Although establishing transit adherence to schedule can be embedded in our reward functions, we believe that if the technology is adopted, bus line schedules should be adjusted accordingly to anticipate the near-optimal travel times along the bus routes, resolving all potential issues of punctuality and bus bunching. With the transit network operating at higher efficiency, the schedule could also become denser, offering riders an even higher level of service. In a similar manner, HumanLight can assist policymakers of prosperous urban societies in incentivizing ridership of paratransit shuttles. Paratransit provides transportation to people whose disabilities prevent them from riding traditional fixed-route transit service. By accordingly adjusting the weights of paratransit vehicles in the state embedding, HumanLight can make paratransit more attractive for people with disabilities, who face a widespread lack of accessibility to built environments and roads. By prioritizing these vehicles, cities can foster participation and inclusion of all members of society.

Further research could explore the effectiveness of our method under cyclic traffic signal configurations. Although research has shown that acyclic phase transitions consistently yield better performance than cyclic phase transitions [54], the fixed sequence provides more reliability for serving pedestrians across all directions. Other approaches worth exploring include the incorporation of pedestrian waiting time in the reward function, where prolonged waiting times would be penalized, or the development of an acyclic controller with a minimum green time constraint for some traffic movements. Finally, the impact of the action interval on the benefits from the consideration of active vehicles needs to be investigated. A larger interval will increase the active range across all vehicles, and therefore more vehicles will be accounted for in the state representation.

## Code and Data Availability

The code, data and relevant documentation are available on Github at [https://github.com/DimVlachogiannis/HumanLight.git](https://github.com/DimVlachogiannis/HumanLight.git).

## Funding

This work was sponsored by the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) under the Big Data Solutions for Mobility Program, an initiative of the Energy Efficient Mobility Systems (EEMS) Program. The following DOE Office of Energy Efficiency and Renewable Energy (EERE) manager played an important role in establishing the project concept, advancing implementation, and providing ongoing guidance: Prasad Gupte. This scientific paper was additionally supported by the Onassis Foundation--Scholarship ID: F ZQ-010/2020-2021.

## Conflict of Interest

The authors declare no known competing financial interests or relationships that could have influenced the work reported in this paper.

## Acknowledgements

We sincerely thank Professor Alex Skabardonis and Professor Daniel Rodriguez for their insightful comments and support.

## CRediT authorship contribution statement

**Dimitris M. Vlachogiannis:** Conceptualization, Methodology, Software, Data curation, Formal analysis, Validation, Writing - original draft, review & editing. **Hua Wei:** Methodology, Validation, Writing - review & editing, Supervision. **Scott Moura:** Writing - review & editing, Supervision.
**Jane Macfarlane:** Conceptualization, Writing - review & editing, Supervision, Project administration.

## Appendix

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
**Mode Share Scenario** & **Controller** & **Average Vehicle Delay (s)** & **Average Person Delay (s)** & **Average Vehicle Queue (vehicles)** & **Average Person Queue (persons)** \\
\hline \hline
 & Fixed (Websters) & 235.76 & 231.86 & 98.09 & 105.75 \\
 & SOTL & 93.38 & 95.11 & 46.53 & 52.23 \\
**Low HOV** & MaxPressure & 62.56 & 62.36 & 31.09 & 33.90 \\
**Adoption** & MPLight with FRAP + Pressure & 58.75 (2.05) & 59.55 (2.29) & 29.20 (1.14) & 32.47 (1.34) \\
 & HumanLight & 61.25 (3.21) & 58.95 (2.29) & 30.56 (1.78) & 32.11 (1.41) \\
 & \% Improvement from SOTA & **-2.67\%** & **1.01\%** & **-4.65\%** & **1.11\%** \\
\hline
 & Fixed (Websters) & 169.65 & 174.23 & 70.20 & 90.20 \\
 & SOTL & 68.63 & 69.70 & 29.40 & 38.25 \\
**Little HOV** & MaxPressure & 63.36 & 64.01 & 31.15 & 39.98 \\
**Adoption** & MPLight with FRAP + Pressure & 47.03 (2.63) & 46.68 (2.41) & 20.16 (1.78) & 25.54 (1.39) \\
 & HumanLight & 47.95 (3.02) & 40.53 (2.56) & 20.54 (1.63) & 21.77 (1.27) \\
 & \% Improvement from SOTA & **-1.96\%** & **13.17\%** & **-1.88\%** & **14.76\%** \\
\hline
 & Fixed (Websters) & 109.07 & 116.51 & 40.36 & 67.14 \\
 & SOTL & 54.86 & 52.86 & 18.52 & 27.75 \\
**Moderate HOV** & MaxPressure & 56.71 & 52.83 & 23.74 & 34.20 \\
**Adoption** & MPLight with FRAP + Pressure & 38.48 (0.93) & 40.10 (2.37) & 13.05 (0.34) & 21.33 (1.41) \\
 & HumanLight & 35.35 (1.20) & 25.44 (1.03) & 11.82 (0.45) & 12.63 (0.53) \\
 & \% Improvement from SOTA & **8.11\%** & **36.56\%** & **9.43\%** & **40.79\%** \\
\hline
 & Fixed (Websters) & 64.64 & 61.43 & 18.17 & 35.59 \\
 & SOTL & 39.62 & 36.50 & 9.81 & 18.67 \\
**High HOV** & MaxPressure & 53.05 & 57.76 & 17.51 & 39.59 \\
**Adoption** & MPLight with FRAP + Pressure & 29.04 (0.95) & 28.89 (1.58) & 7.23 (0.26) & 14.88 (1.01) \\
 & HumanLight & 29.31 (1.23) & 16.56 (0.77) & 7.45 (0.38) & 7.96 (0.49) \\
 & \% Improvement from SOTA & **-0.93\%** & **42.68\%** & **-3.04\%** & **46.51\%** \\
\hline \hline
\end{tabular}
\end{table}
Table A1: Metric evaluation across all scenarios in the single intersection set of experiments.

Figure A3: For episode 200, the penalization of SOVs in terms of travel time becomes less intense with the increase of parameter \(e\).
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
**Mode Share Scenario** & **Controller** & **Average Vehicle Delay (s)** & **Average Person Delay (s)** & **Average Vehicle Queue (vehicles)** & **Average Person Queue (persons)** \\
\hline \hline
 & Fixed (Websters) & 500.86 & 512.51 & 55.95 & 62.98 \\
 & SOTL & 446.3 & 462.9 & 69.51 & 79.49 \\
**Low HOV** & MaxPressure & 159.39 & 162.97 & 30.74 & 34.49 \\
**Adoption** & MPLight with FRAP + Pressure & 143.81 (5.62) & 146.29 (5.63) & 25.25 (0.78) & 27.82 (1.36) \\
 & HumanLight & 110.40 (8.23) & 107.00 (8.20) & 19.40 (1.63) & 20.50 (4.10) \\
 & \% Improvement from SOTA & **23.23\%** & **26.86\%** & **23.17\%** & **26.31\%** \\
\hline
 & Fixed (Websters) & 410.14 & 427.24 & 49.51 & 66.24 \\
 & SOTL & 265.09 & 280.01 & 40.33 & 54.34 \\
**Little HOV** & MaxPressure & 139.92 & 141.57 & 24.94 & 32.24 \\
**Adoption** & MPLight with FRAP + Pressure & 100.94 (2.14) & 103.23 (2.38) & 15.83 (0.31) & 20.14 (0.98) \\
 & HumanLight & 85.20 (3.08) & 74.80 (2.68) & 13.10 (0.54) & 14.40 (0.92) \\
 & \% Improvement from SOTA & **15.59\%** & **27.54\%** & **17.25\%** & **28.50\%** \\
\hline
 & Fixed (Websters) & 239.82 & 265.94 & 31.42 & 54.33 \\
 & SOTL & 179.23 & 197.73 & 22.5 & 38.39 \\
**Moderate HOV** & MaxPressure & 122.33 & 128.66 & 18.99 & 30.92 \\
**Adoption** & MPLight with FRAP + Pressure & 76.01 (0.91) & 77.86 (1.48) & 9.81 (0.13) & 15.54 (0.70) \\
 & HumanLight & 59.80 (2.60) & 45.30 (2.03) & 7.50 (0.37) & 8.40 (0.55) \\
 & \% Improvement from SOTA & **21.33\%** & **41.82\%** & **23.55\%** & **45.95\%** \\
\hline
 & Fixed (Websters) & 169.11 & 193.41 & 18.26 & 41.88 \\
 & SOTL & 136.35 & 154.19 & 13.41 & 30.12 \\
**High HOV** & MaxPressure & 111.66 & 113.09 & 14.28 & 28.86 \\
**Adoption** & MPLight with FRAP + Pressure & 57.44 (1.11) & 59.78 (1.45) & 5.76 (0.13) & 11.58 (0.88) \\
 & HumanLight & 40.50 (2.84) & 25.70 (1.80) & 3.90 (0.31) & 4.50 (0.38) \\
 & \% Improvement from SOTA & **29.49\%** & **57.01\%** & **32.29\%** & **61.14\%** \\
\hline \hline
\end{tabular}
\end{table}
Table A3: Metric evaluation across all scenarios in the 4x4 grid traffic network configuration.
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
**Mode Share Scenario** & **Controller** & **Average Vehicle Delay (s)** & **Average Person Delay (s)** & **Average Vehicle Queue (vehicles)** & **Average Person Queue (persons)** \\
\hline \hline
 & Fixed (Websters) & 552.49 & 368.53 & 84.89 & 100.16 \\
 & SOTL & 314.97 & 337.22 & 101.53 & 119.69 \\
**Low HOV** & MaxPressure & 68.83 & 73.84 & 26.23 & 30.73 \\
**Adoption** & MPLight with FRAP + Pressure & 70.22 (4.18) & 73.41 (4.59) & 25.73 (1.92) & 27.82 (1.87) \\
 & HumanLight & 54.78 (3.45) & 53.79 (3.62) & 19.81 (1.20) & 21.24 (1.85) \\
 & \% Improvement from SOTA & **20.41\%** & **26.73\%** & **23.01\%** & **23.65\%** \\
\hline
 & Fixed (Websters) & 237.16 & 268.08 & 63.84 & 92.93 \\
 & SOTL & 120.18 & 153.51 & 38.70 & 63.32 \\
**Little HOV** & MaxPressure & 66.96 & 76.77 & 24.78 & 36.23 \\
**Adoption** & MPLight with FRAP + Pressure & 48.23 (1.77) & 52.43 (1.88) & 15.67 (0.62) & 21.35 (1.43) \\
 & HumanLight & 41.16 (1.80) & 37.37 (1.82) & 12.95 (0.66) & 14.52 (0.79) \\
 & \% Improvement from SOTA & **14.66\%** & **28.72\%** & **17.36\%** & **31.99\%** \\
\hline
 & Fixed (Websters) & 116.06 & 151.72 & 32.17 & 67.73 \\
 & SOTL & 109.07 & 116.51 & 40.36 & 67.14 \\
**Moderate HOV** & MaxPressure & 59.95 & 70.59 & 19.07 & 35.01 \\
**Adoption** & MPLight with FRAP + Pressure & 37.11 (0.75) & 42.28 (1.34) & 9.75 (0.24) & 16.28 (1.02) \\
 & HumanLight & 28.77 (1.41) & 22.92 (1.24) & 7.18 (0.45) & 8.35 (0.54) \\
 & \% Improvement from SOTA & **22.47\%** & **45.79\%** & **26.36\%** & **48.71\%** \\
\hline
 & Fixed (Websters) & 76.92 & 117.54 & 17.60 & 54.80 \\
 & SOTL & 67.04 & 100.72 & 13.69 & 41.44 \\
**High HOV** & MaxPressure & 55.31 & 66.04 & 14.41 & 35.00 \\
**Adoption** & MPLight with FRAP + Pressure & 28.49 (0.98) & 31.13 (1.24) & 5.70 (0.26) & 12.75 (1.55) \\
 & HumanLight & 23.71 (2.11) & 16.30 (1.26) & 4.61 (0.51) & 5.64 (0.68) \\
 & \% Improvement from SOTA & **16.78\%** & **47.64\%** & **19.12\%** & **55.76\%** \\
\hline \hline
\end{tabular}
\end{table}
Table A2: Metric evaluation across all scenarios in the 1x6 corridor set of experiments.

Figure A4: Evolution of average person delay and queue for the different values of discount factor \(\gamma\) during episodic training for the light traffic demand configuration.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Average Delay**} & \multicolumn{2}{c|}{**Average Queue**} \\
\hline
\(\gamma\) & Vehicle (s) & Person (s) & Vehicle (vehicles) & Person (persons) \\
\hline
0.00 & 50.58 (60.80) & 39.52 (56.21) & 17.19 (16.95) & 22.88 (23.88) \\
\hline
0.30 & 37.57 (1.80) & 26.32 (1.69) & 13.28 (0.72) & 15.73 (1.17) \\
\hline
**0.60** & 35.95 (2.09) & **24.40 (1.41)** & 12.57 (0.88) & **14.70 (1.18)** \\
\hline
**0.90** & 34.01 (0.81) & **24.10 (0.59)** & 11.76 (0.30) & **14.52 (0.64)** \\
\hline
\end{tabular}
\end{table}
Table A5: Impact of discount factor \(\gamma\) on performance metrics under light traffic demand.

Figure A5: Phase profile for different values of \(\gamma\) under light traffic.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{**Delay (s)**} & \multicolumn{4}{c|}{**Queue**} \\
\cline{2-9}
\multicolumn{1}{c|}{} & Vehicle & \% Change & Person & \% Change & Vehicle & \% Change & Person & \% Change \\
\hline
\multicolumn{9}{|c|}{_Corridor 1x3 (300m segments)_} \\
\hline
All Vehicles & 43.27 (1.38) & Baseline & 29.31 (0.86) & Baseline & 15.46 (0.55) & Baseline & 17.36 (0.58) & Baseline \\
\hline
Active Vehicles & 35.95 (2.09) & **-16.92\%** & **24.90 (1.41)** & **-15.05\%** & 12.57 (0.88) & **-18.69\%** & 15.00 (1.18) & **-13.59\%** \\
\hline
\multicolumn{9}{|c|}{_Corridor 1x3 (800m segments)_} \\
\hline
All Vehicles & 44.17 (1.72) & Baseline & 29.88 (1.27) & Baseline & 15.88 (0.69) & Baseline & 17.80 (0.84) & Baseline \\
\hline
Active Vehicles & 35.69 (1.86) & **-19.20\%** & **24.56 (1.40)** & **-17.80\%** & 12.48 (0.75) & **-21.41\%** & 14.07 (0.93) & **-20.96\%** \\
\hline
\end{tabular}
\end{table}
Table A4: Impact of active vehicle consideration on experimental corridors under light traffic demand.
2310.19988
Counterfactual fairness for small subgroups
While methods for measuring and correcting differential performance in risk prediction models have proliferated in recent years, most existing techniques can only be used to assess fairness across relatively large subgroups. The purpose of algorithmic fairness efforts is often to redress discrimination against groups that are both marginalized and small, so this sample size limitation often prevents existing techniques from accomplishing their main aim. We take a three-pronged approach to address the problem of quantifying fairness with small subgroups. First, we propose new estimands built on the "counterfactual fairness" framework that leverage information across groups. Second, we estimate these quantities using a larger volume of data than existing techniques. Finally, we propose a novel data borrowing approach to incorporate "external data" that lacks outcomes and predictions but contains covariate and group membership information. This less stringent requirement on the external data allows for more possibilities for external data sources. We demonstrate practical application of our estimators to a risk prediction model used by a major Midwestern health system during the COVID-19 pandemic.
Solvejg Wastvedt, Jared D Huling, Julian Wolfson
2023-10-30T20:12:59Z
http://arxiv.org/abs/2310.19988v2
# Counterfactual fairness for small subgroups

###### Abstract

While methods for measuring and correcting differential performance in risk prediction models have proliferated in recent years, most existing techniques can only be used to assess fairness across relatively large subgroups. The purpose of algorithmic fairness efforts is often to redress discrimination against groups that are both marginalized and small, so this sample size limitation often prevents existing techniques from accomplishing their main aim. We take a three-pronged approach to address the problem of quantifying fairness with small subgroups. First, we propose new estimands built on the "counterfactual fairness" framework that leverage information across groups. Second, we estimate these quantities using a larger volume of data than existing techniques. Finally, we propose a novel data borrowing approach to incorporate "external data" that lacks outcomes and predictions but contains covariate and group membership information. This less stringent requirement on the external data allows for more possibilities for external data sources. We demonstrate practical application of our estimators to a risk prediction model used by a major Midwestern health system during the COVID-19 pandemic.

Algorithmic fairness, causal inference, risk prediction, small subgroups.
## 1 Introduction

Increasing use of complex risk prediction models in health care settings has drawn attention to both the opportunities and the challenges such models present. Clinical risk prediction models can improve care for patients and efficiency for providers by personalizing treatment, identifying high-risk patients for early intervention, and more. However, the models, which often use opaque techniques that are not readily understandable by patients and providers, have also demonstrated the potential to create and entrench health inequities. One well-documented strand of this problem is the potential for risk prediction models to perform more poorly for some groups than others. In clinical settings, inaccurate model predictions can lead to sub-optimal assignment of treatment and worse health outcomes for the affected groups, which are often those already marginalized in society (Obermeyer et al., 2019).

While techniques for assessing and correcting model bias have proliferated in recent years (e.g., Castelnovo et al. (2022), Chen et al. (2023)), much of this work does not address a major challenge in the clinical setting: limited sample size in the smallest groups where model performance is to be assessed to ascertain fairness. As an example, our previous work, which proposes estimators of intersectional unfairness adapted for clinical situations in which a treatment is in use, requires a reasonably large sample size in all groups to obtain useful precision (Wastvedt et al., 2023). In that work, we analyzed a risk prediction model used during the COVID-19 pandemic to help determine whether a patient should be transferred to a COVID-19 cohort hospital. We looked at unfairness across intersecting race and age categories, but we were only able to separate race into white, Black or African American, and a group for all other races because of limited sample size. While this analysis reveals differing model error rates by age and for patients identifying as white vs. Black or African American, the "all other races" group has little utility. The experiences of individuals in this group, which includes for example both Asian and American Indian patients, differ widely. It is likely that the model performs differently for the subgroups encompassed in this category, but those error rate differences disappear when the subgroups are examined together.
More broadly, lumping small subgroups together in this fashion can erase the experiences of these patients from analyses. The main purpose of fairness assessment is to uncover mistreatment of marginalized groups, which are often, though not always, numeric minorities as well. Thus it is crucial to the goals of fairness methods that they prioritize the ability to obtain results for small subgroups. In this work, we address the problem of limited sample size by proposing new estimators that are estimable and have reasonable precision even for very small groups. We extend the COVID-19 model application presented in our earlier work to show the utility of these new estimators in situations where existing methods fail.

The COVID-19 risk prediction model we analyze is one of many that were used to help guide allocation of resources in high-pressure scenarios as case numbers rose. The model we analyze was trained by the health system on 1,469 adults who tested positive for SARS-CoV-2. Features used for prediction included patient demographic variables, home medications, and medical conditions. Data available for our after-the-fact assessment of this model is limited because of the ever-changing nature of COVID-19 treatment and the relatively short period, during the height of the pandemic, over which the model was used. As is common in clinical risk prediction scenarios, the population for which the model was used is small in comparison to the population served by the health system. Thus there is potential to borrow information from other available data drawn from the same population in which risk scores and outcomes are not recorded; we explore this possibility in our proposed methods.

In the sections that follow we first describe the _counterfactual error rates_, model performance metrics developed in existing work (Coston, Mishler, Kennedy, and Chouldechova, 2020; Mishler, Kennedy, and Chouldechova, 2021) that are well-suited to clinical applications. We then take a three-pronged approach to improving these metrics' performance for small subgroups. First, we re-formulate the error rates to leverage information across groups by incorporating the overall error rate. We then use the causal identification process to arrive at estimators that use more of the data than existing methods. Finally, we propose a novel data borrowing procedure that uses "external data", such as that available in our COVID-19 risk model application, to further improve estimation.

## 2 Statistical framework

Following algorithmic fairness convention, define a _protected characteristic_ as any grouping, such as race or gender, along which we wish to measure discrimination. Let \(\mathbf{A}\) denote a vector of categorical protected characteristics, in which each element indicates group membership for one characteristic. Let \(m\) be the number of protected characteristics that we wish to consider. Denote the characteristics \(A_{j}\), \(j\in\{1,...,m\}\), and assume each \(A_{j}\) is a categorical variable with a finite number of levels, the set of which is denoted \(\mathcal{A}_{j}\). Let \(\mathbf{A}=\{A_{1},A_{2},...,A_{m}\}^{T}\in\mathcal{A}\) contain all protected characteristics of interest, where \(\mathcal{A}\) is the set of all possible combinations of all levels of the \(m\) characteristics. To complete the notation, let \(S\) denote a binary risk prediction. Although many clinical risk models produce predicted probabilities, we use the binary prediction derived by selecting a cut-off threshold.
Let \(D\) denote a binary treatment assignment and \(Y\) a binary outcome such as an adverse health event. Under a binary treatment, there are two potential outcomes: \(Y^{0}\), the outcome under no treatment, and \(Y^{1}\), the outcome under treatment. We focus on the \(Y^{0}\) outcome, as it represents patients' baseline risk and is thereby informative for guiding treatment. Existing work (Mishler et al., 2021) defines the following counterfactual versions of the common false positive rate and false negative rate model performance metrics for a single protected characteristic. In our previous work, we apply the definitions to the vector of protected characteristics \(\mathbf{A}=\mathbf{a}\).

**Definition 1**: The _counterfactual false positive rate_ of a predictor \(S\) for the group having protected characteristic vector \(\mathbf{A}=\mathbf{a}\), denoted \(cFPR(S,\mathbf{a})\), is equal to \(Pr(S=1|Y^{0}=0,\mathbf{A}=\mathbf{a})\). The _counterfactual false negative rate_, \(cFNR(S,\mathbf{a})\), is equal to \(Pr(S=0|Y^{0}=1,\mathbf{A}=\mathbf{a})\).

Our previous work proposed the following weighted estimators of \(cFPR(S,\mathbf{a})\) and \(cFNR(S,\mathbf{a})\):

\[\widehat{cFPR}(S,\mathbf{a})=\frac{\sum_{i=1}^{n}[(1-D_{i})S_{i}(1-Y_{i})\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})/(1-\hat{\pi}_{i})]}{\sum_{i=1}^{n}[(1-D_{i})(1-Y_{i})\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})/(1-\hat{\pi}_{i})]} \tag{1}\]

\[\widehat{cFNR}(S,\mathbf{a})=\frac{\sum_{i=1}^{n}[(1-D_{i})(1-S_{i})Y_{i}\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})/(1-\hat{\pi}_{i})]}{\sum_{i=1}^{n}[(1-D_{i})Y_{i}\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})/(1-\hat{\pi}_{i})]} \tag{2}\]

The _counterfactual error rate differences_, denoted \(\Delta^{+}(S)\) and \(\Delta^{-}(S)\), quantify the differences in counterfactual false positive and false negative rates between two protected characteristic groups.
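A direct NumPy transcription of the estimators (1) and (2) is given below; it assumes estimated propensity scores \(\hat{\pi}_{i}=\hat{P}(D_{i}=1|\mathbf{A}_{i},X_{i},S_{i})\) are supplied (their model is not specified here).

```python
import numpy as np

def cfpr_hat(S, D, Y, A, a, pi_hat):
    """Weighted counterfactual FPR for subgroup a, Eq. (1).
    S, D, Y, pi_hat: length-n arrays; A: (n x m) array; a: length-m vector."""
    in_grp = np.all(A == a, axis=1)                # indicator 1(A_i = a)
    w = (1 - D) * (1 - Y) * in_grp / (1 - pi_hat)  # untreated, Y=0, in group a
    return np.sum(w * S) / np.sum(w)

def cfnr_hat(S, D, Y, A, a, pi_hat):
    """Weighted counterfactual FNR for subgroup a, Eq. (2)."""
    in_grp = np.all(A == a, axis=1)
    w = (1 - D) * Y * in_grp / (1 - pi_hat)        # untreated, Y=1, in group a
    return np.sum(w * (1 - S)) / np.sum(w)
```

The transcription makes the small-subgroup problem visible: the weights are nonzero only for untreated observations with the relevant outcome inside group \(\mathbf{a}\), so for a small subgroup the denominators sum over only a handful of observations, or none at all.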
## 3 New estimands to borrow strength across groups

The estimators for \(cFPR(S,\mathbf{a})\) and \(cFNR(S,\mathbf{a})\) in equations (1) and (2) present a challenge in practical applications with limited sample size. The estimators are weighted averages over a subset of a subgroup defined by intersecting protected characteristics. In the case of \(cFPR(S,\mathbf{a})\), only observations with \(D_{i}=0\), \(S_{i}=1\), and \(Y_{i}=0\) contribute non-zero components to the average. The estimator for \(cFNR(S,\mathbf{a})\) is analogous. Restricting the estimation to this slice of what may already be a small protected subgroup can make the estimators unstable.

To address this issue, we propose alternative estimators for the counterfactual error rates that incorporate more of the data. Our three-pronged approach begins by reformulating the estimands for group-specific error rates to leverage the overall error rate. Next, we make an additional assumption during causal identification to reduce reliance on indicator variables. Finally, the next section explains potential use of an auxiliary data set during estimation to further improve performance.

Our re-formulation of the counterfactual error rates draws on Efron (2010), who proposes rewriting the false discovery rate (FDR) of a predictor \(S\) for a particular subgroup as a ratio of probabilities multiplied by the overall FDR. Following a similar process, we obtain alternate expressions for the counterfactual error rates, given in Proposition 1 (proof in Web Appendix A).

**Proposition 1**: Given two binary variables \(S\) and \(Y^{0}\) and a subgroup vector \(\mathbf{A}=\mathbf{a}\), the counterfactual false positive rate and counterfactual false negative rate of \(S\) in group \(\mathbf{a}\) can be rewritten as the following:

\[cFPR(S,\mathbf{a})=cFPR(S)\frac{P(\mathbf{A}=\mathbf{a}|Y^{0}=0,S=1)}{P(\mathbf{A}=\mathbf{a}|Y^{0}=0)} \tag{3}\]

\[cFNR(S,\mathbf{a})=cFNR(S)\frac{P(\mathbf{A}=\mathbf{a}|Y^{0}=1,S=0)}{P(\mathbf{A}=\mathbf{a}|Y^{0}=1)} \tag{4}\]

We then identify our estimands from observed data using standard causal inference assumptions plus one additional assumption that allows us to avoid many of the indicator variables used in Wastvedt et al. (2023), thereby employing a larger volume of data. We assume \(Y^{0}\) is independent of \(\mathbf{A}\) given \(X\), or that we have collected sufficient covariates such that protected group membership gives no additional information about the probability of \(Y^{0}\). A full list of assumptions, and simulations demonstrating that our methods are robust to violations of the additional assumption, are in Web Appendices B and D. Using these assumptions, we break down the ratio of conditional probabilities in each error rate expression in Proposition 2 (proof in Web Appendix B). Let \(X\) be a vector of observed covariates and define the propensity score function for a given protected group as \(\pi=P(D=1|\mathbf{A},X,S)\).

**Proposition 2**: The conditional probability ratios in expressions (3) and (4) are identified as follows for \(Y^{0}=y\in\{0,1\}\) and \(S=s\in\{0,1\}\):

\[\frac{P(\mathbf{A}=\mathbf{a}|Y^{0}=y,S=s)}{P(\mathbf{A}=\mathbf{a}|Y^{0}=y)}=\frac{E[\mu_{0}(y,s,X)\mathbb{1}(\mathbf{A}=\mathbf{a})\mathbb{1}(S=s)]/E[\mu_{0}(y,s,X)\mathbb{1}(S=s)]}{E[\mu_{0}^{*}(y,X)h(X,\mathbf{a})]/E[\mu_{0}^{*}(y,X)]},\]

where \(\mu_{0}(y,s,X)=P(Y=y|D=0,S=s,X)\) and \(\mu_{0}^{*}(y,X)=P(Y=y|D=0,X)\). The \(h\) function is defined as \(h(x,\mathbf{a})=P(\mathbf{A}=\mathbf{a}|X=x)\).

While the estimators in Wastvedt et al. (2023) limited the data to specific intersections of \(D\), \(S\), \(Y\), and \(\mathbf{A}\), Proposition 1 allows us to estimate the denominator using all the data and only restrict by the protected characteristic vector (\(\mathbf{A}\)) and model prediction (\(S\)) in the numerator. Drawing from more data reduces our estimators' variance and enables estimation even in very small subgroups. The overall counterfactual error rate components of equations (3) and (4) use the identification result in Wastvedt et al. (2023), generalized to include all observations:

\[cFPR(S)=\frac{E[(1-D)S(1-Y)/(1-\pi(\mathbf{A},X,S))]}{E[(1-D)(1-Y)/(1-\pi(\mathbf{A},X,S))]};\qquad cFNR(S)=\frac{E[(1-D)(1-S)Y/(1-\pi(\mathbf{A},X,S))]}{E[(1-D)Y/(1-\pi(\mathbf{A},X,S))]}.\]

## 4 Estimation and data borrowing

In this section, we demonstrate use of an auxiliary, external data set to aid in estimation of \(h(X,\mathbf{a})=P(\mathbf{A}=\mathbf{a}|X)\). This procedure can reduce variance and bias compared to estimation using only the main data. We then propose regression estimators for the remaining components of the probability ratios in Proposition 2. Throughout, let \(\{\mathbf{A}_{i},D_{i},Y_{i},X_{i},S_{i}\}\), \(i=1,...,n\), be the observed data and binary risk predictions.
Then the counterfactual error rates for protected group \(\mathbf{A}=\mathbf{a}\) are estimated as follows, where for conciseness we assume that \(\hat{\mu}_{0}\) and \(\hat{\mu}_{0}^{*}\) are the estimated versions with \(y=1\):

\[\widehat{cFPR}(S,\mathbf{a})=\widehat{cFPR}(S)\frac{\sum_{i=1}^{n}(1-\hat{\mu}_{0}(s=1,X_{i}))\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})S_{i}/\sum_{i=1}^{n}(1-\hat{\mu}_{0}(s=1,X_{i}))S_{i}}{\sum_{i=1}^{n}(1-\hat{\mu}_{0}^{*}(X_{i}))\hat{h}(X_{i},\mathbf{a})/\sum_{i=1}^{n}(1-\hat{\mu}_{0}^{*}(X_{i}))} \tag{5}\]

\[\widehat{cFNR}(S,\mathbf{a})=\widehat{cFNR}(S)\frac{\sum_{i=1}^{n}\hat{\mu}_{0}(s=0,X_{i})\mathbb{1}(\mathbf{A}_{i}=\mathbf{a})(1-S_{i})/\sum_{i=1}^{n}\hat{\mu}_{0}(s=0,X_{i})(1-S_{i})}{\sum_{i=1}^{n}\hat{\mu}_{0}^{*}(X_{i})\hat{h}(X_{i},\mathbf{a})/\sum_{i=1}^{n}\hat{\mu}_{0}^{*}(X_{i})} \tag{6}\]

### Data borrowing to estimate group membership probabilities

Estimating the group membership probabilities \(\hat{h}(X,\mathbf{a})\) involves neither prediction (\(S\)) nor outcome (\(Y\)) information. In many clinical risk prediction situations, large volumes of data are available in which the outcome and/or prediction are not present but a rich set of covariates is. For example, in a health system, this external data could be patient records in the same electronic health record system for patients who were not screened for the adverse event or who did not receive risk predictions. Because of the larger sample size, this external data will typically have better representation from small subgroups. If the distribution of \(P(\mathbf{A}=\mathbf{a}|X)\) in the external data is similar enough to that in the test (or "internal") data, we can use the external data to help estimate group membership probabilities. To quantify "similar enough", we employ an adaptive data borrowing method that borrows more information if the distributions are more closely aligned and less (or none) if they differ.

Let \(\hat{h}_{E}\) be the internal data group membership probabilities estimated using a model trained on the external data. Let \(\hat{h}_{I}\) be the probabilities fit and predicted using only the internal data. We choose the borrowing parameter \(\hat{\alpha}\in[0,1]\) to maximize the predictive performance of the combination \(\hat{h}^{*}=\hat{\alpha}\hat{h}_{E}+(1-\hat{\alpha})\hat{h}_{I}\). Various metrics could be used for predictive performance. We use the Brier score because it encompasses both concordance and calibration; however, we achieve similar results in simulations using multi-class area under the ROC curve (AUC). The estimates \(\hat{h}^{*}_{i}\) are then used in equations (5) and (6).
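A minimal sketch of this borrowing step, assuming the matrices of predicted group-membership probabilities \(\hat{h}_{E}\) and \(\hat{h}_{I}\) on the internal data are already available (how they are fit, including any cross-validation, is omitted) and selecting \(\hat{\alpha}\) on a grid by minimizing the multiclass Brier score:

```python
import numpy as np

def brier_score(h, A_onehot):
    """Multiclass Brier score: mean squared distance between predicted
    probabilities (n x K) and one-hot group memberships (n x K)."""
    return np.mean(np.sum((h - A_onehot) ** 2, axis=1))

def choose_alpha(h_E, h_I, A_onehot, grid=np.linspace(0, 1, 101)):
    """Pick alpha minimizing the Brier score of alpha*h_E + (1-alpha)*h_I."""
    scores = [brier_score(a * h_E + (1 - a) * h_I, A_onehot) for a in grid]
    return grid[int(np.argmin(scores))]

# Toy internal data with K = 3 groups; the external-data model (h_E) is
# sharper than the internal one (h_I), so alpha should land near 1.
rng = np.random.default_rng(0)
n, K = 200, 3
A_onehot = np.eye(K)[rng.integers(0, K, n)]
h_I = rng.dirichlet(np.ones(K), n)                         # noisy internal fit
h_E = 0.7 * A_onehot + 0.3 * rng.dirichlet(np.ones(K), n)  # informative external fit
alpha = choose_alpha(h_E, h_I, A_onehot)
h_star = alpha * h_E + (1 - alpha) * h_I  # plugged into Eqs. (5) and (6)
```

Because \(\hat{\alpha}\) is chosen by predictive performance, the procedure is adaptive: when the external distribution disagrees with the internal one, the Brier score of the blend worsens and the selected \(\hat{\alpha}\) shrinks toward zero, i.e., little or no borrowing.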
Below we compare estimation of propensity score and outcome models using a generalized linear model and a Super Learner (van der Laan et al., 2007), a more complex ensemble approach that combines multiple machine learning models. Because of the simplicity of the generalized linear models, we use the full sample without any sample splitting or cross-fitting. For the ensemble models, we use a 10-fold cross-fitting approach: for each fold, we fit the three nuisance parameter models on the remaining data to obtain predictions for the held-out fold. ## 5 Simulations In this section we demonstrate the benefits of our proposed estimators compared to existing methods. We compare to the counterfactual estimators in Wastvedt et al. (2023) that are not adapted for small subgroups. All simulations consider two binary protected characteristics, \(A_{1}\) and \(A_{2}\), where for each characteristic the level 1 is less common. Thus the group \(A_{1}=0\), \(A_{2}=0\) is the numeric majority, and the group \(A_{1}=1\), \(A_{2}=1\) is the numeric minority. We denote the other two groups \(M1\) and \(M2\) according to which protected characteristic (\(A_{1}\) or \(A_{2}\)) is equal to 1. Overall internal data sample size is limited in all scenarios to create small groups (\(N_{internal}=\{50,100,150,200\}\)). External data is large (\(N_{external}=10,000\)) such that all groups are adequately represented in the external sample. We focus on \(cFNR(S,\boldsymbol{a})\) for conciseness; our conclusions also hold for \(cFPR(S,\boldsymbol{a})\). Following the simulation framework in Mishler et al. (2021), we first generate a set of data (\(N=1,000\)) for training a random forest risk prediction model. We generate a validation data set (\(N=50,000\)) to determine the true error rates of the risk model. We use the same risk prediction model for all scenarios. For each scenario, we generate internal and external data sets and estimate our unfairness metrics, considering multi-class area under the ROC curve (AUC) and Brier score as performance metrics for data borrowing. In all scenarios, the internal and external \(h(X,\boldsymbol{a})\) models are neural networks with a single hidden layer, 100 units, and a weight decay parameter of 1 (R package _nnet_). Full data generation details are in Web Appendix C. ### Variance reduction for small subgroups This section compares our proposed and existing estimators under internal sample sizes that are at the lower end of what would typically be encountered in a clinical risk prediction assessment setting (\(N_{int}\in\{100,200,500,1000,2000\}\)). We use the Brier score for the data borrowing metric. Outcome models (proposed estimators) and the propensity score model (comparison estimators) are correctly specified GLMs. Figure 1 shows notable variance reduction with our new estimators across sample sizes. Error bars without a mean (shape) indicate the presence of NAs in the replications, i.e., insufficient data for estimation. This occurs with our comparison estimators in all groups at \(N_{int}=200\) and in non-majority groups at higher \(N_{int}\). In contrast, both our proposed internal and borrowing estimators can be calculated for all groups at all sample sizes. Comparing internal and borrowing shows little gain with data borrowing under these particular scenarios. ### Data borrowing adapts to external data agreement This section investigates our data borrowing procedure's ability to respond to varying levels of disagreement between the internal and external data distributions.
In this section we fix \(N_{int}=1000\) and manipulate the level of agreement between internal and external data by multiplying the external data coefficients for \(P(\boldsymbol{A}=\boldsymbol{a}|X)\) by a constant \(b\in[-1,1]\). Figure 2(a) shows that \(\hat{\alpha}\) decreases as \(b\) moves away from \(1\). Recall that a lower \(\hat{\alpha}\) means less weight on the external data predictions, i.e., less borrowing. Figure 2(b) focuses on the minority group \(cFNR\) to show that across all distributions, our data borrowing estimators have bias that is less than or equal to that of the internal-data-only estimator, and reasonable variance. ### Benefit of data borrowing under complex scenarios Next we examine scenarios where data borrowing provides gains in bias and variance reduction compared to the internal data version of our estimators. Such gains occur when the internal \(h(X,\boldsymbol{a})\) model is difficult to estimate. We use a more complex data generation scenario with an increasing number of noise variables added to the \(10\) informative components of \(X\) before generating group membership, \(P(\boldsymbol{A}=\boldsymbol{a}|X)\), the \(Y^{0}\) potential outcome, \(P(Y^{0}=1|X,\boldsymbol{A})\), and the treatment, \(P(D=1|X,\boldsymbol{A},S)\). We consider these scenarios with and without 2- and 3-way interactions among four of the \(X\) used to generate these probabilities. For simplicity, we focus on the minority group and the Brier score for the data borrowing metric. The internal sample size is 500 throughout. Figure 3 shows that both with and without \(X\) interactions, bias is low and consistent across increasing \(X\) noise up to 20 \(X\) components. Data borrowing does not notably affect bias except for a small reduction under the no-interaction, 40 \(X\) components scenario. ## 6 COVID-19 risk prediction fairness During the height of the COVID-19 pandemic, risk prediction models were commonly used to help health systems allocate scarce resources. In addition to their potential benefits, these models raised the challenge of ensuring model performance, and thus resource allocation, was equitable. We studied one such model at a major Midwestern health system that was used to help determine whether a patient should be transferred to a COVID-19 cohort hospital. In our previous work, we analyzed this model's counterfactual error rates across the intersection of age and race, grouping patient self-reported race into white, Black or African American, and all other races. As we noted in that paper, grouping all patients who identify as neither white nor Black/African American together severely limits the utility of the analysis. Here we show how our proposed estimators allow us to disaggregate race and use an additional ethnicity variable to gain a more nuanced picture of model performance. We focus on a single protected characteristic to demonstrate the wide applicability of our proposed methods; our estimators can also be used with multiple, intersecting characteristics. The risk prediction model was trained on 1,469 adult patients with confirmed or symptomatic suspected COVID-19. Our "internal" data for evaluating the model consisted of 3,649 adult SARS-CoV-2 patients from the same health system who tested positive between 10/27/2020 and 1/9/2022.
Deploying our new estimators allowed us to group patient self-reported race and ethnicity variables into five categories: Hispanic or Latino (6.3%); and non-Hispanic/Latino American Indian or Alaska Native (1.5%), Asian (4.2%), Black or African American (13.9%), and white (74.1%). We removed patients who marked two or more races, "other", or "all other races", since these records lacked sufficient information to group race/ethnicity. We chose this schema to balance disaggregation with a large enough sample size to obtain useful estimates. Our race categories align with U.S. Census Bureau categories with one exception: we did not have sufficient sample size to break out the Native Hawaiian or Other Pacific Islander category since it had 14 total observations, so we grouped these patients with the Asian category. Likewise, cross-cutting the data by both race and Hispanic or Latino ethnicity was not possible given our sample size, so we opted to consider Hispanic or Latino ethnicity by disaggregating it from the racial groups. We note that the data used in this application does offer relatively rich detail on patients' self-reported race and ethnicity, with 8 and 48 unique responses to the race and ethnicity questions, respectively. Future applications using data sources like this with larger samples could take fuller advantage of the available detail by considering race and ethnicity separately and with more categories. We chose a cutoff of 0.15, approximately the \(80^{th}\) percentile, for dichotomizing the risk score and used 30-day inpatient readmission or mortality as our outcome and transfer to the cohort hospital as our treatment variable. Covariates comprised comorbidities, home medications, number of prior emergency department visits, and labs and vitals. We excluded covariates with greater than 2/3 missing and performed random forest imputation (missForest package). We considered the same set of covariates for propensity score, outcome, and \(h(X,\boldsymbol{a})\) modeling, performing lasso selection separately for each model to select the most relevant variables. As in our prior work, we used a logistic regression propensity score model. Our outcome models were also logistic regressions, and the internal \(h(X,\boldsymbol{a})\) model was a single-layer neural network with 50 units (package nnet). Our external data comprised 8,449 patients from the same health system without a matching SARS-CoV-2 positive test and risk model score. External data patients were grouped into the same five race/ethnicity categories: Hispanic or Latino (8.6%); and non-Hispanic/Latino American Indian or Alaska Native (1%), Asian (11.2%), Black or African American (20%), and white (59.2%). While not all internal data covariates were available in the external data, covariates comprised the same general categories. We excluded covariates with greater than 2/3 missing and performed random forest imputation. We performed lasso selection to choose the most important covariates for the external \(h(X,\boldsymbol{a})\) model and then fit a single-layer neural network with 100 units using the selected covariates. We used the Brier score data borrowing metric, which gave \(\hat{\alpha}=0.004\). We obtained standard errors using the rescaled bootstrap method proposed in Wastvedt et al. (2023) and confidence intervals using t-intervals truncated at 0 and 1. All analyses were done in R (version 4.2.3, R Core Team 2023).
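A minimal sketch of the interval construction just described is below (illustrative Python with hypothetical names; the resampling itself used the rescaled bootstrap of Wastvedt et al. (2023), which is not reproduced here).

```python
import numpy as np
from scipy import stats

def truncated_t_interval(point_est, boot_ests, level=0.95):
    """Bootstrap t-interval truncated to the admissible [0, 1] range of an
    error-rate estimate. boot_ests are bootstrap replicates of the estimator,
    assumed already produced by the rescaled bootstrap resampling step."""
    boot_ests = np.asarray(boot_ests, dtype=float)
    boot_ests = boot_ests[~np.isnan(boot_ests)]   # drop failed replicates
    se = boot_ests.std(ddof=1)                    # bootstrap standard error
    tcrit = stats.t.ppf(0.5 + level / 2, df=len(boot_ests) - 1)
    return max(0.0, point_est - tcrit * se), min(1.0, point_est + tcrit * se)
```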
Our proposed estimators enable estimation for small subgroups where estimation failed using the comparison estimators. Figure 4 shows that internal and borrowing estimators reduced variance substantially for the Black or African American group and slightly for the white group. The smallest group, American Indian or Alaska Native, was too small to obtain a confidence interval and obtained the unlikely point estimate of 1 using the comparison method. Our new estimators obtain an interval and a more reasonable point estimate, although the interval covers the entire \((0,1)\) range and thus has minimal utility. This wide interval demonstrates the sample size limitations that still occur using our proposed estimators. The Asian and Hispanic or Latino groups had point estimates that differ substantially under the new and comparison methods, which may be due to undercoverage issues for the comparison method. Estimates for these groups using the comparison method are also highly variable because of extremely small counts (\(<10\)) in the false negative cell of the confusion matrix, which is the relevant cell for the comparison \(cFNR\) estimate. Note also that while the confidence interval for the Asian group appears smaller under the comparison method, this is due to the truncation of intervals at zero. Intervals affected by truncation are noted in the caption to Figure 4. ## 7 Discussion In this paper we took a three-pronged approach to addressing the common challenge of small subgroups in risk prediction model fairness assessments. First, we proposed new estimands that reformulate the counterfactual error rates to borrow strength across groups. Second, we showed how these estimands can be identified from observed data in a manner that draws on more of the data than existing alternatives. Finally, we proposed a novel data borrowing method with the potential to leverage external data to further improve estimation. Our methods have limitations: most importantly, subgroups can still be too small for the methods to produce useful estimates. When sample size is too small, confidence intervals may be too wide, as in the American Indian or Alaska Native group in our application (Section 6), or estimates may differ substantially between our proposed method and the comparison method. However, this sample size cutoff is fairly small, as demonstrated by the useful estimates produced for the Black or African American group in our application, a group which only had 11 observations in the false negative cell of the confusion matrix (see Web Table 1 for confusion matrices by group). We refrain from offering a specific sample size cutoff for our methods since this number will depend on the richness of the covariates available in a given application, both for model fitting and for meeting our additional assumption described in Section 3. Our proposed estimators also require practitioners to fit more models than existing alternatives, which is more time-consuming and provides more opportunities for model mis-specification. Nevertheless, we find little impact of model mis-specification under the scenarios considered in our simulations (Web Appendix D). Finally, our estimators allow practitioners to look at more disaggregated protected groups, which we hope will facilitate more nuanced consideration in fairness assessments of groups that are both marginalized and small.
In the case of race and ethnicity, as in our COVID-19 application, our estimators are an improvement over those that require grouping disparate subgroups or excluding small groups altogether. However, given the prevalence of using racial categories as a protected characteristic in this field, it is important to recognize the limitations inherent in any race-based fairness assessment. No matter how disaggregated, racial categories are not intrinsic identities; rather, they are socially constructed labels that comprise "a system of inherently unequal status categories" (Benthall and Haynes, 2019). In a critique of traditional approaches to algorithmic fairness, Weinberg (2022) frames the problem as one of abstraction: by treating race as an intrinsic quality, researchers minimize the structural factors that create and maintain racial inequality. Whether conducted with our proposed metrics or others, a fairness assessment involving race bears the responsibility to drive toward undermining the systems of hierarchy and power imbalances that cause algorithmic unfairness. Hanna et al. (2020) suggest considering descriptive analyses of model performance across the many dimensions of race such as self-identified, other-identified, phenotype, and others. Differing patterns of unfairness can provide more insight into how to proceed to ameliorate inequities. Finally, Hanna et al. (2020) emphasize the importance of looking beyond the algorithm to promote justice at all parts of the process, including the data models are trained on, the choice of where and on whom to deploy them, and the way model predictions are incorporated into human practices. ## 8 Figures and Tables **Figure 2**: Results of data borrowing with varying levels of agreement between internal and external data distributions. The x-axis shows the factor by which internal data coefficients for generating \(P(\mathbf{A}=\mathbf{a}|X)\) were multiplied to obtain the external data coefficients. Shapes show the mean of 500 replications of the estimation procedure, error bars show 95% percentile intervals, and colors denote the data borrowing metric. **Figure 3**: Bias in estimation of minority group \(cFNR\) under increasing noise in \(X\) with interactions among four of the \(X\) (right) and no interactions (left). Shapes show the mean of 500 replications of the estimation, and error bars show 95% percentile intervals. With no \(X\) interactions, bias under data borrowing is similar to internal-only estimation until 40 \(X\), where borrowing performs slightly better. With \(X\) interactions, bias under data borrowing is similar to internal-only estimation under all scenarios tested. For \(\hat{\alpha}\) values showing how much information is borrowed in each scenario, see Web Appendix D. **Figure 4**: Estimation of group \(cFNR\) for 5 racial and ethnic groups using our proposed estimators (internal and borrowing) and our comparison estimator. Shapes show the mean of 200 bootstrap replications, and error bars show 95% bootstrap t-intervals. Our proposed estimators reduce variance for the Black or African American group and enable estimation for the American Indian or Alaska Native group, although the confidence interval for this group covers the entire \((0,1)\) range. Comparison estimators for Asian, Black or African American, and Hispanic or Latino groups are truncated at 0. New estimators (internal and data borrowing) for the American Indian or Alaska Native group are truncated at 1.
2305.13975
Simulating secondary electron and ion emission from the Cassini spacecraft in Saturn's ionosphere
The Cassini spacecraft's Grand Finale flybys through Saturn's ionosphere provided unprecedented insight into the composition and dynamics of the gas giant's upper atmosphere and a novel and complex spacecraft-plasma interaction. In this article, we further study Cassini's interaction with Saturn's ionosphere using three dimensional Particle-in-Cell simulations. We focus on understanding how electrons and ions, emitted from spacecraft surfaces due to the high-velocity impact of atmospheric water molecules, could have affected the spacecraft potential and low-energy plasma measurements. The simulations show emitted electrons extend upstream along the magnetic field and, for sufficiently high emission rates, charge the spacecraft to positive potentials. The lack of accurate emission rates and characteristics, however, makes differentiation between the prominence of secondary electron emission and ionospheric charged dust populations, which induce similar charging effects, difficult for Cassini. These results provide further context for Cassini's final measurements and highlight the need for future laboratory studies to support high-velocity flyby missions through planetary and cometary ionospheres.
Zeqi Zhang, Ravindra T. Desai, Oleg Shebanits, Fredrik L. Johansson, Yohei Miyake, Hideyuki Usui
2023-05-23T11:57:34Z
http://arxiv.org/abs/2305.13975v1
# Simulating secondary electron and ion emission from the Cassini spacecraft in Saturn's ionosphere ###### Abstract The Cassini spacecraft's Grand Finale flybys through Saturn's ionosphere provided unprecedented insight into the composition and dynamics of the gas giant's upper atmosphere and a novel and complex spacecraft-plasma interaction. In this article, we further study Cassini's interaction with Saturn's ionosphere using three dimensional Particle-in-Cell simulations. We focus on understanding how electrons and ions, emitted from spacecraft surfaces due to the high-velocity impact of atmospheric water molecules, could have affected the spacecraft potential and low-energy plasma measurements. The simulations show emitted electrons extend upstream along the magnetic field and, for sufficiently high emission rates, charge the spacecraft to positive potentials. The lack of accurate emission rates and characteristics, however, makes differentiation between the prominence of secondary electron emission and ionospheric charged dust populations, which induce similar charging effects, difficult for Cassini. These results provide further context for Cassini's final measurements and highlight the need for future laboratory studies to support high-velocity flyby missions through planetary and cometary ionospheres. Z. Zhang, R. T. Desai, O. Shebanits, F. L. Johansson, Y. Miyake, H. Usui ## 1 Introduction Cassini's Grand Finale obtained the first ever in-situ measurement of Saturn's ionosphere. Passing through on 22 orbits prior to its final plunge, the spacecraft provided unprecedented observations from inside Saturn's D-ring down to 1,360 km altitude (Ip et al., 2016; Cravens et al., 2019; Dougherty et al., 2018; Hsu et al., 2018; Lamy et al., 2018; Mitchell et al., 2018; Roussos et al., 2018; Waite et al., 2018; Wahlund et al., 2018). Cassini's Plasma Spectrometer (Young et al., 2004) was, however, offline post-2012 and significant unknowns remain regarding charged ion and dust populations and their influence on the gas giant's ionosphere. Saturn's inner rings are inherently unstable and were identified as raining onto the top of the gas giant's equatorial ionosphere (Connerney and Waite, 1984; Northrop and Hill, 1982). This was subsequently observed in-situ by Cassini, where the spacecraft's Ion and Neutral Mass Spectrometer (INMS), Cosmic Dust Analyser and Charge Energy Mass Spectrometer detected ring fragments consisting of water, silicates and organics in-flowing at estimated fluxes between 4,800 and 45,000 kg/s (Hsu et al., 2018; Mitchell et al., 2018; Waite et al., 2018). Cassini's high velocity limited the spectroscopic plasma measurements to \(<\)5 u and the composition of Saturn's ionosphere has thus been inferred from the available measurements. For example, Cassini's Radio and Plasma Wave Science antenna observed up to an order of magnitude more electrons than \(1-4\) u positive ions, and ion populations of \(>4\) u were therefore inferred to be present (Waite et al., 2018).
Cassini's Langmuir Probe also simultaneously measured nearly an order of magnitude greater positive ion currents compared with electron currents (Hadid et al., 2019; Morooka et al., 2019; Wahlund et al., 2018), and these observations were thus interpreted as arising from increasingly abundant populations of negatively charged ions and dust with decreasing altitude (Morooka et al., 2019), in addition to the larger \(>\)4 u positive ions. Cassini's Langmuir Probe measured the bulk plasma currents and therefore uniquely provides a measure of all ionospheric plasma constituents. As an integral measurement, however, the interpretation of this data-set is non-trivial. Two distinct interpretations of the LP data thus exist within the literature. Morooka et al. (2019) first reported the apparent current discrepancies as arising from significant populations of charged dust, an interpretation which has formed the basis for sequential studies of Saturn's ionosphere (e.g. Hadid et al., 2019; Wahlund et al., 2018; Shebanits et al., 2020; Zhang et al., 2021). Johansson et al. (2022), however, recently suggested that less dust is present and that the LP current imbalance is caused by secondary electrons and ions, emitted due to impacting gas molecules. The two contrasting interpretations of the LP data have the commonality that they both identify Cassini as having charged to positive floating potentials. Zhang et al. (2021) examined the role of charged dust in charging Cassini and showed that this could charge Cassini to positive potentials when the negatively charged ion/dust mass was significantly greater than that of the positive ions/dust and when free electrons constitute less than 10 % of the total negative charge density. In this study we evaluate the hypothesis that secondary electron emission in Saturn's ionosphere might have induced a similar effect. Given the differing interpretations of the LP data, we focus on evaluating the effect on the spacecraft potential to provide complementary understanding of the underlying system state. The dynamics of the spacecraft floating potential in these conditions are also relevant to low energy plasma measurements obtained during any high-velocity flyby missions of planetary and cometary environments. To study the effect of secondary electron and ion emission (SEE and SIE) for the Cassini spacecraft during the Grand Finale, we utilise three-dimensional particle-in-cell (PIC) simulations, as follows: Section 2 introduces the methods of the simulation and describes the input parameters. Section 3 analyses and discusses the results of the simulations as well as the varying of key parameters. Section 4 then concludes by summarising the results and discussing the implications for our understanding of the low energy plasma measurements of the composition of Saturn's ionosphere. ## 2 Method In this study we utilise the three-dimensional Particle-In-Cell simulation code for ElectroMagnetic Spacecraft Environment Simulation (EMSES) developed for a self-consistent analysis of spacecraft-plasma interactions at electron scales (Miyake and Usui, 2009). We embed a toy model of Cassini to scale within a predefined simulation domain. The three-dimensional simulations are run in the spacecraft frame where the inflowing ionospheric plasma consists of drifting Maxwellian velocity distributions. Each species has mass and charge normalized to the proton scale with a real ion-to-electron mass ratio.
The spacecraft is treated as a perfect conductor, and a detailed description of the conductors and the numerics can be found in the previous work (Zhang et al., 2021). We model Cassini at the representative altitude of 2500 km during Rev 292 as in the previous study (Zhang et al., 2021) but, instead of including "dust" particles, we evaluate the hypothesis that SEE and SIE currents present a viable alternative to charged dust currents. To thus compare the effect of dust and secondary electron emissions accurately and independently, we use similar environment parameters as our previous study where the dust is investigated, but we replace the dust populations with secondary ion and electron emissions. We then scale our simulation parameters across multiple orders of magnitude to represent a larger range of Saturn's ionosphere, as explained below. To test the hypothesis, we balance the electron populations to match the positive ion densities derived from the Langmuir Probe (Morooka et al., 2019) and introduce secondary electron and ion particles emitted from the spacecraft due to neutral-spacecraft collisions. The bulk current, I\({}_{total}\), onto Cassini can therefore be broken down into the electron current, I\({}_{electron}\), the ion current, I\({}_{ion}\), and the secondary currents as: \[I_{total}=I_{electron}+I_{ion^{+}}+I^{em}_{SEE}-I^{ret}_{SEE}+I^{em}_{SIE}-I^{ret}_{SIE} \tag{1}\] where, importantly, I\({}^{em}_{SEE}\) and I\({}^{em}_{SIE}\) are the emitted electron and ion currents, respectively, and I\({}^{ret}_{SEE}\) and I\({}^{ret}_{SIE}\) represent those returning to impinge upon Cassini. Equal densities of electrons and positive ions with mass 1.35 u are introduced in the simulations, as inferred from Langmuir Probe observations of the effective positive charge carrier at this altitude (Morooka et al., 2019). Increasing the positive ion mass was also found to have only a small impact on the potential (Zhang et al., 2021, Fig 5a therein). We also consider a cooler ionosphere of 370 \(K\). This temperature change is motivated by ionospheric models (Moore et al., 2008; Moore et al., 2018; Muller-Wodarg et al., 2019; Yelle et al., 2018) indicating ionospheric temperatures lower than the electron temperature inferred from Cassini's Langmuir Probe (Morooka et al., 2019), which is suggested to have been affected by secondaries (Johansson et al., 2022). This choice also results in a smaller electron Debye length than previously considered, requiring a smaller grid width of 5 \(cm\) with a total grid of 256\({}^{3}\) cells and a simulation box measuring 12.8 \(m\) on each side. The emitted SEE and SIE currents are a function of the atmospheric neutral number density, \(n\), elementary charge, \(e\), the spacecraft velocity, v\({}_{sc}\), the yield defined as the number of electrons ejected per incident neutral, \(\gamma\), and the spacecraft's geometric cross-section, \(A\), as: \[I_{SEE/SIE}^{em}=\sum_{\alpha}\,n_{\alpha}\,e\,v_{sc}\,\gamma_{\alpha}\,A, \tag{2}\] where \(\alpha\) represents the neutral species of interest. Due to the lack of laboratory experiments of quantum yields from Cassini's surface materials, the inclusion of emission requires careful consideration and as a result we utilise and vary yields associated with water molecules.
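As a quick numerical cross-check of Equation (2) and of the ambient plasma parameters listed later in Table 1, the short script below (an illustrative sketch of ours; the physical constants are standard and the input values are the yield, neutral density and flyby speed adopted in the following paragraphs) recovers the quoted Debye length, electron plasma period, and the \(\approx\)10 \(\mu A/m^{2}\) emitted current density.

```python
import numpy as np

# Physical constants (SI)
e    = 1.602176634e-19    # elementary charge [C]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
kB   = 1.380649e-23       # Boltzmann constant [J/K]
me   = 9.1093837015e-31   # electron mass [kg]

# Ambient ionospheric plasma (Table 1)
n0 = 505e6                # plasma density [m^-3] (505 cm^-3)
T  = 370.0                # electron and ion temperature [K]

debye  = np.sqrt(eps0 * kB * T / (n0 * e**2))
tau_pe = 2 * np.pi * np.sqrt(eps0 * me / (n0 * e**2))
print(f"Debye length          : {debye*1e2:.2f} cm (Table 1: 5.90 cm)")
print(f"Electron plasma period: {tau_pe*1e6:.2f} us (Table 1: 4.98 us)")

# Emitted SEE current density per unit area (Equation 2 divided by A), water only
n_h2o = 1.5e4 * 1e6       # neutral water density [m^-3] (Moore et al., 2018)
v_sc  = 35e3              # Cassini flyby speed [m/s]
gamma = 0.15              # electron yield per incident neutral (Schmidt & Arends, 1985)
J_see = n_h2o * e * v_sc * gamma
print(f"J_SEE ~ {J_see*1e6:.1f} uA/m^2 (quoted in the text as ~10 uA/m^2)")
```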
Schmidt & Arends (1985) determined yields experimentally from water molecules incident on three materials relevant for Giotto's 70 km/s flyby velocity of Comet 1P/Halley. Here we took the measured value of \(\gamma=0.15\) as the base SEE impact yield in our study. At 2500 km altitude, these values correspond to \(\approx 10\ \mu A/m^{2}\). However, it is necessary to also vary this emission rate across a large range of values when applying this in the actual model, for two reasons. Firstly, there are significant uncertainties in adopting this rate for Cassini's Kapton blankets (Lin & Stultz, 1995) and for its lower 35 km/s flyby velocity compared to the 70 km/s Giotto velocity for which Schmidt & Arends (1985) performed their experiment. Secondly, the neutral density increases exponentially with decreasing altitude in Saturn's ionosphere (Yelle et al., 2018) and, as the secondary emission is directly proportional to the neutral density, varying the emission yield therefore captures this natural variation in Cassini's interaction with Saturn's ionosphere (Moore et al., 2018; Muller-Wodarg et al., 2019). Therefore, by varying the emission yield, \(\gamma\), across multiple orders of magnitude, we qualitatively recover the altitude dependence of the interaction between the Cassini spacecraft and Saturn's ionosphere, from little to no neutral content at the top-side ionosphere down to the densest part of Saturn's ionosphere sampled. In this regard, the sensitivity of the Cassini spacecraft potential to the emission density in different regimes, as we will see below, may also be of use for future high-velocity flyby missions of ionospheric environments. The secondary ion emission yield is anticipated to be between 5 and 40 times lower than the electron yield and we therefore implement this to be 10 % of the electron yield with an emitted ion temperature of 10 eV (Schmidt & Arends, 1985). In our simulations the electron and ion secondaries are emitted as a Maxwellian distribution. We adopt yields due to a neutral water density of \(n\) = 1.5 \(\times\) 10\({}^{4}\) cm\({}^{-3}\) from the model of Moore et al. (2018) due to the significant effect of these species on the Giotto and Vega spacecraft but, as discussed previously, the variation in \(\gamma\) can be viewed as interchangeable with variations in \(n\) and therefore also altitude. Emission from further species such as CH\({}_{4}\) and CO\({}_{2}\) might also contribute, given that kinetic electron emission processes (Sternglass, 1957) will dominate over potential emission ones (Kishinevsky, 1973) in this regime, but is not included at this stage. ## 3 Results & Analysis ### Plasma Interaction Figure 1 shows the global plasma interaction between the Cassini spacecraft and Saturn's ionosphere at 2500 km altitude for the conditions outlined in Table 1. The colour bar depicts the ion and electron densities (primary and secondary); the plasma is moving along the positive y axis, and the magnetic field is approximately parallel to the -y axis. The spacecraft charges here to a positive potential. A plasma wake, a region of depleted density, can be clearly seen trailing behind the spacecraft. Due to the high speed of the plasma flow and the non-zero potential of the spacecraft, the incoming ions and electrons are deflected around the sides of the spacecraft, forming enhanced densities adjacent to the wake.
The probe swept from -4 to +4 V in Saturn's ionosphere and is negatively biased in this simulation, thus appearing as a region devoid of electrons. This biased potential also affects the surrounding plasma that subsequently impinges upon Cassini. Secondary electrons and ions are generated at the impacted surfaces to simulate the incoming neutral impacts, and result in electron and ion concentrations more than double the ambient ionospheric densities. These are generated in the spacecraft frame and are therefore able to diffuse away from the surfaces and upstream. This notably results in a decrease in the incoming ionospheric electron density ahead of the spacecraft. A unique aspect of this plasma regime is that the electron gyro-radius is smaller than the spacecraft while the ion gyro-radius is significantly larger. The emitted ions therefore appear to diffuse uniformly out into space upstream whereas the electrons are tied to the field lines. This notably presents a prediction for when they might be detected, i.e. for a geometry where the Langmuir Probe is magnetically connected to Cassini's main body or antenna dish. In front of the spacecraft, "electron wings" are present, formed by Langmuir waves propagating along the background magnetic field (Miyake et al., 2020), which is oriented predominantly anti-parallel to the plasma flow. This appears to be notably enhanced compared to situations without SEE (Zhang et al., 2021) due to the enhanced electron densities resulting from SEE. Because these wings initially struck the in-flowing boundary condition, the simulation box was expanded upstream, as shown in Figure 1, up to the point where the wing structures no longer intersected the upstream boundary. This verified that this effect produced negligible (\(<\)1 %) differences in the simulation results. ### Secondary Emitted Currents Figure 2 shows the current decomposition for the spacecraft at relatively low secondary emission density (ia-c) and at high secondary emission density (iia-c). These two scenarios represent two distinct regimes of Cassini charging to negative and positive potentials, as these simulations indicate occurs as Cassini descends into Saturn's ionosphere. The ultimate values reached in Figure 2 are therefore of relevance to Cassini at Saturn while the time histories reveal the time-dependent interactions between the currents as the simulations reach steady state. The most interesting result is that there is significant re-absorption of the emitted electrons back onto the spacecraft. In Figure 2(iib), where the spacecraft is charging to a significant positive potential, over 50% of the emitted secondary electrons are later re-absorbed back onto the spacecraft, resulting in the net yield of the SEE emission being less than 40% of the neutral yield one would expect. Even in the case where there is little emission and the potential is negative, there is still significant re-absorption of the SEE electrons due to the space-charge-limited effect, i.e. the Child-Langmuir law. This makes the net effect of the SEE current environment dependent, and gives a diminishing return of SEE density when the spacecraft becomes significantly positive, as can be seen in the later figures. The emitted ions are, however, re-absorbed onto the spacecraft in much lower amounts, due to their larger emission energies and greater momenta.
For the high emission case, even though a majority of the SEE electrons are re-absorbed, their high density means they still dominate the positive currents in the system, as their "net" current is still much larger than the positive ion currents. As a result, the spacecraft's current balance, and hence its potential, is controlled largely by the properties of the SEE currents. In contrast, for the low emission case, as shown in Figure 2(ib), since the density is much smaller, the "net" SEE current is now much smaller and the positive ion current becomes the significant current in the system; hence in this case the spacecraft would not be sensitive to the properties of the SEE currents. \begin{table} \begin{tabular}{|l c|} \hline \multicolumn{2}{|c|}{**Environmental Parameters**} \\ Plasma ion density, \(n_{0}\) & 505 cm\({}^{-3}\) \\ Ion mass, \(m_{i}\) & 1.35 amu \\ Electron temperature, \(T_{e}\) & 0.0318 eV (370 K) \\ Ion temperature, \(T_{i}\) & 0.0318 eV (370 K) \\ Magnetic field, \(\vec{B}\) & [1.48\(\hat{x}\), –14.8\(\hat{y}\), 1.24\(\hat{z}\)] \(\mu\)T \\ Flow velocity, \(\vec{v}_{flow}\) & [–0.25\(\hat{x}\), –32.4\(\hat{y}\), –10.7\(\hat{z}\)] km s\({}^{-1}\) \\ Ion acoustic speed, \(v_{S}\) & 2.47 km s\({}^{-1}\) \\ Debye length, \(\lambda_{D}\) & 5.90 cm \\ Electron gyroperiod, \(\tau_{ge}\) & 4.76 \(\mu\)s \\ Electron plasma period, \(\tau_{pe}\) & 4.98 \(\mu\)s \\ Ion gyroperiod, \(\tau_{gi}\) & 5.94 ms \\ Ion plasma period, \(\tau_{pi}\) & 0.247 ms \\ Electron emission density, \(J_{SEE}\) & 0.005 - 500 \(\mu A/m^{2}\) \\ Electron emission temperature, \(T_{SEE}\) & 2 \(eV\) \\ Ion emission density, \(J_{SIE}\) & 0.0005 - 50 \(\mu A/m^{2}\) \\ Ion emission temperature, \(T_{SIE}\) & 10 \(eV\) \\ \multicolumn{2}{|c|}{**System Parameters**} \\ Grid width, \(\Delta\)r & 5 cm \\ Time step, \(\Delta\)t & 0.033 \(\mu\)s \\ Simulation time, \(t\) & 0.67 ms \\ Particles per cell & 25 \\ \hline \end{tabular} \end{table} Table 1: Environmental and System Simulation Parameters ### Varying the Secondary Emission Figure 3 shows the overall potential changes when one varies the secondary electron and ion emission currents in Equation 2. When the emission density is low (\(<0.1~{}\mu A/m^{2}\)), as expected at the top of the ionosphere, the potential of the spacecraft is virtually unchanged compared to when the secondary electrons and ions are not emitted. On the other hand, when the emission is high, as expected for the higher neutral densities lower in the ionosphere, the spacecraft potential becomes very sensitive to the emitted currents: not only do they successfully bring the spacecraft potential to positive values, they are able to raise it to more than \(3~{}V\) at a \(500~{}\mu A/m^{2}\) SEE current density. This shows secondary currents, with SEE in excess of SIE, are indeed able to raise the spacecraft potential significantly, thus achieving some of the same global effect on the spacecraft as dust currents. We now compare our spacecraft potential results to Cassini measurements. In the absence of SEE and SIE, the simulated spacecraft potential is close to zero (\(-0.08~{}V\)) for the environmental conditions considered, as indeed is anticipated for an object moving through a cool, dense ionosphere. This baseline potential depends upon the electron temperature: if a temperature three times higher is used, a starting negative potential of \(-0.46~{}V\) is obtained. This value is quite similar to the potential obtained in Zhang et al. (2021), where the same temperature, but with dust included, yielded a potential of \(-0.42~{}V\). Cassini at this altitude measured \(-0.12~{}V\), close to the simulated environment with a cold plasma. The variation in secondary emission density, as shown in Figure 3, therefore represents a clear departure from this baseline, with a clear dependence between the spacecraft potential and the neutral density, albeit mediated by the unknowns in the quantum yields. Using the estimated yields and densities outlined in Section 2, Cassini is estimated to experience \(\approx 5~\mu A/m^{2}\) of SEE and SIE at higher altitudes (2400 km) and \(50~\mu A/m^{2}\) around the lowest altitude it experienced (1700 km). This corresponds to the range where SEE and SIE begin to make a significant impact on the potential of Cassini, as shown in Figure 3. Although there are large uncertainties surrounding these estimates, this illustrates the possibility of SEE and SIE becoming a factor in the positive spacecraft potential during Cassini's flybys. The spacecraft potentials reported by the Langmuir Probe (Morooka et al., 2019) show variations from just below -1 V to +0.6 V, using an estimate from the maximum derivative of the current onto the probe.

Figure 1: Simulated plasma densities around the Cassini spacecraft. [a] shows the ionospheric ions, [b] shows the secondary emitted ions, [c] shows the ionospheric electrons, and [d] shows the secondary emitted electrons. The plasma velocity, v\({}_{flow}\), is predominantly along the Y-axis and the magnetic field, B\({}_{0}\), is approximately anti-parallel to this. Specific input parameters can be found in Table 1. The ionospheric electrons are electrostatically displaced upstream, resulting in a combination of ionospheric and secondary electrons surrounding the spacecraft. Electron wings caused by propagating Langmuir waves further modify the plasma ahead of the spacecraft, and the negatively biased Langmuir Probe is visible as a region devoid of plasma. A schematic of the simulated spacecraft geometry can be found in Zhang et al. (2021, Figure 1 therein).

Johansson et al. (2022), however, suggest that the additional consideration of SEE changes the sweep interpretation and identifies higher potentials through determining the change between exponential and linear regions of the electron current in the current-voltage sweeps. The simulations presented herein therefore present constraints on the underlying system state, which can inform the various methods of inferring the spacecraft potential. ### Emitted Electron Temperature As electron emission is anticipated to dominate over ion emission, further attention is given to the properties of the emitted electrons. The temperature of the emitted electrons was implemented at 2 \(eV\), as anticipated by Schmidt & Arends (1985), but this might well be different for the interaction of Cassini's surfaces with Saturn's ionosphere, and Johansson et al. (2022) indeed indicate a lower temperature of 0.5 eV. Figure 4 therefore shows the sensitivity of the SEE current simulated under high and low emission current densities by varying the secondary electron's temperature.
Figure 2: Currents onto the Cassini spacecraft for two distinct regimes: (ia–c) show the case of a negative floating potential induced when I\({}_{SEE}\)= 0.5 \(\mu\)A/m\({}^{2}\) and I\({}_{SIE}\)= 0.05 \(\mu\)A/m\({}^{2}\), and (iia–c) show the case of a positive floating potential induced when I\({}_{SEE}\)= 50 \(\mu\)A/m\({}^{2}\) and I\({}_{SIE}\)= 0.5 \(\mu\)A/m\({}^{2}\). The upper panels (ia) and (iia) show all the positive and negative currents inclusive, panels (ib) and (iib) show the currents associated with the emitted electrons along with the spacecraft potential, and the lower panels (ic) and (iic) show the currents associated with the emitted ions.

When the SEE current's magnitude is low, varying the temperature of the SEE species has almost no impact on the potential value of the spacecraft, and the spacecraft potential does not become positive. This is an expected result as, when the floating potential is negative, the electrons are strongly repelled and the emitted current density remains constant. However, for the much higher 50 \(\mu A/m^{2}\) current density case, raising the temperature of the electrons by a factor of 10 raised the potential of the spacecraft from 1.6 \(V\) to almost 2.5 \(V\). This trend supports the analysis of Figure 2 by showing that the higher the current density of the SEE electrons, the more sensitive the spacecraft's potential is to the emitted electron temperature. This therefore shows that when the spacecraft potential becomes positive, the characteristics of the SEE might provide a better indicator of the characteristics of the neutrals striking the spacecraft. The secondary electron emission temperature inferred by Johansson et al. (2022) is 0.5 \(eV\), notably lower than the laboratory-derived value of Schmidt and Arends (1985). The SEE temperature range explored in Figure 4 covers the value inferred by Johansson et al. (2022). The resultant trend showed that the spacecraft potential varies smoothly with emitted electron temperature when positively charged, and the spacecraft stays positively charged for large emission rates even at very low temperatures of 0.01 \(eV\). Therefore, a sufficient emitted electron current would theoretically drive the Cassini spacecraft to positive potentials in Saturn's ionosphere, as Johansson et al. (2022) infer. This variation with temperature also allows these results to be applicable to future missions where the environment and emitted electron characteristics could be different. ## 4 Discussion & Conclusions In conclusion, we use three-dimensional PIC simulations to demonstrate that SEE theoretically represents a viable mechanism for producing positive spacecraft potentials in Saturn's ionosphere. Specifically, when the amount of SEE is large (\(>\)1 \(\mu A/m^{2}\)), the spacecraft potential was very sensitive to the SEE yield and hence the simulations could produce a smooth transition from negative to positive values as observed during the Grand Finale flybys (Johansson et al., 2022; Morooka et al., 2019). For small SEE yields, however, SEE induced a negligible effect on the simulated Cassini spacecraft's plasma interaction. The simulations show the emitted electrons propagate upstream of the spacecraft along the magnetic field and can then be re-absorbed, which means they might also be detected by the Langmuir Probe and other plasma instruments for specific spacecraft-magnetic field orientations.
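The smooth negative-to-positive transition can be illustrated with a zero-dimensional current-balance estimate. The sketch below is ours and purely illustrative: it balances Equation (1) per unit area with textbook Maxwellian and ram-current expressions, ignoring geometry, magnetization, space charge and SIE, so it is not the PIC model.

```python
import numpy as np
from scipy.optimize import brentq

e, kB, me = 1.602e-19, 1.381e-23, 9.109e-31
n0, Te, v_sc = 505e6, 370.0, 34e3        # Table 1 values (|v_flow| ~ 34 km/s)
T_see = 2.0                              # emitted electron temperature [eV]
kTe = kB * Te / e                        # ambient electron temperature [V]

J_e0 = e * n0 * np.sqrt(8 * kB * Te / (np.pi * me)) / 4   # electron thermal current
J_i  = e * n0 * v_sc                                      # mesothermal ion ram current

def net_current(phi, J_see):
    """Collected ambient electron current minus ion ram current minus the
    escaping part of the emitted SEE current, per unit area, at potential phi.
    The floating potential is the root of this function."""
    if phi <= 0:
        J_e   = J_e0 * np.exp(phi / kTe)       # ambient electrons repelled
        J_esc = J_see                          # all emitted electrons escape
    else:
        J_e   = J_e0 * (1 + phi / kTe)         # attracted electrons (OML-like)
        J_esc = J_see * np.exp(-phi / T_see)   # the well re-absorbs slow emitted electrons
    return J_e - J_i - J_esc

for J_see in [0.0, 0.5e-6, 5e-6, 50e-6, 500e-6]:   # A/m^2
    phi = brentq(net_current, -5.0, 10.0, args=(J_see,))
    print(f"J_SEE = {J_see*1e6:6.1f} uA/m^2 -> floating potential {phi:+.2f} V")
```

With the Table 1 parameters this toy balance stays near zero volts for emission below \(\sim\)1 \(\mu A/m^{2}\) and turns positive for larger emission, qualitatively matching Figure 3, although it underestimates the PIC potentials since re-absorption and geometry are treated crudely.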
Figure 1 highlights how measurements of the ionospheric electrons might also be affected by the production of secondary electron populations. Identifying these re-absorbed electrons could also be useful for identifying SEE populations by other instruments on-board Cassini, as well as helping to calibrate for Langmuir Probe analysis of the ionospheric content. The inference of charged dust populations in Saturn's equatorial ionosphere (Morooka et al., 2019; Wahlund et al., 2018) and the charge depletion of electrons of over 90% is consistent with Langmuir Probe observations at Enceladus (Morooka et al., 2011; Wahlund et al., 2009) and Titan (Agren et al., 2012; Shebanits et al., 2016), where large negatively charged ions and dust had been detected using Cassini's Plasma Spectrometer (Coates et al., 2010, 2007; Desai et al., 2017; Wellbrock et al., 2019; Mihailescu et al., 2020).

Figure 3: Spacecraft potential dependence upon the secondary electron emission current density (yield) with zero ion emission (blue) and where the ion emission is 10 % of the electron emission (red).

At Saturn, the presence of negatively charged ions and dust is explained through the accumulation of in-falling ring particles (Hsu et al., 2018; Mitchell et al., 2018) which undergo electron impact ionisation processes. In a preceding study, Zhang et al. (2021) thus showed that charged dust can also produce a positive spacecraft potential when the positive species are overall more mobile than the negative species, with electron depletions of over 90 %. This study, however, shows this is potentially explained by the phenomenon of neutral-induced electron and ion emission, with electron emission rates dominating over the ion emission rates. The amount of dust outside of the ionosphere is constrained by CDA (Hsu et al., 2018), INCA/CHEMS (Mitchell et al., 2018), and RPWS (Wahlund et al., 2018), and the amount of dust that falls into the equatorial ionosphere from above the D-ring is estimated at around 10-100 cm\({}^{-3}\) at 1500 km, as projected by models to lower altitudes. A major outstanding question therefore remains as to the fate of these inflowing dust populations. When considering only the spacecraft potential, the two effects of SEE and charged dust cannot be distinguished from one another, and it is possible that both contributed to the positive potential observed at Saturn. A definitive question within the SEE debate is what emission yields to use for which neutral species incident upon Cassini, which therefore highlights the urgent need for further laboratory studies. Here we used emission rates typical for metals and those closest to the conditions at hand (Schmidt and Arends, 1985) but these are still not directly representative of Saturn's atmospheric neutrals impacting Cassini as they were designed for the significantly higher, 70 km/s, velocity flybys of the Giotto and Vega missions (Gard and Mikhailov, 1989) compared with Cassini's 35 km/s velocity. In this study we also only considered water molecule densities derived from the ionospheric model of Moore et al. (2018, Figure 2 therein). Observations from the Grand Finale revealed significant populations of methane, ammonia, and organics in addition to the anticipated molecular hydrogen, helium, and water (Hsu et al., 2018; Mitchell et al., 2018; Waite et al., 2018), with NH\({}_{3}\), CO\({}_{2}\), CH\({}_{4}\) and the ambiguous mass 28 detections all having densities the same as or higher than H\({}_{2}\)O.
These should all have energies sufficient to trigger electron emission from Cassini as their energies in the spacecraft frame are expected to exceed the work function of the target surfaces. If these species have similar yields, the SEE current densities should be several factors, if not an order of magnitude, higher, which seems unphysically large. Such elevated SEE currents would drive Cassini to even higher potentials, but as the potential exceeds the peak of the Maxwellian of the emitted electron energies, fewer and fewer would be able to escape the potential well surrounding the spacecraft (Marchand et al., 2014). The emitted ions would, however, easily escape due to electrostatic repulsion. In this scenario the potential would apparently be mediated by the SIE currents, which act to prevent the potential from diverging to extreme positive values. A more accurate Cassini spacecraft model could also be used within further studies. For example, while the spacecraft is generally designed to be conducting, the high-gain antenna is coated in a resistive paint, the properties of which are not included herein. The most important factor for the plasma interaction is, however, the ram-pointing side of Cassini, and so for the first Grand Finale flyby, where Cassini flew with the HGA in ram, this effect would be most important.

Figure 4: Spacecraft potential dependence upon temperature of the emitted electrons.

Given the uncertainties in the measurements of Saturn's ionospheric plasmas and the multitude of parameters that might be varied, we have therefore opted to sweep through the most important parameters of interest. We directly compared our potentials (Figure 3 and Figure 4) and found potentials similar to those inferred for Cassini (Morooka et al., 2019; Johansson et al., 2022), and the simulation results and trends discovered are therefore applicable to studies of spacecraft charging in Saturn's ionosphere and in similar environments. It is also worth noting that if accurate quantum yields were determined for neutral molecules onto Cassini's thermal Kapton blankets, the spacecraft potential might also yield further information on the neutral composition and densities of the giant planet's atmosphere, as the remaining unknowns in Equation 2. In future missions where dust effects are small, accurate quantum yields might therefore be used to infer information on neutral populations from the spacecraft potential and measured incident currents. ## Acknowledgements ZZ acknowledges funding from the Royal Astronomical Society. RTD acknowledges STFC Ernest Rutherford Fellowship ST/W004801/1 and NERC grants NE/P017347/1 and NE/V003062/1. YM and HU acknowledge grant no. 20K04041 from the Japan Society for the Promotion of Science: JSPS, and support from the innovative High-Performance-Computing Infrastructure (HPCI: hp210159) in Japan. OS acknowledges SNSA grant no. Dnr:195/20. FLJ acknowledges a grant from Lennanders stiftelse. This work used the Imperial College High Performance Computing Service (doi: 10.14469/hpc/2232).
2301.01886
Equivariant $K$-theory of Springer Varieties
The aim of this paper is to describe the topological equivariant $K$-ring, in terms of generators and relations, of a Springer variety $\mathcal{F}_{\lambda}$ of type $A$ associated to a nilpotent operator having Jordan canonical form whose block sizes form a weakly decreasing sequence $\lambda=(\lambda_1,\ldots, \lambda_l)$. This parallels the description of the equivariant cohomology ring of $\mathcal{F}_{\lambda}$ due to Abe and Horiguchi and generalizes the description of ordinary topological $K$-ring of $\mathcal{F}_{\lambda}$ due to Sankaran and Uma [18].
Vikraman Uma
2023-01-05T03:07:42Z
http://arxiv.org/abs/2301.01886v2
# Equivariant \(K\)-theory of Springer Varieties ###### Abstract. The aim of this paper is to describe the topological equivariant \(K\)-ring, in terms of generators and relations, of a Springer variety \(\mathcal{F}_{\lambda}\) of type \(A\) associated to a nilpotent operator having Jordan canonical form whose block sizes form a weakly decreasing sequence \(\lambda=(\lambda_{1},\ldots,\lambda_{l})\). This parallels the description of the equivariant cohomology ring of \(\mathcal{F}_{\lambda}\) due to Abe and Horiguchi and generalizes the description of the ordinary topological \(K\)-ring of \(\mathcal{F}_{\lambda}\) due to Sankaran and Uma [18]. Key words and phrases: Springer varieties, flag varieties, equivariant K-theory, equivariant cohomology 2020 Mathematics Subject Classification: 55N15, 14M15, 19L99 ## 1. Introduction Fix a positive integer \(n\) and consider the complete flag variety \(\mathcal{F}(\mathbb{C}^{n})\) (or more briefly \(\mathcal{F}\)) defined as \[\mathcal{F}(\mathbb{C}^{n}):=\{\underline{V}:=(0=V_{0}\subset V_{1}\subset\cdots\subset V_{n}=\mathbb{C}^{n})\mid\dim V_{i}=i\,\,\,\text{for all}\,\,\,i\}.\] Let \(N:\mathbb{C}^{n}\longrightarrow\mathbb{C}^{n}\) denote a nilpotent linear transformation of \(\mathbb{C}^{n}\). The Springer variety of type \(A\) associated to \(N\), denoted by \(\mathcal{F}_{N}\), is the closed subvariety of \(\mathcal{F}\) defined as \[\{\underline{V}\in\mathcal{F}\,\,\,\,\mid\,\,\,\,NV_{i}\subset V_{i-1}\,\,\,\,\text{for all}\,\,\,\,1\leq i\leq n\}.\] The Springer variety \(\mathcal{F}_{N}\) is seen to be the subvariety of \(\mathcal{F}\) fixed by the action of the infinite cyclic group generated by the unipotent element \(U=I_{n}+N\in SL(n,\mathbb{C})\). Moreover, denoting by \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{l})\) the partition of \(n\) where the \(\lambda_{j}\) are the sizes of the diagonal blocks of the Jordan canonical form of \(N\), the variety \(\mathcal{F}_{N}\) depends, up to isomorphism, only on the partition \(\lambda\). This is so because two different choices of nilpotent transformations corresponding to the same partition \(\lambda\) are conjugate in \(GL(n,\mathbb{C})\). For this reason, we assume that \(N\) itself is in the Jordan canonical form: \(N=J_{\lambda}:=\text{diag}(J_{\lambda_{1}},\ldots,J_{\lambda_{l}})\) with \(\lambda_{1}\geq\cdots\geq\lambda_{l}\), and denote the Springer variety \(\mathcal{F}_{N}\) by \(\mathcal{F}_{\lambda}\). (Here \(J_{p}=(a_{i,j})\in M_{p}(\mathbb{C})\) is the matrix where \(a_{i,i+1}=1\), \(1\leq i<p\), and all other entries are zero.) If \(\lambda=(1,\ldots,1)\), then \(N=0\) and we have \(\mathcal{F}_{\lambda}=\mathcal{F}(\mathbb{C}^{n})=\mathcal{F}\). At the other extreme, when \(\lambda=(n)\), \(N\) is a regular nilpotent element and we see that \(\mathcal{F}_{(n)}\) is the one-point variety consisting only of the standard flag \(0=E_{0}\subset E_{1}\subset\cdots\subset E_{n}=\mathbb{C}^{n}\) where \(E_{j}\) is spanned by the standard basis vectors \(e_{1},\ldots,e_{j}\) for \(1\leq j\leq n\). Note that \(\mathcal{F}_{\lambda}\) is stable under the action of the algebraic torus \(T^{l}_{\mathbb{C}}\cong(\mathbb{C}^{*})^{l}\) contained in \(GL(n,\mathbb{C})\) consisting of all diagonal matrices which commute with \(N\). We shall denote by \(T^{l}=(\mathbb{S}^{1})^{l}\) the compact torus contained in \(T^{l}_{\mathbb{C}}\).
Denoting the diagonal subgroup of \(GL(n,\mathbb{C})\) by \(T^{n}_{\mathbb{C}}\), we have that \((t_{1},\ldots,t_{n})\in T^{n}_{\mathbb{C}}\) belongs to \(T^{l}_{\mathbb{C}}\) if and only if \(t_{a_{j}+i}=t_{a_{j}+1}\) for \(1\leq i\leq\lambda_{j+1}\), where \(a_{j}:=\lambda_{1}+\cdots+\lambda_{j}\) for \(0\leq j\leq l-1\), with \(a_{0}:=0\). (For instance, if \(\lambda=(2,1)\) then \(T^{2}_{\mathbb{C}}\) consists of the matrices \(\text{diag}(t_{1},t_{1},t_{2})\).) The variety \(\mathcal{F}_{\lambda}\) was first studied by Springer (see [16], [17] and also [8]). In particular, Springer showed that there is a natural action of the symmetric group \(S_{n}\) on the rational cohomology \(H^{*}(\mathcal{F}_{\lambda};\mathbb{Q})\) which is compatible with the standard action of \(S_{n}\) on \(H^{*}(\mathcal{F},\mathbb{Q})\). Moreover, the restriction homomorphism \(H^{*}(\mathcal{F};\mathbb{Z})\longrightarrow H^{*}(\mathcal{F}_{\lambda};\mathbb{Z})\), induced by the inclusion \(\mathcal{F}_{\lambda}{\hookrightarrow}\mathcal{F}\), is surjective (see [8]). The variety \(\mathcal{F}_{\lambda}\) is not irreducible in general, but it is equidimensional. The irreducible components of \(\mathcal{F}_{\lambda}\) are naturally labelled by the set of standard tableaux of shape \(\lambda\). See [15]. Under the \(S_{n}\)-action, the \(S_{n}\)-module \(H^{2\dim\mathcal{F}_{\lambda}}(\mathcal{F}_{\lambda};\mathbb{Q})\) is isomorphic to the irreducible representation \(M_{\lambda}\) of \(S_{n}\) induced from the identity representation of the subgroup \(S_{\lambda}=S_{\lambda_{1}}\times S_{\lambda_{2}}\cdots\times S_{\lambda_{l}}\subset S_{n}\). See [12]. De Concini and Procesi [6] gave a description of \(H^{*}(\mathcal{F}_{\lambda};\mathbb{C})\) as the coordinate ring of an (unreduced) variety over \(\mathbb{C}\), which we now describe. Let \(\lambda^{\vee}\) denote the partition dual to \(\lambda\), let \(\mathfrak{t}=Lie(T^{n}_{\mathbb{C}})\subset\mathfrak{gl}(n,\mathbb{C})=M_{n}(\mathbb{C})\), and let \(\overline{O}_{\lambda}\subset M_{n}(\mathbb{C})\) denote the closure of the orbit of \(J_{\lambda}\) under the adjoint action of \(GL(n,\mathbb{C})\). Consider the coordinate ring \(\mathbb{C}[\mathfrak{t}\cap\overline{O}_{\lambda^{\vee}}]\) of the (non-reduced) scheme \(\mathfrak{t}\cap\overline{O}_{\lambda^{\vee}}\) (the scheme-theoretic intersection). De Concini and Procesi showed that \(H^{*}(\mathcal{F}_{\lambda};\mathbb{C})\) is isomorphic to the algebra \(\mathbb{C}[\mathfrak{t}\cap\overline{O}_{\lambda^{\vee}}]\) (see [6]). Tanisaki [19] described \(H^{*}(\mathcal{F}_{\lambda};\mathbb{C})\) as a quotient of a polynomial ring over \(\mathbb{C}\) by an ideal, which has come to be known as the Tanisaki ideal. Tanisaki's description in fact yields the integral cohomology ring of \(\mathcal{F}_{\lambda}\). Recently, the \(T^{l}\)-equivariant cohomology algebra \(H^{*}_{T^{l}}(\mathcal{F}_{\lambda};\mathbb{Z})\) has been described by H. Abe and T. Horiguchi. It turns out that \(H^{*}_{T^{l}}(\mathcal{F}_{\lambda};\mathbb{Z})\) is the quotient of a polynomial algebra over \(H^{*}_{T^{l}}(pt;\mathbb{Z})=H^{*}(BT^{l};\mathbb{Z})\) modulo an ideal, which is a natural generalization of the Tanisaki ideal. This presentation recovers the presentation for the ordinary integral cohomology ring via the forgetful map \(H^{*}_{T^{l}}(\mathcal{F}_{\lambda};\mathbb{Z})\longrightarrow H^{*}(\mathcal{F}_{\lambda};\mathbb{Z})\) (see [1, Theorem 4.1]). We denote by \(\mathcal{L}_{i}\) the canonical line bundle over \(\mathcal{F}(\mathbb{C}^{n})\) whose fibre over a flag \(\underline{V}\) is the vector space \(V_{i}/V_{i-1}\), \(1\leq i\leq n\).
Let \(L_{i}=\mathcal{L}_{i}|_{\mathcal{F}_{\lambda}}\). Recall from [19] that the first Chern classes of \(L_{i}\), \(1\leq i\leq n\), generate \(H^{*}(\mathcal{F}_{\lambda};\mathbb{Z})\). In [18], Sankaran and the author described the topological \(K\)-ring of \(\mathcal{F}_{\lambda}\) in terms of generators and relations. The generators of \(K(\mathcal{F}_{\lambda})\) are the classes \([L_{i}]\), \(1\leq i\leq n\). The relations are obtained by interpreting the relations in the cohomology ring in terms of the classes of the generating line bundles, using suitable gamma operations in \(K\)-theory (see [18, Proposition 4.1]). It can be seen that \(L_{i}\), \(1\leq i\leq n\), are in fact \(T^{l}\)-equivariant line bundles on \(\mathcal{F}_{\lambda}\), since they are the restrictions of the tautological line bundles \(\mathcal{L}_{i}\), \(1\leq i\leq n\), on \(\mathcal{F}\) which are \(T^{n}\)-equivariant. In this article we study the \(T^{l}\)-equivariant topological \(K\)-ring of \(\mathcal{F}_{\lambda}\). In our main theorem we give a presentation for \(K^{*}_{T^{l}}(\mathcal{F}_{\lambda})\) as an \(R(T^{l})\)-algebra in terms of generators and relations. More precisely, we show that \(K^{0}_{T^{l}}(\mathcal{F}_{\lambda})\) is generated by the classes \([L_{i}]_{T^{l}}\) of the \(T^{l}\)-equivariant line bundles \(L_{i}\) for \(1\leq i\leq n\). We further determine the ideal of relations in \(K_{T^{l}}(\mathcal{F}_{\lambda})\) as the equivariant analogue of the \(K\)-theoretic Tanisaki ideal defined in [18]. Before stating the main result, we need to set up the following notation. A non-increasing sequence \(\lambda=(\lambda_{1},\ldots,\lambda_{l})\) of positive integers with \(\sum_{1\leq j\leq l}\lambda_{j}=n\) will be identified with the partition \((\lambda_{1},\ldots,\lambda_{n})\) where \(\lambda_{j}=0\) for \(j>l\). For \(1\leq s\leq n\) let \[p_{\lambda}(s):=\lambda_{n-s+1}+\cdots+\lambda_{n}.\] Recall that \(R(T^{l})=K_{T^{l}}(pt)=\mathbb{Z}[u_{i}^{\pm 1}\mid 1\leq i\leq l]\) where \(u_{i}\), \(1\leq i\leq l\), are the characters of \(T^{l}\) corresponding to the coordinate projections. For \(1\leq s\leq n\) set \(q:=p_{\lambda^{\vee}}(s)\). Let \(\mathcal{R}=R(T^{l})[x_{1},x_{2},\ldots,x_{n}]\) and let \(\mathcal{I}_{\lambda}\) denote the ideal in \(\mathcal{R}\) generated by the elements \[\sum_{0\leq k\leq d}(-1)^{d-k}e_{k}(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{s}})h_{d-k}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(s+1-d)}) \tag{1.1}\] for \(1\leq s\leq n\), \(1\leq i_{1}<\cdots<i_{s}\leq n\) and \(d\geq s+1-q\). Here \(e_{k}\) stands for the elementary symmetric function, \(h_{d-k}\) stands for the complete symmetric function (see [12]) and \(\varphi_{\lambda}\) is the map \([n]\longrightarrow[l]\) defined by the condition \[(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(n)})=(\underbrace{u_{1},\ldots,u_{1}}_{\lambda_{1}-\lambda_{2}},\underbrace{u_{1},u_{2},\ldots,u_{1},u_{2}}_{\lambda_{2}-\lambda_{3}},\ldots,\underbrace{u_{1},\ldots,u_{l},\ldots,u_{1},\ldots,u_{l}}_{\lambda_{l}-\lambda_{l+1}}) \tag{1.2}\] as ordered sequences, where for each \(1\leq r\leq l\) the \(r\)th sector of the right hand side consists of \((u_{1},u_{2},\ldots,u_{r})\) repeated \((\lambda_{r}-\lambda_{r+1})\)-times. Here we let \(\lambda_{l+1}=0\) (see [1, (4.2)]). For instance, for \(n=3\) and \(\lambda=(2,1)\) one has \((u_{\varphi_{\lambda}(1)},u_{\varphi_{\lambda}(2)},u_{\varphi_{\lambda}(3)})=(u_{1},u_{1},u_{2})\). We now state our main theorem.
**Theorem 1.1**.: _With the above notations, let_ \[\Psi_{\lambda}:R(T^{l})[x_{1},\ldots,x_{n}]\longrightarrow K^{0}_{T^{l}}(\mathcal{F}_{\lambda})\] _be the ring homomorphism defined by \(\Psi_{\lambda}(x_{j})=[L_{j}]_{T^{l}}\) for \(1\leq j\leq n\). Then \(\Psi_{\lambda}\) is surjective and \(\text{ker}(\Psi_{\lambda})=\mathcal{I}_{\lambda}\)._ We now briefly explain our method of proof of the main theorem and give an outline of the paper. Let \[\binom{n}{\lambda}:=\frac{n!}{\lambda_{1}!\cdots\lambda_{n}!}.\] Using the fact that \(\mathcal{F}_{\lambda}\) has a \(T^{l}\)-stable algebraic cell decomposition with \(\binom{n}{\lambda}\) cells (see [15], [20]), we show in Theorem 5.1 that \(K^{0}_{T^{l}}(\mathcal{F}_{\lambda})\) is a free \(R(T^{l})\)-module of rank \(\binom{n}{\lambda}\). Moreover, \(K^{1}_{T^{l}}(\mathcal{F}_{\lambda})=0\). We note that \(\mathcal{F}_{\lambda}\) admits a \(T^{l}\)-stable filtration by closed subvarieties arising from the \(T^{l}\)-equivariant algebraic cell decomposition. Using this fact and an induction argument, we show in Theorem 5.3 that the pull-back map \(K^{*}_{T^{n}}(\mathcal{F})\longrightarrow K^{*}_{T^{l}}(\mathcal{F}_{\lambda})\) is surjective. For the special case when \(\lambda=(1,1,\ldots,1)\), the structure of \(K_{T^{n}}(\mathcal{F}_{\lambda})=K_{T^{n}}(\mathcal{F})\) is well known by the results in [13] and [11]. We recall this structure in Section 4. In Theorem 4.1 we give a presentation for \(K_{T^{n}}(\mathcal{F})\) as an \(R(T^{n})\)-algebra. In particular, we see that \(K_{T^{n}}(\mathcal{F})\) admits an \(S_{n}\)-action. In Proposition 5.5 we prove that there exists a natural \(S_{n}\)-action on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) such that the pull-back map \(K^{*}_{T^{n}}(\mathcal{F})\longrightarrow K^{*}_{T^{l}}(\mathcal{F}_{\lambda})\) is \(S_{n}\)-equivariant. The methods used in proving this result are similar to those used by Abe and Horiguchi for equivariant cohomology, namely restricting to the \(T^{n}\)- and \(T^{l}\)-fixed points of \(\mathcal{F}\) and \(\mathcal{F}_{\lambda}\) respectively. Next, using methods similar to those for the ordinary \(K\)-ring due to Sankaran and the author in [18, Proposition 4.1] together with \(\lambda\)-operations in equivariant \(K\)-theory, we show that the relations defining the ideal \(\mathcal{I}_{\lambda}\) hold in \(K^{*}_{T^{l}}(\mathcal{F}_{\lambda})\). This, in particular, shows that \(\text{ker}(\Psi_{\lambda})\) contains \(\mathcal{I}_{\lambda}\). Thus we have a well defined surjective ring homomorphism from \(\mathcal{R}/\mathcal{I}_{\lambda}\) to \(K^{*}_{T^{l}}(\mathcal{F}_{\lambda})\). Now, by an argument similar to [1, Lemma 5.2], we find generators for \(\mathcal{R}/\mathcal{I}_{\lambda}\) as an \(R(T^{l})\)-module. Thus the induced map \(\mathcal{R}/\mathcal{I}_{\lambda}\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda})\) is a surjective ring homomorphism between two free modules of the same rank over \(R(T^{l})\), which is an integral domain. Hence it will follow that \(\Psi_{\lambda}\) induces an isomorphism. This will prove our main theorem. Finally, through the forgetful map \(K_{T^{l}}(\mathcal{F}_{\lambda})\longrightarrow K(\mathcal{F}_{\lambda})\) we recover the presentation of the ordinary \(K\)-ring \(K(\mathcal{F}_{\lambda})\) given in [18, Theorem 4.2]. ## 2. Equivariant topological \(K\)-theory of cellular varieties For the definition and basic properties of the equivariant topological \(K\)-ring we refer to [14].
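To fix ideas before the general setup, the simplest instance (our own illustration, not taken from [14]) is \(X=\mathbb{P}^{1}\) with the standard \(\mathbb{C}^{*}\)-action: the filtration is \(X_{2}=\{\infty\}\subseteq X_{1}=\mathbb{P}^{1}\), with cells \(Z_{1}=\mathbb{P}^{1}\setminus\{\infty\}\simeq\mathbb{C}\) and \(Z_{2}=\{\infty\}\), and fixed points \(x_{1}=0\), \(x_{2}=\infty\). Proposition 2.1 below then asserts that \(K^{0}_{T}(\mathbb{P}^{1})\) is a free module of rank \(2\) over \(R(T)=\mathbb{Z}[t^{\pm 1}]\) and that \(K^{1}_{T}(\mathbb{P}^{1})=0\).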
Let \(X\) be a \(T_{\mathbb{C}}\)-variety for a torus \(T_{\mathbb{C}}\simeq(\mathbb{C}^{*})^{k}\) with a \(T_{\mathbb{C}}\)-stable algebraic cell decomposition. Let \[X_{m}\subseteq X_{m-1}\subseteq\cdots\subseteq X_{2}\subseteq X_{1}=X\] be the associated \(T_{\mathbb{C}}\)-stable filtration so that \(Z_{i}:=X_{i}\setminus X_{i+1}\simeq\mathbb{C}^{k_{i}}\) for \(1\leq i\leq m\) (where \(X_{m+1}=\emptyset\)). Here \(Z_{i}\simeq\mathbb{C}^{k_{i}}\), \(1\leq i\leq m\), are the distinct algebraic cells with \(T_{\mathbb{C}}\)-fixed points \(x_{i}\) and \(X_{i}=\bigsqcup_{j\geq i}Z_{j}\). In particular, \(Z_{m}=X_{m}=\{x_{m}\}\). We consider the restricted action of the maximal compact subgroup \(T\simeq(S^{1})^{k}\) of \(T_{\mathbb{C}}\) on \(X\) as well as on \(X_{i}\) and \(Z_{i}\) for \(1\leq i\leq m\). **Proposition 2.1**.: _The ring \(K_{T}^{0}(X)\) is a free \(R(T)\)-module of rank \(m\), which is the number of cells. Further, \(K_{T}^{1}(X)=0\)._ Proof.: By [14, Proposition 2.6, Definition 2.7, Definition 2.8, Proposition 3.5] it follows that we have a long exact sequence of \(T\)-equivariant \(K\)-groups which is infinite in both directions: \[\cdots\to K_{T}^{-q}(X_{i},X_{i+1})\to K_{T}^{-q}(X_{i})\to K_{T}^{-q}(X_{i+1})\to K_{T}^{-q+1}(X_{i},X_{i+1})\to\cdots \tag{2.3}\] for \(1\leq i\leq m\) and \(q\in\mathbb{Z}\). Moreover, by [14, Proposition 2.9] and [14, Proposition 3.5] we have \[K_{T}^{-q}(X_{i},X_{i+1})=K_{T}^{-q}(X_{i}\setminus X_{i+1})=K_{T}^{-q}(\mathbb{C}^{k_{i}})\simeq K_{T}^{-q}(x_{i})=\widetilde{K}_{T}^{-q}(x_{i}^{+})=\widetilde{K}_{T}^{0}(S^{q}(x_{i}^{+}))=R(T)\otimes\widetilde{K}^{0}(S^{q}(x_{i}^{+})) \tag{2.4}\] for \(1\leq i\leq m\) (see [14, Proposition 2.2]). Thus when \(q\) is even \(K_{T}^{-q}(X_{i},X_{i+1})=R(T)\) and when \(q\) is odd \(K_{T}^{-q}(X_{i},X_{i+1})=0\). Here \(x_{i}^{+}\) is the sum of the \(T\)-fixed point \(x_{i}\) and a base point \(\mathfrak{o}\) which is also \(T\)-fixed (see [14, p. 135]). Alternatively, we can also identify \(K_{T}^{-q}(X_{i},X_{i+1})=\widetilde{K}_{T}^{-q}(X_{i}/X_{i+1})\) where \(X_{i}/X_{i+1}\simeq S^{2k_{i}}\). For any integer \(q\) we have \(\widetilde{K}_{T}^{-q}(S^{2k_{i}})\simeq\widetilde{K}_{T}^{0}(S^{q+2k_{i}})\) [14, p. 136]. Thus when \(q\) is even \(K_{T}^{-q}(X_{i},X_{i+1})\simeq\widetilde{K}_{T}^{0}(S^{q+2k_{i}})=R(T)\otimes\widetilde{K}^{0}(S^{q+2k_{i}})=R(T)\) since \(q+2k_{i}\) is even. Further, when \(q\) is odd \(K_{T}^{-q}(X_{i},X_{i+1})=\widetilde{K}_{T}^{0}(S^{q+2k_{i}})=R(T)\otimes\widetilde{K}^{0}(S^{q+2k_{i}})=0\) since \(q+2k_{i}\) is odd (see [14, Section 2, Proposition 3.5, Proposition 2.2] and [2]). Moreover, since \(X_{m}=Z_{m}=\{x_{m}\}\) and \(X_{m+1}=\emptyset\) we have \(K_{T}^{0}(X_{m})=R(T)\) and \(K_{T}^{-1}(X_{m})=K_{T}^{-1}(x_{m})=\widetilde{K}_{T}^{-1}(x_{m}^{+})=\widetilde{K}_{T}^{0}(S^{1}(x_{m}^{+}))=0\), where \(x_{m}^{+}=x_{m}\sqcup\mathfrak{o}\) and both \(x_{m}\) and the base point \(\mathfrak{o}\) are \(T\)-fixed (see [14, p. 135]). Now, by decreasing induction on \(i\), suppose that \(K_{T}^{0}(X_{i+1})\) is a free \(R(T)\)-module of rank \(m-i\) and \(K_{T}^{-1}(X_{i+1})=0\). We can start the induction since \(K_{T}^{0}(X_{m})=R(T)\) and \(K_{T}^{-1}(X_{m})=K_{T}^{-1}(x_{m})=0\). It then follows from (2.3) that we get the following split short exact sequence of \(R(T)\)-modules \[0\longrightarrow K_{T}^{0}(X_{i},X_{i+1})\longrightarrow K_{T}^{0}(X_{i})\longrightarrow K_{T}^{0}(X_{i+1})\longrightarrow 0 \tag{2.8}\] for \(1\leq i\leq m\).
Thus we have the following decomposition: \[K_{T}^{0}(X_{i})=K_{T}^{0}(X_{i+1})\bigoplus K_{T}^{0}(X_{i},X_{i+1}). \tag{2.9}\] Hence it follows that \(K_{T}^{0}(X_{i})\) is a free \(R(T)\)-module of rank \(m-i+1\). By induction we conclude that \(K_{T}^{0}(X)\) is a free \(R(T)\)-module of rank \(m\), where \(X=X_{1}\). Since \(K_{T}^{-1}(X_{i},X_{i+1})=0\) as shown above and \(K_{T}^{-1}(X_{i+1})=0\) by the induction assumption, it also follows from (2.3) that \(K_{T}^{-1}(X_{i})=0\). Therefore \(K_{T}^{-1}(X)=0\) by induction on \(i\), since \(X=X_{1}\). Henceforth we shall denote \(K_{T}^{0}\) by \(K_{T}\). ## 3. The equivariant cohomology of \(\mathcal{F}_{\lambda}\) The \(T^{l}\)-equivariant integral cohomology ring of the Springer variety \(\mathcal{F}_{\lambda}\) has been described by Abe and Horiguchi [1] in terms of generators and relations, in a way that generalizes the classical description of \(H^{*}_{T^{n}}(\mathcal{F};\mathbb{Z})\). We shall recall below the presentation. We need the following notation. **Definition 3.1**.: _Let \([n]:=\{1,2,\ldots,n\}\). We define the function \(p_{\lambda}:[n]\to\{0,1,\ldots,n\}\) associated to a partition \(\lambda\) of \(n\) as follows:_ \[p_{\lambda}(s)=\lambda_{n-s+1}+\ldots+\lambda_{n},\ 1\leq s\leq n. \tag{3.10}\] Thus \(p_{\lambda}\) is an increasing function of \(s\). For example, \(p_{\lambda}(n)=n\) and \(p_{\lambda}(s)=0\) if \(s\leq n-l\). The function \(p_{\lambda^{\vee}}\) associated to the dual partition \(\lambda^{\vee}\) is more relevant for us. Recall that the dual partition \(\lambda^{\vee}\) is defined as \(\lambda^{\vee}=(\eta_{1},\ldots,\eta_{n})\) where \(\eta_{j}=\#\{i\ |\ \lambda_{i}\geq j\}\). Writing \(\lambda\) as \(1^{a_{1}}2^{a_{2}}\cdots n^{a_{n}}\), where \(a_{j}\) is the number of times \(j\) occurs in \(\lambda\), we have \(\eta_{j}=a_{j}+a_{j+1}+\cdots+a_{n}\) for all \(j\geq 1\). We illustrate in a small example: **Example 3.2**.: Let \(n=20\) and \(\lambda=(5,4,4,2,2,2,1)\). Then \(\lambda^{\vee}=(7,6,3,3,1)\) and \(p_{\lambda^{\vee}}(s)=0\) for \(1\leq s\leq 15\), \(p_{\lambda^{\vee}}(16)=1\), \(p_{\lambda^{\vee}}(17)=4,\ p_{\lambda^{\vee}}(18)=7,\ p_{\lambda^{\vee}}(19)=13,\ p_{\lambda^{\vee}}(20)=20\). **Definition 3.3**.: _Let \(\mathcal{S}=H^{*}(BT^{l})[y_{1},\ldots,y_{n}]\) be the polynomial ring in the \(n\) indeterminates \(y_{1},\ldots,y_{n}\) over \(H^{*}(BT^{l})=\mathbb{Z}[u_{1},\ldots,u_{l}]\), where \(u_{i}\), \(1\leq i\leq l\), are the characters of \(T^{l}\) corresponding to the canonical coordinate projections. The \(T^{l}\)-equivariant analogue of the Tanisaki ideal is the ideal \(\mathcal{J}_{\lambda}\subset\mathcal{S}\) generated by the following elements:_ \[\sum_{k=0}^{d}(-1)^{d-k}e_{k}(y_{i_{1}},\ldots,y_{i_{s}})\cdot h_{d-k}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(s+1-d)}),\ \text{for}\ d\geq s+1-p_{\lambda^{\vee}}(s)\] _where \(1\leq i_{1}<\cdots<i_{s}\leq n,\ 1\leq s\leq n\). Here \(\varphi_{\lambda}\) is as defined above in the introduction. (See [1, (4.1), (4.2)].)_ **Theorem 3.4**.: _([1, Theorem 4.1]) Let \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{l})\) be a partition of \(n\). Then one has an isomorphism of rings_ \[H^{*}_{T^{l}}(\mathcal{F}_{\lambda};\mathbb{Z})\cong\mathcal{S}/\mathcal{J}_{\lambda}\] _where \(c_{1}^{T^{l}}(L_{j})\) corresponds to \(y_{j}+\mathcal{J}_{\lambda}\), \(1\leq j\leq n\).
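Since the combinatorics of \(\lambda^{\vee}\) and \(p_{\lambda^{\vee}}\) is purely mechanical, Example 3.2 can be checked with a few lines of code. The following Python sketch is our own illustration (the function names are ours, not from the paper):

```python
def dual_partition(lam, n):
    # eta_j = #{ i : lam_i >= j } for j = 1, ..., n
    return [sum(1 for part in lam if part >= j) for j in range(1, n + 1)]

def p(mu, n, s):
    # p_mu(s) = mu_{n-s+1} + ... + mu_n, padding mu with zeros up to length n
    padded = list(mu) + [0] * (n - len(mu))
    return sum(padded[n - s:])

lam = [5, 4, 4, 2, 2, 2, 1]                      # Example 3.2, n = 20
n = sum(lam)
dual = dual_partition(lam, n)
print([e for e in dual if e])                    # [7, 6, 3, 3, 1]
print([p(dual, n, s) for s in range(16, 21)])    # [1, 4, 7, 13, 20]
```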
Moreover, the inclusion \(\iota_{\lambda}:\mathcal{F}_{\lambda}\to\mathcal{F}\) induces a surjection \(\iota_{\lambda}^{*}:H^{*}_{T^{n}}(\mathcal{F})\to H^{*}_{T^{l}}(\mathcal{F}_{\lambda})\)._ The above presentation is the equivariant analogue of the presentation of \(H^{*}(\mathcal{F}_{\lambda};\mathbb{Z})\) due to Tanisaki [19]. ## 4. \(T^{n}\)-equivariant \(K\)-ring of the flag variety \(\mathcal{F}\) When \(l=n\) and \(\lambda=(1,\ldots,1)\), then \(\mathcal{F}_{\lambda}=\mathcal{F}\) is the full flag variety. Let \(\mathcal{V}_{j}\) be the subbundle of the trivial vector bundle \(\mathcal{F}\times\mathbb{C}^{n}\) whose fibre over the flag \(\underline{V}=(V_{i})\in\mathcal{F}\) is just \(V_{j}\). Denote by \(\mathcal{L}_{i}\) the \(T^{n}\)-equivariant line bundle \(\mathcal{V}_{i}/\mathcal{V}_{i-1}\), \(1\leq i\leq n\), on \(\mathcal{F}\). One has an exact sequence of algebraic vector bundles \(0\to\mathcal{V}_{s-1}\hookrightarrow\mathcal{V}_{s}\to\mathcal{L}_{s}\to 0\), which leads to a \(T^{n}\)-equivariant isomorphism of _complex_ vector bundles for \(1\leq s\leq n\): \[\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{s}\cong\mathcal{V}_{s}. \tag{4.11}\] Since \(\mathcal{V}_{n}\) is the trivial vector bundle of rank \(n\), we have \(\mathcal{V}_{n}=\epsilon_{1}\oplus\cdots\oplus\epsilon_{n}\), where the action of \(T^{n}\) on \(\epsilon_{i}\) is via the character corresponding to the \(i\)th canonical projection \(T^{n}\longrightarrow S^{1}\). We have \[\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{n}\cong\epsilon_{1}\oplus\cdots\oplus\epsilon_{n}. \tag{4.12}\] We shall denote the \(T^{n}\)-equivariant class of the line bundle \(\mathcal{L}_{i}\) in \(K_{T^{n}}(\mathcal{F})\) by \([\mathcal{L}_{i}]_{T^{n}}\in K^{0}_{T^{n}}(\mathcal{F})\), \(1\leq i\leq n\). Recall that the \(T^{n}\)-equivariant \(K\)-ring of \(\mathcal{F}\) is well studied classically (see [11] and [13]). Also the presentations for the ordinary \(K\)-ring of the flag variety and of flag bundles are classical and can be found in [10, SS3, Chapter IV]. We further have the following presentation for the \(T^{n}\)-equivariant \(K\)-ring of \(\mathcal{F}\), which is analogous to the presentation of a flag bundle as an algebra over the topological \(K\)-ring of the base in [10]. Let \(t_{1},\ldots,t_{n}\) denote the characters of \(T^{n}\) corresponding to the coordinate projections. Thus \(R(T^{n})=\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\). **Theorem 4.1**.: _Let \(J^{\prime}\) be the ideal in \(R(T^{n})[x_{1},\ldots,x_{n}]\) generated by the elements_ \[e_{k}(x_{1},\ldots,x_{n})-e_{k}(t_{1},\ldots,t_{n}) \tag{4.13}\] _for \(1\leq k\leq n\). We have the following isomorphism of \(R(T^{n})\)-algebras_ \[\mathcal{R}_{\mathcal{F}}:=R(T^{n})[x_{1},\ldots,x_{n}]/J^{\prime}\simeq K_{T^{n}}(\mathcal{F}) \tag{4.14}\] _where \(x_{i}\) maps to \([\mathcal{L}_{i}]_{T^{n}}\) for \(1\leq i\leq n\). In particular, \(K_{T^{n}}(\mathcal{F})\) is generated by the classes \([\mathcal{L}_{i}]_{T^{n}}\), \(1\leq i\leq n\), as an \(R(T^{n})\)-algebra._ **Proof:** Let \(U_{n}\) denote the unitary subgroup of \(GL(n,\mathbb{C})\). It acts transitively on the flag variety \(\mathcal{F}\), and the stabilizer of the standard full flag is the compact torus \(T^{n}\), identifying \(\mathcal{F}\) with the homogeneous space \(U_{n}/T^{n}\).
Further, the line bundle \(\mathcal{L}_{i}\simeq\mathcal{V}_{i}/\mathcal{V}_{i-1}\) defined above can be identified with the line bundle \[U_{n}\times_{T^{n}}\mathbb{C}_{i}\] on \(\mathcal{F}=U_{n}/T^{n}\) associated to the character \(t_{i}:T^{n}\longrightarrow S^{1}\) corresponding to the \(i\)th coordinate projection for \(1\leq i\leq n\). Recall from [13] and [11] that we have an isomorphism \[R(T^{n})\otimes_{R(T^{n})^{S_{n}}}R(T^{n})\simeq K^{0}_{T^{n}}(\mathcal{F}) \tag{4.15}\] which is defined on the first factor by sending \([V]\in R(T^{n})\) to the class of the trivial \(T^{n}\)-equivariant vector bundle \([\mathcal{F}\times V]\) on \(\mathcal{F}\) (this is induced by pull back via the constant map \(\mathcal{F}\longrightarrow pt\)) and on the second factor by sending \([W]\in R(T^{n})\) to the class of the associated vector bundle \(U_{n}\times_{T^{n}}W\). In particular, it maps \(t_{i}\) to \([\mathcal{L}_{i}]_{T^{n}}\) in the second factor. Note that if we compose the second map with the forgetful homomorphism \(K^{0}_{T^{n}}(\mathcal{F})\longrightarrow K^{0}(\mathcal{F})\) we get the classical Atiyah-Hirzebruch homomorphism \(R(T^{n})\longrightarrow K^{0}(\mathcal{F})\). Further, \(R(T^{n})^{S_{n}}\) is identified with \(R(U_{n})\) (see [9]) and \(R(T^{n})\) on the second factor in the tensor product is identified with \(K^{0}_{U_{n}}(\mathcal{F})\). Recall that \(S_{n}\) acts on \(R(T^{n})=\mathbb{Z}[t_{i}^{\pm 1}:1\leq i\leq n]\) by \(\sigma(t_{i}):=t_{\sigma(i)}\) for every \(1\leq i\leq n\) and \(\sigma\in S_{n}\). Thus \(R(T^{n})^{S_{n}}\) is \(\mathbb{Z}[t_{1},\dots,t_{n}]^{S_{n}}\) localized at the element \(t_{1}\cdots t_{n}\), which is invariant under \(S_{n}\). Hence \(R(U_{n})=R(T^{n})^{S_{n}}\) is generated as a ring by the elements \(e_{1}(t_{1},\dots,t_{n}),\dots,e_{n}(t_{1},\dots,t_{n}),1/(t_{1}\cdots t_{n})\), where \(e_{k}(t_{1},\dots,t_{n})\) denotes the \(k\)th elementary symmetric polynomial for \(0\leq k\leq n\). In particular, note that \(t_{1}\cdots t_{n}=e_{n}(t_{1},\dots,t_{n})\) and \(e_{0}(t_{1},\dots,t_{n})=1\) (see [9]). We have \(R(T^{n})\otimes_{R(T^{n})^{S_{n}}}R(T^{n})\simeq(R(T^{n})\otimes R(T^{n}))/J\), where \(J\) is the ideal generated by elements of the form \(a\otimes 1-1\otimes a\) with \(a\in R(T^{n})^{S_{n}}\). It is enough to consider \(a=e_{k}(t_{1},\dots,t_{n})\), \(1\leq k\leq n\), and \(a=e_{n}(t_{1},\dots,t_{n})^{-1}\). Thus the ideal \(J\) is generated by \[e_{k}(t_{1},\dots,t_{n})\otimes 1-1\otimes e_{k}(t_{1},\dots,t_{n})\] and \[(t_{1}^{-1}\cdots t_{n}^{-1})\otimes 1-1\otimes e_{n}(t_{1},\dots,t_{n})^{-1}.\] From the above relations and (4.15) we can see that in the ring \(K^{0}_{T^{n}}(\mathcal{F})\) we have \[[\mathcal{L}^{\vee}_{i}]_{T^{n}}=(t_{1}^{-1}\cdots t_{n}^{-1})\cdot\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}[\mathcal{L}_{j}]_{T^{n}}\] for \(1\leq i\leq n\). Thus the classes \([\mathcal{L}_{i}]_{T^{n}}\) for \(1\leq i\leq n\) generate \(K_{T^{n}}(\mathcal{F})\) as an \(R(T^{n})\)-algebra. Consider the homomorphism \(\varphi:R(T^{n})[x_{1},\dots,x_{n}]\longrightarrow K_{T^{n}}(\mathcal{F})\) which sends \(x_{i}\mapsto[\mathcal{L}_{i}]_{T^{n}}\) for \(1\leq i\leq n\), and \(t_{i}^{\pm 1}\) to the classes of the trivial line bundles on \(\mathcal{F}\) associated to the characters \(t_{i}^{\pm 1}\) for \(1\leq i\leq n\). Let \(J^{\prime}\) be the ideal in \(R(T^{n})[x_{1},\dots,x_{n}]\) generated by the elements \(e_{k}(x_{1},\dots,x_{n})-e_{k}(t_{1},\dots,t_{n})\) for \(1\leq k\leq n\).
From the relations in \(K_{T^{n}}(\mathcal{F})\) given by the isomorphism (4.15) it can be seen that \(J^{\prime}\) is contained in the kernel of \(\varphi\). Thus \(\varphi\) induces a well defined surjective \(R(T^{n})\)-algebra homomorphism \(\varphi^{\prime}:R(T^{n})[x_{1},\dots,x_{n}]/J^{\prime}\longrightarrow K_{T^{n}}(\mathcal{F})\). We know by Proposition 2.1 that \(K_{T^{n}}(\mathcal{F})\) is a free \(R(T^{n})\)-module of rank \(n!\). On the other hand it is well known that \(R(T^{n})[x_{1},\dots,x_{n}]\) is a free \(R(T^{n})[e_{1}(x_{1},\dots,x_{n}),\dots,e_{n}(x_{1},\dots,x_{n})]\)-module of rank \(n!\) with basis \(x_{1}^{r_{1}}\cdots x_{n-1}^{r_{n-1}}\) where \(0\leq r_{i}\leq n-i\) for \(1\leq i\leq n-1\) (see [10, proof of Theorem 3.6, p. 198]). This implies that the quotient \(R(T^{n})[x_{1},\dots,x_{n}]/J^{\prime}\) is a free \(R(T^{n})\)-module with basis the classes of the monomials \(x_{1}^{r_{1}}\cdots x_{n-1}^{r_{n-1}}\). Thus \(\varphi^{\prime}\) is a well defined surjective \(R(T^{n})\)-algebra homomorphism between two free \(R(T^{n})\)-modules of the same rank. Hence \(\varphi^{\prime}\) is an isomorphism of \(R(T^{n})\)-algebras. Hence the theorem. \(\square\) ## 5. \(T^{l}\)-equivariant \(K\)-ring of \(\mathcal{F}_{\lambda}\) In this section we study the \(T^{l}\)-equivariant \(K\)-ring of the Springer variety \(\mathcal{F}_{\lambda}\) with its canonical action of the torus \(T^{l}\). Recall ([14]) that pull-back via the constant map \(\mathcal{F}_{\lambda}\longrightarrow pt\) induces a natural \(R(T^{l})=K_{T^{l}}(pt)\)-algebra structure on \(K_{T^{l}}(\mathcal{F}_{\lambda})\). ### \(R(T^{l})\)-module structure of \(K_{T^{l}}(\mathcal{F}_{\lambda})\) **Theorem 5.1**.: _The ring \(K_{T^{l}}^{0}(\mathcal{F}_{\lambda})\) is a free \(R(T^{l})\)-module of rank \(\binom{n}{\lambda}\). In particular, \(K_{T^{l}}^{0}(\mathcal{F}_{\lambda})\) is torsion-free (since \(R(T^{l})\) is an integral domain). Moreover, \(K_{T^{l}}^{1}(\mathcal{F}_{\lambda})=0\)._ Proof.: The Springer variety \(\mathcal{F}_{\lambda}\) is known to admit a cell decomposition by the results of Spaltenstein (see [15]), with \(\binom{n}{\lambda}:=\frac{n!}{(\lambda_{1}!\cdots\lambda_{l}!)}\) locally closed cells isomorphic to affine spaces. This algebraic cell decomposition can further be seen to be \(T^{l}\)-stable, since the cells arise as the intersections of \(\mathcal{F}_{\lambda}\) with the \(T^{n}\)-invariant Schubert cells in \(\mathcal{F}\) (see [20, Section 3.2]). The theorem now follows by Proposition 2.1. **Remark 5.2**.: Note that \(\mathcal{F}_{\lambda}\) need not be a CW complex but only has an algebraic cell decomposition (see the example of a string of pearls in [20, Section 3.2]). #### 5.1.1. The \(T^{l}\)-fixed points of \(\mathcal{F}_{\lambda}\) We recall below the description of \(\mathcal{F}_{\lambda}^{T^{l}}\) from [1, Lemma 2.1]. The \(T^{n}\)-fixed points of the full flag variety \(\mathcal{F}\) are given by \[\{(\langle e_{w(1)}\rangle\subset\langle e_{w(1)},e_{w(2)}\rangle\subset\cdots\subset\langle e_{w(1)},e_{w(2)},\ldots,e_{w(n)}\rangle=\mathbb{C}^{n})\;\mid\;w\in S_{n}\}\] where \(e_{1},\ldots,e_{n}\) is the standard basis of \(\mathbb{C}^{n}\). Thus we can identify \(\mathcal{F}^{T^{n}}\) with \(S_{n}\).
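For \(n=2\) (our own illustration), both of the previous two results can be made completely explicit: \(\mathcal{F}(\mathbb{C}^{2})=\mathbb{P}^{1}\), Theorem 4.1 gives \[K_{T^{2}}(\mathbb{P}^{1})\simeq R(T^{2})[x_{1},x_{2}]\big/\big(x_{1}+x_{2}-(t_{1}+t_{2}),\ x_{1}x_{2}-t_{1}t_{2}\big),\] a free \(R(T^{2})\)-module of rank \(2!=2\) with basis the classes of \(1\) and \(x_{1}\), and the two \(T^{2}\)-fixed flags \(\langle e_{1}\rangle\subset\mathbb{C}^{2}\) and \(\langle e_{2}\rangle\subset\mathbb{C}^{2}\) correspond to the identity permutation and the transposition in \(S_{2}\) respectively.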
The \(T^{l}\)-fixed points of \(\mathcal{F}_{\lambda}\) can then be identified with the set of \(w\in S_{n}\) such that \(w\) satisfies the condition \[\text{for every }\;1\leq k\leq l,\;\text{ the numbers between }\,\lambda_{1}+\cdots+\lambda_{k-1}+1\text{ and }\lambda_{1}+\cdots+\lambda_{k} \tag{5.16}\] appear in the one-line notation of \(w\) as a subsequence in increasing order. We let \(\lambda_{1}+\cdots+\lambda_{k-1}+1=1\) when \(k=1\). (For instance, when \(n=3\) and \(\lambda=(2,1)\), the condition singles out \(w=123,132,312\) in one-line notation, so that \(\#\mathcal{F}_{\lambda}^{T^{l}}=3=\binom{3}{\lambda}\).) Moreover, \(\mathcal{F}_{\lambda}^{T^{l}}\) can also be identified with the set of unique representatives of the right cosets of the subgroup \(S_{\lambda_{1}}\times\cdots\times S_{\lambda_{l}}\) in \(S_{n}\). Consider the inclusion \(\iota_{\lambda}:\mathcal{F}_{\lambda}\to\mathcal{F}\) of the Springer variety in the full flag variety, which is \(T^{l}\)-equivariant with respect to the restricted \(T^{l}\)-action on \(\mathcal{F}\) and the natural action of \(T^{l}\) on \(\mathcal{F}_{\lambda}\) described above. This induces a pull back map \[\iota_{\lambda}^{!}:K_{T^{n}}(\mathcal{F})\to K_{T^{l}}(\mathcal{F}_{\lambda})\] which factors through \(K_{T^{l}}(\mathcal{F})\) by the inclusion of \(T^{l}\) in \(T^{n}\). (This is analogous to the map \(\rho_{\lambda}\) in [1, Section 3] for equivariant cohomology.) We can view \(K_{T^{l}}(\mathcal{F}_{\lambda})\) as an \(R(T^{n})\)-module via the canonical map \(R(T^{n})\longrightarrow R(T^{l})\) induced by the inclusion \(T^{l}\subseteq T^{n}\). Moreover, the map \(R(T^{n})\longrightarrow R(T^{l})\) is surjective, since the character \((u_{1},\ldots,u_{l})\mapsto u_{i}\) lifts to the character \((t_{1},\ldots,t_{n})\mapsto t_{\lambda_{1}+\cdots+\lambda_{i}}\) for every \(1\leq i\leq l\). In the following theorem we show that \(\iota_{\lambda}^{!}\) is surjective, analogous to the corresponding statements for ordinary cohomology in [19], equivariant cohomology in [1] and ordinary \(K\)-theory in [18]. In the following we let \(m=\binom{n}{\lambda}\). Let \(w_{1},\ldots,w_{m}\) denote the elements of \(\mathcal{F}_{\lambda}^{T^{l}}\). In particular, \(w_{1},\ldots,w_{m}\) are elements of \(S_{n}\) satisfying the condition (5.16). **Theorem 5.3**.: _The map \(\iota_{\lambda}^{!}\) is a surjective morphism of \(R(T^{n})\)-algebras._ **Proof:** Recall that we have a decreasing filtration of \(T^{l}\)-stable subvarieties of \(\mathcal{F}_{\lambda}\) given by \[\{w_{m}\}=X_{w_{m}}\subseteq\cdots\subseteq X_{w_{i+1}}\subseteq X_{w_{i}}\subseteq\cdots\subseteq X_{w_{1}}=\mathcal{F}_{\lambda}\] such that \(Z_{w_{i}}:=X_{w_{i}}\setminus X_{w_{i+1}}=\mathbb{C}^{k_{i}}\) for \(1\leq i\leq m\), where \(X_{w_{m+1}}=\emptyset\). Moreover, it is also known that \(Z_{w_{i}}=C_{w_{i}}\cap\mathcal{F}_{\lambda}\) where \(C_{w_{i}}\) is a Schubert cell in \(\mathcal{F}\). Consider the chain of inclusions of \(T^{l}\)-stable closed subvarieties \(X_{w_{i}}\subseteq\mathcal{F}_{\lambda}\subseteq\mathcal{F}\). This induces the following chain of morphisms of equivariant \(K\)-rings \[K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(\mathcal{F})\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda})\longrightarrow K_{T^{l}}(X_{w_{i}}).\] Our aim is to show that \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda})\) is surjective. We shall show this by decreasing induction on \(i\). Note that \[K_{T^{n}}(\mathcal{F})=\bigoplus_{w\in S_{n}}K_{T^{n}}(C_{w})\simeq R(T^{n})^{n!} \tag{5.17}\] and \[K_{T^{l}}(\mathcal{F}_{\lambda})=\bigoplus_{i=1}^{m}K_{T^{l}}(Z_{w_{i}})\simeq R(T^{l})^{m} \tag{5.18}\] (see (2.9)).
Since \(Z_{w_{i}}=C_{w_{i}}\cap\mathcal{F}_{\lambda}\) is a closed subvariety of \(C_{w_{i}}\) we have the induced map \[K_{T^{n}}(w_{i})\simeq K_{T^{n}}(C_{w_{i}})\longrightarrow K_{T^{l}}(Z_{w_{i}})\simeq K_{T^{l}}(w_{i}) \tag{5.19}\] for every \(1\leq i\leq m\). We note that (5.19) can be identified with the canonical map \(R(T^{n})\longrightarrow R(T^{l})\) and is hence surjective. Thus the map \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(Z_{w_{i}})\) obtained by composing the projection to the factor \(K_{T^{n}}(C_{w_{i}})\) in (5.17) with the map (5.19) is surjective for \(1\leq i\leq m\). In particular, since \(K_{T^{l}}(X_{w_{m}})=K_{T^{l}}(Z_{w_{m}})=K_{T^{l}}(w_{m})=R(T^{l})\) it follows that \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{m}})\) is surjective. We assume by induction that \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{i+1}})\) is surjective. From (2.9) we have \[K_{T^{l}}(X_{w_{i}})=K_{T^{l}}(X_{w_{i+1}})\bigoplus K_{T^{l}}(Z_{w_{i}}). \tag{5.20}\] Further, the restriction maps \[K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{i+1}})\] and \[K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(Z_{w_{i}})\] are obtained by composing \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{i}})\) with the respective projections in (5.20). Thus by the induction assumption and the fact that \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(Z_{w_{i}})\) is surjective for \(1\leq i\leq m\), we get that \(K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{i}})\) is surjective for every \(1\leq i\leq m\). Therefore it follows that \(\iota_{\lambda}^{!}:K_{T^{n}}(\mathcal{F})\longrightarrow K_{T^{l}}(X_{w_{1}})=K_{T^{l}}(\mathcal{F}_{\lambda})\) is surjective. \(\square\) ### Localization When \(\lambda=(1,\ldots,1)\) and \(\mathcal{F}_{\lambda}=\mathcal{F}\), it is known (see [11]) that the map \(\iota_{1}\) in equivariant \(K\)-theory induced by restriction to the fixed points, \(K_{T^{n}}(\mathcal{F})\stackrel{{\iota_{1}}}{{\longrightarrow}}K_{T^{n}}(\mathcal{F}^{T^{n}})=\prod_{w\in S_{n}}R(T^{n})\), is injective. We have the following result for any \(\mathcal{F}_{\lambda}\). **Lemma 5.4**.: _The canonical restriction map_ \[K_{T^{l}}(\mathcal{F}_{\lambda})\stackrel{{\iota_{2}}}{{\longrightarrow}}K_{T^{l}}(\mathcal{F}_{\lambda}^{T^{l}})\simeq\prod_{i=1}^{m}K_{T^{l}}(w_{i})=\prod_{i=1}^{m}R(T^{l})\] _is injective, where \(m=\binom{n}{\lambda}\)._ Proof.: Since the prime ideal \((0)\) of \(R(T^{l})\) has support \(T^{l}\), by localizing at \((0)\) (see [14, Proposition 4.1]) we have that \[K_{T^{l}}(\mathcal{F}_{\lambda})\otimes_{R(T^{l})}Q(T^{l})\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda}^{T^{l}})\otimes_{R(T^{l})}Q(T^{l})\] is an isomorphism, where \(Q(T^{l}):=R(T^{l})_{(0)}\) is the quotient field of the integral domain \(R(T^{l})\). This further implies that the restriction map \(K_{T^{l}}(\mathcal{F}_{\lambda})\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda}^{T^{l}})\) is injective, since \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is a free \(R(T^{l})\)-module of rank \(m=\binom{n}{\lambda}\). ### The action of the symmetric group on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) The following result is the equivariant \(K\)-theoretic analogue of the corresponding results for ordinary cohomology (see [16], [17] and [8]), equivariant cohomology (see [1, Section 3]) and ordinary \(K\)-theory (see [18, Section 3.1]).
**Proposition 5.5**.: _There exists an \(S_{n}\)-action on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) such that the map \(\iota_{\lambda}^{!}\) is an \(S_{n}\)-equivariant homomorphism._ Proof.: Our proof is along lines similar to those of the corresponding result on equivariant cohomology in [1, Section 3]. Consider the following commuting square: \[\begin{array}{llll}&K_{T^{n}}(\mathcal{F})&\xrightarrow{\iota_{1}}&K_{T^{n}}(\mathcal{F}^{T^{n}})=\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\\ &\iota_{\lambda}^{!}\Big{\downarrow}&&\pi\Big{\downarrow}\\ &K_{T^{l}}(\mathcal{F}_{\lambda})&\xrightarrow{\iota_{2}}&K_{T^{l}}(\mathcal{F}_{\lambda}^{T^{l}})=\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}\subseteq S_{n}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\end{array} \tag{5.21}\] where the horizontal maps \(\iota_{1}\) and \(\iota_{2}\) are the maps in \(T^{n}\)- and \(T^{l}\)-equivariant \(K\)-theory induced from the inclusions of \(\mathcal{F}^{T^{n}}\) in \(\mathcal{F}\) and of \(\mathcal{F}_{\lambda}^{T^{l}}\) in \(\mathcal{F}_{\lambda}\) respectively. As already discussed above, the vertical map \(\iota_{\lambda}^{!}\) is induced from the inclusion of \(\mathcal{F}_{\lambda}\) in \(\mathcal{F}\) and is a map of \(R(T^{n})\)-algebras. The vertical map \(\pi\) is the canonical projection \(R(T^{n})\longrightarrow R(T^{l})\), induced from the inclusion \(T^{l}\subseteq T^{n}\), on the factors corresponding to \(w\in\mathcal{F}_{\lambda}^{T^{l}}\), and zero on the other factors. We shall construct \(S_{n}\)-actions on \(K_{T^{n}}(\mathcal{F})\), on \(\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\) and on \(\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}\subseteq S_{n}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\), and use them to construct an \(S_{n}\)-action on \(K_{T^{l}}(\mathcal{F}_{\lambda})\). First we shall recall the left action of the symmetric group \(S_{n}\) on \(K_{T^{n}}(\mathcal{F})\). For this we consider the right \(S_{n}\)-action on the flag variety \(\mathcal{F}\) described below (see [1, Section 3]). For any \(V_{\bullet}\in\mathcal{F}(\mathbb{C}^{n})\) there exists \(g\in U(n)\) so that \(V_{i}=\bigoplus_{j=1}^{i}\mathbb{C}g(e_{j})\), where \(\{e_{1},\ldots,e_{n}\}\) is the standard basis of \(\mathbb{C}^{n}\). Then the right action of \(w\in S_{n}\) on \(\mathcal{F}\) can be defined by \[V_{\bullet}\cdot w=V_{\bullet}^{\prime} \tag{5.22}\] where \(V_{i}^{\prime}=\bigoplus_{j=1}^{i}\mathbb{C}g(e_{w(j)})\). We recall below the explicit presentation of \(K_{T^{n}}(\mathcal{F})\) from Theorem 4.1. Let \(J^{\prime}\) be the ideal in \(R(T^{n})[x_{1},\ldots,x_{n}]\) generated by the elements \(e_{k}(x_{1},\ldots,x_{n})-e_{k}(t_{1},\ldots,t_{n})\) for \(1\leq k\leq n\), where \(R(T^{n})=\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\). We have the following isomorphism of \(R(T^{n})\)-algebras \[\mathcal{R}_{\mathcal{F}}:=R(T^{n})[x_{1},\ldots,x_{n}]/J^{\prime}\simeq K_{T^{n}}(\mathcal{F})\] where \(x_{i}\) maps to \([\mathcal{L}_{i}]_{T^{n}}\) for \(1\leq i\leq n\). We shall denote by \(\overline{x}_{i}\) the class of \(x_{i}\) in \(\mathcal{R}_{\mathcal{F}}\). The right action (5.22) of the symmetric group \(S_{n}\) on \(\mathcal{F}\) induces the following left action on \(K_{T^{n}}(\mathcal{F})\): \[w\cdot\overline{x}_{i}:=\overline{x}_{w(i)},\qquad w\cdot t_{i}=t_{i} \tag{5.23}\] for \(w\in S_{n}\).
This is because the pull back of the line bundle \(\mathcal{L}_{i}\) under the right action is nothing but the line bundle \(\mathcal{L}_{w(i)}\), and the right action is \(T^{n}\)-equivariant. Now, we shall define a left action of \(v\in S_{n}\) on \(\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\) as follows: \[(v\cdot f)\mid_{w}=f\mid_{wv} \tag{5.24}\] for \(w\in S_{n}\) and \(f\in\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\). Also we note that \(\iota_{1}\) maps \(\overline{x}_{i}\) to the element of \(\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\) whose \(w\)th component is \(t_{w(i)}\), that is, \(\iota_{1}(\overline{x}_{i})\mid_{w}=t_{w(i)}\); moreover \(\iota_{1}(t_{i}^{\pm 1})=t_{i}^{\pm 1}\). Thus it follows that \(\iota_{1}\) is \(S_{n}\)-equivariant, since \[\iota_{1}(w\cdot\overline{x}_{i})\mid_{v}=\iota_{1}(\overline{x}_{w(i)})\mid_{v}=t_{vw(i)}=\iota_{1}(\overline{x}_{i})\mid_{vw}=w\cdot\iota_{1}(\overline{x}_{i})\mid_{v}\] for \(1\leq i\leq n\). Let \(\overline{y}_{i}\) denote the image in \(K_{T^{l}}(\mathcal{F}_{\lambda})\) of \(\overline{x}_{i}\) under the surjective \(R(T^{n})\)-algebra homomorphism \(\iota_{\lambda}^{!}\). We have the following lemma which is analogous to [1, Lemma 3.1]. **Lemma 5.6**.: _The \(T^{l}\)-equivariant topological \(K\)-ring \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is generated by \(\overline{y}_{1},\ldots,\overline{y}_{n}\) as an algebra over \(R(T^{l})=\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\)._ Now we proceed to construct an \(S_{n}\)-action on \(\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\) and on \(K_{T^{l}}(\mathcal{F}_{\lambda})\). Recall the map \(\varphi_{\lambda}\) from (1.2). Then the map \[\pi:\bigoplus_{w\in S_{n}}\mathbb{Z}[t_{1}^{\pm 1},\ldots,t_{n}^{\pm 1}]\longrightarrow\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\] is given by \[\pi(f)\mid_{w}=f\mid_{w}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(n)}),\] that is, the \(w\)th component of \(\pi(f)\) is obtained from \(f\mid_{w}\), the \(w\)th component of \(f\), by the substitution \(t_{i}\mapsto u_{\varphi_{\lambda}(i)}\). Thus from the commutative diagram (5.21) it follows that \[\iota_{2}(\overline{y}_{i})\mid_{w}=\pi(\iota_{1}(\overline{x}_{i}))\mid_{w}=\pi(t_{w(i)})=u_{\varphi_{\lambda}(w(i))} \tag{5.25}\] and \[\iota_{2}(u_{i})\mid_{w}=u_{i}. \tag{5.26}\] The left action of \(v\in S_{n}\) on \(\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\) is defined by \[(v\cdot f)\mid_{w}=f\mid_{w^{\prime}} \tag{5.27}\] for \(w\in\mathcal{F}_{\lambda}^{T^{l}}\) and \(f\in\bigoplus_{w\in\mathcal{F}_{\lambda}^{T^{l}}}\mathbb{Z}[u_{1}^{\pm 1},\ldots,u_{l}^{\pm 1}]\), where \(w^{\prime}\in\mathcal{F}_{\lambda}^{T^{l}}\) is the coset representative of the right coset \([wv]\). (Recall that \(\mathcal{F}_{\lambda}^{T^{l}}\) can be identified with the coset representatives of the cosets of the subgroup \(S_{\lambda_{1}}\times\cdots\times S_{\lambda_{l}}\) in \(S_{n}\).) We have the following lemma analogous to [1, Lemma 3.2]. **Lemma 5.7**.: (5.28) \[v\cdot(\iota_{2}(\overline{y}_{i}^{\pm 1}))=\iota_{2}(\overline{y}_{v(i)}^{\pm 1})\] _and_ \[v\cdot(\iota_{2}(u_{j}))=\iota_{2}(u_{j}). \tag{5.29}\] Proof.: From (5.26) and (5.27) we have \[(v\cdot\iota_{2}(u_{i}^{\pm 1}))\mid_{w}=\iota_{2}(u_{i}^{\pm 1})\mid_{w^{\prime}}=u_{i}^{\pm 1}=\iota_{2}(u_{i}^{\pm 1})\mid_{w}\] for every \(w\in\mathcal{F}_{\lambda}^{T^{l}}\). Thus (5.29) follows.
Now, from (5.25) and (5.27) we have \[v\cdot\iota_{2}(\overline{y}_{i}^{\pm 1})\mid_{w}=\iota_{2}(\overline{y}_{i}^{\pm 1})\mid_{w^{\prime}}=u_{\varphi_{\lambda}(w^{\prime}(i))}^{\pm 1}\] and \[\iota_{2}(\overline{y}_{v(i)}^{\pm 1})\mid_{w}=u_{\varphi_{\lambda}(w(v(i)))}^{\pm 1}.\] Thus it suffices to prove that \(\varphi_{\lambda}(w^{\prime}(i))=\varphi_{\lambda}(wv(i))\). This follows since \([w^{\prime}]=[wv]\) in \(S_{\lambda_{1}}\times\cdots\times S_{\lambda_{l}}\backslash S_{n}\) (see [1, proof of Lemma 3.2]). Hence the proof. Since \(\iota_{2}\) is injective we therefore obtain an \(S_{n}\)-action on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) given by \[w\cdot\overline{y}_{i}^{\pm 1}=\overline{y}_{w(i)}^{\pm 1}\text{ and }\ w\cdot u_{j}^{\pm 1}=u_{j}^{\pm 1} \tag{5.30}\] for \(w\in S_{n}\), \(1\leq i\leq n\) and \(1\leq j\leq l\). It follows from Lemma 5.6 and Lemma 5.7 that the action is well defined. By identifying \(\overline{y}_{i}\) with \([L_{i}]_{T^{l}}\), the \(S_{n}\)-action on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is given by \(w\cdot[L_{i}]_{T^{l}}:=[L_{w(i)}]_{T^{l}}\) for \(1\leq i\leq n\) and \(w\cdot u_{j}^{\pm 1}=u_{j}^{\pm 1}\) for \(1\leq j\leq l\). Moreover, this also implies that with respect to the \(S_{n}\)-actions on \(K_{T^{n}}(\mathcal{F})\) and \(K_{T^{l}}(\mathcal{F}_{\lambda})\) the map \(\iota_{\lambda}^{!}\) is \(S_{n}\)-equivariant. ### Sectioning canonical bundles over \(\mathcal{F}_{\lambda}\) For \(1\leq s\leq n\) we let \[W_{n,s}:=\{\mathbf{i}=(i_{1},\ldots,i_{s})\ \ \text{where}\ \ 1\leq i_{1}<\cdots<i_{s}\leq n\}. \tag{5.31}\] Recall that \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is generated as an \(R(T^{l})\)-algebra by the \([L_{i}]\) for \(1\leq i\leq n\) and the action of \(S_{n}\) on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is given by \(w\cdot[L_{i}]=[L_{w(i)}]\) for \(1\leq i\leq n\) and \(w\in S_{n}\). **Proposition 5.8**.: _Let \(1\leq s\leq n\) and let \(\mathbf{i}\in W_{n,s}\). Then_ \[L_{i_{1}}\oplus\cdots\oplus L_{i_{s}}\cong\xi\oplus\epsilon_{j_{1}}\oplus\cdots\oplus\epsilon_{j_{q}} \tag{5.32}\] _for some \(T^{l}\)-equivariant complex vector bundle \(\xi=\xi(\mathbf{i})\) and \(T^{l}\)-equivariant trivial line bundles \(\epsilon_{j_{a}}\), \(1\leq a\leq q\), over \(\mathcal{F}_{\lambda}\), where \(q:=p_{\lambda^{\vee}}(s)\). Moreover, for \(d\geq s+1-q\), (5.32) also implies the following isomorphism_ \[L_{i_{1}}\oplus\cdots\oplus L_{i_{s}}\cong\xi^{\prime}\oplus\epsilon_{j_{1}}\oplus\cdots\oplus\epsilon_{j_{s+1-d}} \tag{5.33}\] _where_ \[\xi^{\prime}=\xi\oplus\epsilon_{j_{s+1-d+1}}\oplus\cdots\oplus\epsilon_{j_{q}}\] _is a \(T^{l}\)-equivariant complex vector bundle of rank \(d-1\)._ Proof.: Fix \(s\leq n\). Since the action of \(S_{n}\) on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) permutes the \(L_{j}\) for \(1\leq j\leq n\), it suffices to consider the case where \(i_{r}=r\) for all \(1\leq r\leq s\). Moreover, the \(T^{l}\)-equivariant trivial line bundles \(\epsilon_{j_{a}}\) on \(\mathcal{F}_{\lambda}\), where \(T^{l}\) acts through the character \(\chi_{a}\) for \(1\leq a\leq q\), are stable under the \(S_{n}\)-action since \(w\cdot\chi_{a}=\chi_{a}\). Thus the isomorphism \[L_{1}\oplus\cdots\oplus L_{s}\cong\xi\oplus\epsilon_{j_{1}}\oplus\cdots\oplus\epsilon_{j_{q}}\] for some \(T^{l}\)-equivariant vector bundle \(\xi\) of rank \(s-q\) will imply (5.32). We replace \(N\) by a conjugate \(gNg^{-1}\) so that \(\operatorname{Im}\left(gNg^{-1}\right)^{n-k}=U_{p_{\lambda^{\vee}}(k)}=\mathbb{C}^{p_{\lambda^{\vee}}(k)}\) for \(k\geq 1\).
We may then take the standard flag \(0\subset\mathbb{C}\subset\cdots\subset\mathbb{C}^{n}\), with \(U_{j}=\mathbb{C}^{j}\), as a refinement of the subspaces \(U_{p_{\lambda^{\vee}}(k)}\). Thus \(\mathbb{C}^{q}\subset V_{s}\) for any \(\underline{V}\in\mathcal{F}_{gNg^{-1}}\). Let \(\iota_{g}:\mathcal{F}\to\mathcal{F}\) be the translation by \(g\): \(\underline{V}\mapsto g\underline{V}=(0=gV_{0}\subset gV_{1}\subset\cdots\subset gV_{n}=\mathbb{C}^{n})\). Since \(GL(n,\mathbb{C})\) is connected, the composition \(\mathcal{F}_{N}\stackrel{{\iota_{\lambda}}}{{\hookrightarrow}}\mathcal{F}\stackrel{{\iota_{g}}}{{\longrightarrow}}\mathcal{F}\), denoted \(\iota_{\lambda,g}\), is homotopic to \(\iota_{\lambda}\) and maps \(\mathcal{F}_{N}\) onto \(\mathcal{F}_{gNg^{-1}}\subset\mathcal{F}\). It follows that \(\iota_{\lambda}\) and \(\iota_{\lambda,g}\) induce the same map in \(T^{l}\)-equivariant \(K\)-theory. In particular, \(\iota_{\lambda,g}^{!}([\mathcal{L}_{j}])=[L_{j}]\) for all \(1\leq j\leq n\). Let \(G_{n,s}=G_{s}(\mathbb{C}^{n})\) denote the Grassmann variety of \(s\)-planes in \(\mathbb{C}^{n}\). One has a projection \(\pi_{s}:\mathcal{F}\to G_{n,s}\) defined as \(\underline{V}\mapsto V_{s}\). Note that the map \(\pi_{s}\) is \(T^{l}\)-equivariant, where the \(T^{l}\)-action on \(G_{n,s}\) is the natural one obtained by restricting the action of \(T^{n}\), which takes an \(s\)-dimensional subspace of \(\mathbb{C}^{n}\) to another \(s\)-dimensional subspace. Let \(Y_{q}\subset G_{n,s}\) denote the subvariety \(\{U\in G_{n,s}\mid U\supset U_{q}\}\), \(1\leq q<s\). Note that since \(T^{l}\) commutes with the nilpotent operator, it stabilizes \(\operatorname{Im}(gNg^{-1})^{n-s}=U_{q}=\mathbb{C}^{q}\). Thus the inclusion \(Y_{q}\hookrightarrow G_{n,s}\) is \(T^{l}\)-equivariant. Moreover, \(Y_{q}\) is isomorphic to a Grassmann variety \(G_{n-q,s-q}\). A specific isomorphism \(Y_{q}\cong G_{s-q}(\mathbb{C}^{n}/U_{q})\) is obtained by sending \(U\in Y_{q}\) to \(U/U_{q}\). The tautological complex vector bundle \(\gamma_{n,s}\) is of rank \(s\); its fibre over \(A\in G_{n,s}\) is the vector space \(A\). Note that \(\gamma_{n,s}\) is a \(T^{l}\)-equivariant vector bundle. When restricted to \(Y_{q}\), \(\gamma_{n,s}\) has a trivial subbundle \(q\epsilon\) of rank \(q\). The subbundle \(q\epsilon\) of \(\gamma_{n,s}\) is also \(T^{l}\)-equivariant since the action of \(T^{l}\) preserves the subspace \(U_{q}\). Indeed we have a commuting diagram \[\begin{array}{ccc}Y_{q}\times U_{q}&\hookrightarrow&E(\gamma_{n,s}|_{Y_{q}})\\ \downarrow&&\downarrow\\ Y_{q}&\xrightarrow{id}&Y_{q}\end{array}\] where all maps are \(T^{l}\)-equivariant and the vertical arrows are bundle projections. Thus as a \(T^{l}\)-equivariant vector bundle we shall write \(Y_{q}\times U_{q}\) as \(\epsilon_{u_{\varphi_{\lambda}(1)}}\oplus\cdots\oplus\epsilon_{u_{\varphi_{\lambda}(q)}}\). Therefore \[\gamma_{n,s}|_{Y_{q}}\cong\omega\oplus\epsilon_{\chi_{1}}\oplus\cdots\oplus\epsilon_{\chi_{q}} \tag{5.34}\] where \(\omega\) is the complex vector bundle over \(Y_{q}\) whose fibre over \(A\in Y_{q}\) is the complex vector space \(A^{\prime}:=A/\mathbb{C}^{q}\). Since \(\gamma_{n,s}\) and \(\epsilon_{\chi_{1}}\oplus\cdots\oplus\epsilon_{\chi_{q}}\) are \(T^{l}\)-equivariant, we get that the quotient bundle \(\omega\) is \(T^{l}\)-equivariant. Thus the direct sum decomposition (5.34) is \(T^{l}\)-equivariant.
Further, we note that \(Y_{q}\) can be identified with the Schubert variety \[X(\sigma)=\{U\in G_{n,s}\mid\dim(U\cap\mathbb{C}^{\sigma_{i}})\geq i,\ \ 1\leq i\leq s\}\] where \[\sigma_{i}=\begin{cases}i,&\text{if $i\leq q$},\\ n-s+i,&\text{if $q<i\leq s$}.\end{cases}\] Since \(\mathbb{C}^{q}\subset V_{s}\) for every \(\underline{V}\in\mathcal{F}_{gNg^{-1}}\), the image of the composition \[\mathcal{F}_{N}=\mathcal{F}_{\lambda}\xrightarrow{\iota_{\lambda,g}}\mathcal{F}\xrightarrow{\pi_{s}}G_{n,s},\] denoted \(\pi_{\lambda,s}\), is contained in \(Y_{q}\). Moreover, the map \(\pi_{\lambda,s}\) is \(T^{l}\)-equivariant. Therefore, we have a commuting diagram where all the maps are \(T^{l}\)-equivariant \[\begin{array}{ccc}\mathcal{F}_{\lambda}&\stackrel{{\iota_{\lambda,g}}}{{\longrightarrow}}&\mathcal{F}\\ \pi_{\lambda,s}\downarrow&&\downarrow\pi_{s}\\ Y_{q}&\hookrightarrow&G_{n,s}.\end{array} \tag{5.35}\] Now \[\pi_{s}^{*}(\gamma_{n,s})=\mathcal{V}_{s}=\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{s}\] by (4.11). Therefore \[L_{1}\oplus\cdots\oplus L_{s}=\iota_{\lambda,g}^{!}(\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{s})=\iota_{\lambda,g}^{!}\circ\pi_{s}^{*}(\gamma_{n,s})=\pi_{\lambda,s}^{*}(\gamma_{n,s}|_{Y_{q}})=\pi_{\lambda,s}^{*}(\omega)\oplus\pi_{\lambda,s}^{*}(\epsilon_{\chi_{1}}\oplus\cdots\oplus\epsilon_{\chi_{q}}) \tag{5.36}\] from (5.34). Thus if \(\epsilon_{j_{a}}:=\epsilon_{\chi_{a}}\) for \(1\leq a\leq q\) and \(\xi:=\pi_{\lambda,s}^{*}(\omega)\), then the proposition follows. **Remark 5.9**.: Note that the \(T^{l}\)-action on \(U_{q}=\mathbb{C}^{q}\) is through the characters \(\chi_{1},\ldots,\chi_{q}\) on each factor. Moreover, since the \(T^{l}\)-action on \(\mathbb{C}^{n}\) is through \((u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(n)})\), on the coordinate subspace \(\mathbb{C}^{q}\subseteq\mathbb{C}^{n}\) the \(T^{l}\)-action is by \((u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(q)})\). Thus \[\chi_{a}=u_{\varphi_{\lambda}(a)}\] for \(1\leq a\leq q\). ### The \(\lambda\)-operations in equivariant \(K\)-theory We recall here the \(\lambda\)-operations, or exterior power operations, in equivariant \(K\)-theory (see [4, Section 1]). Let \(X\) be a finite CW complex. For \(x\in K_{T^{l}}(X)\) we let \[\lambda_{t}(x):=\sum_{i\geq 0}\lambda^{i}(x)t^{i},\] an element of the formal power series ring \(K_{T^{l}}(X)[[t]]\) in the indeterminate \(t\). Note that \(\lambda^{0}(x)=1\). When \(x=[\xi]\in K_{T^{l}}(X)\) is the class of a vector bundle \(\xi\) of rank \(k\), \(\lambda_{t}(x)\) is a polynomial of degree \(k\), since the exterior power \(\lambda^{d}([\xi])=0\) for \(d\geq k+1\). In particular, when \(\xi\) is a line bundle, \(\lambda_{t}(\xi)=1+\xi t\). In the case when \(\xi=\epsilon_{1}\oplus\cdots\oplus\epsilon_{k}\simeq\mathbb{C}^{k}\) is a trivial bundle on \(X\) where \(T^{l}\) acts on \(\epsilon_{i}\) through the character \(\chi_{i}\) for \(1\leq i\leq k\), we have \[[\xi]=\chi_{1}+\cdots+\chi_{k}\in R(T^{l})\subseteq K_{T^{l}}(X)\] and \(\lambda_{t}(\xi)=\prod_{i=1}^{k}(1+[\epsilon_{i}]t)\), and so \(\lambda_{t}(-\xi)=\prod_{i=1}^{k}(1+[\epsilon_{i}]t)^{-1}\). The last equality follows from the identity \(\lambda_{t}(x+y)=\lambda_{t}(x)\cdot\lambda_{t}(y)\).
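To see how these identities will be used, here is a small worked case (our own computation, specializing (5.33) and Remark 5.9): take \(\lambda=(2,1)\), \(n=3\) and \(s=2\), so that \(q=p_{\lambda^{\vee}}(2)=1\), and take \(d=2\geq s+1-q\). Then \(\xi^{\prime}\) has rank \(d-1=1\), \([\epsilon_{j_{1}}]=\chi_{1}=u_{\varphi_{\lambda}(1)}=u_{1}\), and \[\lambda_{t}(\xi^{\prime})=(1+[L_{i_{1}}]t)(1+[L_{i_{2}}]t)(1+u_{1}t)^{-1},\] a polynomial of degree \(1\), so the coefficient of \(t^{2}\) must vanish: \[[L_{i_{1}}][L_{i_{2}}]-([L_{i_{1}}]+[L_{i_{2}}])u_{1}+u_{1}^{2}=([L_{i_{1}}]-u_{1})([L_{i_{2}}]-u_{1})=0\] in \(K_{T^{l}}(\mathcal{F}_{\lambda})\), for any \(1\leq i_{1}<i_{2}\leq 3\). This is exactly a generator of the ideal \(\mathcal{I}_{\lambda}\) of (1.1) for \(s=d=2\).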
When \(X=\mathcal{F}_{\lambda}\), then from (5.32), (5.33) and the above properties of \(\lambda_{t}\) we have \[\lambda_{t}(\xi^{\prime})=\prod_{1\leq r\leq s}(1+[L_{i_{r}}]t)\prod_{1\leq a\leq s+1-d}(1+[\epsilon_{j_{a}}]t)^{-1} \tag{5.40}\] for \(d\geq s+1-q\), where \(q=p_{\lambda^{\vee}}(s)\). Now, \(\lambda_{t}(\xi^{\prime})\) is a polynomial of degree \(d-1\), since \(\operatorname{rank}(\xi^{\prime})=d-1\) for \(d\geq s+1-q\). Comparing the coefficients of \(t^{d}\) in (5.40), where \(d\geq s-q+1\), we obtain the following equation in \(K_{T^{l}}(\mathcal{F}_{\lambda})\) \[\sum_{0\leq k\leq d}(-1)^{d-k}e_{k}([L_{i_{1}}],\ldots,[L_{i_{s}}])h_{d-k}([\epsilon_{j_{1}}],\ldots,[\epsilon_{j_{s+1-d}}])=0. \tag{5.41}\] Moreover, \([\epsilon_{j_{a}}]=\chi_{a}\in R(T^{l})\subseteq K_{T^{l}}(\mathcal{F}_{\lambda})\) for \(1\leq a\leq q\). Furthermore, we note that \(T^{l}\) acts on \(\mathbb{C}^{n}\) via the \(n\)-tuple of characters \[(u_{\varphi_{\lambda}(1)},u_{\varphi_{\lambda}(2)},\ldots,u_{\varphi_{\lambda}(n)}).\] Since the inclusion of the trivial subbundle \(\epsilon_{j_{1}}\oplus\cdots\oplus\epsilon_{j_{q}}\simeq\mathbb{C}^{q}\subseteq\mathbb{C}^{n}\) is \(T^{l}\)-equivariant, it follows that \(\{\chi_{1},\ldots,\chi_{q}\}\) is the set \[\{u_{\varphi_{\lambda}(1)},u_{\varphi_{\lambda}(2)},\ldots,u_{\varphi_{\lambda}(q)}\}\] (see Remark 5.9). Thus we have the following, with the notations of Section 1. In particular, \(\mathcal{I}_{\lambda}\) is the equivariant \(K\)-theoretic Tanisaki ideal. **Proposition 5.10**.: _We have a well defined surjective \(R(T^{l})\)-algebra homomorphism_ \[\overline{\psi_{\lambda}}:R(T^{l})[x_{1},\ldots,x_{n}]/\mathcal{I}_{\lambda}\longrightarrow K_{T^{l}}(\mathcal{F}_{\lambda})\] _which sends \(x_{i}\) to \([L_{i}]_{T^{l}}\in K_{T^{l}}(\mathcal{F}_{\lambda})\) for \(1\leq i\leq n\)._ The main theorem, Theorem 1.1, will follow once we show that the homomorphism in Proposition 5.10 is an isomorphism. It therefore suffices to show injectivity of \(\overline{\psi_{\lambda}}\). For this we prove the following lemma. **Lemma 5.11**.: _The ring \(\mathcal{R}:=R(T^{l})[x_{1},\ldots,x_{n}]/\mathcal{I}_{\lambda}\) is generated as an \(R(T^{l})\)-module by \(\binom{n}{\lambda}\) elements._ Proof.: Let \(\mathcal{I}^{\prime}_{\lambda}\) be the ideal in the polynomial ring \[\mathbb{Z}[u_{1},\dots,u_{l},x_{1},\dots,x_{n}]\] with its natural grading generated by the homogeneous elements of degree \(d\) \[\sum_{0\leq k\leq d}(-1)^{d-k}e_{k}(x_{i_{1}},x_{i_{2}},\dots,x_{i_{s}})h_{d-k}(u_{\varphi_{\lambda}(1)},\dots,u_{\varphi_{\lambda}(s+1-d)})\] for \(1\leq s\leq n\), \(1\leq i_{1}<\dots<i_{s}\leq n\) and \(d\geq s+1-q\), where \(q:=p_{\lambda^{\vee}}(s)\). Consider the ring \(\mathcal{R}^{\prime}:=\mathbb{Z}[u_{1},\dots,u_{l},x_{1},\dots,x_{n}]/\mathcal{I}^{\prime}_{\lambda}\). It suffices to find polynomials \(\Phi_{1}(x),\dots,\Phi_{m}(x)\) whose classes in \(\mathcal{R}^{\prime}\) generate \(\mathcal{R}^{\prime}\) as a \(\mathbb{Z}[u_{1},\dots,u_{l}]\)-module. Since \(\mathcal{R}\) (resp. \(R(T^{l})\)) is the localization of \(\mathcal{R}^{\prime}\) (resp. \(\mathbb{Z}[u_{1},\dots,u_{l}]\)) at \(u_{1}\cdots u_{l}\), this will imply that the classes of the polynomials \(\Phi_{1}(x),\dots,\Phi_{m}(x)\) in \(\mathcal{R}\) will generate \(\mathcal{R}\) as an \(R(T^{l})\)-module. The lemma will then follow. Now, by Theorem 3.4 we note that \(\mathcal{R}^{\prime}\) is isomorphic to the ring \(\mathcal{S}/\mathcal{J}_{\lambda}\simeq H^{*}_{T^{l}}(\mathcal{F}_{\lambda};\mathbb{Z})\).
Further, by [1, Lemma 5.2] we know that there exist polynomials \(\Phi_{1}(x),\dots,\Phi_{m}(x)\) which generate \(\mathcal{R}^{\prime}\) as a \(\mathbb{Z}[u_{1},\dots,u_{l}]\)-module. Hence the lemma. Now we prove Theorem 1.1. Proof.: (Proof of the main theorem) We have shown in Theorem 5.1 that \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is a free \(R(T^{l})\)-module of rank \(\binom{n}{\lambda}\). Further, by Lemma 5.11, \(\mathcal{R}\) is generated as an \(R(T^{l})\)-module by \(\binom{n}{\lambda}\) elements. Thus we have a surjective \(R(T^{l})\)-module homomorphism \[\Psi:R(T^{l})^{\binom{n}{\lambda}}\longrightarrow\mathcal{R}.\] Since \(\overline{\psi_{\lambda}}\) is surjective by Proposition 5.10, we get a surjective \(R(T^{l})\)-module homomorphism \(\overline{\psi_{\lambda}}\circ\Psi\) between two free modules of the same rank \(\binom{n}{\lambda}\) over the integral domain \(R(T^{l})\). This implies that \(\overline{\psi_{\lambda}}\circ\Psi\) is an isomorphism and hence \(\overline{\psi_{\lambda}}\) is an isomorphism. **Remark 5.12**.: We have the identity \[\sum_{0\leq k\leq d}(-1)^{d-k}e_{k}(u_{\varphi_{\lambda}(1)},\dots,u_{\varphi_{\lambda}(s)})h_{d-k}(u_{\varphi_{\lambda}(1)},\dots,u_{\varphi_{\lambda}(s+1-d)})=e_{d}(u_{\varphi_{\lambda}(s+2-d)},\dots,u_{\varphi_{\lambda}(s)})=0\] since the number of variables in the set \(\{u_{\varphi_{\lambda}(s+2-d)},\dots,u_{\varphi_{\lambda}(s)}\}\) is less than \(d\). Thus we can rewrite the relations defining \(\mathcal{I}_{\lambda}\) in \(\mathcal{R}\) as follows: \[\sum_{0\leq k\leq d}(-1)^{d-k}\big{[}e_{k}(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{s}})-e_{k}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(s)})\big{]}\cdot h_{d-k}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(s+1-d)}) \tag{5.42}\] for \(1\leq s\leq n\), \(1\leq i_{1}<\cdots<i_{s}\leq n\) and \(d\geq s+1-q\), where \(q=p_{\lambda^{\vee}}(s)\). In particular, when \(\lambda=(1,\ldots,1)\) and \(\mathcal{F}_{\lambda}=\mathcal{F}\), then \(\lambda^{\vee}=(n)\). Hence \(p_{\lambda^{\vee}}(s)=0\) for \(1\leq s<n\) and \(p_{\lambda^{\vee}}(n)=n\). Moreover, \(\varphi_{\lambda}\) is the identity map. In this case the relation (5.42) reduces to \[\sum_{0\leq k\leq d}(-1)^{d-k}\big{[}e_{k}(x_{1},x_{2},\ldots,x_{n})-e_{k}(u_{1},\ldots,u_{n})\big{]}\cdot h_{d-k}(u_{1},\ldots,u_{n+1-d}) \tag{5.43}\] for \(d\geq 1\). The relations (5.43) can readily be seen to be equivalent to the relations (4.13) given in Theorem 4.1. ## 6. Relation with the ordinary \(K\)-ring Recall that we have the augmentation \(\epsilon:R(T^{l})\longrightarrow\mathbb{Z}\) which sends the class \([V]\) of any \(T^{l}\)-representation \(V\) to \(\dim(V)\). This gives \(\mathbb{Z}\) the structure of an \(R(T^{l})\)-module. Also we have a canonical \(R(T^{l})\)-module structure on \(K_{T^{l}}(\mathcal{F}_{\lambda})\) through pull back via the structure morphism, or equivalently by the map which sends the class \([V]\) of a \(T^{l}\)-representation to the trivial \(T^{l}\)-equivariant vector bundle \(\mathcal{F}_{\lambda}\times V\). Similarly we have a \(\mathbb{Z}\)-module structure on \(K(\mathcal{F}_{\lambda})\) given by pull back via the structure morphism, or equivalently by the map \(\theta\) which sends any positive integer \(r\) to the trivial vector bundle over \(\mathcal{F}_{\lambda}\) of rank \(r\).
Let \(f:K_{T^{l}}(\mathcal{F}_{\lambda})\longrightarrow K(\mathcal{F}_{\lambda})\) denote the forgetful homomorphism which sends the class of any \(T^{l}\)-equivariant vector bundle to the class of the underlying vector bundle, forgetting the \(T^{l}\)-structure (see [9], [14]). **Corollary 6.1**.: _The map_ \[F:\mathbb{Z}\otimes_{R(T^{l})}K_{T^{l}}(\mathcal{F}_{\lambda})\longrightarrow K(\mathcal{F}_{\lambda})\] _induced by \(\theta\) on the first factor and by \(f\) on the second factor is an isomorphism of \(\mathbb{Z}\)-modules. In other words, the Springer variety \(\mathcal{F}_{\lambda}\) is weakly equivariantly formal for \(K\)-theory [7]._ **Proof:** By Theorem 5.1, \(K_{T^{l}}(\mathcal{F}_{\lambda})\) is a free \(R(T^{l})\)-module of rank \(\binom{n}{\lambda}\). By [18, Proposition 3.1] we have that \(K(\mathcal{F}_{\lambda})\) is a free \(\mathbb{Z}\)-module of rank \(\binom{n}{\lambda}\) and also that it is generated by the classes of the line bundles \([L_{i}]\) for \(1\leq i\leq n\). Since the \(L_{i}\) are \(T^{l}\)-equivariant, the classes \([L_{i}]\) are the images of \([L_{i}]_{T^{l}}\in K_{T^{l}}(\mathcal{F}_{\lambda})\) under \(f\), so that the map \(F\) is surjective. Thus \(F\) is a surjective homomorphism between two free abelian groups of the same rank \(\binom{n}{\lambda}\) and is therefore an isomorphism. \(\square\) On the other hand, by specializing to \(u_{i}=1\) for all \(1\leq i\leq l\), that is, by tensoring with \(\mathbb{Z}\) over \(R(T^{l})\), we have \[h_{d-k}(u_{\varphi_{\lambda}(1)},\ldots,u_{\varphi_{\lambda}(s+1-d)})=\binom{s+1-d+d-k-1}{s+1-d-1}=\binom{s-k}{s-d}\] (see [12]) for \(d\geq s+1-q\). Furthermore, since \[\binom{q+d-k-1}{q-1}=\left(\prod_{i=1}^{q-s-1+d}\frac{s-k+i}{s-d+i}\right)\cdot\binom{s-k}{s-d},\] it follows that the equivariant \(K\)-theoretic Tanisaki ideal reduces to the ordinary \(K\)-theoretic Tanisaki ideal \(I_{\lambda}\) defined in [18, p. 11]. By combining Theorem 1.1 and Corollary 6.1 we retrieve the presentation for the ordinary \(K\)-ring of \(\mathcal{F}_{\lambda}\) which was obtained directly in [18] by alternate methods. **Theorem 6.2**.: _[_18_, Theorem 4.2]_ _Let \(\psi_{\lambda}^{\text{ord}}:R=\mathbb{Z}[x_{1},\ldots,x_{n}]\to K(\mathcal{F}_{\lambda})\) be the ring homomorphism defined by \(\psi_{\lambda}^{\text{ord}}(x_{j})=[L_{j}]\) for \(1\leq j\leq n\). Then \(\psi_{\lambda}^{\text{ord}}\) is surjective and \(ker(\psi_{\lambda}^{\text{ord}})=I_{\lambda}\), where \(I_{\lambda}\) is the ideal in \(R\) generated by the elements_ \[\sum_{0\leq k\leq d}(-1)^{d-k}e_{k}(x_{i_{1}},\ldots,x_{i_{s}})\cdot\binom{q+d-k-1}{q-1}\] _where \(1\leq i_{1}<\cdots<i_{s}\leq n\), \(1\leq s\leq n\), \(d\geq s+1-q\) and \(q=p_{\lambda^{\vee}}(s)\)._ **Acknowledgement:** This work was supported by a SERB MATRICS research grant, project number MTR/2022/000484. The author wishes to thank Prof. Parameswaran Sankaran for several valuable discussions. The author wishes to thank Prof. Shrawan Kumar for very valuable suggestions during the preparation of the manuscript. The author also wishes to thank Prof. Megumi Harada for encouragement and helpful email exchanges.
2310.06670
Domain Generalization by Rejecting Extreme Augmentations
Data augmentation is one of the most effective techniques for regularizing deep learning models and improving their recognition performance in a variety of tasks and domains. However, this holds for standard in-domain settings, in which the training and test data follow the same distribution. For the out-of-domain case, where the test data follow a different and unknown distribution, the best recipe for data augmentation is unclear. In this paper, we show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance. To do that, we propose a simple training procedure: (i) use uniform sampling on standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain, and (iii) devise a new reward function to reject extreme transformations that can harm the training. With this procedure, our data augmentation scheme achieves a level of accuracy that is comparable to or better than state-of-the-art methods on benchmark domain generalization datasets. Code: \url{https://github.com/Masseeh/DCAug}
Masih Aminbeidokhti, Fidel A. Guerrero Peña, Heitor Rapela Medeiros, Thomas Dubail, Eric Granger, Marco Pedersoli
2023-10-10T14:46:22Z
http://arxiv.org/abs/2310.06670v1
# Domain Generalization by Rejecting Extreme Augmentations ###### Abstract Data augmentation is one of the most effective techniques for regularizing deep learning models and improving their recognition performance in a variety of tasks and domains. However, this holds for standard in-domain settings, in which the training and test data follow the same distribution. For the out-of-domain case, where the test data follow a different and unknown distribution, the best recipe for data augmentation is unclear. In this paper, we show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance. To do that, we propose a simple training procedure: (i) use uniform sampling on standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm the training. With this procedure, our data augmentation scheme achieves a level of accuracy that is comparable to or better than state-of-the-art methods on benchmark domain generalization datasets. Code: [https://github.com/Masseeh/DCAug](https://github.com/Masseeh/DCAug) ## 1 Introduction The main assumption of commonly used deep learning methods is that all examples used for training and testing models are independently and identically sampled from the same distribution [44]. In practice, such an assumption does not always hold, and this can limit the applicability of the learned models in real-world scenarios [22]. In order to tackle this problem, domain generalization (DG) [5] aims to predict well on data distributions different from those seen during training. In particular, we assume access to multiple datasets during training, each of them containing examples about the same task but collected under a different domain or environment. One effective approach for DG is to increase the diversity of the training data [50]. Data augmentation, which is a widely used approach for generating additional training data [24, 35, 43], is especially beneficial since it can help to approximate the true distribution of the dataset. However, choosing augmentations often depends on the underlying dataset.
Figure 1: A conceptual illustration of our method. The inner circle and outer circle represent the space of weak (safe) and wider (possibly harmful) augmentations, respectively. Our method is able to automatically select, for each combination of data sample and augmentation, a wider transformation (when safe) or reject it when unsafe. This is achieved with the help of a reward function (represented as the yellow color gradient) that compares the diversity and the consistency of an augmented sample (see Section 4 for more details). In the illustration, given an image \(x\), we present two possible paths of augmentation. For the blue path, the wide augmentation has high diversity and high consistency, and therefore it is selected (green box). For the purple path, although the wide augmentation has high diversity, it also has low consistency; therefore the transformation is rejected (red box), and the weak transformation is used instead as the augmentation.
While learning the right data augmentation for the in-domain setting (same distribution between training and test) has been explored in research [8, 9, 33, 42], there is currently little research on good augmentations for domain generalization and on how to leverage domain information to make the augmentations more effective. In this work, we investigate those questions. First, we show that data augmentation is also useful for domain generalization, but to cover the different training domains, and hopefully the target domain, the proposed transformations need to be stronger than for in-domain tasks. However, too strong transformations could be harmful to the learning process (see Figure 1). To fully exploit stronger transformations without harming the learning, we select diverse and challenging samples that provide helpful training information without losing the sample's original semantics. We introduce a reward function consisting of diversity and semantic consistency components and use it to select, for each sample, the best augmentation between a weak but safe and a strong but diverse augmentation. Thus, the proposed algorithm should be able to select which augmentation is better for each sample for training a model that can generalize to unknown data distributions. **The main contributions of this paper are as follows: (1)** We show that while commonly used augmentation-based techniques for in-domain settings are quite powerful for DG, we can increase the performance further by expanding the range of transformations. Consequently, we achieve superior results compared to the majority of approaches relying on domain-invariant representations. **(2)** With the new expanded range it is easier to produce harmful transformations; therefore, we introduce a data augmentation schema that selects the optimal augmentation strategy between a weak yet safe and a diverse yet strong augmentation technique. **(3)** Experiments on common benchmark datasets show the benefits of our proposed method, which achieves an accuracy that is better than state-of-the-art methods for DG. ## 2 Related Works **Data Augmentation:** There has been extensive research on data augmentation for computer vision tasks. Horizontal flips and random cropping or translations of images are commonly used for natural image datasets such as CIFAR-10 [23] and ImageNet [38], while elastic distortions and scalings are more common on the MNIST dataset [49]. While data augmentation usually improves model generalization, if too strong, it might sometimes hurt performance or induce unexpected biases. Thus, one needs to manually find effective augmentation policies based on domain knowledge and model validation. To alleviate this issue, researchers have proposed various methods to automatically search for efficient augmentation strategies for in-domain generalization [8, 16, 55, 29, 18, 53]. AutoAugment (AA) [8] is the pioneering work on automating the search for the ideal augmentation policy; it uses Reinforcement Learning (RL) to search for an optimal augmentation policy for a given task. Unfortunately, this search process requires extensive computing power, on the order of several thousand GPU hours. Many subsequent works adopt the AutoAugment search space for their own policy search [50]. In particular, [28, 32] propose methods to shorten the duration of the policy search for data augmentation while maintaining similar performance. Alternatively, other works resort to different guided search techniques to accelerate the search.
Lim _et al_. [29] use a Bayesian optimization approach, [16] uses an online search during the training of the final model, and [18] employs an evolutionary algorithm to search for the optimal augmentation policy, also in an online fashion. Adversarial AutoAugment (Adv. AA) [53] is another slightly cheaper method that uses multiple workers and learns the augmentation policy that leads to hard samples, as measured by the target loss during training. However, all of these sophisticated approaches are comparable to RandAugment (RA), which uses the augmentation search space introduced in [9], but with a uniform sampling policy in which only the global magnitude of the transformations and the number of applied transformations are learned on a validation set. TrivialAugment (TA) [33] and UniformAugment [30] push the RA method further to the extreme and propose a truly search-free approach for data augmentation selection, yet achieve test set accuracies that are on par with or better than the more complex techniques previously discussed. However, all the mentioned methods use the search space of AutoAugment, which is already designed to not excessively distort the input image. To control the space of data augmentation, Gong _et al_. [13] regularize augmentation models based on prior knowledge, while Wei _et al_. [46] use knowledge distillation to mitigate the noise introduced by aggressive AA data augmentation policies. Suzuki [42] proposes an online data augmentation optimization method called TeachAugment that introduces a teacher model into the adversarial data augmentation and makes it more informative without the need for careful parameter tuning. However, all the mentioned methods are designed for standard in-domain settings and do not consider generalization to unknown domains as it arises in domain generalization problems. **Domain Generalization:** Learning domain-invariant features from source domains is one of the most popular methods in domain generalization. These methods aim at learning high-level features that make domains statistically indistinguishable (domain-invariant). Ganin _et al_. [12] propose Domain Adversarial Neural Networks (DANN), which use a GAN-style adversarial objective to enforce that the features cannot be predictive of the domain. Albuquerque _et al_. [1] build on top of DANN by considering one-versus-all adversaries that try to predict to which training domain each of the examples belongs. Later works consider a number of ways to enforce invariance, such as minimizing the maximum mean discrepancy (MMD) [26], aligning class-conditional distributions across domains [27], and matching the feature covariance (second-order statistics) across training domains at some level of representation [41]. Although popular, enforcing invariance is challenging and often too restrictive. As a result, Arjovsky _et al_. [2] propose to enforce that the same classifier is simultaneously optimal for all domains. GroupDRO [39] proposes to minimize the worst-case training loss by putting more mass on samples from the more challenging domains at train time. Bui _et al_. [6] use meta-learning and adversarial training in tandem to disentangle features in the latent space while jointly learning both domain-invariant and domain-specific features in a unified framework. However, Zhao _et al_. [54] show that learning an invariant representation, in addition to possibly ignoring signals that can be important for new domains, is not enough to guarantee target generalization.
Furthermore, as evidenced by the strong performance of ERM [15], these methods are either too strong to optimize reliably or too weak to achieve their goals [51]. **Data Augmentation for Domain Generalization:** Another effective strategy to address domain generalization [15, 47] is to use data augmentation. These methods focus on manipulating the inputs to assist in learning general representations. Zhou _et al_. [55] use domain information to create additive noise that increases the diversity of the training data distribution while preserving the semantic information of the data. Yan _et al_. [48] use mixup to blend examples from the different training distributions. In RSC [19], the authors iteratively discard the dominant features from the training data, aiming to improve generalization. MixStyle [56], inspired by the style-transfer literature, where feature statistics encode domain-related information, synthesizes novel domains by mixing the feature statistics of two instances. SagNets [34] propose to disentangle style encodings from class categories to prevent style-biased predictions and focus more on the contents. The performance of these methods depends on whether the augmentation can help the model to learn invariance in the data. In this work, we build upon observations from [15, 47, 52], which show that data augmentation plays a vital role in improving out-of-distribution generalization. Our approach employs uniform sampling, similar to TrivialAugment [33], and a rejection reward inspired by TeachAugment [42]. This combination leads to an effective data augmentation strategy for domain generalization. ## 3 Revisiting Random Data Augmentation for Domain Generalization ### Problem Definition We study the problem of Multi-Source Domain Generalization for classification. During training, we assume access to \(N\) datasets containing examples about the same task but collected under different domains or environments, \(\mathcal{D}=\{1,2,\ldots,N\}\). Let \(\mathcal{S}\) be a training dataset containing samples from all training domains, \(\mathcal{S}=\{(x_{1},y_{1},d_{1}),(x_{2},y_{2},d_{2}),\ldots,(x_{M},y_{M},d_{M})\}\), with \(M=|\mathcal{S}|\). Here, \(x_{i}\in\mathcal{X}\) refers to an image, \(y_{i}\in\mathcal{Y}\) is the class label, and \(d_{i}\in\mathcal{D}\) is the domain label. Then, the goal of the domain generalization task is to learn a mapping \(f_{\theta}\colon\mathcal{X}\to\mathcal{Y}\) parametrized by \(\theta\) that generalizes well to an unseen domain \(\hat{d}\notin\mathcal{D}\). In addition, we also consider a domain classifier \(h_{\phi}\colon\mathcal{X}\to\mathcal{D}\) parametrized by \(\phi\) that learns to recognize the domain of a given sample from \(\mathcal{S}\). As a baseline optimization problem, we consider simple empirical risk minimization (ERM), which minimizes the average loss over all samples, \(\theta^{\star}=\arg\min_{\theta}\frac{1}{M}\sum_{(x,y)\in\mathcal{S}}\mathcal{L}(f_{\theta}(x),y)\), where \(\mathcal{L}(\cdot)\) is the cross-entropy loss function. ### Data Augmentation Search Space A well-known approach to achieving domain generalization is transforming the training samples during the learning process to gain robustness against unseen domains [8, 33]. These transformations come from a predefined set of possible data augmentations that operate within a given range of magnitudes.
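As a toy illustration of such a search space, the following minimal Python sketch samples one operation and one magnitude uniformly at random, in the spirit of TrivialAugment-style sampling. The operation names and magnitude ranges below are hypothetical placeholders, not the exact sets or ranges used in this paper.

```python
import random

# Hypothetical operations with (min, max) magnitude ranges; the values are
# illustrative placeholders, not this paper's Weak/Default/Wide/Wider ranges.
AUGMENTATIONS = {
    "rotate":     (-135.0, 135.0),   # degrees
    "shear_x":    (-0.99, 0.99),
    "solarize":   (0.0, 256.0),      # pixel threshold
    "brightness": (0.1, 1.9),        # scaling factor
}

def sample_transform():
    """Uniformly pick one operation, then one magnitude within its range."""
    op = random.choice(list(AUGMENTATIONS))
    lo, hi = AUGMENTATIONS[op]
    return op, random.uniform(lo, hi)

print(sample_transform())  # e.g., ('rotate', -71.3)
```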
We consider as standard transformations \(\mathcal{T}_{weak}\colon\mathcal{X}\to\mathcal{X}_{weak}\) the random flip, crop, and slight color-jitter augmentations that are safe, i.e., that do not destroy image semantics. Such weak transformations are used in every training step. On top of the standard transformations, we may also apply further transformations selected from either the Default data augmentation search space of RandAugment [9] or the Wide search space of TrivialAugment (TA) [33]. Here we use geometric transformations (ShearX/Y, TranslateX/Y, Rotate) as well as color-based transformations (Posterize, Solarize, Contrast, Color, Brightness, Sharpness, AutoContrast, Equalize, and Grey). However, unlike TA, we expand the magnitude ranges and construct \(\mathcal{T}_{wider}\colon\mathcal{X}_{weak}\to\mathcal{X}_{wider}\) to include more aggressive data augmentations (see 10 in the supplementary materials). For sampling transformations, we follow the TA procedure, which involves randomly sampling an operation and a magnitude from the search space for each image. ### Motivation for Wider Range Random augmentation over a set of predefined transformations, as in TA, despite being very simple, is competitive with the state-of-the-art data augmentation methods in standard in-domain settings. In Table 1, we consider the performance of TA, RandAugment, and AutoAugment for domain generalization. As can be seen, such DA methods already improve over ERM [44]. However, for domain generalization, we expect that more aggressive transformations can push the representation outside the training domains and help to adapt to new domains. In fact, as shown in Table 1, the uniform sampling strategy of TA, but with wider transformations, further improves over the rest of the methods. However, as shown in Figure 2, stronger augmentations can easily lead to extreme transformations that do not keep the semantics of the image. Thus, the aim of this work is to further improve this strong baseline by proposing a mechanism to reject those extreme augmentations. For more details about the used datasets and the training procedures, see Section 5.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Search Space**} & \multicolumn{6}{c}{**Dataset**} \\ \cline{3-8} & & PACS & VLCS & OfficeHome & TerraInc & DomainNet & **Avg.** \\ \hline ERM as in [44] & Weak & 84.2\(\pm 0.1\) & 77.3\(\pm 0.1\) & 67.6\(\pm 0.2\) & 47.8\(\pm 0.6\) & 44.0\(\pm 0.1\) & 64.2 \\ RandAugment [9] & Default & 86.1\(\pm 0.8\) & 78.7\(\pm 0.7\) & 67.8\(\pm 0.4\) & 44.7\(\pm 1.4\) & 44.0\(\pm 0.2\) & 64.3 \\ TA [33] & Wide & 85.5\(\pm 1.1\) & 78.6\(\pm 0.5\) & 68.0\(\pm 0.2\) & 47.8\(\pm 1.6\) & 43.8\(\pm 0.2\) & 64.7 \\ AutoAugment [8] & Default & 85.8\(\pm 0.5\) & 78.7\(\pm 0.8\) & 68.4\(\pm 0.2\) & 48.0\(\pm 1.3\) & 43.7\(\pm 0.2\) & 64.9 \\ TA (Ours) & Wider & 85.6\(\pm 0.8\) & 78.6\(\pm 0.4\) & 68.9\(\pm 0.4\) & 48.3\(\pm 0.8\) & 43.7\(\pm 0.3\) & 65.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Different strategies of data augmentation. We compare different search space ranges traditionally used and wider ones. TA with a wider search space leads to better average out-of-domain accuracy. Our experiments are repeated three times. For details about datasets and training procedures see Section 5.
Figure 2: Sample transformations from TA with wide and wider search space on the PACS dataset. For each transformation, the first row shows the range of transformed samples with the wide search space and the second row with the wider one. We see that the wider space can lead to more variety but also extreme and detrimental transformations that do not keep the semantics of the image. This motivates us to use wider transformations but find a way to reject the extreme ones.
## 4 Rejecting Extreme Transformations For each given input, we generate a weakly augmented version using standard transformations (i.e., using only a flip, a crop, and slight color-jitter) and a strongly augmented version using the \(\mathcal{T}_{wider}\) transformations defined in the previous section. We then define a reward function \(R(x,z)\) that, given an input \(x\) and metadata \(z\), either the domain label \(d\) or the class label \(y\), provides a measurement of the quality of the transformed sample. Then, maximizing such a reward function allows selecting which augmentation is more suitable for the training: \[\tilde{x}=\left\{\begin{array}{ll}\mathcal{T}_{wider}(x)&\text{if }R(\mathcal{T}_{wider}(x),z)\geq R(\mathcal{T}_{weak}(x),z)\\ \mathcal{T}_{weak}(x)&\text{otherwise}\end{array}\right. \tag{1}\] In the following, we define the reward function used. ### Augmentation Reward Intuitively, for domain generalization, a good augmentation creates challenging samples that provide useful training information without losing the sample's original meaning (i.e., the sample's class). We use the teacher-student paradigm to achieve this goal and introduce a unified reward function consisting of diversity and semantic consistency components for selecting the appropriate augmentation.
Considering \(\tilde{x}\) as an augmented sample, the reward function is defined as: \[R(\tilde{x},z)=(1-\lambda)R_{div}(\tilde{x},z)-\lambda R_{con}(\tilde{x},z) \tag{2}\] where \(\lambda\) is the balancing coefficient between diversity and consistency. Here, \(z\) refers to either the domain of the sample \(d\) or the class label \(y\), and it is specified in the following sections for every term of the proposed reward. In the previous equation, the \(R_{div}\) term enforces diversity in the data by exploring the augmentations of the input, while \(R_{con}\) preserves the semantic meaning of the augmented sample \(\tilde{x}\). ### Diverse Student and Consistent Teacher
```
Require: source domains \(S\), label classifier \(f_{\theta}\), domain classifier \(h_{\phi}\), transformations \(\mathcal{T}_{weak}\) and \(\mathcal{T}_{wider}\), learning rate \(\eta\).
Ensure: label classifier \(f_{\theta}\) or \(f_{\tilde{\theta}}\).
1: for minibatch \((x,y,d)\) in training dataset \(S\) do
2:   \(\hat{x}_{1}\leftarrow\mathcal{T}_{weak}(x)\)
3:   \(\hat{x}_{2}\leftarrow\mathcal{T}_{wider}(\hat{x}_{1})\)
4:   select \(\hat{x}\) according to Eq. 1
5:   \(\theta\leftarrow\theta-\eta\nabla_{\theta}[\mathcal{L}(f_{\theta}(\hat{x}),y)]\)
6:   if DCAug\({}^{label}\) then
7:     \(\tilde{\theta}=(1-\beta)\theta+\beta\tilde{\theta}\)
8:   else if DCAug\({}^{domain}\) then
9:     \(\phi\leftarrow\phi-\eta\nabla_{\phi}[\mathcal{L}(h_{\phi}(\hat{x}),d)]\)
10:    \(\tilde{\phi}=(1-\beta)\phi+\beta\tilde{\phi}\)
11:   end if
12: end for
```
**Algorithm 1** DCAug Training Procedure To make our idea work, we must ensure that the diversity reward takes into account the latest changes in the model.
Thus, as a reward for diversity, we use the cross-entropy loss \(\mathcal{L}\) of a classifier \(h\) with parameters \(\phi\) trained to detect the domain of the image \(x\): \[R_{div}(x,d)=\mathcal{L}(h_{\phi}(x),d) \tag{3}\] In this way, the reward avoids repeatedly favoring the same samples, because they are already incorporated in the model. At the same time, the consistency reward needs to be robust, because we need to make sure those samples are classified correctly. To do that, we use an exponential moving average (EMA) \(\tilde{\phi}\) of our domain classifier as a consistent teacher: \[\begin{split}&\tilde{\phi}=(1-\beta)\phi+\beta\tilde{\phi}\\ & R_{con}(x,d)=\mathcal{L}(h_{\tilde{\phi}}(x),d)\end{split} \tag{4}\] where \(\beta\) defines the smoothness of the moving average and is fixed at \(0.999\) for all experiments. We call this approach DCAug\({}^{domain}\). Alternatively, in situations where the domain meta-data \(d\) is not available, we can rewrite Eqs. 3 and 4 by using the label classifier \(f_{\theta}\) as the student, its EMA \(f_{\tilde{\theta}}\) as the teacher, and the class label as ground truth. This method is referred to hereafter as DCAug\({}^{label}\) and uses the reward terms: \[\begin{split}R_{div}(x,y)&=\mathcal{L}(f_{\theta}(x),y)\\ R_{con}(x,y)&=\mathcal{L}(f_{\tilde{\theta}}(x),y)\end{split} \tag{5}\] where \(\tilde{\theta}\) is the exponential moving average (EMA) of \(\theta\). This method also gives us the opportunity to use \(\tilde{\theta}\) instead of \(\theta\) as the final classifier, which usually results in a more robust model [3]. We call this special variant TeachDCAug\({}^{label}\). Figure 3 shows an overview of the DCAug procedure. In each iteration, our training schema comprises two phases. In the first phase, we freeze the \(\theta,\phi\) parameters and select the most appropriate transformation based on our reward function \(R\). In the second phase, we update \(\theta,\phi\) using a gradient descent procedure. The full DCAug training procedure is presented in Algorithm 1. ## 5 Experiments **Dataset.** Following the DomainBed benchmark [15], we evaluate our method on five diverse datasets: PACS [25] is a 7-way object classification task with 4 domains and 9,991 samples. VLCS [11] is a 5-way classification task with 4 domains and 10,729 samples. This dataset mostly contains real photos. The distribution shifts are subtle and simulate real-life scenarios well. OfficeHome [45] is a 65-way classification task depicting everyday objects with 4 domains and a total of 15,588 samples. TerraIncognita [4] is a 10-way classification problem of animals in wildlife cameras, where the 4 domains are different locations. There are 24,788 samples. This represents a realistic use case where generalization is indeed critical. DomainNet [36] is a 345-way object classification task with 6 domains. With a total of 586,575 samples, DomainNet is larger than most of the other evaluated datasets in both samples and classes. **Evaluation protocols and Implementation details.** All performance scores are evaluated by leave-one-out cross-validation, averaging all cases that use a single domain as the target (test) domain and the others as the source (training) domains. We employ the DomainBed training and evaluation protocols [15]. In particular, for training, we use a ResNet-50 [17] pre-trained on ImageNet [38] by default. The model is optimized using the Adam [21] optimizer. A mini-batch contains all domains and 32 examples per domain.
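Before turning to the remaining implementation details, the per-batch selection step of Algorithm 1 and Eqs. (1)-(5) can be summarized in a minimal PyTorch-style sketch. This is an illustrative reading of the procedure rather than the released implementation; all function and variable names (`reward`, `select_augmentation`, `ema_update`) are ours.

```python
import torch
import torch.nn.functional as F

def reward(student, teacher, x, z, lam):
    """Per-sample reward of Eq. (2): (1 - lam) * R_div - lam * R_con."""
    with torch.no_grad():
        r_div = F.cross_entropy(student(x), z, reduction="none")  # Eq. (3)
        r_con = F.cross_entropy(teacher(x), z, reduction="none")  # Eq. (4)
    return (1.0 - lam) * r_div - lam * r_con

def select_augmentation(student, teacher, x_weak, x_wider, z, lam=0.5):
    """Eq. (1): keep the wider view only where its reward is at least as high."""
    keep = reward(student, teacher, x_wider, z, lam) >= \
           reward(student, teacher, x_weak, z, lam)
    return torch.where(keep[:, None, None, None], x_wider, x_weak)

@torch.no_grad()
def ema_update(student, teacher, beta=0.999):
    """EMA teacher update of Eq. (4)."""
    for p, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(beta).add_(p, alpha=1.0 - beta)
```

For DCAug\({}^{domain}\), `z` holds the domain labels \(d\) and the (student, teacher) pair is \((h_{\phi},h_{\tilde{\phi}})\); for DCAug\({}^{label}\), `z` holds the class labels \(y\) and the label classifier \(f_{\theta}\) and its EMA \(f_{\tilde{\theta}}\) play the two roles.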
For the model hyperparameters, such as the learning rate, dropout rate, and weight decay, we use the same configuration as proposed in [7]. We follow [7] and train models for 15,000 steps on DomainNet and 5,000 steps on the other datasets, which corresponds to a variable number of epochs depending on dataset size. Every experiment is repeated three times with different seeds. We leave 20% of the source domain data for validation. We use training-domain validation for model selection, in which, for each random seed, we choose the model maximizing the accuracy on the validation set. The balancing coefficient of our method, \(\lambda\), is coarsely tuned on the validation set over three different values: [0.2, 0.5, 0.8]. ### Main Results In this section, we compare the three variations of our model, \(\text{DCAug}^{domain}\), \(\text{DCAug}^{label}\), and \(\text{TeachDCAug}^{label}\), with and without domain meta-data (as explained in Section 4.2), against 11 related methods in DG. Those methods are divided into two families: data augmentation and domain-invariant representation. For data augmentation, we compare with Mixup [48], MixStyle [56], DDAIG [55], SagNets [34], and RSC [19]. For invariant representation learning, we compare with IRM [2], GroupDRO [39], CORAL [41], MMD [26], DANN [12], and mDSDI [6]. We also include ERM as a strong baseline, as shown in [15]. Table 2 shows the overall performance of DCAug and the other methods on five domain generalization benchmarks on a classification task. The full results per dataset and domain are provided in the supplementary material. From the table, we observe that, as shown in [20], most methods struggle to reach the performance of a simple ERM adapted to multiple domains, and only a few methods manage to obtain good results on all datasets. \(\text{TeachDCAug}^{label}\), which is essentially the moving-average version of \(\text{DCAug}^{label}\), while being simple, manages to rank among the first on all datasets, outperforming all data augmentation and domain-invariant methods.
Furthermore, \(\text{DCAug}^{domain}\) and \(\text{DCAug}^{label}\) manage to outperform all data augmentation methods and obtain results comparable to those of the best domain-invariant methods, which underlines the importance of good data augmentation for DG. Another important observation is that as the dataset size increased, from PACS with 9,991 samples to DomainNet with 586,575 samples (see Section 5), most of the methods, especially those from the domain-invariant family, struggled to show any performance increase compared to the ERM baseline.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Category**} & \multicolumn{6}{c}{**Dataset**} \\ \cline{3-8} & & PACS & VLCS & OfficeHome & TerraInc & DomainNet & **Avg.** \\ \hline ERM [44] & _Baseline_ & 84.2\(\pm\)0.1 & 77.3\(\pm\)0.1 & 67.6\(\pm\)0.2 & 47.8\(\pm\)0.6 & 44.0\(\pm\)0.1 & 64.2 \\ \hline MMD [26] & & 84.7\(\pm\)0.5 & 77.5\(\pm\)0.9 & 66.4\(\pm\)0.1 & 42.2\(\pm\)1.6 & 23.4\(\pm\)9.5 & 58.8 \\ IRM [2] & & 83.5\(\pm\)0.8 & 78.6\(\pm\)0.5 & 64.3\(\pm\)2.2 & 47.6\(\pm\)0.8 & 33.9\(\pm\)2.8 & 61.6 \\ GroupDRO [39] & _Domain-Invariant_ & 84.4\(\pm\)0.8 & 76.7\(\pm\)0.6 & 66.0\(\pm\)0.7 & 43.2\(\pm\)1.1 & 33.3\(\pm\)0.2 & 60.7 \\ DANN [12] & & 83.7\(\pm\)0.4 & 78.6\(\pm\)0.4 & 65.9\(\pm\)0.6 & 46.7\(\pm\)0.5 & 38.3\(\pm\)0.1 & 62.6 \\ CORAL [41] & & 86.2\(\pm\)0.3 & 78.8\(\pm\)0.6 & 68.7\(\pm\)0.3 & 47.6\(\pm\)1.0 & 41.5\(\pm\)0.1 & 64.5 \\ mDSDI [6] & & 86.2\(\pm\)0.2 & 79.0\(\pm\)0.3 & 69.2\(\pm\)0.4 & 48.1\(\pm\)1.4 & 42.8\(\pm\)0.2 & 65.1 \\ \hline DDAIG [55] & & 83.1 & - & 65.5 & - & - & - \\ MixStyle [56] & & 85.2\(\pm\)0.3 & 77.9\(\pm\)0.5 & 60.4\(\pm\)0.3 & 44.0\(\pm\)0.7 & 34.0\(\pm\)0.1 & 60.3 \\ RSC [19] & & 85.2\(\pm\)0.9 & 77.1\(\pm\)0.5 & 65.5\(\pm\)0.9 & 46.6\(\pm\)1.0 & 38.9\(\pm\)0.5 & 62.7 \\ Mixup [48] & _Data Augmentation_ & 84.6\(\pm\)0.6 & 77.4\(\pm\)0.6 & 68.1\(\pm\)0.3 & 47.9\(\pm\)0.8 & 39.2\(\pm\)0.1 & 63.4 \\ SagNets [34] & & 86.3\(\pm\)0.2 & 77.8\(\pm\)0.5 & 68.1\(\pm\)0.1 & 48.6\(\pm\)1.0 & 40.3\(\pm\)0.1 & 64.2 \\ \(\text{DCAug}^{domain}\) (Ours) & & 86.1\(\pm\)0.9 & 78.9\(\pm\)0.5 & 68.8\(\pm\)0.4 & 48.7\(\pm\)0.8 & 43.7\(\pm\)0.3 & 65.2 \\ \(\text{DCAug}^{label}\) (Ours) & & 86.1\(\pm\)0.7 & 78.6\(\pm\)0.4 & 68.3\(\pm\)0.4 & 49.3\(\pm\)1.5 & 43.8\(\pm\)0.2 & 65.2 \\ \(\text{TeachDCAug}^{label}\) (Ours) & & 88.4\(\pm\)0.2 & 78.8\(\pm\)0.4 & 70.4\(\pm\)0.2 & 51.1\(\pm\)1.1 & 46.4\(\pm\)0.1 & **67.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with domain generalization methods. Out-of-domain accuracies on five domain generalization benchmarks are presented. We highlight the best overall result. For each category, we also report the average accuracy per dataset. Accuracies other than those of our methods (DCAug) are from [7, 15]. Our experiments are repeated three times.
Figure 3: Overview of the \(\text{DCAug}^{domain}\) procedure for rejecting extreme augmentations. After calculating \(R_{div}\) and \(R_{con}\) for \(\mathcal{T}_{weak}\) and \(\mathcal{T}_{wider}\), our method selects the transformation with the highest reward (green box) and updates the label classifier \(f_{\theta}\) and domain student \(h_{\phi}\) using the transformed input \(\tilde{x}\). \(\text{DCAug}^{label}\) and \(\text{TeachDCAug}^{label}\) follow the same procedure, replacing \(d\) and \(h_{\phi}\) by \(y\) and \(f_{\theta}\), respectively (see 10 in the supplementary materials for more visual examples of the selected images).
This poor performance is probably due to the large size and larger number of domains of TerraIncognita and DomainNet, which makes approaches that try to artificially increase the generalization of the algorithm unprofitable. On the other hand, our approach manages to keep the performance close to that of ERM, or better in the case of TerraIncognita. Recently, a new method based on ensembling models [37] trained with different hyperparameters has reached an average accuracy of \(68\) on the evaluated datasets. However, this approach does not belong to the studied methods and is orthogonal to them. Compared to ERM, DCAug has a small additional computational cost. In particular, DCAug\({}^{domain}\), besides updating the parameters of both the domain and label classifiers for each sample, computes the loss of the domain classifier twice without the need to calculate the gradients (see C in the supplementary materials for a full characterization). ### Empirical analysis of DCAug **Rejection rate of strong augmentations.** We study the evolution of the rejection rate of strong augmentations over epochs on the PACS dataset. We use the same balancing hyperparameter \(\lambda\) as in the main experiments. For each domain, we show the selection of weak and strong augmentations for the entire training. As we observe in Figure 4, the rejection rate is domain-dependent. As the models have been pre-trained on ImageNet [38], we can see that our method selects weak and strong augmentations equally for domains that are closer to the pre-training dataset (Art and Photo). However, for Cartoon and Sketch, which are far from ImageNet, we observe that our method uses more strong augmentations. This observation is in line with other works that suggest the effect of data augmentations diminishes if the training data already covers most of the variations in the dataset [31, 47]. **Measuring diversity and consistency.** Following [14], in this section we compare the diversity and consistency (affinity) of the transformation space induced by our proposed approach with those of TA (Wide), TA (Wider), and ERM. Intuitively, consistency measures the level of distortion caused by a given data augmentation schema on the target dataset. In our case, we use the in-domain validation set and the out-of-domain test set to measure in-domain and out-of-domain performance, respectively. On the other hand, diversity is a model-dependent quantity and captures the difficulty the model has in fitting the augmented training data (see 3.2 in the supplementary materials for precise definitions). As shown in Figure 5, neither of the two extremes, TA (Wider) as the most diverse and ERM with standard data augmentations as the most consistent, is sufficient for the best final performance. However, all of our proposed approaches provide a good trade-off between consistency and diversity, which results in the best final performance both in-domain and out-of-domain. **In-domain performance.** We investigate the in-domain performance of AutoAugment, RandAugment, TA (Wide), TA (Wider), and our proposed methods in Table 3. As we can see, the ranking is the exact opposite of that in Table 1, which shows that aggressive data augmentation can indeed harm in-domain performance. Our method, however, has proven to be highly effective in both cases, demonstrating that by limiting the scope of transformations, we can achieve outcomes that combine the best of both worlds.
Figure 4: Evolution over epochs of Weak vs Wider augmentations on the four domains of the PACS dataset. The title of each plot shows the domain, out-of-domain accuracies, and the average ratio of each augmentation for the entire training run. In the plots, we clearly see that for Cartoon and Sketch, the two domains that are farther from the pre-trained model on ImageNet and with lower performance, strong transformations are preferred over weak ones.
**Diverse domain and consistent label.** Given that our final target is to select a transformation that creates challenging samples with diverse domains without losing the sample's original meaning, one might want to use the do
**Diverse domain and consistent label.** Given that our final target is to select a transformation that creates challenging samples with diverse domains without losing the sample's original meaning, one might want to use the do Figure 4: Evolution over epochs of Weak vs Wider augmentations on the four domains of the PACS dataset. The title of each plot shows the domain, out-of-domain accuracies, and the average ratio of each augmentation for the entire training run. In the plots, we clearly see that for Cartoon and Sketch, the two domains that are farther from the pre-trained model on ImageNet and with lower performance, strong transformations are preferred over weak ones. main classifier and label classifier to satisfy this goal. In particular, we can write a variation of our reward functions as follows: \[\begin{split} R_{div}(x,d)&=\mathcal{L}(h_{\phi}(x),d) \\ R_{con}(x,y)&=\mathcal{L}(f_{\theta}(x),y),\end{split} \tag{6}\] in which the diversity is measured in terms of domain labels \(d\) and the consistency in terms of class labels \(y\). This formulation resembles the reward function used in [55], although for different purposes. We also derive another variation to this reward function which uses the EMA version of each model. Also for these rewards, we tune the hyper-parameter \(\lambda\) to find the best balance between diversity and consistency. As reported in Table 4, these two variants of our formulation do not really work. In particular, since the task of domain classification is significantly easier than the target task, finding a good balance between these two turns out to be difficult [40] ## 6 Conclusion In this work, we have presented a method for improving the performance of data augmentation when multiple domains are available at training time, but the distribution of the test domain is different from those and unknown. In this setting, we show that the state-of-the-art in-domain augmentation of TrivialAugment [33] based on uniform sampling of predefined transformation is beneficial and helps to improve results for a baseline based on ERM, which has been shown to be strong for domain generalization [15]. Then, we propose to further improve results by increasing the magnitude of those transformations while keeping the random sampling. This makes sense for domain generalization as we want to learn an unknown domain. Finally, we propose a rejection scheme that removes extreme and \begin{table} \begin{tabular}{l c c} \hline \hline **Diversity** & **Consistency** & **OOD Accuracy** \\ \hline \(\mathcal{L}(h_{\phi}(x),d)\) & \(\mathcal{L}(f_{\theta}(x),y)\) & 81.8 \\ \(\mathcal{L}(h_{\tilde{\phi}}(x),d)\) & \(\mathcal{L}(f_{\tilde{\theta}}(x),y)\) & 80.1 \\ \(\mathcal{L}(f_{\theta}(x),y)\) & \(\mathcal{L}(f_{\tilde{\theta}}(x),y)\) & **86.1** \\ \(\mathcal{L}(h_{\phi}(x),d)\) & \(\mathcal{L}(h_{\tilde{\phi}}(x),d)\) & **86.1** \\ \hline \hline \end{tabular} \end{table} Table 4: Variations of our reward function on PACS dataset. \(h_{\tilde{\phi}}\) and \(f_{\tilde{\theta}}\) refer to the EMA version of their corresponding models. Our methods are highlighted in the table. Figure 5: Consistency and diversity for different methods for in-domain (left) and out-of-domain (right) settings on the PACS dataset. Color represents the classification accuracy on the test set. For high accuracy, we need a good trade-off between diversity and consistency. 
harmful transformations during training based on a reward function that compares the performance of a label classifier with an exponential moving average of it. All these contributions allowed our method to achieve results equal to or better than those of state-of-the-art methods on five challenging domain generalization datasets, with minimal intervention in the standard ERM pipeline.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Search Space**} & \multicolumn{6}{c}{**Dataset**} \\ \cline{3-8} & & PACS & VLCS & OfficeHome & TerraInc & DomainNet & **Avg.** \\ \hline AutoAugment [8] & Default & 97.3\(\pm 0.2\) & 87.0\(\pm 0.1\) & 82.9\(\pm 0.3\) & 90.1\(\pm 0.2\) & 62.2\(\pm 0.1\) & 83.9 \\ TA (Ours) & Wider & 97.6\(\pm 0.1\) & 87.2\(\pm 0.1\) & 83.4\(\pm 0.3\) & 89.7\(\pm 0.1\) & 62.0\(\pm 0.1\) & 84.0 \\ RandAugment [9] & Default & 97.3\(\pm 0.2\) & 87.1\(\pm 0.2\) & 82.8\(\pm 0.3\) & 91.2\(\pm 0.1\) & 62.4\(\pm 0.4\) & 84.2 \\ TA [33] & Wide & 97.6\(\pm 0.2\) & 87.2\(\pm 0.1\) & 83.8\(\pm 0.4\) & 90.9\(\pm 0.2\) & 62.7\(\pm 0.1\) & 84.4 \\ DCAug\({}^{domain}\) (Ours) & Wider & 97.5\(\pm 0.2\) & 87.1\(\pm 0.1\) & 83.5\(\pm 0.3\) & 91.6\(\pm 0.2\) & 62.8\(\pm 0.1\) & 84.5 \\ DCAug\({}^{label}\) (Ours) & Wider & 97.6\(\pm 0.2\) & 87.3\(\pm 0.2\) & 83.4\(\pm 0.4\) & 91.3\(\pm 0.2\) & 62.8\(\pm 0.1\) & 84.5 \\ TeachDCAug\({}^{label}\) (Ours) & Wider & 98.1\(\pm 0.2\) & 87.5\(\pm 0.2\) & 84.1\(\pm 0.4\) & 92.5\(\pm 0.1\) & 65.6\(\pm 0.1\) & **85.6** \\ \hline \hline \end{tabular} \end{table} Table 3: In-domain accuracies of our methods on five domain generalization benchmarks. Our experiments are repeated three times.
2303.06748
DTT: An Example-Driven Tabular Transformer for Joinability by Leveraging Large Language Models
Many organizations rely on data from government and third-party sources, and those sources rarely follow the same data formatting. This introduces challenges in integrating data from multiple sources or aligning external sources with internal databases. Commercial database systems do not offer adequate support for integrating data from heterogeneous sources, and manual integration is both time-consuming and inefficient. State-of-the-art data integration approaches that rely on similarity functions and textual transformations often fail to handle challenging cases where multiple mappings are required, or the mappings go beyond simple textual transformations. In this paper, we study the potential of deep neural models for transforming tables for joinability. In particular, we cast the problem as a prediction task and develop a framework that leverages large deep-learning language models to transform tabular data from a source formatting to a desired target representation. Our framework can efficiently learn the patterns for mapping a source formatting into an expected target using just a few examples, which can then be used for tasks such as table joining, filling in missing values, and error detection. Compared to state-of-the-art mapping and joining approaches, our framework delivers noticeably more accurate and scalable performance on both real-world and synthetic datasets. Our experimental evaluation also shows that the performance of the proposed framework using our fine-tuned model is on par with or better than that of large language models such as GPT-3, despite the significant difference in size, and that using large language models within our framework improves their performance.
Arash Dargahi Nobari, Davood Rafiei
2023-03-12T20:51:26Z
http://arxiv.org/abs/2303.06748v2
# DTT: An Example-Driven Tabular Transformer by Leveraging Large Language Models ###### Abstract. Many organizations rely on data from government and third-party sources, and those sources and organizations do not follow the same data formatting. This introduces challenges in integrating data from multiple sources. Commercial database systems do not offer adequate support for integrating data from heterogeneous sources, and manual integration is both time-consuming and inefficient. While state-of-the-art approaches rely on similarity functions and textual transformations, they often fail to handle challenging cases where multiple mappings are required, or the mappings go beyond simple textual transformations. In this paper, we study the potential of deep neural models for transforming tables for joinability. In particular, we cast the problem as a prediction task and develop a framework that leverages large deep-learning language models to transform tabular data from a source formatting to a desired target representation. Our framework can efficiently learn the pattern for mapping the source formatting into the expected target using just a few examples, which can then be used for table joining, filling in missing values, and error detection. Compared to state-of-the-art mapping and joining approaches, our framework delivers noticeably more accurate and scalable performance on both real-world and synthetic datasets. Our experimental evaluation also shows that the performance of the proposed framework using our fine-tuned model is on par with or better than that of large language models such as GPT-3, despite the significant difference in size, and that integrating large language models into our framework improves their performance. ## 1. Introduction The drive towards data publishing and sharing by entities and governments over the past couple of years has led many organizations to rely on data from third-party sources. However, gathering data from multiple sources inevitably leads to data mismatches. Converting data from one format to another has long been a challenge in data integration and management (Han et al., 2012; Chen et al., 2013; Chen et al., 2014; Liu et al., 2015), with traditional approaches relying on manual development of guidelines and rules for integration or transformation (Sandhi et al., 2013; Chen et al., 2014). However, the sheer size and complexity of modern databases make manual transformation and integration impractical, fueling significant research on automatic data transformation (Han et al., 2012; Chen et al., 2013; Chen et al., 2014; Liu et al., 2015). Our focus in this paper is on the automated transformation of tabular data such as spreadsheets, web tables, and relational databases, which is widely adopted by both organizations and governments for representing, storing, and sharing data. In particular, given a few examples of matched rows between a source and a target, the goal is to learn a mapping. Those mappings can then be used to transform arbitrary rows in the source formatting into the formatting of the target, with applications in joining data from different sources (Dargahi et al., 2013; Liu et al., 2015), filling in missing values and auto-completion (Han et al., 2012; Chen et al., 2014), and error detection and correction (Liu et al., 2015). Example 1 ().: _Figure 1 depicts a source and a target, representing the same entities (people) but in different formattings.
The source table shows the names of the individuals, while the target table shows their corresponding user ids. Suppose that the target column is incomplete or unavailable, and the objective is to predict the missing values of the target column, based on a few examples of source-target pairs. Or, one may want to join the two columns despite the differences in formatting. The transformation process requires some reformatting rules and the choice of a rule may be conditional on the input. For example, in the first, third, and seventh rows, the reformatting rule involves concatenating the initial and the last name, converting them to lowercase, and using a period as a separator. However, in the second row, there is a middle name, while the last row lacks a first name, and these variations can affect the transformations. In general, detecting such transformations from a set of examples is not straightforward and must account for various challenges._ **Challenges** The _search space_ for possible transformations is huge. If each transformation is synthesized into a sequence of edit operations, the search space grows exponentially with the number of edit operations in a transformation and the parameter space of the operations. Also, both the search space and the runtime for many state-of-the-art approaches (Han et al., 2012; Chen et al., 2013; Chen et al., 2014; Liu et al., 2015) further grow with the input size, such as the number of rows and the length of a row. Despite some attempts to reduce the search space by limiting the number of operations (e.g. split and substring) within a transformation (Dargahi et al., 2013), sampling the input (Liu et al., 2015), and applying pruning strategies (Dargahi et al., 2013), the search space still remains relatively large. Hence the time needed to find a mapping is often much more than what Figure 1. Two columns, representing the name and user id of individuals, where a mapping from name to user id is sought may be considered as acceptable, for example, in an online setting. Also, some of these improvements and prunings are lossy, and they can miss transformations that are of a better quality than those found. Another challenge is _the availability of input examples and noise handling_. The examples are usually either user-provided or automatically generated from input. In the former, the examples are (extremely) limited but accurate, whereas in the latter, the examples can be extensive, but less accurate. In real-world settings, noise is usually unavoidable and inconsistencies can exist in data. Also when examples are automatically generated, some of them can be incorrect. A good model should perform well with only a limited number of examples, and it should also benefit from the abundance of examples, maybe ignoring those that are less useful. A good model should also be robust against any possible noise in data and deal with inaccuracy in the provided examples. **Existing Approaches** A wide array of studies target the problem of matching entity descriptions or records that describe the same real-world entities but differ in terms of formatting or representation (Bahdan et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2020). Traditional approaches rely on textual and semantic similarity, whereas more recent approaches incorporate machine learning and deep neural models. 
While these models provide effective solutions to the problem of formatting mismatch in joining tabular data, their use cases are limited. For example, these models cannot predict or rectify missing values, provide suggestions, or detect outliers in a table. This has led to another line of research where the focus is on finding a mapping between source and target tables and leveraging this mapping to transform the source formatting into that of the target. The majority of approaches aimed at detecting a mapping between two tables rely heavily on a limited set of string-based transformations (Bahdan et al., 2015; Chen et al., 2016; Chen et al., 2017) and an exhaustive search of the parameter space. While the search space can be bounded by limiting the number of string-based transformations, this can negatively impact the accuracy. Consider the source and target tables in Figure 1, where different rows may require different formatting rules. To transform all rows in the source to their corresponding target values, six distinct textual transformations may be needed, as illustrated in Example 1. Some studies (Bahdan et al., 2015; Chen et al., 2017; Chen et al., 2017) limit their search space and find a single transformation that covers all input rows, which will not be effective in this scenario. Other studies (Chen et al., 2017; Chen et al., 2017) can produce more than one transformation, but the problem of selecting a transformation from the set to apply to an arbitrary row is left unanswered. For instance, Nobari et al. (2018) provide a set of transformations that are required for a mapping but do not provide much guidance on how to select a transformation for an input row. Furthermore, many state-of-the-art methods (Bahdan et al., 2015; Chen et al., 2017; Chen et al., 2017) exhaustively search the transformation space, and despite their pruning strategies, their runtimes increase dramatically when the input size grows (Nakamura et al., 2018). **Our Approach** In this paper, we introduce the Deep Tabular Transformer (DTT), a novel framework for transforming tabular data into a joinable format using the power of deep learning for language modeling. Unlike traditional approaches that rely on a limited set of pre-defined string-based transformations and an exhaustive search process, DTT overcomes these limitations by leveraging advanced deep learning techniques. DTT predicts an expected output row in the target table for each row of the source table, enabling easy and efficient data joining. Our experimental results show that DTT outperforms existing state-of-the-art approaches in terms of accuracy, is applicable to a larger set of tables, and maintains outstanding runtime performance even when dealing with large input sizes. Remarkably, the performance of DTT is on par with or better than that of large language models such as GPT-3, despite having an order of magnitude fewer parameters and requiring dramatically fewer resources during inference. We are releasing DTT as a pretrained model, which demonstrates exceptional performance across multiple domains without the need for fine-tuning. Our hope is that this release will drive further advancements in the field. Our contributions can be summarized as follows: 1. We propose DTT, a novel example-driven approach for tabular data transformation leveraging pretrained language models. 2. We develop a diverse dataset for training our model, comprising synthetic examples.
Our experiments demonstrate that our model performs exceptionally well on real-world data from various domains. 3. We present an end-to-end framework for tabular data transformation which includes a decomposer, serializer, model, and aggregator. As an application of our framework, we demonstrate its effectiveness in table joining. 4. We conduct an extensive evaluation on a wide range of datasets from different domains and show that our approach outperforms existing state-of-the-art baselines in terms of both accuracy and runtime. 5. We make all our resources, including our code, framework, pretrained model, synthetic data generator, and real-world benchmarks, publicly available for the research community.1 Footnote 1: [https://github.com/arashdn/dtt](https://github.com/arashdn/dtt) ## 2. Problem Definition We want to transform tables from a source formatting to a target formatting using a few provided examples. Let \(S=\{s_{1},s_{2},\ldots\}\) denote a set of values in the source. For a small subset \(S^{\prime}\subset S\), let \(E=\{(s_{i},t_{i})|s_{i}\in S^{\prime}\}\) denote a set of \(k\) examples where the target values are given to guide the process of finding a transformation. The aim is to find the target formatting of every value in the source, i.e. \[R=\{(s_{i},f(s_{i}))\mid s_{i}\in S\wedge\forall s_{j}\in S^{\prime}\,\big((s_{j},f(s_{j}))\in E\big)\}. \tag{1}\] As an example, suppose we have a source table \(S\) that lists the recent prime ministers of Canada and an example set \(E\) that consists of three rows: S = {'Justin Trudeau', 'Stephen Harper', 'Paul Martin', 'Jean Chretien', 'Kim Campbell'}, E = { ('Justin Trudeau', 'jtrudeau'), ('Stephen Harper', 'sharper'), ('Paul Martin', 'pmartin') }. Our aim is to find the target formatting for any arbitrary value in \(S\). For instance, the values 'Jean Chretien' and 'Kim Campbell' may be mapped to 'jchretien' and 'kcampbell' respectively. Tables can be transformed for joinability, for example, allowing a column of a source table to be joined with a column in the target. Tables may also be transformed to fill in missing values in a target column. In both cases, \(S\) can be the set of all values in the source column. In this study, we assume the source values and examples are provided. This is a common practice to limit the scope of the problem and focus on data transformations (Kumar et al., 2017). If user-provided examples are not available, an unequal joining method (Kumar et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019) or token-based example generation (Kumar et al., 2017; Li et al., 2019) may be used to provide a set of examples, with the caveat that the automatically generated examples may contain noise and invalid pairs. We will discuss how our approach can deal with such noisy examples. ## 3. Background and Related Work Our work is related to the lines of work on (1) example-driven tabular data transformation, (2) language modeling and text-to-text transformers, and (3) language models applied to tabular data. We review the relevant recent works in these areas while also providing some background (when necessary). ### Example-Driven Tabular Data Transformation This is probably the closest line of work to ours. There are numerous studies in this area (Kumar et al., 2017; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), and FlashFill (Gulwani, 2011) and BlinkFill (Singh, 2016) are among the pioneers, with a focus on spreadsheet data.
These two approaches construct an input graph based on a given set of user-provided examples, which is then traversed to generate a sequence of substring-based textual transformations that map source values to their corresponding targets in the input examples. However, FlashFill and BlinkFill heavily rely on the accuracy of the provided examples and are unable to handle noise in the examples. To address this issue, Zhu et al. propose a method called Auto-join (Zhu et al., 2017), which uses a set of pre-defined string-based transformation units, such as substring and split, to describe the transformations. The examples are automatically generated by token matching, and the method creates several subsets of the input examples to handle noise. A recursive backtracking algorithm is then applied to each subset to find the best transformation. While Auto-join is able to handle minor noise in the input and limits the search space by using pre-defined transformation units, it is a backtracking method and needs to search the entire transformation space in the worst case, which can be computationally expensive. Also, it may not perform well if the noise level in the examples is significant. In a more recent work by Nobari et al., referred to as the Common String-based Transformer (CST), the search space for string-based transformations is further constrained by considering common text sequences between source and target examples as textual evidence to form the skeleton of transformations. CST uses the same string-based transformation units as Auto-join, but transformations for each row are generated independently to better handle input noise. The transformations are then ranked based on their coverage to build a final transformation set. While CST offers better noise handling and runtime performance compared to Auto-join, it is still limited to substring-based transformation units and performs well only when long matching sequences exist between source and target examples. While the pruning conditions in Auto-join and CST limit their search space and improve their runtime, they can end up missing some transformations, particularly those that cannot be covered by a small set of pre-defined transformation units. Our aim is to overcome these limitations on the search space by utilizing a Language Model (LM) to transform source values into the desired target representation. ### Language Modeling and Text to Text Transformers With large language models forming an integral component of our framework, we provide a brief background on those models. A vast majority of machine-learned LMs are based on the concept of masked language modeling, where some tokens in a given sequence are masked (or corrupted), and the model is trained to predict those masked tokens. Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) are among the earliest models for pretraining; they generate static vectorized embeddings of each word using a shallow neural network. A later work, ELMo (Peters et al., 2018), uses two layers of bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) to observe the context before and after the word, generating contextualized embeddings of the words, unlike the static embeddings of Word2Vec. In recent years, Vaswani et al. (Vaswani et al., 2017) introduced transformers, which use self-attention, allowing the model to parallelize better than LSTM models and to not give more weight to nearby words.
Transformer-based models consist mostly of an encoder, a decoder (Vaswani et al., 2017), or both. Encoder-only models, such as BERT (Devlin et al., 2016), aim to learn natural language and generate a latent representation of the input that can be used for tasks requiring an understanding of the input. Decoder-only models, such as GPT-2 (Vaswani et al., 2017) and GPT-3 (Bart et al., 2018), are widely used to generate natural language text given a context. Finally, encoder-decoder models, also referred to as sequence-to-sequence or text-to-text models, such as T5 (Vaswani et al., 2017) and BART (BART, 2018), use an encoder to create a latent representation of the input, which is passed to the decoder to generate a new text for a desired task.

### Language Models Applied to Tabular Data

The rise of pretrained language models has led to their increasing use in various tasks, including those involving tabular data. In particular, these models are being applied to tasks such as entity matching (Bart et al., 2018; Li et al., 2019), text-to-SQL (Kumar et al., 2017; Li et al., 2019), question answering (Bart et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019) and data-to-text (Li et al., 2019; Li et al., 2019; Li et al., 2019). Since many deep learning and NLP models can only process data as a sequence of tokens, table serialization has become a common module in many of these tasks. Several serialization techniques have been developed to transform tables into sequences of tokens (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), while preserving the structural relationships that may be needed for these tasks. Since the relationships that need to be preserved can be task-dependent, various serialization methods are used in the literature. For example, Iida et al. (Iida et al., 2019) pass the rows and columns as two separate sequences to two transformer blocks and average the row and column values for each cell to generate a cell representation. In RPT (Kumar et al., 2017), tables are serialized using two special tokens, \(\tt{[A]}\) and \(\tt{[V]}\), to encode attribute names and their corresponding values respectively. While this serialization keeps the structural information about the tables, it is not very efficient as the attribute names are repeated in each row of the table. Our aim is not to generate a dense representation of the entire table, and this requires a different serialization approach, which we discuss in Section 4.1.

## 4. Approach

As depicted in Figure 2, our framework consists of a few components: (1) a decomposer and serializer, which decomposes the problem into smaller subtasks and performs input serialization; (2) a tokenizer, which performs tokenization to obtain a vectorized representation of the input; (3) a sequence-to-sequence model, which predicts an output for each subtask; and (4) an aggregator, which is responsible for combining the predictions of the subtasks to generate a final prediction. In the rest of this section, we discuss the details of those components.

### Decomposer and Serializer

Given a set of rows \(S\) from a source table and an example set \(E\) of source-target row pairs, the aim is to transform every row \(s_{i}\in S\) to a target based on the examples. Many large language models impose a limit on the length of the input. This limit usually varies from 512 tokens (e.g. BERT) to 2048 tokens (e.g.
GPT-3) and is associated with the quadratic memory requirement of the self-attention mechanism with input length (Krizhevsky et al., 2017). However, the encoding of a table can be much longer for large tables and when there are many examples. To reduce this dependency on input length, we decompose the problem into smaller tasks, with each task being small enough to easily fit the input length requirement of many language models. This decomposition process is discussed in this section and the aggregation of the results is discussed in Section 4.3.

Suppose the number of examples that describe the context in a sub-problem is set to two. For any arbitrary row in the source table, any subset of \(E\) of size two can be selected as the context. Let \(E^{2}\) denote the set of all subsets of \(E\) of size two, i.e.

\[E^{2}=\{\langle(s_{1},o_{1}),(s_{2},o_{2})\rangle\mid(s_{1},o_{1})\in E\wedge(s_{2},o_{2})\in E\wedge s_{1}<s_{2}\}. \tag{2}\]

For each input row \(s_{i}\in S\) to be transformed, there are \(|E^{2}|\) possible contexts that can be chosen. As an example, consider sets \(S\) and \(E\) in Section 2. The set \(E^{2}\) of all subsets of size two of \(E\) will be E^2 = { <('Justin Trudeau','jtrudeau'),('Stephen Harper','sharper')>, <('Justin Trudeau','jtrudeau'),('Paul Martin','pmartin')>, <('Paul Martin','pmartin'),('Stephen Harper','sharper')> }, and an encoding of the input 'Jean Chretien' \(\in S\) using one of these contexts is <('Justin Trudeau', 'jtrudeau'), ('Paul Martin', 'pmartin'), ('Jean Chretien', )>.

Each input row \(s_{i}\) can be fed to the model multiple times, each time with a different context. If the input is passed to the model \(n\) times, each time with a different context, the model will predict \(n\) possible targets, which can be aggregated as discussed in Section 4.3.

It is common to use special tokens to mark the beginning and the end of sentences in natural language as it helps with training a model. The same convention is followed in encoding tabular data to describe the relationships between different input fields. Following this convention, we separate the source and target in an example with a <tr> token and two examples with <coe>. We also mark the beginning of input with <sos> and the end of input with <eos>. With these symbols, our example given earlier can be encoded as: <sos>Justin Trudeau<tr>jtrudeau<coe>Paul Martin<tr>pmartin<coe>Jean Chretien<tr><coe>, and the expected label is <sos>jchretien<eos>.

In general, the size of a sub-problem can vary depending on the lengths of the records in source and target, the length limitation of the large language model being employed, and possibly the complexity of transformations. In our case, each example consists of two rows, a source and a target. Assuming that the input consists of \(k\) examples and a source row to be transformed, and using a language model that takes 512 tokens, the length of each row is limited to \(\lfloor 512/(2k+1)\rfloor\) tokens, ignoring special tokens and separators. Also, more complex transformations generally require more examples to better describe the operations. For instance, consider the example ('john junior turner', 'jturner') in the context of the example given in Section 2. With only one example, one cannot tell if the letter j in the target is derived from 'john' or 'junior'. However, with two examples, there is less chance of an ambiguity. Unless explicitly stated otherwise, we set the number of examples in our contexts to two.
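The decomposition and serialization described above can be sketched as follows; this is a minimal illustration assuming only the token spellings <sos>, <eos>, <tr>, and <coe> given in the text, not our exact implementation.

```python
from itertools import combinations

SOS, EOS, TR, COE = "<sos>", "<eos>", "<tr>", "<coe>"

def serialize(context, source):
    """Encode one sub-problem: the context examples followed by the source row."""
    parts = [f"{s}{TR}{t}" for s, t in context]
    parts.append(f"{source}{TR}")
    return SOS + COE.join(parts) + COE

E = [("Justin Trudeau", "jtrudeau"),
     ("Stephen Harper", "sharper"),
     ("Paul Martin", "pmartin")]

E2 = list(combinations(E, 2))   # all context sets of size two (Equation 2)

# One sub-problem per context for the row to be transformed.
inputs = [serialize(ctx, "Jean Chretien") for ctx in E2]
print(inputs[1])
# <sos>Justin Trudeau<tr>jtrudeau<coe>Paul Martin<tr>pmartin<coe>Jean Chretien<tr><coe>

label = f"{SOS}jchretien{EOS}"  # the expected output for this sub-problem
```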
### Tokenizer and Model

As large language models expect vectors as inputs, the input needs to be tokenized and each token assigned a vector. Even though tokenization is considered the first step in many NLP pipelines, the choice of tokenization is less obvious when working with tabular data. Conventional NLP models (e.g. word2vec, GloVe) use the vocabulary words as the unit for tokenization, and out-of-vocabulary tokens are usually all collapsed into an unknown token. Recent deep models utilize a more efficient tokenization that may better handle the morphological structure of the words, out-of-vocabulary words, or low-resource languages.

Figure 2. The architecture of the framework

Transformer-based language models [23; 34; 35; 36] generally use either WordPiece [37] or Byte Pair Encoding (BPE) [38] tokenizations, where more frequent consecutive byte pairs are represented as new symbols in the vocabulary (analogous to text compression techniques). In both cases, words are broken down into subword tokens based on the frequency of each subword in the language. As a result, common words are represented as a single token whereas rare words are broken down into multiple tokens.

However, in our problem setting, a subword-level tokenizer may not be the right choice. Subword tokenizers are mainly optimized to pick subwords of natural language or words in the input domain, while in tabular data the values can be from any domain, including words not from a specific language. Understanding the meaning and semantics of the words or splitting them into smaller meaningful parts is not necessarily helpful in predicting the output, as each character may independently contribute to the output value. For instance, consider the pair ("Justin Trudeau", "J.Trudeau"), where the first character J in Justin produces the letter J in J.Trudeau. In a different example pair, ("Justin Trudeau", "Trudeau, Justin"), the word Justin is used in the output as a single token. A pretrained subword-level tokenizer may not be the best choice for such input tokenization. A similar problem arises in low-resource languages that lack enough data for training a tokenizer. It has been shown that character- or byte-level tokenizers work better in those settings [48; 49]. On the same basis, we adopt the byte-level tokenizer of ByT5 [48] in our work. Recent work has shown that byte-level models are competitive with their subword-level counterparts [48], especially in tasks dealing with short-length texts. Generally, table cells store short-length content, and our serialization technique also generates short-length contexts with only two examples. Taking this into account, we use a byte-level UTF-8 encoder as the tokenizer, which benefits from the accuracy of character-level tokenizers and maintains a relatively short input length passed to the model.
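As a minimal illustration of byte-level tokenization, the sketch below maps each UTF-8 byte to one token id; note that the real ByT5 tokenizer additionally reserves a few special ids and offsets the byte values, a detail omitted here.

```python
def byte_tokenize(text: str) -> list[int]:
    """One token id per UTF-8 byte; rare and non-English characters simply
    expand to several byte tokens instead of collapsing to an unknown token."""
    return list(text.encode("utf-8"))

def byte_detokenize(ids: list[int]) -> str:
    return bytes(ids).decode("utf-8")

ids = byte_tokenize("Justin Trudeau")
print(ids)                   # [74, 117, 115, 116, 105, 110, 32, 84, ...]
print(byte_detokenize(ids))  # 'Justin Trudeau'
```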
With the input represented as a sequence of tokens, the problem becomes a text-to-text transformation, where a suitable choice for the model architecture is a sequence-to-sequence model that comprises an encoder and a decoder block. Recent models stack the same number of transformer [44] layers for both encoder and decoder. However, it has been shown that when the input is a sequence of characters, using a deeper encoder containing more layers of transformers, referred to as an unbalanced architecture, performs better than a balanced model [48]. ByT5 [48] is a recent byte-level text-to-text model with an encoder block three times deeper than the decoder block, which we use as a starting point for the training process. Unlike the original model, which masks parts of the output, we mask all characters in the target, and the training objective is to predict the masked bytes. The decoder is an auto-regressive decoder, and only the initial token, <sos>, is passed to the decoder. In the next sections, we delve into the details of passing the input and predicting an output.

### Aggregator

We have decomposed the problem of transforming a table into a set of smaller tasks (see Section 4.1), where each of these tasks is carried out using a sequence-to-sequence model, as discussed in Section 4.2. To exploit all provided examples in the prediction process, each input is fed into the model multiple times, each time with a different context. If we denote the number of trials with \(n\), the model will predict \(n\) target candidates, denoted as \(O_{i}=\{o_{i1}\ldots o_{in}\}\), for each row \(s_{i}\in S\) in the source. In an ideal setting where there is no noise or inconsistency in the data and the model performs with no error, all of the predicted values for a specific source \(s_{i}\) should be the same, i.e. \(o_{i1}=o_{i2}=\ldots=o_{in}\). However, inaccurate examples, noisy input rows, and inconsistencies among multiple rows can lead to different predictions for a particular source row. It should be noted that due to the limitations in the model's input length, it is not feasible to pass the entire example set to the model; instead, we create various subsets, each of which is treated as an independent problem. While noise in the examples may affect the output in some subsets, we ensemble the outputs generated under different contexts to obtain the best possible output. Consequently, the predicted target \(t_{i}\) for the source \(s_{i}\) can be estimated as

\[t_{i}=\operatorname*{argmax}_{o_{ij}\in O_{i}}P(C_{i}|o_{ij}), \tag{3}\]

where \(C_{i}\subseteq C\) is a subset of contexts that may include example sets that are relevant to source \(s_{i}\), and \(C_{i}\) may also be limited in size, for example to \(n\). By applying Bayes' theorem, we have

\[P(C_{i}|o_{ij})=\frac{P(o_{ij}|C_{i})P(C_{i})}{P(o_{ij})}.\]

Assuming a uniform prior probability \(P(o_{ij})\) for the predictions and treating \(P(C_{i})\) the same for all predictions, these terms can be ignored, and \(P(C_{i}|o_{ij})\propto P(o_{ij}|C_{i})\) can be used as a proxy for finding the argmax. Also, assuming independence among predictions, it is possible to use the maximum likelihood estimate of \(P(o_{ij}|C_{i})\), i.e.

\[t_{i}=\operatorname*{argmax}_{o_{ij}\in O_{i}}P(o_{ij}|C_{i})=\operatorname*{argmax}_{o_{ij}\in O_{i}}\frac{|o_{ij}|}{|O_{i}|}, \tag{4}\]

where \(|o_{ij}|\) is the frequency of \(o_{ij}\) in \(O_{i}\) and \(|O_{i}|\) is the number of predictions.
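The majority-vote aggregation in Equation (4) reduces to a frequency count over the trial outputs; a minimal sketch, assuming the per-trial predictions have already been produced by the model:

```python
from collections import Counter

def aggregate(predictions: list[str]) -> str:
    """Equation (4): return the most frequent prediction across the n trials."""
    return Counter(predictions).most_common(1)[0][0]

# Five trials for the same source row, each under a different context.
trials = ["jchretien", "jchretien", "jchretein", "jchretien", "chretienj"]
print(aggregate(trials))  # 'jchretien'
```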
### Downstream Tasks

Given a source row and a set of source-target example pairs, the proposed model generates a target row following the examples. This framework can be useful in many downstream tasks such as auto-completion and auto-filling spreadsheets [10; 40], predicting missing values, error correction [12], and joining [29; 53]. In this section, we review two particular tasks: (1) filling missing values, and (2) joining heterogeneous tables.

#### 4.4.1. Filling missing values

It is common to have missing values in real-world tables [33], and sometimes those missing values can be predicted from the values of other columns [16]. Consider a scenario where two columns \(s\) and \(t\) are given, and column \(t\) has some missing or incorrect values. Those columns can be from the same or different tables. If there exists a mapping relationship from \(s\) to \(t\), our approach may be used to fill in the missing values. In this case, the given samples in \(s\) and \(t\) where the values of both columns are present may serve as examples from which a mapping is learned. The examples can then be utilized by the model as context to find the missing or incorrect values in \(t\).

#### 4.4.2. Joining heterogeneous tables

Consider a join scenario where a source table \(S\) and a target table \(T\) must be joined, based on some columns that are not formatted the same but where there is a mapping from source to target. Examples of mappings may be provided by the user or obtained automatically (Sandhi et al., 2017). The model can be invoked to generate a target \(f(s_{i})\) for each source row \(s_{i}\), utilizing the examples as discussed earlier. Unlike the case of filling missing values, where an exact prediction of the missing value is needed, an exact prediction is not necessary for a join. Instead, the goal is to use \(f(s_{i})\) as a bridge to establish a connection between rows in \(S\) and \(T\). For instance, under the setting where each value in the source is matched with a single value in the target (e.g. a primary-foreign key relationship), one needs to find the closest match in \(T\) for \(f(s_{i})\). This allows small discrepancies between predicted and target values, without affecting the join. There is a great chance that significant string similarity exists between a model-predicted value \(f(s_{i})\) and the corresponding value in \(T\). In many cases, this similarity is enough to perform the join. Therefore, for each \((s_{i},f(s_{i}))\) pair, we can select \(t_{j}\in T\) such that it yields the minimum edit distance between the two strings. This can be formalized as follows:

\[m_{i}=\operatorname*{argmin}_{t^{\prime}\in T}\ edit\_dist(f(s_{i}),t^{\prime}), \tag{5}\]

where \(m_{i}\) is considered a match in the target for \(s_{i}\). The approach can be generalized to cases where a value in the source is matched with either no values or multiple values in the target. To allow such many-to-many joins, one may set lower and upper bounds for the edit distance instead of aiming for the minimum distance.
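A minimal sketch of this edit-distance-based matching, with a standard Levenshtein implementation standing in for \(edit\_dist\) in Equation (5):

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def join_match(predicted: str, target_rows: list[str]) -> str:
    """Equation (5): pick the target row closest to the model prediction."""
    return min(target_rows, key=lambda t: edit_distance(predicted, t))

T = ["sharper", "jchretien", "pmartin"]
print(join_match("jchretein", T))  # 'jchretien', despite the prediction typo
```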
## 5. Experiments and Analysis

In this section, we evaluate our proposed model and analyze its performance under different settings. We also discuss our training data generation and the process of training our model.

### Dataset for Training DTT

Pretrained language models are generally trained on large text corpora, and a common approach for training them involves masking out a word and predicting it, as seen in popular models such as T5 (Sandhi et al., 2017), BERT (Devlin et al., 2019) and GPT-2 (Sandhi et al., 2019). By using this approach, large amounts of unlabeled data can be utilized to train the model. Nonetheless, our particular task requires a vast set of source and target examples, grouped according to the transformations that map source examples to their corresponding targets. To the best of our knowledge, such a dataset is not currently available, and our experiments have shown that even advanced generative models pretrained on natural language text, such as T5 and GPT-2, are not capable of performing this task without extensive fine-tuning and training. This is because entries in real-world tables are typically short and have little relevance to other entries in the same column, aside from sharing the same domain (e.g. individual names). As a result, the prior language knowledge of these general models is less likely to be useful for this task. To address this challenge, we propose generating synthetic data to train the model. Before delving into the details of data generation, however, it is important to first review the desired features of the training data.

#### 5.1.1. Training data features

The training data must possess several key features. First, it should be organized as source-target pairs, categorized by their corresponding transformations, as previously discussed. It is worth noting that the mapping function can be general, as the model does not need to know the function itself; rather, it only requires the output of the function for the source examples and that the examples generated by the same mapping are grouped together. Second, the dataset must be sufficiently large to train a model with hundreds of millions of parameters, which is typical for many language models. Third, the dataset should cover a broad range of textual transformation patterns and various input lengths. Finally, the generated data should not be limited to the words in any specific language, since many terms in table cells are not dictionary words and may not be limited to a particular language.

Overall, the primary purpose of training data in our case is guiding the model to understand the mapping corresponding to a set of source-target example pairs. In this context, different combinations of edit operations can be performed on the source examples to generate the target outputs. Unlike NLP models that rely on understanding the syntax and the semantics of the input, our model primarily focuses on discovering textual patterns and string operations. Hence, character-level tokenization is preferred in our case. In the rest of this section, we delve into the process of generating a synthetic dataset to train our model.

#### 5.1.2. Training data generation

To generate our synthetic dataset, we first build a set of textual transformations, denoted as \(T\), each consisting of a sequence of basic transformation units. We use the basic transformation units in Nobari et al. (Sandhi et al., 2017), which include substring, split, lowercase, uppercase, and literal. These units have their natural meanings: substr selects a portion of the input based on start and end parameters, split breaks the input by a given character and selects one part, literal returns a constant, and lowercase and uppercase return the lowercase and uppercase forms of the input respectively. Each unit copies either parts of the input or a literal to the output, and the output of a transformation is the concatenation of the outputs of its units. We randomly choose the units, the parameters, and the length of each transformation in terms of the number of units, to provide a diverse set of examples.
While the aforementioned transformations are expected to cover many cases of transformations in real-world settings (Sandhi et al., 2017; Sandhi et al., 2017), our aim is not to limit the model to learning a fixed set of transformations. Our findings indicate that, with sufficient independent training examples, the model can learn to find any necessary transformation even with a limited set of pre-defined transformations. The construction of transformations mainly helps us group input examples that follow the same mapping, but the model is not aware of the transformations and uses regular cross-entropy loss at the character level to learn a mapping that transforms the source into the target.

The transformations in Nobari et al. (Sandhi et al., 2017) and Zhu et al. (Zhu et al., 2019) do not allow stacking of the units, where one unit is applied on top of another unit. For the same reason, they introduce complex transformation units such as splitsubstring, which stacks substring on top of split, with the output of one operation fed to the other. Instead of introducing many such new units, we allow random stacking of up to three transformation units. The stacking here refers to passing the output of one transformation unit to another one. Since our units include lowercase and uppercase transformations, the case of input may change in some transformations and not others.

For each transformation \(tr\in T\), a set of examples is generated. To create these examples, a source text is randomly generated, consisting of a mix of alphabetic and numeric characters, symbols, and special characters. The length of the input is selected at random. The transformation \(tr\) is then applied to the source texts to generate a set of examples, denoted as \(I_{tr}=\{(s_{i},t_{i})\}_{1\leq i\leq u}\). Using random text instead of dictionary words avoids any potential bias towards natural language words and grammatical structures. To form example sets, subsets of size 3 are selected from \(I_{tr}\). Each example set is then serialized, as discussed in Section 4.1, with the target of the last example masked and labeled as the target for use in forming context sets for model training.
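The generation process can be sketched as below; this is a simplified illustration with hypothetical parameter ranges, showing only the concatenation of unit outputs and omitting the stacking of units described above.

```python
import random
import string

def make_unit(rng):
    """Build one basic transformation unit with randomly chosen parameters."""
    kind = rng.choice(["substr", "split", "lower", "upper", "literal"])
    if kind == "substr":
        i, j = sorted(rng.sample(range(0, 12), 2))
        return lambda s: s[i:j]
    if kind == "split":
        sep, idx = rng.choice(" -_"), rng.randrange(2)
        return lambda s: (s.split(sep) + ["", ""])[idx]
    if kind == "lower":
        return lambda s: s.lower()
    if kind == "upper":
        return lambda s: s.upper()
    lit = "".join(rng.choices(string.ascii_lowercase, k=3))
    return lambda s: lit  # literal: a constant, independent of the input

def make_transformation(rng, n_units=3):
    units = [make_unit(rng) for _ in range(n_units)]
    # The transformation's output concatenates the outputs of its units.
    return lambda s: "".join(u(s) for u in units)

rng = random.Random(0)
tr = make_transformation(rng)
alphabet = string.ascii_letters + string.digits + " -_"
sources = ["".join(rng.choices(alphabet, k=rng.randint(8, 35))) for _ in range(10)]
I_tr = [(s, tr(s)) for s in sources]  # one grouping of examples sharing tr
print(I_tr[0])
```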
### Dataset for Evaluation

To evaluate the effectiveness of our approach and compare its performance with state-of-the-art baselines, we use two real-world datasets as well as four synthetic datasets. In what follows, we provide a detailed explanation of each dataset.

**Web Tables Dataset (WT)** This benchmark was initially introduced by Zhu et al. (2018) and was also used as a benchmark in Nobari et al. (2018). The dataset includes 31 pairs of tables from 17 distinct topics, with an average of 92.13 rows per table and an average length of 31 characters per input source. The tables were sampled from Google Fusion Tables by identifying tables that appear in the results of the same queries but are formatted differently. This benchmark contains natural noise and inconsistencies, and not all entities can be transformed using traditional string-based transformations, which makes this dataset a relatively challenging benchmark (Zhu et al., 2018).

**Spreadsheet Dataset (SS)** This dataset includes 108 pairs of tables, sourced from the Microsoft Excel product team and user help forums, specifically focused on users' data cleaning issues. The tables comprise spreadsheet pages that present the same information in different formats. The dataset encompasses the public benchmarks presented in FlashFill (2018) and BlinkFill (2018), and was published in the 2016 Syntax-Guided Synthesis Competition (SyGuS-Comp) (2018). On average, each table in the dataset contains 34.43 rows and 19 characters per input source. Compared to web tables, this dataset features considerably less noise and inconsistency.

**General Synthetic Dataset (Syn)** This is a synthetic dataset that contains 10 table pairs. Each pair is generated by applying a randomly generated textual transformation to a set of random input sources to create the output table. The transformations are constructed by putting together a random sequence of 3 to 6 units, the same as those discussed in Section 5.1, with random parameter sets. Unless stated otherwise, the dataset contains 10 tables, each of which contains 100 rows. Input length is randomly chosen in the range of 8 to 35, and no artificial noise is added to the dataset. While the model has been exposed to the units during training, the transformations, the parameter sets of the units, and the inputs are unseen during the training process.

**Easy Synthetic Dataset (Syn-RP)** This is a synthetic dataset containing 5 pairs of tables. Each pair is formed by randomly replacing one character with another (for example, the character '/' might be replaced with '-' for all rows). This dataset resembles simple formatting changes such as replacing a slash in a phone number with a hyphen. This replacement operation is not a transformation unit that exists in the model's training data and is thus unseen by the trained model. Each table contains 50 rows, and the length of input sources is randomly selected from a range of 8 to 35, unless stated otherwise. Since our model generates the output character-by-character, we measure the difficulty of datasets based on the number of required edit operations. Accordingly, this is an easy dataset, considering that only a few characters in the input need to be changed to generate the desired output.

**Medium Synthetic Dataset (Syn-ST)** This synthetic dataset is similar to the previous one in terms of the number of table pairs and the input length. Each table pair is constructed by applying a single substring transformation unit to the input, with the start and end parameters selected randomly. Substring is one of the units included in the model's training data. In terms of difficulty, this dataset is considered to be medium-level based on the number of edit operations required.

**Difficult Synthetic Dataset (Syn-RV)** This synthetic dataset consists of 5 tables, each containing 50 rows with input sources randomly selected to have a length between 8 and 35 characters. In this dataset, the target output is obtained by reversing all characters in the source (for instance, "Hello" is changed to "olleH"). This benchmark is considered difficult since almost all characters in the input source must be changed to generate the expected target.

### Experimental Setup

Our model, DTT, was trained on a synthetic dataset containing 2,000 groupings of examples, each corresponding to a transformation, as discussed in Section 5.1. For each grouping, we generated 10 pairs of source-target samples with randomly chosen input lengths ranging from 8 to 35. 80% of the samples were used for training and the other 20% formed the validation set. We also conducted experiments with other sample sizes and input lengths for training the model, and the results are discussed in Section 5.7.
To evaluate the performance of our model, we divided the rows of each input table in our datasets into two equal-sized sets, denoted as \(S_{e}\) and \(S_{t}\). The former provided context examples to be passed to the model, while the latter was used for testing. Since DTT is an example-driven method, the selection of these examples is critical to the model's performance. To ensure the robustness of our predictions, we employ a technique where each input is fed to the model five times, and each time a distinct set of randomly chosen examples from \(S_{e}\) is given as context. The results of those trials are aggregated, as discussed in Section 4.3, to produce a final prediction.

### Evaluation Metrics

We evaluate the performance of our models based on precision, recall, and F1-score. This evaluation is in the context of heterogeneous join, as discussed in Section 4.4, where for a given source-target sample \((s,t)\), we consider a model prediction correct if it has the minimum edit distance with the target \(t\). In our case, precision represents the fraction of correct predictions among those that join with the target, recall measures the fraction of source rows that are correctly mapped, and F1-score is the harmonic mean of the two. It is important to note that not all source rows may be mapped, due to various reasons2. In addition to the above metrics, we also report the Average Edit Distance (AED) and Average Normalized Edit Distance (ANED), which indicate the extent to which a prediction may differ from the given target. The normalization is performed based on the target length, enabling comparability across different datasets and lengths. All reported metrics for each dataset are the average over all tables in the dataset.

Footnote 2: For AFJ, a threshold for similarity distance is set and, based on that threshold, some source rows will not have a match. In CST, a match may still not be found after applying the detected transformations to all input rows. The language models may just return the end-of-sequence token <eos> with no prediction.
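For illustration, the AED and ANED metrics can be computed as below, reusing the edit_distance helper from the join sketch in Section 4.4.2; this is a simplified per-table computation, not our exact evaluation code.

```python
def aed(pairs):
    """Average Edit Distance over (prediction, target) pairs."""
    return sum(edit_distance(p, t) for p, t in pairs) / len(pairs)

def aned(pairs):
    """Average Normalized Edit Distance: each distance is normalized by the
    target length, so scores are comparable across datasets and lengths."""
    return sum(edit_distance(p, t) / max(len(t), 1) for p, t in pairs) / len(pairs)

pairs = [("jchretien", "jchretien"), ("kcambbell", "kcampbell")]
print(aed(pairs), round(aned(pairs), 3))  # 0.5 and ~0.056
```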
### Performance Compared to Heterogeneous Join Baselines

In this section, we evaluate the performance of our model on the end-to-end task of heterogeneous or unequal table join. The task simulates the scenario where source and target columns are in two different tables that need to be joined. To provide a point of reference, we compare the performance of our model to two current state-of-the-art baselines: Common String-based Transformer (CST) (Zhu et al., 2017) and Auto-FuzzyJoin (AFJ) (Zhu et al., 2018). CST finds a set of textual transformations given a set of examples to transform tables for joinability, and AFJ uses a set of similarity functions to detect the most probable rows to be joined.

Table 1 summarizes the performance of DTT and the baselines in terms of precision, recall, and F1-score (denoted as P, R, and F respectively). The results show that DTT outperforms the baselines on all real-world datasets in terms of F1-score and recall. On the synthetic datasets, our approach outperforms the baselines on three out of four datasets. On the _Syn-RP_ and _Syn-ST_ datasets, our approach is either comparable to or slightly worse than the baselines. The reason is that these datasets are relatively easy, with a significant textual similarity between the source and target. CST exhaustively searches the space for substring transformations, which is the only transformation used in the _Syn-ST_ dataset. Moreover, AFJ is based on textual similarity, and every target in _Syn-ST_ is a substring of the source, leading to a significant similarity between source and target. Therefore, these datasets favor the baselines. Nevertheless, DTT still achieves an F1-score of 88% on the _Syn-ST_ dataset and a perfect F1-score of 100% on _Syn-RP_, which is equal to AFJ and better than CST.

There are significant differences between DTT and the baselines. CST is limited in its ability to extract transformations and cannot perform a join when there is no clear copying relationship between the source and target, as is the case with the _Syn-RV_ dataset, where the target is obtained by reversing the input. As a result, CST achieves a 0% F1-score on this dataset. AFJ, on the other hand, employs similarity functions to determine if source and target values can be joined. However, this method struggles when there is not much similarity between the source and target, as demonstrated by its performance on the _Syn-RV_ dataset. Such challenges are common in real-world data. DTT, in contrast, leverages the provided examples to generate the desired output without relying on textual similarity or being bounded by the length of transformations. Hence, DTT performs significantly better than the baselines on more challenging datasets, such as the real-world _WT_ dataset and the synthetic _Syn_ and _Syn-RV_ datasets. For instance, DTT outperforms the baselines by a large margin on _Syn-RV_, where the target is obtained by reversing the order of characters in the input.

Two more interesting observations can be made here. Firstly, to achieve a good performance on the join, it is not necessary to predict every single character correctly. Our framework can tolerate inaccuracies by aggregating results from multiple examples and using an edit-distance-based metric to form join candidates. For example, on the _Syn-RV_ dataset, while the average normalized edit distance is more than 80%, the F1-score for join prediction is 63%. Secondly, our model performs very well on all real-world datasets and the two synthetic datasets _Syn-RP_ and _Syn-RV_, despite the fact that our training data did not include any operation that simulates reversing the input or replacing a character, and no real-world examples or transformations were included in the training data. This highlights that the model is not limited to a given set of transformations, but rather focuses on extracting character-level patterns from the given set of input examples.

Finally, in terms of comparing the runtime of DTT and our baselines, a direct comparison is not possible since DTT requires a GPU architecture whereas our baselines require CPU and memory. That said, some observations can be made on the scalability of the models. The time required to predict a mapping for each row in DTT is independent of the number of rows and grows linearly with the length of the rows, whereas this time grows quadratically with the number of rows and polynomially with the length in CST.
While the edit distance calculation in the joining process depends on the number of rows, our experiments suggest the growth in the runtime of DTT is noticeably less than CST when input length increases. For instance, with our machine setup 3, processing a table with row length set to 5 characters from our synthetic dataset takes 5 seconds for DTT and 3 seconds for CST. However, when the input length increases to 50 characters, DTT needs less than 17 seconds, while CST takes around 90 seconds to complete the join. It should be noted that the runtimes reported for DTT are the summation of decomposition, all 5 trials, and the aggregation time. For scalability in terms of the number of rows, we compared their performance on two tables from our spreadsheet dataset, "phone-10-short" and "phone-10-long", both with an average of 17 characters per row. The former has 7 rows, while the latter has 100. DTT takes 3 and 22 seconds respectively for the short and long tables, while the same experiments require 4 and 366 seconds for CST, and 4 and 38 seconds for AFJ. This indicates how our framework scales better in terms of runtime when the input grows either horizontally or vertically.

Footnote 3: Our experiments were conducted on a machine with an Nvidia RTX 3090 GPU and an AMD EPYC 7601 CPU with 64GB RAM.

\begin{table}
\begin{tabular}{|l||c|c|c|c|c||c|c|c||c|c|c|} \hline
 & \multicolumn{5}{c||}{Our Approach} & \multicolumn{3}{c||}{CST} & \multicolumn{3}{c|}{AFJ} \\ \hline
Dataset & P & R & F & AED & ANED & P & R & F & P & R & F \\ \hline
WT & 0.951 & 0.950 & **0.950** & 6.155 & 0.232 & 0.879 & 0.726 & 0.713 & 0.935 & 0.672 & 0.708 \\
SS & 0.954 & 0.952 & **0.953** & 2.399 & 0.135 & 0.995 & 0.792 & 0.812 & 0.943 & 0.662 & 0.691 \\
Syn & 0.934 & 0.934 & **0.934** & 6.986 & 0.150 & 0.990 & 0.259 & 0.324 & 0.993 & 0.490 & 0.511 \\
Syn-RP & 1.000 & 1.000 & **1.000** & 0.816 & 0.027 & 1.000 & 0.816 & 0.897 & 1.000 & 1.000 & **1.000** \\
Syn-ST & 0.880 & 0.880 & 0.880 & 5.032 & 0.316 & 1.000 & 1.000 & **1.000** & 1.000 & 1.000 & **1.000** \\
Syn-RV & 0.632 & 0.632 & **0.632** & 33.600 & 0.852 & 1.000 & 0.000 & 0.000 & 0.990 & 0.020 & 0.037 \\ \hline
\end{tabular}
\end{table}
Table 1. Performance compared to heterogeneous join baselines

### Performance Compared to Large Language Model Baselines

Large Language Models (LLMs) can be employed in many downstream tasks, including joining heterogeneous tables. It has been shown that recent models perform relatively well under zero- or few-shot settings (Chen et al., 2016; Chen et al., 2016), hence they set a strong baseline. In this section, we compare the performance of our model to GPT-3 (Chen et al., 2016), a state-of-the-art LLM with exceptional performance on many tasks. Compared to our ByT5-base (Wang et al., 2017) model, which is fine-tuned on only 20,000 synthetically-generated samples and contains near 582M parameters, GPT-3 models are trained on billions of documents and resources such as web tables and have at least one to two orders of magnitude more parameters. Our experiment with GPT-3 is under the few-shot setting with 1, 2, 3, 5, and 10 randomly selected samples from each table given as examples. The zero-shot setting is not applicable in our case, since an input source row can be mapped to an unlimited number of targets without any examples. At the time of writing, GPT-3 models are not published publicly and are only accessible through the OpenAI commercial API4. We use the Curie5 model of GPT-3 from the API.
Curie is claimed to be extremely powerful, fast, and capable of many advanced tasks6. Nevertheless, the model specification and the number of parameters are not publicly announced. Comparing the general performance of the Curie model with the performance reported for various sizes of GPT-3 (Chen et al., 2016), it can be assumed that the Curie model has about 7B parameters and is trained on huge text data from different datasets.

Footnote 5: The complete model name is “text-curie-001”

Footnote 6: Based on the platform documentation on openai.com

We run two sets of experiments to analyze the performance of GPT-3 for unequal join. First, as the common method of using LLMs, we pass our examples as an input sequence to GPT-3 and consider the model output as the expected target. The serialization used for GPT-3 is the same as DTT, as discussed in Section 4.1. In the second experiment, we use GPT-3 as a replacement for our fine-tuned ByT5 model (and byte-level tokenizer) inside our framework, keeping the serializer and aggregator from DTT. Figure 3 depicts the F1-score of the model with 1 and 2 examples under both experimental settings for all datasets compared to DTT, and Table 2 reports the F1-score and ANED of the GPT-3 model for 1, 2, 3, and 5 input examples.

As shown in Figure 3, GPT-3 struggles to perform well on the task with just one example, despite some recent work suggesting that LLMs are capable of one-shot table reasoning (Chen et al., 2016). However, providing two examples significantly boosts its performance, especially on real-world data, bringing it on par with DTT. On our synthetic datasets, however, DTT performs significantly better than GPT-3. The lack of publicly available data on the GPT-3 Curie model's size and specification makes it challenging to conduct a more in-depth comparison. It can be noted that GPT-3 is trained on numerous web resources, including web tables, which increases the likelihood that it has encountered various representations of common entities and tables on the web. Since our real-world datasets are gathered from tables on web pages, this could explain why the model performs significantly better on the _WT_ and _SS_ datasets than on synthetic datasets. Conversely, synthetic datasets consist of sequences of random characters that may not be tokens from natural language, and GPT-3 may not have encountered them during its training. Consequently, its performance on most of the synthetic datasets is weak and, in some cases, significantly inferior to DTT, especially on the _Syn-RV_ dataset, where the target and source are substantially different. Our ByT5-based model, however, is trained to extract patterns among a sequence of characters, allowing it to perform better on more challenging synthetic datasets.

Figure 3. Performance compared to GPT-3 as well as the performance of the combined model

In our second set of experiments with GPT-3, we used our framework and replaced the LLM module with GPT-3. By default, our framework employs two context example pairs because ByT5 has a maximum limit of 512 character-level tokens, and if a longer sequence is given to the model, it will be truncated. However, the limit in the GPT-3 Curie model is 2048 subword-level tokens. This allows us to increase the number of example pairs that are given to the model. In our experiment with GPT-3 integrated into the DTT framework, we varied the number of examples from one to five.
As demonstrated in Figure 3 and Table 2, using GPT-3 within our framework boosts its performance, in terms of both F1-score and ANED, on nearly all datasets when the same number of examples is provided. For instance, the average F1-score across all datasets of the GPT-3 model increased from 0.624 to 0.667 with two examples and from 0.734 to 0.760 with five examples when integrated into the DTT framework. This demonstrates how the model inside the DTT framework can be substituted with other larger models and gain a performance boost.

### Performance Varying the Number and Length of Training Samples

Our trained model has two important parameters: the number of training samples and their length. To gain a deeper insight into the relationship between these parameters and the model's performance, we conducted an experiment where we varied the number of training samples from 0 to 10,000. Each sample here is a transformation grouping that consists of 10 source-target pairs, and we kept the sequence length consistent with our other experiments, ranging between 8 and 35. When the number of samples was set to zero, the ByT5 model did not undergo any fine-tuning. As shown in the top left panel of Figure 4, the F1-score of the model is typically less than 0.5 when no fine-tuning is performed. Also, on all datasets, over 80% of characters are predicted incorrectly (i.e. ANED \(>\) 0.8) when the model is not fine-tuned, as indicated by the 0 training samples in the figure. For example, in the _Syn-ST_ dataset, over 84% of output characters are predicted incorrectly by the ByT5 model without fine-tuning. However, this error is reduced to 27% after a proper fine-tuning of the model. This finding suggests that, unlike GPT-3, the ByT5 model without fine-tuning struggles to perform well for unequal join. Nevertheless, our fine-tuning plays a crucial role in significantly improving the performance of the model.

The general expectation is for the model to perform better when more training samples are provided, and the trend in our experiments is not much different. However, some observations should be taken into account. As shown in Figure 4, when the number of training samples surpasses 2,000 7, the model performance does not significantly change, and it reaches its optimal performance level on our datasets. Beyond this point, a slight decrease in the performance can be observed on real-world data and on synthetic datasets that contain transformations not covered in the training data. This behavior can be attributed to the bias that the model acquires from seeing more transformations of the same type, which hinders its ability to effectively use its prior knowledge on real-world data. Our extensive experiments show that even with a significantly larger training dataset, the decrease in performance is not significant. Thus the model performance converges when 2,000 or more training samples are provided.

Footnote 7: This number refers to the number of transformation groupings, and it translates to 20,000 source-target examples, of which 16,000 examples are used for training and the remaining 4,000 are kept as the validation set.

To examine how the length of input affects the training process of the model, we conducted another experiment where we changed the length range of the training samples by randomly selecting values between 5 and 60 characters. The right panel of Figure 4 shows the performance when the model is trained with sequences that are generally longer and have an extended range.
Increasing the length range of input sample pairs does not lead to any noticeable improvement in the performance of the model. That being said, increasing the length is expected to have an impact on how the model performs on longer inputs, which is discussed next.

\begin{table}
\begin{tabular}{|l||c|c||c|c||c|c||c|c||c|c||c|c||c|c||c|c|} \hline
 & \multicolumn{2}{c||}{GPT3-1e} & \multicolumn{2}{c||}{GPT3-2e} & \multicolumn{2}{c||}{GPT3-3e} & \multicolumn{2}{c||}{GPT3-5e} & \multicolumn{2}{c||}{GPT3-DTT-1e} & \multicolumn{2}{c||}{GPT3-DTT-2e} & \multicolumn{2}{c||}{GPT3-DTT-3e} & \multicolumn{2}{c|}{GPT3-DTT-5e} \\ \hline
Dataset & F & ANED & F & ANED & F & ANED & F & ANED & F & ANED & F & ANED & F & ANED & F & ANED \\ \hline
WT & 0.625 & 0.499 & 0.933 & 0.151 & 0.954 & 0.108 & 0.966 & 0.088 & 0.759 & 0.341 & 0.979 & 0.072 & 0.985 & 0.074 & 0.987 & 0.073 \\
SS & 0.724 & 0.533 & 0.949 & 0.128 & 0.973 & 0.094 & 0.968 & 0.079 & 0.760 & 0.483 & 0.960 & 0.113 & 0.973 & 0.079 & 0.982 & 0.056 \\
Syn & 0.372 & 0.889 & 0.502 & 0.619 & 0.528 & 0.522 & 0.614 & 0.418 & 0.380 & 0.902 & 0.506 & 0.567 & 0.552 & 0.495 & 0.720 & 0.387 \\
Syn-RP & 0.264 & 0.824 & 0.920 & 0.195 & 0.976 & 0.127 & 0.984 & 0.111 & 0.352 & 0.748 & 0.968 & 0.125 & 1.000 & 0.098 & 1.000 & 0.095 \\
Syn-ST & 0.152 & 0.941 & 0.328 & 0.812 & 0.464 & 0.726 & 0.728 & 0.527 & 0.176 & 0.923 & 0.488 & 0.717 & 0.736 & 0.589 & 0.728 & 0.510 \\
Syn-RV & 0.120 & 0.947 & 0.112 & 0.944 & 0.112 & 0.944 & 0.144 & 0.940 & 0.152 & 0.944 & 0.104 & 0.948 & 0.120 & 0.944 & 0.146 & 0.939 \\ \hline
\end{tabular}
\end{table}
Table 2. Performance of GPT-3 as well as that of the combined model

### Performance Varying the Input Length

In this section, we explore how the model performs under different input lengths. We also investigate how the length of input data during training affects the model's ability to handle longer input at inference time. To conduct our experiments, we regenerated the synthetic datasets _Syn-RP_, _Syn-ST_, and _Syn-RV_, this time with the input lengths varying from 5 to 50 characters. We utilized two versions of the model in our analysis. The first version was trained on input examples with lengths randomly sampled between 8 and 35, while the second version was trained on examples with extended lengths selected randomly between 5 and 60 characters.

As shown in Figure 5, when the benchmark dataset is easy in terms of edit distance between source and target, such as _Syn-RP_, the performance of the model is not significantly influenced by the length of the input. Both the model trained on short input examples and the model trained on long input examples deliver the highest F1-score and near-zero edit distance across almost all lengths of input. On the medium dataset (i.e., _Syn-ST_), the models start with almost perfect performance, and the performance is sustained for input lengths that are shorter than the length of the majority of samples used in training. However, the performance begins to decrease once the input length surpasses this threshold. Nonetheless, even with an increase in ANED, the model still manages to predict a reasonable portion of the expected characters in the output. Interestingly, the drop in performance does not occur when the model is trained on longer input samples.
It should be noted that if the input is too short (not a typical real-world scenario), such as when it contains only 5 characters, there may be a slight decrease in performance, as the model may not fully comprehend the relationship between the source and target with such limited information. On the other hand, on more challenging datasets, such as _Syn-RV_, the performance drops even for input lengths that are shorter than the majority of training samples. This behavior is not unexpected for auto-regressive models, since a single incorrect prediction can influence the prediction of subsequent characters. The results of our experiment suggest that the extent of the decrease in performance is influenced by the training data of the model. When trained on shorter-length data, there is a significant degradation in both F1-score and ANED as the input length increases. However, when trained on lengthier data, the degradation is relatively minimal. Overall, such cases are not very common in real-world datasets, and our experiments demonstrate that our model can perform well under various input lengths in real-world settings. Based on our experiments, we can assume that the model can accurately detect transformation patterns for various input lengths when the difficulty level of the transformation is reasonable.

Figure 4. Performance of the model varying the number of training data samples

## 6. Conclusion and Future Work

We have studied the problem of mapping tabular data from a source formatting to a desired target formatting using a set of few examples. Tables may be transformed to enable joining heterogeneous tables, filling missing values, data correction, and other data integration tasks. To address this challenge, we proposed a framework that leverages the power of large language models. We generated the required training data and fine-tuned a character-level LLM based on ByT5 for this task. Our extensive experiments demonstrate that our model achieves impressive performance on a wide range of real-world and synthetic datasets, outperforming state-of-the-art models in the field.

Our work suggests several possible avenues for future research. One potential direction is to explore the use of synthetic data generation to enhance model training for a variety of data integration tasks. Additionally, there is value in investigating the challenges and limitations of synthetic data in model training, as well as strategies for addressing those challenges. Furthermore, given concerns around privacy, federated learning may be a preferred approach for table transformation tasks. As such, an exploration of federated learning methods for this purpose is yet another promising direction for future research.
2302.06461
A Study on ReLU and Softmax in Transformer
The Transformer architecture consists of self-attention and feed-forward networks (FFNs) which can be viewed as key-value memories according to previous works. However, FFN and traditional memory utilize different activation functions (i.e., ReLU and Softmax respectively), which makes them not equivalent. In this paper, we first rebuild the connections between FFN and key-value memory by conducting extensive studies on ReLU and Softmax, and find they are equivalent when adding an additional layer normalization module on Softmax. In addition, ReLU outperforms Softmax on both FFN and key-value memory when the number of value slots is large. We analyze the reasons and then explore this good property of ReLU on the self-attention network where the original Softmax activation performs poorly on long input sequences. We then propose a full ReLU architecture named ReLUFormer which performs better than the baseline Transformer on long sequence tasks such as document translation. This paper sheds light on the following points: 1) Softmax and ReLU use different normalization methods over elements which lead to different variances of results, and ReLU is good at dealing with a large number of key-value slots; 2) FFN and key-value memory are equivalent, and thus the Transformer can be viewed as a memory network where FFNs and self-attention networks are both key-value memories.
Kai Shen, Junliang Guo, Xu Tan, Siliang Tang, Rui Wang, Jiang Bian
2023-02-13T15:41:20Z
http://arxiv.org/abs/2302.06461v1
# A Study on ReLU and Softmax in Transformer

###### Abstract

The Transformer architecture consists of self-attention and feed-forward networks (FFNs) which can be viewed as key-value memories according to previous works. However, FFN and traditional memory utilize different activation functions (i.e., ReLU and Softmax respectively), which makes them not equivalent. In this paper, we first rebuild the connections between FFN and key-value memory by conducting extensive studies on ReLU and Softmax, and find they are equivalent when adding an additional layer normalization module on Softmax. In addition, ReLU outperforms Softmax on both FFN and key-value memory when the number of value slots is large. We analyze the reasons and then explore this good property of ReLU on the self-attention network where the original Softmax activation performs poorly on long input sequences. We then propose a full ReLU architecture named ReLUFormer which performs better than the baseline Transformer on long sequence tasks such as document translation. This paper sheds light on the following points: 1) Softmax and ReLU use different normalization methods over elements which lead to different variances of results, and ReLU is good at dealing with a large number of key-value slots; 2) FFN and key-value memory are equivalent, and thus the Transformer can be viewed as a memory network where FFNs and self-attention networks are both key-value memories.
## 1 Introduction

We explore this property of ReLU in the self-attention network (SAN), where the original Softmax activation performs poorly on long sequences (Sun et al., 2022). Unfortunately, directly replacing Softmax with ReLU does not converge. With theoretical and experimental analysis, we find that the variance of SAN results with ReLU activation grows with the length of the input sequence, and this dynamic variance leads to an unstable training process. Therefore, a variance reduction factor and regularization loss functions are introduced to solve this problem. As a result, we make it possible to utilize ReLU on self-attention, which performs better than Softmax when dealing with long input sequences. In summary, this paper provides insights into the difference and relationship between the ReLU and Softmax activation functions in the Transformer architecture.

* Softmax provides exponential normalization over all value slots and therefore highlights a small number of them while neglecting others, which may cause performance degradation when the number of slots is large, e.g., FFN with large hidden dimensions or SAN with long input lengths. ReLU bypasses this problem but faces a variance explosion that varies over different training samples.

* We revisit the relations between the FFN and key-value memory and find that they are equivalent when additional layer normalization is introduced.

* With the ReLU activation function in both SAN and FFN components, we propose a fully-ReLU Transformer architecture (ReLUFormer). Our ReLUFormer can be viewed as an integration of key-value memories, with FFNs as global key-value memories and SANs as local memories. We evaluate the model on the task of long-document translation and verify the superiority of the full ReLU architecture over the Transformer baseline.

The rest of the paper is organized as follows. We introduce the backgrounds of FFN, SAN, and key-value memory in Section 2. We then revisit the connections between FFN and key-value memory in Section 3. We explore the connections between ReLU and Softmax in SAN, and compare ReLU and Softmax on the task of long document translation with the proposed fully-ReLU architecture in Section 4. We summarize and provide insights on our findings in Section 5.
## 2 Background

### Feed-Forward Network and Key-Value Memory

The Transformer (Vaswani et al., 2017) has achieved great success on natural language processing tasks, such as machine translation and language modeling. Recently, some works have been proposed to analyze the architecture and investigate the secret of the success of Transformers. Some works have revealed the relation between the feed-forward network (FFN) and key-value memory (Geva et al., 2020; Sukhbaatar et al., 2019). They intuitively regard the FFN as key-value memory by unifying them in the formulation.

**Feed-Forward Network** Formally, given an input sequence representation \(X\in\mathbb{R}^{n\times d}\) where \(n\) is the length and \(d\) is the dimension, an FFN consists of two linear projections with a non-linear activation function shown as follows:

\[H=\text{ReLU}(X\cdot W_{1}^{T}+b_{1})\cdot W_{2}+b_{2}, \tag{1}\]

where \(H\) is the output hidden representation, \(W_{1},W_{2}\in\mathbb{R}^{d_{h}\times d}\) are learnable parameters, \(b_{1},b_{2}\) indicate bias terms which are omitted in the following1, and \(d_{h}\) is the hidden dimension of the FFN.

Footnote 1: We omit the bias terms since they contain few parameters and have little influence on the results (Geva et al., 2020).

**Key-Value Memory** Key-value memory networks (Sukhbaatar et al., 2015; Geva et al., 2020) consist of learnable key-value parameters, which are designed to store the knowledge in the training set. Given an input query \(X\in\mathbb{R}^{n\times d}\), the output is computed by aggregating the values \(V\in\mathbb{R}^{d_{h}\times d}\) w.r.t. the distribution computed by the keys \(K\in\mathbb{R}^{d_{h}\times d}\) as follows:

\[H=\text{Softmax}(X\cdot K^{T})\cdot V, \tag{2}\]

where \(d_{h}\) is the number of memory slots.

**Relations** Previous works (Sukhbaatar et al., 2019; Geva et al., 2020) reveal that the FFN and key-value memory are similar in formulation, i.e., by regarding \(W_{1}\) as keys and \(W_{2}\) as values, the FFN can be viewed as a kind of key-value memory. In addition, Dai et al. (2021) conduct experiments on how knowledge is stored in a pre-trained model and find that there are some knowledge neurons in the FFN layer related to the expression of factual knowledge. Lample et al. (2019) introduce a large-scale external memory based on product keys and successfully integrate it into the Transformer architecture by replacing the FFN layer. However, there still exists a difference in the choice of activation functions, where the FFN usually adopts ReLU and the key-value memory uses Softmax, which may lead to different model performance. In this paper, we will explore the connections between FFN and key-value memory by studying ReLU and Softmax.

### Self-Attention Network and Key-Value Memory

As for the self-attention network (SAN), which was initially proposed in a key-value computation format (Vaswani et al., 2017), previous works have also explored its relation with key-value memory (Sukhbaatar et al., 2019). Formally, given the input sequence \(X\in\mathbb{R}^{n\times d}\), the self-attention is calculated as follows:

\[H=\text{Softmax}(\frac{(XW_{Q})\cdot(XW_{K})^{T}}{\sqrt{d}})\cdot XW_{V}, \tag{3}\]

where \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{d\times d}\) are learnable parameters. By denoting \(\hat{X}\coloneqq XW_{Q}/\sqrt{d}\), \(\hat{K}\coloneqq XW_{K}\) and \(\hat{V}\coloneqq XW_{V}\), the SAN is identical to the key-value memory in Equation (2) as well.
Wu et al. (2022) introduce an external memory and integrate it with SAN by a gating mechanism. Dai et al. (2019) reuse the states of previous segments and augment them in SAN to capture long-term dependencies. These works conceptually regard the SAN as a local memory, in which the query is the current token and the keys and values are other context tokens.

In conclusion, although FFN, SAN, and key-value memory are similar in formulation, previous works have not carefully discussed the differences in activation functions. In practice, it is a convention to use ReLU in FFN and Softmax in SAN and key-value memory. In this paper, we provide in-depth analyses of ReLU and Softmax as well as their performance on FFN and SAN. We start by revisiting the connections between FFN and key-value memory.

## 3 Connections Between FFN and Key-Value Memory

### ReLU and Softmax are Different

We first investigate whether the difference between the ReLU and Softmax activation functions influences the performance of FFN and key-value memory. We conduct a preliminary experiment that simply replaces FFN layers with key-value memories (i.e., replacing Equation (1) with Equation (2)) in a vanilla Transformer and keeps other components the same. We test on the machine translation task and the IWSLT14 De-En benchmark dataset (refer to Appendix A for more details). We report the BLEU score (Papineni et al., 2002) as the evaluation metric. As shown in Table 1, we find the model performance drops from \(34.22\) to \(33.08\) in the BLEU score. Since, according to the discussion in the previous section, the only difference lies in the activation function, we can conclude that the drop comes from changing ReLU to Softmax, which has a significant influence and cannot be ignored.

### Bridge the Gap between FFN and Key-value Memory

Then, we analyze the reason for the performance drop. When computing results, Softmax normalizes the scores of all slots while ReLU does not. Intuitively, the results of Softmax will have a much smaller variance than the results of ReLU. In consequence, the residual from previous layers will dominate the output of the current FFN layer with the Softmax activation function, resulting in inefficient utilization of parameters and degeneration of model capacity. To demonstrate this phenomenon, we compute the ratio between the variance of the FFN output and the residual for both ReLU and Softmax. The results in Table 1 verify our claim, where the ratio of the output with Softmax is much smaller than that of ReLU. To alleviate this problem, we add a layer normalization (Ba et al., 2016) module after the FFN layer to learn and adjust the variance ratio of the output, i.e., Equation (2) becomes \(H=\text{LN}(\text{Softmax}(X\cdot K^{T})\cdot V)\). The experimental results are shown in Table 1. With layer normalization, both the variance ratio and BLEU score of Softmax are promoted, and the FFN with Softmax performs similarly to ReLU. Therefore, we amend the previous findings (Geva et al., 2020): the FFN and key-value memory are equivalent when an additional layer normalization module is introduced.

\begin{table} \begin{tabular}{l l l l} \hline \hline Activation & Layer Norm & BLEU & Variance Ratio \\ \hline ReLU & No & \(34.22\) & \(0.27\) \\ \hline Softmax & No & \(33.08\) & \(0.01\) \\ Softmax & Yes & \(34.21\) & \(0.34\) \\ \hline \hline \end{tabular} \end{table} Table 1: The results on the IWSLT14 De-En translation task with different activation functions on FFN.
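The contrast between Equations (1) and (2), and the effect of the added layer normalization, can be illustrated numerically. The sketch below is not the paper's implementation; it uses random (untrained) weights, omits biases and the learned affine parameters of layer normalization, and only shows how the output variance of a Softmax-activated memory collapses relative to a ReLU FFN until layer normalization restores it.

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    # Row-wise normalization to zero mean and unit variance (no learned affine).
    return (h - h.mean(-1, keepdims=True)) / np.sqrt(h.var(-1, keepdims=True) + eps)

def softmax(z):
    z = z - z.max(-1, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def ffn(x, w1, w2):
    # Equation (1) without biases: H = ReLU(X W1^T) W2
    return np.maximum(x @ w1.T, 0.0) @ w2

def kv_memory(x, k, v, use_ln=False):
    # Equation (2): H = Softmax(X K^T) V, optionally followed by layer norm
    h = softmax(x @ k.T) @ v
    return layer_norm(h) if use_ln else h

rng = np.random.default_rng(0)
n, d, d_h = 8, 64, 1024
x = rng.standard_normal((n, d))
w1 = rng.standard_normal((d_h, d))
w2 = rng.standard_normal((d_h, d))

print("ReLU FFN            output variance:", ffn(x, w1, w2).var())
print("Softmax memory      output variance:", kv_memory(x, w1, w2).var())
print("Softmax memory + LN output variance:", kv_memory(x, w1, w2, use_ln=True).var())
```

Even with untrained weights, the Softmax memory output has a variance orders of magnitude below the ReLU FFN output, and the LN step brings it back to unit scale, which mirrors the variance-ratio column of Table 1.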
The variance ratio indicates the ratio between the variance of the FFN output and the residual. Layer Norm denotes the layer normalization layer applied to the result of the FFN.

Figure 1: The BLEU scores of FFN, key-value memory, and key-value memory with layer normalization (key-value memory with LN) on different memory sizes.

### Scaling to Large Number of Values

With the progress of large-scale Transformer networks, the hidden dimension (or the number of slots from the view of memories) \(d_{h}\) of FFN layers becomes larger and brings better results, as shown in previous works (Shazeer et al., 2017; Fedus et al., 2021). To verify the performance and show the generalization ability of different activation functions when the number of values is large, we vary \(d_{h}\) from \(32\) to \(4096\) and train three models: FFN with ReLU (ReLU), FFN with Softmax (Softmax), and FFN with Softmax and layer normalization (Softmax with LN). Results are shown in Figure 1. The observations are twofold. 1) The ReLU activation function is consistently superior to Softmax on all sizes. 2) When equipped with LN, Softmax performs comparably to ReLU. However, when the memory size is large (i.e., \(3072\), \(4096\)), ReLU performs better than Softmax with LN. In conclusion, ReLU shows a stronger capacity than Softmax when dealing with a large number of values.

### Quantitative Analysis between ReLU and Softmax

We conjecture the reason is the exponential normalization in Softmax. Concretely, since Softmax provides exponential normalization over the elements while ReLU does not, Softmax produces an over-centralized distribution over elements, which means only a few elements are highlighted while occupying most of the weight. Then, when the memory size is large, Softmax will overlook most value slots and only utilize a few of them, and thus does not benefit from the large size of the memory. In contrast, there is no competition among elements in ReLU, which is therefore able to aggregate more knowledge. A straightforward method to alleviate this problem is to increase the temperature in Softmax to flatten the output distribution. However, we empirically find it has little effect in experiments.

To verify the conjecture, we qualitatively analyze the competition by visualizing the score distribution, i.e., \(\text{Softmax}(X\cdot K^{T})\) in Equation (2). For each distribution, we sort scores in descending order and calculate the summation of the top-\(p\%\) elements. We normalize ReLU scores to make sure the summation of all scores is \(1\), as \(\text{ReLU}(x_{i}k_{j}^{T})/\sum_{p=1}^{d_{h}}\text{ReLU}(x_{i}k_{p}^{T})\), where \(x_{i}\) and \(k_{j}\) are the \(i\)-th and \(j\)-th elements of the query \(X\) and key \(K\) mentioned in Section 2.1. Therefore, a higher top-\(p\%\) score sum indicates a more centralized distribution, i.e., all scores are concentrated on a small number of elements. From Figure 2, we can find that Softmax provides a highly centralized distribution, as the top \(0.2\%\) of elements occupy more than \(85\%\) of the scores, and this becomes more severe when the memory size grows. In contrast, ReLU can alleviate this problem and therefore utilize the information of more memory slots. As a result of the over-centralized distribution, the optimization of memory slots with Softmax will be sub-optimal, as most of them cannot receive enough gradient due to small scores. We then quantitatively evaluate the quality of the
learned values by measuring their anisotropy (Ethayarajh, 2019), which is defined as the average of pair-wise similarities; the lower the better, i.e., low anisotropy means value slots are different from each other and able to contain discriminative and diverse information. The anisotropy score (ANI) is defined as follows:

\[\text{ANI}=\frac{1}{d_{h}\cdot(d_{h}-1)}\sum_{i=1}^{d_{h}}\sum_{j=1,j\neq i}^{d_{h}}\frac{V_{i}^{T}V_{j}}{\|V_{i}\|\cdot\|V_{j}\|}, \tag{4}\]

where \(V_{i}\) indicates the \(i\)-th value slot. Visualization results are illustrated in Figure 3, from which we can find that the values of Softmax collapse and fail to store diverse knowledge; adding LN alleviates this problem, while utilizing ReLU achieves the best performance.

Figure 2: The visualization of the top-\(p\%\) score sum of ReLU, Softmax, and Softmax with layer normalization (Softmax with LN).

Figure 3: The anisotropy (ANI) score of ReLU, Softmax, and Softmax with layer normalization (Softmax with LN).

### Summary

With these explorations, we have the following insights.

* The results of Softmax and ReLU have different properties, including variance and normalization. For variance, ReLU has a larger variance compared with Softmax and is therefore more expressive. A hidden output with a small variance may be dominated by the residual during end-to-end training, which can lead to wasted parameters and sub-optimal results. For normalization, since Softmax provides exponential normalization over the elements while ReLU does not, the distribution of Softmax is more centralized. When the memory size is large, Softmax will overlook most of the elements, thus resulting in a less diverse and discriminative memory value space.

* When Softmax is equipped with layer normalization in key-value memory, the FFN and key-value memory can be equivalent. The layer normalization can largely alleviate the small variance and over-centralized distribution brought by Softmax in the key-value memory, and thus Softmax with layer normalization can achieve comparable performance with FFN.

* We find that ReLU is more capable of dealing with a large number of memory slots. When the number of memory slots is larger, the distribution of Softmax is more centralized, which results in inefficient utilization of the memory slots.

The last observation also suggests that when handling long sequences in self-attention, it is beneficial to pay attention to more elements instead of centralizing on a small portion. However, since self-attention is similar to key-value memory and it is conventional to use Softmax as the activation function, will ReLU perform better than Softmax when handling long sequences? We will explore the differences between ReLU and Softmax in self-attention in the next section.

## 4 ReLU vs Softmax in Self-Attention

Given the findings that ReLU outperforms Softmax in FFN when dealing with a large number of value slots, a natural question is how ReLU will perform on the self-attention network (SAN). As discussed in Section 2.2, SAN can be straightforwardly formatted as key-value memory, with queries, keys, and values as different representations of the input. Then the memory size is given by the length \(n\) of the input instead of the hidden dimension \(d_{h}\) in FFN. Therefore, we expect to observe the superiority of ReLU over Softmax when dealing with long sequences, following the conclusions of the previous section. Similarly, we conduct preliminary experiments by directly replacing Softmax with ReLU, but we find the model fails to converge. Specifically, we find the variance of SAN results exploding.
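This variance explosion can be reproduced with a few lines of simulation. The sketch below is an illustration under the idealized i.i.d. standard-normal assumptions of the analysis in the next section, not the training setup: it shows that the variance of a ReLU-weighted sum of values grows linearly with the sequence length \(n\), approximately as \(n/2\).

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20000
for n in [64, 256, 1024, 4096]:
    scores = rng.standard_normal((trials, n))   # q^T k_j, assumed N(0, 1)
    values = rng.standard_normal((trials, n))   # v_j, assumed N(0, 1)
    y = (np.maximum(scores, 0.0) * values).sum(axis=1)  # sum_j ReLU(q^T k_j) v_j
    print(f"n = {n:5d}   empirical Var(y) = {y.var():8.1f}   (n/2 = {n / 2:.1f})")
```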
To solve the variance exploding problem, we add a layer normalization layer after the SAN results to adjust the variance of the SAN. Unfortunately, there still occurs performance degradation. Such a phenomenon is also reported in previous studies (Zhang et al., 2021). Therefore, in this section, we first analyze the reasons that ReLU fails and then propose our solutions. We then compare the performance of ReLU and Softmax on different sequence lengths.

### Solving Variance Exploding Caused by ReLU

Recall the formulation of SAN with ReLU activation:

\[h_{i}=\sum_{j=1}^{n}\text{ReLU}(q_{i}^{T}k_{j})v_{j}, \tag{5}\]

where \(q_{i}\) is the \(i\)-th element of the query \(\hat{X}\), \(k_{j}\) and \(v_{j}\) are the \(j\)-th elements of the key \(\hat{K}\) and value \(\hat{V}\) mentioned in Section 2.2, \(h_{i}\) is the output representation of the \(i\)-th token, and \(n\) is the sequence length. We find that the variance of \(h_{i}\) depends on the sequence length \(n\) by the following theory, adapted from He et al. (2015):

**Theorem 4.1**.: _Given \(2n\) independent random variables \(x_{i}\sim\mathcal{N}(0,1)\) and \(v_{i}\sim\mathcal{N}(0,1)\), \(i\in[1,n]\), define_

\[y=\sum_{i=1}^{n}\text{ReLU}(x_{i})v_{i}.\]

_Then \(y\) has mean \(0\) and variance \(\frac{n}{2}\), and approximately follows the Gaussian distribution \(\mathcal{N}(0,\frac{n}{2})\)._

Based on Theorem 4.1, in Equation (5) the output \(h_{i}\) will approximately follow the distribution \(\mathcal{N}(0,\frac{n}{2})\). Therefore, the variance of the results grows with the sequence length, and directly replacing Softmax with ReLU leads to instability of the training process. Empirically, although adding layer normalization is supposed to learn and re-scale the variance, we find it still leads to sub-optimal BLEU results. We conjecture the different performance of LN on FFN and SAN is due to the dynamic memory size in SAN (i.e., the sequence length \(n\)), which is static in FFN (i.e., the hidden dimension \(d_{h}\)); thus the LN module is not able to learn an appropriate variance for sentences with different lengths. Motivated by these analyses, to stabilize the variance of the ReLU output, we propose the variance reduction factor defined as \(\gamma\sqrt{n/2}\), where \(\gamma\) is a hyper-parameter. This is similar to the implementation of Kaiming Normalization (He et al., 2015), but we apply it during end-to-end training instead of at initialization, since the sequence length \(n\) is dynamic. Formally, Equation (5) becomes:

\[h_{i}=\sum_{j=1}^{n}\frac{\text{ReLU}(q_{i}^{T}k_{j})}{\gamma\sqrt{n/2}}v_{j}. \tag{6}\]

Then we apply it to the SAN and obtain a BLEU score of \(33.19\) on the IWSLT14 De-En machine translation task, as shown in Table 2. Although this is a big step towards a successful ReLU-based self-attention model, there is still a gap of \(1.03\) BLEU to the vanilla Softmax-based self-attention. We then go a step further and close the gap between ReLU and Softmax for SAN in the following section.

### Closing the Gap Between ReLU and Softmax

To better analyze the performance gap between ReLU- and Softmax-based SAN, we denote the weight distribution over values as \(s=(s_{1},...,s_{n})\), where \(s_{j}=\text{ReLU}(q_{i}^{T}k_{j})/\gamma\sqrt{n/2}\) for ReLU w/ variance reduction and \(s_{j}=\text{Softmax}(q_{i}^{T}k_{j})\) for Softmax, and then compute the entropy of \(s\), i.e., \(H(s)=-\sum_{i=1}^{n}s_{i}\log(s_{i})\). For ReLU, we normalize the weights as \(s_{i}/\sum_{j=1}^{n}s_{j}\) to ensure their summation is \(1\), with the same method used in Section 3.4.
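The two concentration diagnostics used in this paper, the top-\(p\%\) score sum of Section 3.4 and the entropy \(H(s)\) above, can both be computed from the normalized weight vector. The following is a small illustrative sketch with random (untrained) queries and keys, using raw dot-product scores to make the concentration visible; the trained models evaluated in Tables 3 and 5 behave differently in detail, so only the qualitative gap between Softmax and ReLU weights matters here.

```python
import numpy as np

def entropy(s, eps=1e-12):
    s = s / s.sum()               # normalize to a proper distribution
    s = s[s > eps]                # convention: 0 * log(0) = 0
    return float(-(s * np.log(s)).sum())

def top_p_score_sum(s, p=1.5):
    w = np.sort(s / s.sum())[::-1]              # descending normalized scores
    k = max(1, int(np.ceil(len(w) * p / 100)))  # number of top-p% slots
    return float(w[:k].sum())

rng = np.random.default_rng(0)
n, d = 1024, 64
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))

logits = K @ q                    # raw q.k scores; spread grows with sqrt(d)
soft = np.exp(logits - logits.max())
relu = np.maximum(logits, 0.0)

for name, s in [("Softmax", soft), ("ReLU", relu)]:
    print(f"{name:8s} H(s) = {entropy(s):.2f}   "
          f"top-1.5% mass = {top_p_score_sum(s):.2f}   "
          f"zero fraction = {float((s == 0).mean()):.2f}")
```

With random vectors, roughly half of the ReLU weights are exactly zero; the trained models in the paper are far sparser (94% zeros), which is the behavior the regularization loss below is designed to correct.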
Theoretically, a larger entropy indicates the distribution is more uniform, while a smaller one indicates the distribution is more centralized. In our case, we want to find a balance where the entropy is neither too small nor too large, i.e., the weight distribution is neither too uniform nor over-centralized. In this way, the context information can be well utilized. The computed entropy results are listed in Table 3. The distribution learned with ReLU has a very small entropy, and \(94\%\) of the weights are zeros. Therefore, ReLU on self-attention leads to weight distributions that are too sparse to cover enough context. To alleviate this problem, we propose a regularization loss including two parts. Firstly, we introduce a normalization regularization to enlarge the weights. Secondly, we constrain the entropy of the learned distribution by an entropy-margin regularization to make it more informative. The loss function is as follows:

\[\mathcal{L}_{\text{reg}}=\left|\log\Big{(}\sum_{i=1}^{n}s_{i}\Big{)}\right|+\max(H(s)-C,0), \tag{7}\]

where \(|\cdot|\) indicates the absolute value and \(C\) is a constant that represents the upper bound of \(H(s)\). The normalization regularization encourages the summation of the weights \(\sum_{i=1}^{n}s_{i}\) to be \(1\), and therefore results in more non-zero elements. Note that, different from the normalization in Softmax, this is not a hard constraint, and therefore we do not observe the over-centralized distribution seen with Softmax. In contrast, the distribution becomes flat, as we find the entropy grows drastically after the loss function is added. The entropy-margin regularization encourages the model to keep the entropy of the weight distribution \(s\) below the upper bound \(C\), where \(C\) is a hyper-parameter which we describe in Appendix A.

It is worth noting that the Transformer contains causal self-attention and cross-attention in the decoder, which are slightly different from the self-attention discussed above. To generalize the ReLU-based SAN to the decoder, we propose two solutions. 1) For the causal self-attention, we assign a different length to each token, as tokens after the current one are masked off. 2) The cross-attention does not require additional adaptations, so we treat it as the self-attention in the encoder. By packing all the components together, we propose a fully ReLU Transformer named ReLUFormer.

### ReLUFormer and Its Performance on Translation

In this section, we will first demonstrate the effectiveness of the proposed ReLUFormer on traditional sentence-level machine translation benchmarks. Then, we compare ReLU with Softmax when dealing with long sequences on document-level benchmarks.

\begin{table} \begin{tabular}{l l c} \hline \hline Activation & w/ Scale Factor & BLEU \\ \hline Softmax & No & \(34.22\) \\ ReLU & No & - \\ ReLU & Yes & \(33.19\) \\ \hline \hline \end{tabular} \end{table} Table 2: The BLEU scores on the IWSLT14 De-En task with different activation functions on self-attention. w/ Scale Factor indicates whether the self-attention network has the scale factor. '-' indicates the setting does not converge.

\begin{table} \begin{tabular}{l c} \hline \hline Activation & \(H(s)\) \\ \hline Softmax & 1.40 \\ ReLU & 3.45 \\ \hline \hline \end{tabular} \end{table} Table 3: The entropy of the weight distribution \(s\) with Softmax and ReLU activation functions on the IWSLT14 De-En test set.
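Putting Equations (6) and (7) together, the core of the ReLU-based attention can be sketched as follows. This is a minimal reconstruction rather than the released implementation: the values for \(\gamma\) and \(C\) are placeholders (the paper defers them to Appendix A), masking and multi-head structure are omitted, and in practice the gradient of the regularizer would be handled by an autograd framework.

```python
import numpy as np

def relu_attention(Q, K, V, gamma=1.0, C=2.0):
    """ReLU self-attention with the variance reduction factor of Eq. (6)
    and the regularization loss of Eq. (7). Returns (outputs, reg_loss)."""
    n = Q.shape[0]
    s = np.maximum(Q @ K.T, 0.0) / (gamma * np.sqrt(n / 2.0))  # (n, n) weights
    out = s @ V

    row_sum = np.maximum(s.sum(axis=1), 1e-12)                 # encouraged toward 1
    p = s / row_sum[:, None]                                   # normalized rows
    ent = -(np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)).sum(axis=1)
    reg = np.abs(np.log(row_sum)) + np.maximum(ent - C, 0.0)   # Eq. (7), per row
    return out, float(reg.mean())

rng = np.random.default_rng(0)
n, d = 128, 64
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
H, reg_loss = relu_attention((X @ Wq) / np.sqrt(d), X @ Wk, X @ Wv)
print(H.shape, f"reg loss = {reg_loss:.3f}")
```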
#### 4.3.1 Experiments on Sentence-Level Translation

We evaluate ReLUFormer on the sentence-level translation task, a seminal task in NLP. We consider two benchmark machine translation datasets, i.e., IWSLT14 German-English and WMT14 English-German. Details of the datasets can be found in Appendix A.

**Baselines** In addition to the vanilla Transformer Vaswani et al. (2017), we also consider other sparse activation baselines which replace Softmax with other functions: 1) Sparsemax Martins and Astudillo (2016), 2) 1.5Entmax Peters et al. (2019), and 3) Rectified Linear Attention (ReLA) Zhang et al. (2021). Sparsemax and 1.5Entmax are specially designed sparse attention activation functions similar to ReLU. ReLA is another baseline that simply replaces Softmax with ReLU, in which an RMS normalization mechanism is proposed to address the variance problem caused by ReLU. We leave the detailed introduction of the baselines to Appendix C.

**Results** The results of ReLUFormer and the baselines are listed in Table 4, from which we have the following observations. 1) Our proposed ReLUFormer outperforms the vanilla Transformer baseline, showing the effectiveness of the proposed techniques. 2) When comparing with sparse attention-based baselines, our model also achieves consistent improvements over the Sparsemax, 1.5Entmax, and ReLA baselines by a large margin. 3) Comparing the latency during inference, we find that ReLUFormer is comparable with the vanilla Transformer, slightly faster than ReLA, and roughly 1.7 times faster than the Sparsemax and 1.5Entmax methods.

**Ablation Study** In this section, we conduct ablation studies on the proposed ReLUFormer to further discuss the effectiveness of the two proposed techniques: 1) the attention scale factor (scale factor), and 2) the regularization loss (reg loss). We demonstrate their effectiveness by removing each part individually. Besides the BLEU scores, we also use the entropy mentioned in Section 4.2 to quantitatively analyze the quality of the weight distribution. The results are shown in Table 5, and the observations are as follows. 1) The model cannot converge when the scale factor is removed because the variance explodes; adding the variance reduction factor stabilizes the training of the model. 2) By removing the regularization loss, the performance drops by 1.37 BLEU and the entropy decreases by 1.52. This illustrates that the regularization loss provides a more informative attention distribution.

#### 4.3.2 Experiments on Document-Level Translation

In this section, to verify our findings in Section 3 that ReLU performs better than Softmax when the number of value slots is large, we conduct experiments on the document-level translation task with long input and output sequences, because the length of the sequence represents the number of local memory slots in self-attention networks. We construct long documents from a widely used document translation dataset, Europarl7 En-De (Maruf et al., 2019; Zheng et al., 2020). To compare our method under different lengths of input sequences, we reconstruct the documents by concatenating sentences up to different length limits. The details can be found in Appendix B. We conduct our experiment on 5 generated datasets with length limits of \(\{128,256,512,1024,2048\}\).

\begin{table} \begin{tabular}{l c c c} \hline \hline & IWSLT14 De-En & WMT14 En-De & \(\Delta\)Speed \\ \hline
Vanilla Transformer Vaswani et al. (2017) & \(34.22\) & \(27.21\) & \(1.00\times\) \\ Sparsemax Martins and Astudillo (2016) & \(33.98\) & \(27.32\) & \(0.56\times\) \\ 1.5Entmax Peters et al. (2019) & \(34.46\) & \(27.11\) & \(0.52\times\) \\ ReLA Zhang et al. (2021) & \(33.59\) & \(26.31\) & \(0.99\times\) \\ \hline ReLUFormer & \(34.56\) & \(27.64\) & \(1.01\times\) \\ \hline \hline \end{tabular} \end{table} Table 4: The experimental results of sentence-level translation on the IWSLT14 De-En and WMT14 En-De datasets. \(\Delta\)Speed: relative translation speed compared with the vanilla Transformer on the WMT14 test set. A higher speedup indicates better efficiency.

\begin{table} \begin{tabular}{l c c} \hline \hline & BLEU & Entropy \\ \hline ReLUFormer & \(34.56\) & \(1.60\) \\ - scale factor & - & - \\ - reg loss & \(33.19\) & \(0.08\) \\ \hline \hline \end{tabular} \end{table} Table 5: The ablation study on the IWSLT14 De-En task. The reg loss indicates the regularization loss. "-" denotes the model cannot converge.

**Results** We compare the proposed ReLUFormer with the vanilla Transformer (Vaswani et al., 2017) and Sparsemax (Martins and Astudillo, 2016) baselines on document-level neural translation with different lengths. The results are shown in Table 6. We have the following observations. 1) When the sequence length is small (i.e., 128, 256), ReLUFormer is comparable with the vanilla Transformer and the Sparsemax baseline, showing that both methods are able to deal with relatively short lengths. 2) When the sequence length is large (i.e., 512, 1024, 2048), our ReLUFormer consistently outperforms the vanilla Transformer and Sparsemax baselines. For example, when the sequence length is 1024 in Europarl7, ReLUFormer achieves a 1.15 BLEU gain in translation quality, and Sparsemax fails to converge when facing long sequences. This confirms that when the sequence is long, ReLU is more effective.

**Quantitative Analysis** In this section, we intuitively explain why ReLU outperforms Softmax in long document translation. Similar to the study on FFN and key-value memory, we also visualize the top-\(p\%\) score sum mentioned in Section 3.4 of the activated scores for both the Softmax and ReLU activation functions on Europarl7 with the 1024 length limit. A higher top-\(p\%\) score sum indicates a more centralized distribution. In practice, we observe that the tokens ranked beyond the top \(1.5\%\) occupy only \(2.1\%\) and \(4.4\%\) of the scores for Softmax and ReLU, respectively. This means the tokens ranked within the top \(1.5\%\) dominate the quality of the performance, so we only visualize the elements ranked within the top \(1.5\%\). From Figure 4, we can find that Softmax provides a more centralized distribution compared with ReLU, which is similar to the observation in Section 3.3. When modeling long sequence inputs in self-attention, the over-centralized distribution pays attention to fewer contexts, which results in sub-optimal performance.

We also provide a self-attention visualization to further demonstrate the superiority of ReLU. We randomly select a case in the Europarl7 test set with the 1024 length limit and visualize the attention maps in Figure 5 for both the ReLU and Softmax activation functions. We have the following observations. 1) We observe that ReLU can capture more distant correlations. For example, the words "Madam" and "President" have relatively large attention weights with ReLU but small weights with Softmax. This shows that ReLU can capture more distant correlations compared to Softmax, which is beneficial to long sequence modeling.
2) We observe that ReLU has less noise than Softmax. ReLU assigns smaller attention values to stop words such as "," and "this". Since stop words contain little contextual information, this shows that ReLU has less noise.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Sequence Length} \\ & \(128\) & \(256\) & \(512\) & \(1024\) & \(2048\) \\ \hline Transformer & \(31.19\) & \(31.94\) & \(31.81\) & \(31.74\) & \(31.89\) \\ Sparsemax & \(31.04\) & \(31.70\) & - & - & - \\ \hline ReLUFormer & \(31.22\) & \(32.09\) & \(32.69\) & \(32.89\) & \(32.58\) \\ \hline \hline \end{tabular} \end{table} Table 6: The experimental results of document translation with different document lengths on the Europarl7 dataset.

Figure 4: The visualization of the top-\(p\%\) score sum.

Figure 5: The visualization of self-attention for ReLU (left) and Softmax (right). The case is from the Europarl7 test set with the \(1024\) length limit.

### Summary

In this section, we were motivated to explore whether ReLU outperforms Softmax in self-attention when handling long sequences.

* We find that the variance of SAN results produced by ReLU is dependent on sequence length and therefore dynamic. Thus, when ReLU is directly applied to replace Softmax in SAN, it causes variance exploding and training instability.

* With extensive analysis of the performance degradation of ReLU, we propose corresponding solutions to make ReLU perform competitively with Softmax in SAN on the sentence-level translation task.

* Similar to the observation in Section 3, we also find that Softmax tends to generate a more centralized distribution, which restricts the utilization of more context information especially when the sequence is long, while ReLU does not have this restriction. By applying the ReLU-based Transformer to the long document translation task, we verify that ReLU outperforms Softmax when the input sequence is long.

## 5 Insights and Findings

We summarize the insights and findings of the paper in this section.

* Softmax and ReLU are different in the Transformer from the perspective of variance and normalization. 1) Regarding the variance, the result of ReLU has a larger variance compared with that of Softmax. A hidden representation with a small variance will be dominated by the residual, leading to inefficient usage of parameters and sub-optimal performance. In addition, the variance of SAN results produced by ReLU is related to the sequence length, which leads to the variance exploding problem. 2) Regarding the normalization, since Softmax provides exponential normalization over the elements while ReLU does not, the distribution of Softmax is more centralized compared with ReLU. When dealing with a large number of value slots, Softmax restricts the utilization of more context information and leads to sub-optimal performance, while ReLU does not.

* When Softmax is equipped with layer normalization in key-value memory, the FFN and key-value memory are equivalent. The layer normalization can largely alleviate the scale and over-centralization problems caused by Softmax, which can boost performance.

* ReLU is good at handling a large number of key-value slots (in FFN and key-value memory) and long sequences (in SAN). With quantitative analyses, we find that ReLU is less centralized, and thus it can integrate the context information of more tokens when the sequence is long.
* As a whole, the Transformer can be viewed as a memory network, where FFNs and SANs are global and local memories respectively. For the FFN, the keys and values are the parameters of two linear projection weights, which are globally shared by all input queries. For the SAN, the keys and values are constructed from the input sequences locally.

## 6 Conclusion

In this work, we revisit the relations between the Transformer components, the self-attention and feed-forward networks, and key-value memory. We then propose a full ReLU architecture which achieves competitive performance with the vanilla Transformer. We have the following findings: 1) The FFN and key-value memory are equivalent when layer normalization is introduced; 2) Compared with Softmax, ReLU performs better when the number of memory value slots is large; 3) With specific designs, the proposed full ReLU architecture works effectively on sentence-level translation and significantly outperforms the vanilla Transformer on long document translation.

## 7 Limitation and Future Works

This work has the following limitations and will be extended in several directions. First, we currently only conduct experiments on the machine translation task. Second, although we replace Softmax with the more efficient ReLU function, we obtain only slight latency gains since self-attention still has \(O(N^{2})\) complexity. In the future, we will conduct experiments on more tasks, including language modeling, text summarization, etc. Then, since our work is parallel to work on designing efficient Transformers, we will investigate better methods in self-attention to improve latency.
2304.02589
Measurements of the Crab Pulsar's Giant Radio Pulse Amplitude Power-Law Index Using Low-Frequency Arecibo and Green Bank Telescope Observations
We report two low-frequency measurements of the power-law index for the amplitudes of giant radio pulses from the Crab pulsar. The two observations were taken with the Arecibo and Green Bank radio telescopes at center frequencies of 327 MHz and 350 MHz, respectively. We find best-fit values for the differential power-law index $\beta$ (where $dN/dS \propto S^\beta$ and $S$ is pulse amplitude) of $-2.63 \pm 0.05$ and $-3.6 \pm 0.5$ from the Arecibo and Green Bank data sets, respectively. Both values are broadly consistent with other values previously measured for the Crab pulsar at low radio frequencies. These reported values may be useful in future giant pulse studies of the Crab pulsar.
F. Crawford, T. J. W. Lazio, A. McEwen, J. S. Deneva, J. M. Cordes, L. Spitler, R. F. Trainor
2023-04-05T17:02:19Z
http://arxiv.org/abs/2304.02589v1
# Measurements of the Crab Pulsar's Giant Radio Pulse Amplitude Power-Law Index Using Low-Frequency Arecibo and Green Bank Telescope Observations

###### Abstract

We report two low-frequency measurements of the power-law index for the amplitudes of giant radio pulses from the Crab pulsar. The two observations were taken with the Arecibo and Green Bank radio telescopes at center frequencies of 327 MHz and 350 MHz, respectively. We find best-fit values for the differential power-law index \(\beta\) (where \(dN/dS\propto S^{\beta}\) and \(S\) is pulse amplitude) of \(-2.63\pm 0.05\) and \(-3.6\pm 0.5\) from the Arecibo and Green Bank data sets, respectively. Both values are broadly consistent with other values previously measured for the Crab pulsar at low radio frequencies. These reported values may be useful in future giant pulse studies of the Crab pulsar.

Pulsars (1306) -- Radio transient sources (2008)

## 1 Introduction

Since the discovery of the Crab pulsar as a source of individual dispersed radio pulses (Staelin & Reifenstein, 1968), it has been studied as an emitter of giant pulses (see, e.g., Lewandowska 2015 for a review). Giant radio pulses from pulsars have been defined as pulses having energies much greater than the mean value and having amplitudes that follow a power-law distribution (Johnston & Romani, 2004; Geyer et al., 2021). A departure from this behavior has been observed for the Crab pulsar by Cordes et al. (2004) and Mickaliger et al. (2012), where they saw a slight excess at very large amplitudes that might be explained by rare "supergiant" pulses. However, such pulses would not necessarily be expected to repeat if, for example, they were due to lensing phenomena, which is possible given the role that filaments play in producing multiple images. Subsequent observations by Bera & Chengalur (2019) did not show evidence for supergiant pulses in a longer set of observations, so the observed excess may be a statistical fluke (as mentioned by Cordes et al., 2004). Here we present two new measurements of the differential amplitude power-law index \(\beta\) for Crab pulsar giant pulses, where \(\beta\) is defined according to \(dN/dS\propto S^{\beta}\) and where \(S\) is the pulse amplitude. The measurements were obtained from two low-frequency observations taken with the Arecibo and the Green Bank telescopes.

## 2 Observations and Analysis

The Crab pulsar was observed with the Arecibo 305-m telescope in a 5-minute diagnostic observation on 2014 April 13 (MJD 56760) as part of the Arecibo 327 MHz Drift-Scan Pulsar Survey (Deneva et al., 2013). This observation used an effective bandwidth of 68.75 MHz divided into 2816 channels sampled at 81.92 \(\mu\)s. The Crab pulsar was also observed with the Green Bank Telescope (GBT) on 2019 October 22 (MJD 58778) as part of the Green Bank Northern Celestial Cap (GBNCC) survey (Stovall et al., 2014). This survey used a bandwidth of 100 MHz centered on 350 MHz that was split into 4096 channels. This was the only survey beam from the GBNCC survey that overlapped with the position of the Crab pulsar. This beam had a position offset of 0.25 degrees from the Crab pulsar position and had an integration time of two minutes.
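As a consistency check of these setups, the intra-channel dispersion smearing quoted below can be reproduced with the standard approximation \(\Delta t\simeq 8.3\,\mu\text{s}\times\text{DM}\times\Delta\nu_{\rm MHz}/f_{\rm GHz}^{3}\). This is a back-of-envelope sketch, not part of the authors' pipeline.

```python
# Intra-channel dispersion smearing: dt ~ 8.3 us * DM * dnu_MHz / f_GHz^3
def channel_smearing_ms(dm, bw_mhz, nchan, f_ghz):
    dnu = bw_mhz / nchan                    # channel width in MHz
    return 8.3e-3 * dm * dnu / f_ghz**3     # smearing in ms

dm = 56.77  # Crab pulsar nominal DM, pc cm^-3
print(f"Arecibo: {channel_smearing_ms(dm, 68.75, 2816, 0.327):.2f} ms")  # ~0.33 ms
print(f"GBT    : {channel_smearing_ms(dm, 100.0, 4096, 0.350):.2f} ms")  # ~0.27 ms
```

Both values are close to the \(\sim 0.3\) ms figure quoted in the text and well above the sampling times used.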
This position offset is close to (but still within) the edge of the \(\sim 0.3\) deg beam radius at 350 MHz for the GBT. The native sampling time of 81.92 \(\mu\)s of the GBNCC survey beam was increased by a factor of two in our analysis, leading to an effective sampling time of 163.84 \(\mu\)s. At these observing frequencies, the amount of dispersion smearing experienced by Crab pulses within the frequency channels in both observations is \(\sim 0.3\) ms, which significantly exceeds the sampling times. Thus, the different effective sampling times used in the two observations had no effect on the Crab pulse detection rate. In both observations the data were searched blindly for single pulses at a range of dispersion measures (DMs) encompassing the Crab pulsar's DM. We used the HEIMDALL single-pulse detection package in the search (Barsdell, 2012; Barsdell et al., 2012).1 All of the pulses detected by HEIMDALL that were within 0.3 pc cm\({}^{-3}\) of the Crab's nominal DM of 56.77 pc cm\({}^{-3}\) (Bilous et al., 2016) were retained. This DM window is consistent with the DM variability observed for the Crab's giant pulses, which does not exceed this range (e.g., Lewandowska et al., 2022). We note that all of the pulses that were discarded as radio frequency interference (RFI) had DMs far from the Crab's DM value (at least 20 pc cm\({}^{-3}\) away), so no Crab pulses were accidentally eliminated. We measured the signal-to-noise ratio (S/N) of each detected pulse with HEIMDALL. The S/N is proportional to the pulse energy given that the (narrow) Crab pulses are temporally unresolved due to the dominance of the DM smearing, and they therefore have similar observed widths.

Footnote 1: [https://sourceforge.net/projects/heimdall-astro](https://sourceforge.net/projects/heimdall-astro)

A total of 1943 and 60 Crab pulses were detected with a S/N above 6 in the Arecibo and the GBT observations, respectively. We did not make any distinction between main pulses (MPs) and interpulses (IPs) in our detected sample, even though IPs are expected to constitute a large fraction (about a third) of the giant pulses detected at these low frequencies (Cordes et al., 2004). We do not expect a significant difference between these two classes of pulses at low frequencies: Mikami et al. (2016) showed that at 325 MHz, the average power-law index values for MPs and IPs are quite close (see their Table 3). This is supported by other studies (Cordes et al., 2004; Hankins et al., 2016; Lin et al., 2022) that have found that Crab MPs and IPs have similar properties at frequencies below a few GHz. In contrast to this, Hankins et al. (2015, 2016) have shown that at high frequencies (\(\gtrsim 5\) GHz), the Crab MP and IP properties are very different and that the number of detected IPs exceeds MPs by more than an order of magnitude. We produced histograms of the pulse amplitudes using the Sturges criterion for the binning, where the number of bins \(k\) is determined by the total number of events \(n\) in the histogram, according to \(k=1+\log_{2}n\). Error bars were calculated by taking the square root of the number of events in each bin, in accordance with Poisson statistics. Each data set was separately fit using a power law of the form \(dN/dS\propto S^{\beta}\), where both a coefficient and a power-law index were fit as free parameters. Fig. 1 shows these two histograms and the best fits.
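The binned fitting procedure just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it draws simulated amplitudes from a known power law above the S/N threshold of 6, bins them with the Sturges rule, assigns Poisson error bars, and fits \(dN/dS=A\,S^{\beta}\) by weighted least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(snr, snr_min=6.0):
    """Bin S/N values with the Sturges rule and fit dN/dS = A * S**beta."""
    s = np.asarray(snr, dtype=float)
    s = s[s >= snr_min]
    k = int(1 + np.ceil(np.log2(len(s))))              # Sturges: k = 1 + log2(n)
    counts, edges = np.histogram(s, bins=k)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    errors = np.sqrt(counts[keep])                     # Poisson error bars
    model = lambda S, A, beta: A * S**beta
    popt, pcov = curve_fit(model, centers[keep], counts[keep], sigma=errors,
                           p0=(float(counts.max()), -3.0), absolute_sigma=True)
    return popt[1], float(np.sqrt(pcov[1, 1]))         # beta and its 1-sigma error

# Simulated amplitudes with a true differential index of -2.6 above S/N = 6:
rng = np.random.default_rng(0)
snr_sim = 6.0 * (1.0 + rng.pareto(1.6, size=1943))     # Pareto(a) gives index -(a+1)
print(fit_power_law(snr_sim))                          # recovers beta near -2.6
```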
## 3 Results and Discussion

Scaling the GBT integration from 2 to 5 minutes (the integration time of the Arecibo observation) would result in about 150 detected pulses in the GBT dataset. This number is still much smaller than the almost 2000 pulses detected with Arecibo. The telescope gain difference is not a significant factor in this difference since the Crab nebula dominates the system temperature in both observations, as shown below. The Crab nebula's flux density at 350 MHz is \(S\approx 1270\) Jy (derived from the relation \(S=955f^{-0.27}\) Jy, where \(f\) is the observing frequency in GHz; Bietenholz et al. 1997), while Arecibo's system equivalent flux density (SEFD) is 10 Jy and the GBT's is 35 Jy. The Crab nebula (with a characteristic diameter of \(5.5^{\prime}\); Cordes et al. 2004) is also unresolved in both observations: the beam diameters of Arecibo and the GBT at these frequencies are \(15^{\prime}\) and \(36^{\prime}\), respectively. Thus, all of the flux from the Crab nebula was received by the telescope in each observation (see, e.g., Cordes et al. 2004). In both cases the telescope SEFD is more than an order of magnitude smaller than the Crab nebula's flux density and does not significantly increase the system noise. Therefore, despite their difference in raw sensitivity, both Arecibo and the GBT have the same effective sensitivity to Crab giant pulses. The small difference in the central observing frequencies would also not affect the detection rates to this degree, nor would the sampling rate difference in the observations (see above). The RFI in each observation was also minimal (less than \(0.1\%\) of the data was masked in each case). However, the two observations were conducted at epochs separated by about \(5.5\) years. Lundgren et al. (1995) showed that refractive interstellar scintillation (RISS) produces day-to-day variability for Crab giant pulses that affects the detection rate, so RISS may account for most of this difference in the detection rates.

Figure 1: Histograms of pulse amplitudes (S/N) for detected Crab pulses in observations taken with Arecibo (left) and the GBT (right). A total of 1943 pulses were detected with Arecibo in a 5-minute diagnostic observation at 327 MHz. For the GBT, 60 pulses were detected in a 2-minute observation that was part of the GBNCC survey at 350 MHz. In each case the error bars were computed as the square root of the number of pulses in each bin. The best-fit values from power law fits to the distributions are also plotted and yielded differential power-law index values of \(-2.63\pm 0.05\) and \(-3.6\pm 0.5\) for the Arecibo and GBT data sets, respectively.

Our best fits to the two histograms yielded differential power-law index values \(\beta\) of \(-2.63\pm 0.05\) and \(-3.6\pm 0.5\) for the Arecibo and GBT data, respectively. An alternate method of estimating the power-law index, using integrated number counts in a maximum likelihood estimate, has been outlined by Crawford et al. (1970) and James et al. (2019). This method avoids binning of the data and is considered to be less biased. We fit our two data sets using this approach as a check, and we obtained differential power-law indices (derived from the cumulative power-law index) of \(-2.45\pm 0.03\) and \(-3.7\pm 0.4\) for the Arecibo and GBT data sets, respectively. These values are similar to those we obtained with binning, suggesting that both approaches produce consistent results.
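For reference, a standard unbinned estimator of this kind has a closed form: for \(p(S)\propto S^{-\alpha}\) with \(S\geq S_{\min}\), the maximum-likelihood estimate is \(\hat{\alpha}=1+n\,[\sum_{i}\ln(S_{i}/S_{\min})]^{-1}\) with asymptotic uncertainty \((\hat{\alpha}-1)/\sqrt{n}\). The sketch below applies this form to simulated data; it is a generic illustration of the approach, not a transcription of the Crawford et al. (1970) or James et al. (2019) formulas.

```python
import numpy as np

def mle_power_law_index(s, s_min):
    """Unbinned maximum-likelihood estimate of the differential power-law
    index beta for p(S) ~ S**beta with S >= s_min."""
    s = np.asarray(s, dtype=float)
    s = s[s >= s_min]
    n = len(s)
    alpha = 1.0 + n / np.log(s / s_min).sum()   # p(S) ~ S**(-alpha)
    sigma = (alpha - 1.0) / np.sqrt(n)          # asymptotic 1-sigma error
    return -alpha, sigma                        # differential index beta = -alpha

rng = np.random.default_rng(0)
s = 6.0 * (1.0 + rng.pareto(1.6, size=1943))    # true differential index -2.6
print(mle_power_law_index(s, 6.0))              # ~(-2.6, 0.04)
```

With \(n\approx 1900\) events this gives a formal uncertainty of a few hundredths, comparable to the \(\pm 0.03\) quoted above for the Arecibo data set.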
Our best-fit power-law values from the two data sets do not overlap within the stated (\(1\sigma\)) errors. This difference may be attributable to the large amount of time separating the two observations (5.5 years). Variability in the power-law index has been previously observed on long time-scales (e.g., Rudnitskii et al., 2017). Our power-law fit values are broadly consistent with previous low-frequency measurements from several different telescopes. Oronsaye et al. (2015) measured a power-law index of \(-3.35\pm 0.35\) for the fluence distribution using the Murchison Widefield Array (MWA) at a center frequency of 193 MHz (see their Fig. 2). Observations taken with the Low Frequency Array (LOFAR) at a similarly low center frequency of 150 MHz by van Leeuwen et al. (2020) showed an amplitude index for the fluence of \(-3.04\pm 0.03\) (see their Fig. 3). From measurements at 325 MHz with the Iitate Planetary Radio Telescope (IPRT) that were taken just a few months after our Arecibo observation, Mikami et al. (2016) measured \(\beta=-2.61^{+0.13}_{-0.15}\) for the Crab main pulse (see their Fig. 4 and Table 3). This is remarkably close (within the formal uncertainties) to the value of \(-2.63\pm 0.05\) we obtained from our Arecibo observation at essentially the same central observing frequency (327 MHz). The Crab nebula's flux density slightly exceeds the SEFD of the IPRT (Mikami et al., 2016; see their Table 3), so the IPRT system noise is comparable to the Crab nebula contribution. This makes it slightly less sensitive to Crab giant pulses compared to Arecibo, where the SEFD is negligible relative to the Crab nebula contribution (see above). The similarity of the Arecibo power-law index value to the IPRT value indicates that, even though the two observations were separated by a few months, a single power-law dependence extends to a slightly lower fluence level than what was measured with the IPRT. Table 4 of Mickaliger et al. (2012) listed four previously published low-frequency measurements (taken between 150 and 430 MHz) for the Crab differential power-law index (Argyle & Gower, 1972; Cordes et al., 2004; Bhat et al., 2007; Smirnova & Logvinenko, 2009) plus one new 330 MHz measurement using the Green Bank 43-m telescope. Combining these five measurements from this table yields an average power-law index of \(-2.9\pm 0.5\) for the low-frequency regime (\(<500\) MHz). Table 1 shows a summary of these and other low-frequency measurements. At higher observing frequencies, various studies have measured the power-law index. Mickaliger et al. (2012) (Table 4) listed measured values ranging from \(-2.1\) to \(-4.1\) for frequencies between 600 and 4850 MHz. Bera & Chengalur (2019) measured a value of \(-2.81\pm 0.05\) at 1330 MHz, consistent with this range. Some of this variability in values can be attributed to RISS (both Lundgren et al., 1995 and Mickaliger et al., 2012 observed day-to-day variability in the measured power-law index at 812 MHz and 1.2 GHz, respectively). Note, however, that Bera & Chengalur (2019) did not see variability in the power-law index on time-scales of a few days in observations taken at 1330 MHz, though on much longer time-scales (a few years) it has been observed to vary (Rudnitskii et al., 2017). Mickaliger et al. (2012) do not see a clear trend in how the power-law index changes with frequency, but Cordes et al. (2004) report a possible steepening of the index from 430 MHz to 8.8 GHz.
\begin{table} \begin{tabular}{l c l l} \hline \hline \multicolumn{1}{c}{ Frequency Range} & Power-Law & Telescope & Reference \\ \multicolumn{1}{c}{(MHz)} & Index & & \\ \hline 20–84 & & LWA1 & Eftekhari et al. (2016) \\ 20–84 & & LWA1 & Ellingson et al. (2013) \\ 23 & & UTR-2 & Popov et al. (2006) \\ 111 & & LPA & Popov et al. (2006) \\ 600 & & RT-64 & Popov et al. (2006) \\ 110–180 & \(-1.73\pm 0.45\) & WSRT & Karuppusamy et al. (2012) \\ 185–200 & \(-3.35\pm 0.35\) & MWA & Oronsaye et al. (2015) (Figure 2) \\ 111–189 & \(-3.04\pm 0.03\) & LOFAR & van Leeuwen et al. (2020) (Figure 3) \\ 325 & \(-2.61^{+0.13}_{-0.15}\) & IPRT & Mikami et al. (2016) (Figure 4 and Table 3) \\ 112–430 & \(-2.9\pm 0.5\) & multiple & Mickaliger et al. (2012) (Table 4) \\ \hline 293–361 & \(-2.63\pm 0.05\) & Arecibo & this work \\ 300–400 & \(-3.6\pm 0.5\) & GBT & this work \\ \hline \end{tabular} Note. – The table lists differential (not cumulative) power-law indices. Ellingson et al. (2013) do not provide an estimated power-law index due to concerns about calibration and correcting for the flux density of the Crab Nebula. Eftekhari et al. (2016) do not provide an estimated power-law index due to concerns about the small number of giant pulses detected. Popov et al. (2006) do not report a power-law index. Telescope abbreviations are: LWA1 = Long Wavelength Array Station 1, UTR-2 = Ukrainian T-shaped Radio Telescope (Second Modification), LPA = Large Phased Array (Pushchino), MWA = Murchison Widefield Array, WSRT = Westerbork Synthesis Radio Telescope, RT-64 = Kalyazin 64-m Radio Telescope, LOFAR = Low Frequency Array, IPRT = Iitate Planetary Radio Telescope. \end{table} Table 1: Measured Crab Giant Pulse Power-Law Indices at Low Radio Frequencies

In conclusion, we have measured power-law index values for the Crab giant pulse amplitude distribution using two separate low-frequency observations taken with Arecibo and the GBT. The best-fit values are broadly consistent with values previously measured at low frequencies with different telescopes. These measurements may be useful in future giant pulse studies of the Crab pulsar.

We thank the anonymous referee for helpful comments and corrections that have improved the manuscript. The Arecibo Observatory is a facility of the National Science Foundation operated under cooperative agreement by the University of Central Florida and in alliance with Universidad Ana G. Mendez and Yang Enterprises, Inc. The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The NANOGrav project receives support from National Science Foundation (NSF) Physics Frontiers Center award numbers 1430284 and 2020265. J.S.D. was supported by the National Science Foundation (AST-2009335). R.F.T. is supported by the Pittsburgh Foundation (UN2021-121482) and the Research Corporation for Scientific Advancement (28289).

Facilities: Arecibo, GBT. Software: HEIMDALL (Barsdell 2012; Barsdell et al. 2012).
2303.01027
Improving the X-ray energy resolution of a scientific CMOS detector by pixel-level gain correction
Scientific Complementary Metal Oxide Semiconductor (sCMOS) sensors are finding increasingly more applications in astronomical observations, thanks to their advantages over charge-coupled devices (CCDs) such as a higher readout frame rate, higher radiation tolerance, and higher working temperature. In this work, we investigate the performance at the individual pixel level of a large-format sCMOS sensor, GSENSE1516BSI, which has 4096 * 4096 pixels, each of 15 {\mu}m in size. To achieve this, three areas on the sCMOS sensor, each consisting of 99 * 99 pixels, are chosen for the experiment. The readout noise, conversion gain and energy resolutions of the individual pixels in these areas are measured from a large number (more than 25,000) of X-ray events accumulated for each of the pixels through long time exposures. The energy resolution of these pixels can reach 140 eV at 6.4 keV at room temperature and shows a significant positive correlation with the readout noise. The accurate gain can also be derived individually for each of the pixels from its X-ray spectrum obtained. Variations of the gain values are found at a level of 0.56% statistically among the 30 thousand pixels in the areas studied. With the gain of each pixel determined accurately, a precise gain correction is performed pixel by pixel in these areas, in contrast to the standardized ensemble gain used in the conventional method. In this way, we could almost completely eliminate the degradation of energy resolutions caused by gain variations among pixels. As a result, the energy resolution at room temperature can be significantly improved to 124.6 eV at 4.5 keV and 140.7 eV at 6.4 keV. This pixel-by-pixel gain correction method can be applied to all kinds of CMOS sensors, and is expected to find interesting applications in X-ray spectroscopic observations in the future.
Qinyu Wu, Zhixing Ling, Xinyang Wang, Chen Zhang, Weimin Yuan, Shuang-Nan Zhang
2023-03-02T07:34:52Z
http://arxiv.org/abs/2303.01027v1
# Improving the X-ray energy resolution of a scientific CMOS detector by pixel-level gain correction

###### Abstract

Scientific Complementary Metal Oxide Semiconductor (sCMOS) sensors are finding increasingly more applications in astronomical observations, thanks to their advantages over charge-coupled devices (CCDs) such as a higher readout frame rate, higher radiation tolerance, and higher working temperature. In this work, we investigate the performance at the individual pixel level of a large-format sCMOS sensor, GSENSE1516BSI, which has \(4096\times 4096\) pixels, each of 15 \(\mu\)m in size. To achieve this, three areas on the sCMOS sensor, each consisting of \(99\times 99\) pixels, are chosen for the experiment. The readout noise, conversion gain and energy resolutions of the individual pixels in these areas are measured from a large number (more than 25,000) of X-ray events accumulated for each of the pixels through long time exposures. The energy resolution of these pixels can reach 140 eV at 6.4 keV at room temperature and shows a significant positive correlation with the readout noise. The accurate gain can also be derived individually for each of the pixels from its measured X-ray spectrum. Variations of the gain values are found at a level of 0.56% statistically among the 30 thousand pixels in the areas studied. With the gain of each pixel determined accurately, a precise gain correction is performed pixel by pixel in these areas, in contrast to the standardized ensemble gain used in the conventional method. In this way, we could almost completely eliminate the degradation of energy resolutions caused by gain variations among pixels. As a result, the energy resolution at room temperature can be significantly improved to 124.6 eV at 4.5 keV and 140.7 eV at 6.4 keV. This pixel-by-pixel gain correction method can be applied to all kinds of CMOS sensors, and is expected to find interesting applications in X-ray spectroscopic observations in the future.

X-ray detectors (1815), Astronomical instrumentation (799), Astronomical detectors (84)

## 1 Introduction

Silicon image sensors, including charge-coupled devices (CCDs) and Complementary Metal Oxide Semiconductor (CMOS) sensors, are widely used in soft X-ray imaging and spectroscopy. In the past several decades, CCD detectors have been dominant in X-ray applications. Most modern X-ray astronomy missions, such as Chandra (Garmire et al., 2003), XMM-Newton (Struder et al., 2001), Suzaku (Koyama et al., 2007) and eROSITA (Meidinger et al., 2021), have chosen CCD sensors as their focal plane detectors. Recently, the performance of scientific CMOS (sCMOS) detectors has been improved considerably. Compared to CCDs, sCMOS sensors have several advantages, such as a high readout frame rate, high radiation tolerance, and relaxed requirements on working temperature. These make sCMOS a favorable choice in future X-ray astronomy missions such as Einstein Probe (EP) (Yuan et al., 2018, 2022) and THESEUS (Heymes et al., 2020). Energy resolution is an important parameter of X-ray spectrometers, and is commonly characterized by the Full Width at Half Maximum (FWHM) of a given peak. The focal plane detectors of some future X-ray missions, such as Lynx (Gaskin et al., 2019), are required to have good soft X-ray energy resolution. For silicon sensors, Fano fluctuation in the charge production process gives the theoretical limit (Fano limit) on the energy resolution (Fano, 1947), which is around 124 eV at 6.4 keV at room temperature.
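The Fano-limited FWHM quoted here follows from \(\text{FWHM}=2.355\,\omega\sqrt{FE/\omega}\), where \(\omega\) is the mean energy per electron-hole pair and \(F\) is the Fano factor. The sketch below evaluates it with indicative silicon values (\(\omega\approx 3.65\) eV, \(F\approx 0.118\); both are assumptions, as the exact values are temperature-dependent) and also adds a readout-noise term in quadrature.

```python
import numpy as np

def fwhm_ev(E_ev, fano=0.118, w=3.65, read_noise_e=0.0):
    """FWHM (eV) of a silicon detector line: Fano fluctuation plus readout
    noise added in quadrature, both expressed in electrons."""
    var_e = fano * E_ev / w + read_noise_e**2   # charge variance in electrons^2
    return 2.355 * w * np.sqrt(var_e)

print(f"Fano limit at 6.4 keV      : {fwhm_ev(6400.0):.0f} eV")   # ~124 eV
print(f"with 5 e- of readout noise : {fwhm_ev(6400.0, read_noise_e=5.0):.0f} eV")
```

The second line shows that even 5 e\({}^{-}\) of readout noise only raises the expected FWHM to roughly 131 eV, well below the resolutions typically measured with CMOS sensors, which motivates the pixel-level study below.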
The energy resolution of Silicon Drift Detectors (SDDs) can reach 122 eV at 5.9 keV at \(-60^{\circ}\)C1. And modern CCDs can reach 130 eV at 6.4 keV under deep refrigeration of below \(-70^{\circ}\)C (Meidinger et al., 2021). But for CMOS sensors, the typical energy resolution is worse than that of CCDs (Wang et al., 2019; Narukage et al., 2020; Wu et al., 2022; Hsiao et al., 2022). Many efforts have been made to improve the energy resolution of CMOS sensors. Hull et al. (2019) reported that 148 eV at 5.9 keV can be reached for a hybrid CMOS sensor, by applying pixel-by-pixel gain correction. At around \(-30^{\circ}\)C, Heymes et al. (2022) reached a resolution of 142 eV at 5.9 keV with a CIS221-X sensor, and a backside illuminated version of this sensor reached 153 eV at 5.9 keV at \(-40^{\circ}\)C (Stefanov et al., 2022).

Footnote 1: [https://www.amptek.com/products/x-ray-detectors/fastsdd-x-ray-detectors-for-xrf-eds/fastsdd-silicon-drift-detector](https://www.amptek.com/products/x-ray-detectors/fastsdd-x-ray-detectors-for-xrf-eds/fastsdd-silicon-drift-detector)

Our lab has been studying the X-ray performance of sCMOS sensors since 2015. We have shown that sCMOS sensors are applicable to X-ray astronomical observations (Wang et al., 2019; Ling et al., 2021). Cooperating with Gpixel Inc., we proposed and fabricated an X-ray sCMOS sensor with a large format, a large pixel size, and a thickened epitaxial layer. This sensor, namely GSENSE1516BSI, has a physical size of 6 cm \(\times\) 6 cm, with an array of \(4096\times 4096\) pixels and a pixel size of 15 \(\mu\)m. The epitaxial layer is 10 \(\mu\)m thick, and it is fully depleted. The frame rate is 20 fps in the current design. At \(-30^{\circ}\)C, the dark current is lower than 0.02 e\({}^{-}\)/pixel/s and the readout noise is lower than 5 e\({}^{-}\). The energy resolution reaches 180.1 eV at 5.90 keV for X-ray events with the single-pixel pattern (Wu et al., 2022). Although the readout noise of the CMOS sensor is low, usually lower than 5 e\({}^{-}\), the energy resolution is significantly worse than the theoretical FWHM with the readout noise taken into account. One possible explanation is that each pixel of a CMOS sensor has its own amplifier, and the conversion gain can differ from one pixel to another, resulting in degraded energy resolution. If this is the case, it is expected that careful gain correction at the individual pixel level may help improve the energy resolution. In this paper, we investigate this problem by studying the sensor properties at the pixel level, which requires long-exposure experiments. The experimental setup is described in section 2; the data reduction method is introduced in section 3; the results about the noise, gain variation and the corrected energy resolution are shown and discussed in section 4; and conclusions are summarized in section 5.

## 2 Experimental Setup

The experimental layout of our X-ray tests is shown in Fig. 1. A commercial X-ray tube is placed at the right side of the device. It has a titanium anode and is operated at 25 keV. The tube generates initial X-rays to irradiate and excite the target at the center. X-ray photons emitted from the target material are then received by the camera at the bottom. In this test, a GSENSE1516BSI sCMOS sensor is used. The camera, developed by our laboratory, contains the sensor, readout electronics and temperature control structures (Wang et al., 2022).

Figure 1: The experimental layout. Illuminated by initial X-rays produced by the X-ray tube at the right side, the Ti target at the center produces characteristic photons. These secondary X-rays are then received and recorded by the CMOS sensor in the camera at the bottom.
A unique feature of the camera is that it can process images in real time and record only the extracted events, which largely reduces the storage space when the sensor works at a high frame rate. In our study, a Ti target is used. Characteristic lines of not only Ti, but also Cr and Fe can be found in the spectrum, which come from the stainless steel of the surrounding structures and the walls of the device. The dark current of this chip is less than 8 e\({}^{-}\)/pixel/s or 0.4 e\({}^{-}\)/pixel/frame at room temperature, which is sufficiently low and has a negligible effect on the experiment. Therefore, no cooling is needed, and all experiments are carried out at room temperature. Our goal is to accumulate at least 10,000 X-ray events on each pixel. In order to reduce the amount of experimental data, we cover the CMOS sensor with a plate, which has three 2 mm diameter holes on it. Therefore, only pixels in these three holes are exposed to X-rays. These holes are randomly located across the chip, as shown in the exposure map Fig. 2. So we can evaluate the non-uniformity within one area and also among these three areas, which reflects the variations on small and large scales, respectively. Three \(99\times 99\)-pixel areas (red blocks on the map) in these holes, namely Area 1, 2 and 3, are chosen for the following data reduction.

Figure 2: An example exposure map. Only the pixels in three holes are exposed to X-rays. Three \(99\times 99\)-pixel areas (red blocks) in these holes, namely Area 1, 2 and 3, are used in the following study.

## 3 Data Reduction

The sensor is operated at a frame rate of 20 Hz and the exposure time accumulated to around 220 hours in total. For each of the frames taken in the exposures, X-ray events are searched over the X-ray image and recorded in real time by the camera. The procedure was described in detail in Wu et al. (2022). If a pixel that is the local maximum of its adjacent \(3\times 3\) pixels is above a preset threshold, \(T_{\rm event}=50\) DN, this region is taken as an event. A 1-pixel event is defined as one in which no pixel other than the center one has a value above the split threshold, \(T_{\rm split}=15\) DN, which is about 10 times the readout noise. Only 1-pixel events are selected; a sketch of this selection logic is given below. For each of the pixels within the 3 hole areas (\(3\times 99\times 99=29403\) pixels in total), around 60,000 photons are accumulated. Among them, more than 40% are 1-pixel events. This fraction can vary with the energy of incident photons, as shown in Figure 16 of Wu et al. (2022). For each of the pixels, we construct an energy spectrum with the DN of the center pixel of its 1-pixel events. This spectrum is called a G0center spectrum as in Ling et al. (2021) and Wu et al. (2022). Fig. 3 gives an example of the G0center spectrum of a single pixel. Several lines can be seen: Si K\({}_{\alpha}\) (1.74 keV), Si escape line of Ti K\({}_{\alpha}\) (2.77 keV), Ag L\({}_{\alpha}\) (2.98 keV), Si escape line of Ti K\({}_{\beta}\) (3.19 keV), Si escape line of Cr K\({}_{\alpha}\) (3.67 keV), Ti K\({}_{\alpha}\) (4.51 keV), Ti K\({}_{\beta}\) (4.93 keV), Cr K\({}_{\alpha}\) (5.41 keV), Cr K\({}_{\beta}\) (5.95 keV), Fe K\({}_{\alpha}\) (6.40 keV), and Fe K\({}_{\beta}\) (7.06 keV). The Ti lines are mainly from the Ti target. The Cr and Fe lines are from the stainless steel in the structures of the experimental device. And the Ag line comes from the cover plate, which contains a small amount of silver. To determine the location and FWHM, each of the peaks is fitted with a single Gaussian function.
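As an illustration of the event extraction and 1-pixel selection described above, the following is a minimal sketch in Python. It is not the camera's real-time implementation; the function name, the array layout (a bias-subtracted frame stored as a 2-D numpy array in DN) and the border handling are our assumptions, while the two thresholds come from the text.

```python
import numpy as np

T_EVENT = 50  # event threshold in DN (from the text)
T_SPLIT = 15  # split threshold in DN, about 10 times the readout noise

def find_1pixel_events(frame):
    """Return (row, col, DN) tuples for 1-pixel events in a frame."""
    events = []
    nrow, ncol = frame.shape
    for r in range(1, nrow - 1):
        for c in range(1, ncol - 1):
            patch = frame[r - 1:r + 2, c - 1:c + 2]
            center = frame[r, c]
            # event: the center pixel is the local maximum of its
            # 3x3 neighborhood and exceeds the event threshold
            if center < T_EVENT or center < patch.max():
                continue
            neighbors = patch.copy()
            neighbors[1, 1] = 0  # exclude the center pixel itself
            # 1-pixel event: no other pixel exceeds the split threshold
            if (neighbors < T_SPLIT).all():
                events.append((r, c, center))
    return events
```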
Then we fit the relationship between the position and the known energy of the lines using the 6 strongest peaks of Ti, Cr and Fe with a linear function \[y=a_{1}\times x+a_{0}, \tag{1}\] where \(x\) is the position of the peaks in DN and \(y\) is the known energy of the emission lines in eV. The conversion gain \(a_{1}\) in eV/DN and the intercept \(a_{0}\) in eV of each pixel are obtained. With the gain and intercept, the FWHM values in DN can be converted to energy resolutions in eV using Eq. 1.

## 4 Results and Discussion

### Readout Noise and Dark Current

At room temperature, a series of dark experiments have been performed under different integration times, ranging from the shortest \(\sim\!13.92\;\mu\)s to 100 s, to measure the noise and the dark current. Thirty dark frames are recorded for each of the integration times, and the noise of each pixel is calculated as the standard deviation of these 30 values. The median value among these pixels is chosen to represent the noise of the whole frame. The readout noise \(\sigma_{\rm readout}\) is 2.65 e\({}^{-}\) under the shortest integration time. The noise for an integration time of 50 ms, \(\sigma_{\rm dark50}\), increases to 2.79 e\({}^{-}\) with the dark current contribution. 50 ms is the same integration time as in the X-ray exposure experiments. The dark current is calculated from a linear fit of the dark charge as a function of integration time. The overall dark current is 7.9 e\({}^{-}\)/pixel/s or 0.4 e\({}^{-}\)/pixel/frame at room temperature. This dark current value is just enough to explain the difference in noise.

### Pixel-by-pixel Properties

We obtain the conversion gain and energy resolutions of each pixel with the method described in section 3. The readout noise and the \(\sigma_{\rm dark50}\) of each pixel are obtained in the dark experiments described in section 4.1. Therefore, we have built a complete database of the properties of each pixel in the three areas, 29403 pixels in total. For the gain of each pixel, the measurement error is around 0.15%, or 0.01 eV/DN. And for the intercept measurement, the error is around 6 eV. They are sufficiently small so that we can capture the tiny variations among the gains and intercepts. The left panel of Fig. 4 shows the distribution of the gains in the three areas separately and in total. In Area 1 and Area 3, the mean values of the gains are similar, around 6.59 eV/DN. However, Area 2 has a slightly higher mean value of 6.62 eV/DN. The gain variations in the three areas are not exactly the same either: in Area 1 and Area 2 they are around 0.43%, while in Area 3 it is around 0.68%. This shows that there are gain differences not only among pixels in a small region, but also among different regions on the chip, which reflects the variations on small and large scales, respectively. The overall mean value comes to 6.60 eV/DN and the gain variation for pixels in all the three areas is 0.56%. The variation of the intercepts is given by the right panel of Fig. 4.
In the three areas separately and in total, the distributions of the intercepts are all similar: the means are around 34 eV and the standard deviations are around 11 eV. These non-zero intercepts are caused by the charge loss of the G0center events.

Figure 3: An example of the G0center spectrum of a single pixel; the emission lines identified in section 3 are marked.

Figure 4: The distribution of gains (left panel) and intercepts (right panel) among pixels in the three areas separately (red, green, blue) and in all areas altogether (black). The gain distributions in different areas are not the same. However, the intercept distributions are similar.

In theory, the energy resolution of a CMOS sensor is given by the following equation (Hull et al., 2019): \[FWHM=2.355\times\omega\sqrt{(\frac{\sigma_{\rm gain}\times E}{\omega})^{2}+\frac{F\times E}{\omega}+\sigma_{\rm total}^{2}}, \tag{2}\] where the pair creation energy \(\omega=3.65\) eV/e\({}^{-}\) at room temperature, the Fano factor \(F=0.118\) (Lowe & Sareen, 2007; Rodrigues et al., 2021), \(E\) is the incident photon energy, \(\sigma_{\rm gain}\) is the gain variation and \(\sigma_{\rm total}\) is the total noise. The dark noise at the integration time of 50 ms, \(\sigma_{\rm dark50}\), is included in the total noise. At the pixel level, there is no gain variation (\(\sigma_{\rm gain}=0\)), and the energy resolution of a single pixel is related to the noise level. Fig. 5 shows this correlation between the FWHM at Fe K\({}_{\alpha}\) and the noise at the integration time of 50 ms, \(\sigma_{\rm dark50}\). The blue curve gives the theoretical result that is contributed by the Fano fluctuation and \(\sigma_{\rm dark50}\). The gap between the curve and the data indicates that there are other types of noise, \(\sigma_{\rm others}\). Therefore, the energy resolution of a single pixel should be given by: \[FWHM=2.355\times\omega\sqrt{\frac{F\times E}{\omega}+\sigma_{\rm dark50}^{2}+\sigma_{\rm others}^{2}}. \tag{3}\] The data is then fitted with Eq. 3, which gives \(\sigma_{\rm others}=7.1\) e\({}^{-}\). It shows that not only the readout noise and the contribution from the dark current are important for the energy resolution of a single pixel, but other sources of noise must also be considered.

Figure 5: The relationship between the FWHM at the Fe K\({}_{\alpha}\) line and the noise at the integration time of 50 ms, \(\sigma_{\rm dark50}\), at the pixel level. The blue curve gives the theoretical result that is contributed by the Fano fluctuation and \(\sigma_{\rm dark50}\). And the red curve gives the fitting result using Eq. 3.
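To make Eqs. 2 and 3 concrete, the following minimal numerical sketch (the function name is ours) evaluates the theoretical FWHM; with the noise values quoted in this section, it reproduces the measured resolutions at Fe K\({}_{\alpha}\) to within about 1 eV.

```python
import numpy as np

OMEGA = 3.65  # pair creation energy in eV/e- at room temperature
FANO = 0.118  # Fano factor

def fwhm_theory(E, sigma_total, sigma_gain=0.0):
    """FWHM in eV at photon energy E (eV), following Eq. 2; set
    sigma_gain = 0 for a single pixel (Eq. 3)."""
    return 2.355 * OMEGA * np.sqrt((sigma_gain * E / OMEGA) ** 2
                                   + FANO * E / OMEGA
                                   + sigma_total ** 2)

# sigma_dark50 = 2.8 e- and sigma_others = 7.1 e- (values from the fits)
sigma_total = np.hypot(2.8, 7.1)
print(fwhm_theory(6400.0, sigma_total))          # ~140 eV: single pixel
print(fwhm_theory(6400.0, sigma_total, 0.0056))  # ~164 eV: 0.56% gain variation
```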
### Gain Correction and Energy Resolution

In previous studies, we have always handled X-ray events from all pixels on a CCD or CMOS detector in the same way, using the same conversion gain across the whole chip. Based on the results of section 4.2, the gain inhomogeneity can apparently degrade the energy resolution. However, with a complete database of the pixel-by-pixel properties, we can perform this energy conversion for each of the pixels individually, which is expected to result in a more precise energy spectrum. We have done the gain correction for the G0center spectrum using the events from all three areas. According to the position of the incident photon, the conversion gain of the pixel at this position is used to convert the digital numbers into energies.
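A minimal sketch of this pixel-by-pixel conversion, combining the per-pixel linear fit of Eq. 1 with the event-wise correction; the function names, array shapes and the use of a simple least-squares fit are our assumptions:

```python
import numpy as np

def fit_pixel_gain(peak_dn, peak_ev):
    """Fit Eq. 1 (energy = a1 * DN + a0) for one pixel, given the fitted
    positions in DN of the 6 strongest Ti, Cr and Fe peaks."""
    a1, a0 = np.polyfit(peak_dn, peak_ev, 1)
    return a1, a0

def correct_events(events, a1_map, a0_map):
    """Convert 1-pixel events (row, col, DN) to energies in eV using the
    gain and intercept of the pixel each photon actually hit."""
    return np.array([a1_map[r, c] * dn + a0_map[r, c] for r, c, dn in events])
```

In contrast, the conventional method applies a single ensemble gain to all events; the per-pixel lookup above is what removes the gain-variation term from Eq. 2.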
The spectra before and after the correction are given in Fig. 6.

Figure 6: The G0center spectra before (green curve) and after (red dashed curve) the gain correction. Obviously, the spectrum after the correction has higher peaks and better energy resolutions: the FWHM improves from 141.0 eV to 124.6 eV at 4.5 keV, and from 163.6 eV to 140.7 eV at 6.4 keV. These results are obtained at room temperature.

Obviously, the spectrum after the correction has higher peaks and better energy resolutions: the FWHM improves from 141.0 eV to 124.6 eV at 4.5 keV, and from 163.6 eV to 140.7 eV at 6.4 keV, which are close to those of a single pixel. Note that these energy resolutions are realized at room temperature, further highlighting the advantages of the CMOS sensors. The energy resolutions before and after the gain correction are given in Fig. 7. The result of the Mg K\({}_{\alpha}\) line at 1.25 keV is obtained from a Mg target exposure experiment. The energy resolutions before the correction are well fitted with Eq. 2 (green curve). The fitted \(\sigma_{\rm gain}=0.54\%\pm 0.04\%\) is consistent with the gain variation measured in this work (0.56%). The energy resolutions after the gain correction are also well fitted with \(\sigma_{\rm gain}\) set to 0 in Eq. 2 (red curve), meaning that the degradation due to the pixel inhomogeneity is basically eliminated by the gain correction. The results of the Fano fluctuation and that with \(\sigma_{\rm dark50}\) added are also shown. These results prove that the gain correction can significantly improve the energy resolution. The fit value of the total noise \(\sigma_{\rm total}\), around 7.7 e\({}^{-}\), is much higher than \(\sigma_{\rm dark50}\), which is only 2.8 e\({}^{-}\), indicating a \(\sigma_{\rm others}\) of around 7.2 e\({}^{-}\). This is consistent with the fit value of 7.1 e\({}^{-}\) in Fig. 5. \(\sigma_{\rm others}\) may come from a variety of sources. The image lag, which is caused by the residual charges left by the previous frame, can influence the exposure of the next frame and can be one possible noise source. Noise may also come from the charge loss caused by recombination in long-distance diffusion and absorption by the edges of pixels. We will study these noises further in the future.

## 5 Conclusions

CMOS detectors have great potential in X-ray astronomy due to their excellent performance. Nowadays, the readout noise of a typical scientific CMOS sensor can be less than 5 e\({}^{-}\) and some even reach 1 e\({}^{-}\). However, the typical energy resolution of sCMOS sensors is worse than the Fano fluctuation limit even when the readout noise is taken into account. The inhomogeneity between pixels may be one of the factors. To study this problem and improve the energy resolution, we investigated the properties of the sCMOS sensor at the level of individual pixels. Using a customized sCMOS sensor GSENSE1516BSI, we established a complete database of pixel-by-pixel properties of around 30 thousand pixels in the areas we selected on the chip. These properties include the noise level, the conversion gain and the energy resolutions of each pixel. At room temperature, the energy resolution of a single pixel can reach 140 eV at 6.4 keV. The positive correlation between the noise and the energy resolution supports the notion that the noise, including the readout noise and the contribution from the dark current, is an important factor for the energy resolution of a single pixel. However, comparison with theoretical results indicates that other noise sources must exist to explain the energy resolution. These noises will be studied further in the future. The gain variation, which exists not only within one area but also among areas, is measured to be 0.56% for all pixels, consistent with the fit result based on Eq. 2. In the future, this variation is expected to be reduced by advances in manufacturing techniques. Based on the properties of each pixel, we can do a thorough pixel-by-pixel gain correction over the whole chip to eliminate the degradation of energy resolution caused by the gain variation. We used three regions, each consisting of \(99\times 99\) pixels, to demonstrate this method and obtained energy resolutions of 124.6 eV at 4.5 keV and 140.7 eV at 6.4 keV, respectively. In principle, this gain correction can be applied to the whole chip, which can make the energy resolution of the chip close to the single-pixel energy resolution of around 140 eV at 6.4 keV. In future work, we will apply this method to events of all grades, instead of 1-pixel events only. The energy resolutions above are realized at room temperature, which demonstrates the advantages of CMOS sensors. For future spectroscopic applications in X-rays, CMOS sensors are an excellent choice, and a thorough pixel-by-pixel gain correction is recommended to achieve better performance.

The authors thank the referee for his/her helpful comments. This work is supported by the National Natural Science Foundation of China (grant No. 12173055) and the Chinese Academy of Sciences (grant Nos. XDA15310100, XDA15310300, XDA15052100).
2310.19750
Chain-of-Thought Embeddings for Stance Detection on Social Media
Stance detection on social media is challenging for Large Language Models (LLMs), as emerging slang and colloquial language in online conversations often contain deeply implicit stance labels. Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks -- alleviating some of these issues. However, COT prompting still struggles with implicit stance identification. This challenge arises because many samples are initially challenging to comprehend before a model becomes familiar with the slang and evolving knowledge related to different topics, all of which need to be acquired through the training data. In this study, we address this problem by introducing COT Embeddings, which improve COT performance on stance detection tasks by embedding COT reasonings and integrating them into a traditional RoBERTa-based stance detection pipeline. Our analysis demonstrates that 1) Text encoders can leverage COT reasonings with minor errors or hallucinations that would otherwise distort the COT output label. 2) Text encoders can overlook misleading COT reasoning when a sample's prediction heavily depends on domain-specific patterns. Our model achieves SOTA performance on multiple stance detection datasets collected from social media.
Joseph Gatto, Omar Sharif, Sarah Masud Preum
2023-10-30T17:18:10Z
http://arxiv.org/abs/2310.19750v1
# Chain-of-Thought Embeddings for Stance Detection on Social Media

###### Abstract

Stance detection on social media is challenging for Large Language Models (LLMs), as emerging slang and colloquial language in online conversations often contain deeply implicit stance labels. Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks -- alleviating some of these issues. However, COT prompting still struggles with implicit stance identification. This challenge arises because many samples are initially challenging to comprehend before a model becomes familiar with the slang and evolving knowledge related to different topics, all of which need to be acquired through the training data. In this study, we address this problem by introducing **COT Embeddings**, which improve COT performance on stance detection tasks by _embedding_ COT reasonings and integrating them into a traditional RoBERTa-based stance detection pipeline. Our analysis demonstrates that 1) Text encoders can leverage COT reasonings with minor errors or hallucinations that would otherwise distort the COT output label. 2) Text encoders can overlook misleading COT reasoning when a sample's prediction heavily depends on domain-specific patterns. Our model achieves SOTA performance on multiple stance detection datasets collected from social media.

## 1 Introduction

Detecting the stance of a text with respect to a certain topic is vital to many NLP tasks Hardalov et al. (2022). Detecting stances on social media platforms like Twitter poses unique challenges, as emerging knowledge and colloquial language patterns can make it difficult to detect stances without additional context. For example, consider the top tweet shown in Figure 1. This tweet contains no direct mention of Donald Trump, and is thus difficult to classify without further context -- such as how Trump supporters on Twitter widely supported voter fraud propaganda. Such emerging knowledge is difficult for LLMs with knowledge cut-offs to understand and may only be discernible by observing similarly labeled samples in the training set. One way to solve this problem is by employing models with extensive world knowledge. For example, recent works have shown that using ChatGPT for stance detection can provide significant performance increases Zhang et al. (2023, 2023). Unfortunately, LLMs (e.g., ChatGPT, Llama) still have many issues understanding complex stance relationships from Twitter data. In this study, we highlight two issues with the state-of-the-art **Chain-of-Thought (COT)** prompting approach to stance detection. 1) _Implicit Stance Confusion:_ As shown in Figure 1, LLMs continue to struggle with understanding implicit tweet stance, even when employing advanced prompting strategies like COT reasoning Wei et al. (2023). 2) _Stance Label Hallucination:_ LLMs are prone to hallucinations, which cause them to output sound reasonings, but for the wrong stance topic (see the Figure 1 example).

Figure 1: Common errors made by Chain-of-Thought reasoning models. _Implicit Stance Confusion_ refers to LLMs' inability to understand the implicit reference to the stance topic. In the example above, ChatGPT should have predicted that the tweet is [IN FAVOR] of Trump. In this context, _Stance Label Hallucination_ refers to the scenario where LLMs use the label space to argue the wrong point. In this example, the reasoning is correct, but ChatGPT used the [IN FAVOR] label towards the wrong topic.
Even when LLMs analyze the correct topic, they are also prone to using the provided label space incorrectly, producing accurate but ill-structured outputs. In this study, we mitigate these two problems by introducing **Chain-of-Thought (COT) Embeddings**. Our approach feeds the COT reasoning text to a transformer encoder to be used as an additional feature in a traditional stance detection pipeline. The intuition behind this approach is three-fold: _(i)_ Text encoders are robust to stance label hallucinations if the COT reasoning is correct. This can make incorrect COT predictions useful in a text classification pipeline. _(ii)_ Text encoders can choose to ignore certain signals as needed. Thus, when a sample is too implicit to be understood by LLMs, the model may choose to focus on how similar tweets were classified. _(iii)_ COT reasonings can inject world knowledge into a text encoder. That is, COT texts often contain reasonings and justifications grounded in world knowledge not available in the tweet. We find that, by using this approach, we can achieve state-of-the-art results on multiple stance detection datasets. A summary of our contributions is as follows:

1. To the best of our knowledge, this is the first investigation into the embedding of COT reasonings. Our approach achieves state-of-the-art results on two stance detection datasets: Tweet-Stance Mohammad et al. (2016); Barbieri et al. (2020) and Presidential-Stance Kawintiranon and Singh (2021).
2. Our error analysis on COT reasoning highlights two key flaws on stance detection tasks: _Implicit Stance Confusion_ and _Stance Label Hallucinations_. Our approach, Chain-of-Thought Embeddings, makes COT outputs more robust to these two issues.

## 2 Related Work

**Stance Detection:** This task is a well-explored research problem, where early studies employed various machine learning and deep learning techniques Hardalov et al. (2022). The emergence of large language models has further pushed the state-of-the-art performance on many stance detection datasets Li and Caragea (2021). Many stance detection problems require domain-specific solutions with models which explicitly inject world knowledge into stance detection systems He et al. (2022); Liu et al. (2021). This work is motivated by knowledge infusion but substantially differs from existing works. To the best of our knowledge, while some prior work has used prompting for stance detection Zhang et al. (2023), no work has attempted to use LLMs as a knowledge base for improved stance detection. While we also do not explicitly explore LLMs as a knowledge extraction tool, we do find that our method has the capacity to inject world knowledge into an inference pipeline due to the nature of COT text generation.

**LLMs for Stance Detection:** Recently, a few works have used ChatGPT for stance detection directly Zhang et al. (2023, 2023). In Zhang et al. (2023), the authors achieve superior performance on several stance detection datasets by prompting ChatGPT to do Chain-of-Thought inference. In this study, we use a similar prompting strategy to perform stance detection, but show the benefits of embedding these COT reasoning texts and using them as a feature in a stance detection pipeline.

## 3 Methods

We employ a 1-shot COT prompt for each tweet in each dataset, aiming to determine the stance of the tweet in relation to a specific topic1. We specifically ask the models to provide a COT reasoning and to include its predicted label in brackets (e.g. [NEUTRAL] for a neutral tweet), so the output may be parsed and converted to a numeric representation.
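For illustration, a minimal sketch of this parsing step in Python; the exact label strings and the numeric coding are our assumptions (as noted in Section 4, some prompts used other bracketed tokens such as [NO]):

```python
import re

LABELS = {"IN FAVOR": 2, "NEUTRAL": 1, "AGAINST": 0}  # assumed coding

def parse_cot_label(cot_text):
    """Extract the bracketed stance label from a COT reasoning string."""
    match = re.search(r"\[(IN FAVOR|NEUTRAL|AGAINST)\]", cot_text.upper())
    return LABELS[match.group(1)] if match else None

print(parse_cot_label("... so the tweet is [IN FAVOR] of the topic."))  # -> 2
```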
An example tweet and corresponding COT excerpt can be found in Figure 1.

\begin{table} \begin{tabular}{l|c c c c c|c c} \hline \hline & \multicolumn{6}{c}{**Tweet-Stance**} & \multicolumn{6}{c}{**Presidential-Stance**} \\ \hline & HC & FM & LA & AT & CC & BD & TR \\ \hline Train & 620 & 597 & 587 & 461 & 355 & 875 & 875 \\ Dev & 69 & 67 & 66 & 52 & 40 & - & - \\ Test & 295 & 285 & 280 & 220 & 169 & 375 & 375 \\ \hline \hline \multicolumn{6}{c}{_Class-wise distribution of topics_} \\ \hline Neutral & 256 & 170 & 167 & 145 & 203 & 487 & 410 \\ Against & 565 & 511 & 544 & 464 & 26 & 385 & 499 \\ Favor & 163 & 268 & 222 & 124 & 335 & 378 & 341 \\ \hline Total & 984 & 949 & 933 & 733 & 564 & 1250 & 1250 \\ \hline \hline \end{tabular} \end{table}

Table 1: Topic-wise (e.g., HC, FM, TR) distribution of train, development, test, and classes of the Tweet-Stance and Presidential-Stance datasets. The Presidential-Stance dataset does not have a development set.

After producing COT reasoning for a given text, we embed it with a transformer encoder and use it as part of a stance detection pipeline. We specifically use a RoBERTa model Liu et al. (2019) trained on Twitter data as our encoder, since it has been shown to perform better on Tweet-Stance when compared to RoBERTa-base2. We denote this model as Twitter-RoBERTa (TR) in this paper.

Footnote 2: Huggingface Model: cardiffnlp/twitter-roberta-base-sep2022

We consider three different Twitter-RoBERTa variants in our experiments. **TR-Tweet**: we fine-tune with only the tweet information. **TR-COT**: we fine-tune using only the COT reasoning as the input. **TR-Tweet+COT**: we fine-tune Twitter-RoBERTa with the tweet and COT reasoning treated as a pair-wise input to the model (i.e., the tweet and COT reasoning texts are concatenated and jointly encoded by the pre-trained language model). All fine-tuning follows the standard text classification pipeline introduced in Devlin et al. (2018). Please refer to Appendix A for model hyperparameters and training details for each stance detection task.

### Dataset

We assess our method on two well-known Twitter-based stance detection datasets: Tweet-Stance Mohammad et al. (2016); Barbieri et al. (2020) and Presidential-Stance Kawintiranon and Singh (2021). These datasets involve a 3-way classification task to determine whether tweets are in favor, against, or neutral towards a specific topic. The Tweet-Stance dataset comprises five topics: Hillary Clinton (HC), Feminism (FM), Abortion (LA), Atheism (AT), and Climate Change (CC). The Presidential-Stance dataset contains two subtasks focusing on the 2020 election cycle, with annotations for stance towards presidential candidates Joe Biden (BD) and Donald Trump (TR). The topic-wise and class-wise distributions and statistics for the training, development, and test sets of both datasets are presented in Table 1 and Table 2, respectively. The class-wise distribution indicates that both datasets are skewed towards the _against_ class.

### Evaluation

**Tweet-Stance:** We report the macro average of the _Favor_ and _Against_ F1 scores as defined in Barbieri et al. (2020). We report the baseline performance of 3 encoder-based stance detection models: BERT-Spc Devlin et al. (2018), BERT-GCN Lin et al. (2021) and PT-HCL Liang et al. (2022), as well as two ChatGPT prompting based methods: DQA and StSQA Zhang et al. (2023). All baseline scores are extracted from Zhang et al.
(2023), where we note that evaluation was conducted on only a subset of the label space.

**Presidential-Stance:** We report both the per-class F1 score and the macro average F1 score as reported in Kawintiranon and Singh (2021). Due to the lack of a development set in Presidential-Stance, we report the average results over three experimental trials with different random seeds. We report the results of three baseline models: BERT Devlin et al. (2018), SKEP Tian et al. (2020), and KE-MLM Kawintiranon and Singh (2021).

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Favor** & **Against** & **Neutral** \\ \hline \multicolumn{4}{c}{_Tweet-Stance_} \\ \hline Train & 678 & 1254 & 688 \\ Dev & 75 & 141 & 78 \\ Test & 304 & 715 & 230 \\ \hline \multicolumn{4}{c}{_Presidential-Stance-Biden_} \\ \hline Train & 266 & 279 & 330 \\ Test & 112 & 106 & 157 \\ \hline \multicolumn{4}{c}{_Presidential-Stance-Trump_} \\ \hline Train & 243 & 347 & 285 \\ Test & 98 & 152 & 125 \\ \hline \hline \end{tabular} \end{table}

Table 2: Class-wise (i.e., neutral, against, favor) train, development, and test set statistics of the Tweet-Stance and Presidential-Stance datasets. Note that we aggregate the topics in Tweet-Stance in our experiments.

## 4 Results

### Tweet-Stance

Results on Tweet-Stance are exhibited in Table 3. Results show that TR-Tweet+COT produces the best-performing model on Tweet-Stance, with an F1 score of 76.3. Notably, we can retain most of the performance by only embedding the COT reasoning, as TR-COT has only a 0.6 difference in F1 from TR-Tweet+COT. Our best model provides a **6.1-pt improvement** over our ChatGPT COT reasoning model, and simply embedding COT provides a 5.5 boost in F1 vs extracting results from COT directly. After investigating the subset of samples where TR-Tweet+COT is correct but disagrees with the prediction from ChatGPT COT, we find that 74% (131/175) of the samples are on tweets incorrectly labeled as neutral by ChatGPT COT. This confirms our intuition that passing COT information to text encoders may help solve the _Implicit Stance Confusion_ problem. Of the remaining 44 samples TR-Tweet+COT was able to predict correctly, we manually inspected the 20/44 where ChatGPT predicts "Against" when the true label was "In Favor". We find that 9/9 samples from the HC, FM, LA, AT topics are examples of _stance label hallucination_. For example, consider the COT reasoning: "...it is clear that [NO] this text is against Jeb Bush and in favor of Hillary". This text was marked "[NO] = Against Hillary" by our COT parser, but was able to be handled by our encoder model as the reasoning was accurate. The remaining 11 samples in this analysis are from the climate change topic, where most COT errors largely pertain to questions of what it means to be "in favor" or "against" climate change, which we view as more of a natural misunderstanding than instances of stance label hallucination. Future works may explore better prompts to elicit better predictions on climate change tweets. In Table 5, we evaluate the performance of COT produced by different LLMs. We find that while ChatGPT produces the highest performing COT, we achieve a meaningful performance increase when employing the smaller open-source LLM Llama-2-7b3 (Touvron et al., 2023). Unfortunately, lower-performing LLMs such as Falcon-7b4 (Almazrouei et al., 2023) do not provide useful COT, highlighting the importance of LLM performance on this task.
Footnote 3: [https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & HC & FM & LA & AT & CC & F1\({}_{avg}\) \\ \hline \multicolumn{8}{c}{_Baselines_} \\ \hline BERT-Spc\({}^{\dagger}\) & 49.6 & 41.9 & 44.8 & - & - & - \\ BERT-GCN\({}^{\dagger}\) & 50.0 & 44.3 & 44.2 & - & - & - \\ PT-HCL\({}^{\dagger}\) & 54.5 & 54.6 & 50.9 & - & - & - \\ DQA\({}^{\dagger}\) & 78.0 & 69.0 & 59.3 & - & - & - \\ StSQA\({}^{\dagger}\) & 78.9 & 68.7 & 61.5 & - & - & - \\ \hline \multicolumn{8}{c}{_ChatGPT Only_} \\ \hline 0-Shot & 71.5 & 61.6 & 49.1 & 21.6 & 37.1 & 51.6 \\ COT & 75.3 & 71.3 & 62.6 & 58.3 & 67.3 & 70.2 \\ \hline \multicolumn{8}{c}{_COT-Embeddings + Twitter-RoBERTa (TR)_} \\ \hline TR-Tweet & 59.0 & 56.6 & **64.0** & 67.0 & 52.6 & 69.0 \\ TR-COT & **81.3** & **72.6** & 61.4 & 70.7 & **69.3** & 75.7 \\ TR-Tweet+COT & 78.7 & 70.6 & 63.8 & **72.9** & 54.1 & **76.3** \\ \hline \hline \end{tabular} \end{table}

Table 3: Results on the Tweet-Stance dataset. The F1\({}_{avg}\) column represents the F1 score on the full test set. Per-topic F1 scores are additionally reported, computed by subsetting Tweet-Stance by topic and re-computing the F1 score. Results marked with \(\dagger\) are taken from prior work.

### Presidential-Stance

Table 4 presents the results of the Presidential-Stance dataset. Results indicate that our approach outperforms all baseline models. When we analyze the Biden data, TR-Tweet+COT **outperforms previous works by 1.4 F1-pts**. A very interesting result is the extreme difference in performance between ChatGPT-COT and TR-COT, where the latter provides a 20.7-pt boost in F1 score. This is driven by a large number of _Implicit Stance Confusion_ examples, where it is challenging to understand the label without seeing other training samples. Specifically, our model is correcting Neutral class predictions 56% of the time, as ChatGPT can assume mentions of democratic figures or ideals are taking a stance on Joe Biden, which is not always the case, causing under-prediction on Neutral samples. Our error analysis also found stance label hallucinations, as ChatGPT was found to go off-topic when the focus of the tweet is on another political figure: "wow Bernie sander is the only one who supports democracy #demdebate" provoked a ChatGPT response of "... this tweet is [IN FAVOR] of Bernie Sanders.", which is of course not the question being asked. Similarly, on the Trump data, we find that our best-performing model **outperforms the closest baseline by 2.4 F1-pts**. Interestingly, we note that our best model _does not use the tweet information at all_, as TR-COT obtains the highest average F1 score (81.5). This outcome suggests that the COT reasoning is often logically sound, but our TR-COT model makes the predictions more robust to errors in the ChatGPT COT output structure. In Table 5, we again evaluate the performance of COT produced by different LLMs on Presidential-Stance. We find that on both the Biden and Trump datasets, ChatGPT provides the highest performing COT. On both datasets, we also find that Llama-2 performs much better than Falcon, again highlighting the importance of LLM quality in our pipeline. Notably, Llama-2 only provides helpful COT for the Biden dataset, not Trump. This result, however, is expected, as ChatGPT, a higher-performing language model than Llama-2-7b, only provides a minor improvement over the baseline TR-Tweet.
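For concreteness, a minimal sketch of the TR-Tweet+COT pair-wise encoding described in Section 3, using the Huggingface checkpoint named in footnote 2; the example texts are hypothetical and the training loop is omitted:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "cardiffnlp/twitter-roberta-base-sep2022"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

tweets = ["they can't count the votes and we all know why"]  # hypothetical
cots = ["The tweet echoes voter-fraud claims, which implies support for ..."]

# tweet and COT reasoning are encoded as a sentence pair and jointly processed
enc = tokenizer(tweets, cots, truncation=True, padding=True, return_tensors="pt")
logits = model(**enc).logits  # 3-way stance logits; fine-tune with cross-entropy
```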
## 5 Conclusion

In this study, we have shown that embedding Chain-of-Thought reasoning extracted from LLMs (e.g., ChatGPT, Llama) can boost the performance of stance detection models. Specifically, we highlight how we can outperform vanilla COT by augmenting text encoders with COT embeddings. Our analysis highlights how text encoders are robust to LLM hallucinations and aid in the prediction of deeply implicit stance labels. We encourage future works to consider embedding COT reasoning for stance detection and similar tasks using social media data.

## 6 Limitations

A limitation of this work is that stance detection using COT reasoning is very sensitive to the prompt provided to ChatGPT [22]. In this study, we do not thoroughly investigate which COT prompt produces the best results, but rather try a few standard approaches inspired by related works. Future works aiming to optimize COT prompt structure for stance detection may find ways to reduce the effects of hallucinations. In general, our work reduces the need for prompt optimization by mitigating issues pertaining to common COT errors. Another limitation of this work is that one of its core takeaways, that COT Embeddings reduce the effects of implicit stance confusion, may only be applicable to popular social media platforms where colloquial language is constantly changing. In other domains, where all necessary information for inference is present in a single sample (e.g., in certain NLI tasks), COT Embeddings may not be as helpful. Finally, we note that the addition of COT embeddings may impact the computational efficiency of the model. Specific measures of computational efficiency are currently outside the scope of this paper. However, we highlight that if one is in a setting where the COT reasoning can be pre-computed, the impact of COT on computational efficiency is limited. If instead COT reasonings must be computed at inference time, there may be a noticeable inference-speed degradation depending on the efficiency of the LLM used for COT reasoning.
2306.04781
Learning to Navigate in Turbulent Flows with Aerial Robot Swarms: A Cooperative Deep Reinforcement Learning Approach
Aerial operation in turbulent environments is a challenging problem due to the chaotic behavior of the flow. This problem is made even more complex when a team of aerial robots is trying to achieve coordinated motion in turbulent wind conditions. In this paper, we present a novel multi-robot controller to navigate in turbulent flows, decoupling the trajectory-tracking control from the turbulence compensation via a nested control architecture. Unlike previous works, our method does not learn to compensate for the air-flow at a specific time and space. Instead, our method learns to compensate for the flow based on its effect on the team. This is made possible via a deep reinforcement learning approach, implemented via a Graph Convolutional Neural Network (GCNN)-based architecture, which enables robots to achieve better wind compensation by processing the spatial-temporal correlation of wind flows across the team. Our approach scales well to large robot teams, as each robot only uses information from its nearest neighbors, and generalizes well to robot teams larger than seen in training. Simulated experiments demonstrate how information sharing improves turbulence compensation in a team of aerial robots and demonstrate the flexibility of our method over different team configurations.
Diego Patiño, Siddharth Mayya, Juan Calderon, Kostas Daniilidis, David Saldaña
2023-06-07T21:02:20Z
http://arxiv.org/abs/2306.04781v1
# Learning to Navigate in Turbulent Flows with Aerial Robot Swarms: A Cooperative Deep Reinforcement Learning Approach

###### Abstract

Aerial operation in turbulent environments is a challenging problem due to the chaotic behavior of the flow. This problem is made even more complex when a team of aerial robots is trying to achieve coordinated motion in turbulent wind conditions. In this paper, we present a novel multi-robot controller to navigate in turbulent flows, decoupling the trajectory-tracking control from the turbulence compensation via a nested control architecture. Unlike previous works, our method does not learn to compensate for the air-flow at a specific time and space. Instead, our method learns to compensate for the flow based on its effect on the team. This is made possible via a deep reinforcement learning approach, implemented via a Graph Convolutional Neural Network (GCNN)-based architecture, which enables robots to achieve better wind compensation by processing the spatial-temporal correlation of wind flows across the team. Our approach scales well to large robot teams, as each robot only uses information from its nearest neighbors, and generalizes well to robot teams larger than seen in training. Simulated experiments demonstrate how information sharing improves turbulence compensation in a team of aerial robots and demonstrate the flexibility of our method over different team configurations.

Swarm Robotics, Reinforcement Learning, Wind Turbulence, Machine Learning for Robot Control, Graph Neural Networks.

## I Introduction

Aerial vehicles naturally have to operate in environments with windy conditions. The wind field directly affects the vehicle's motion, potentially driving it off its desired trajectory or even causing it to crash. Navigating in windy conditions is even more difficult when the air flow is turbulent, presenting a chaotic behavior with hard-to-predict changes in pressure and flow velocity. This challenge is exacerbated in aerial multi-robot scenarios where a team of robots has to perform coordinated tasks, which might require staying within communication range without colliding with one another. However, operating multi-robot systems in turbulent environments is highly relevant to reducing delivery and transportation delays, as well as supporting search and rescue operations during natural disasters from storms, tornadoes, and hurricanes.

Fig. 1: A team of 36 robots navigating in turbulent wind. The robots are trying to maintain a square formation using only a trajectory-tracking controller. Blue arrows show the wind vector field. The red dot shows the target location of the bottom-left robot in the formation. The x-axis and y-axis units are in meters.

The existing robotics literature has studied the problem of navigation in flows, relying on assumptions to make the problem tractable. While some approaches assume a known (static or dynamic) wind field, e.g., [1, 2, 3], other methods learn an association between a _location_ in the environment and the effect of the flow [4, 5]. These are relevant limitations because they do not allow the robots to reuse their learned information in turbulent flows, where such associations are constantly evolving, or when faced with a new or unknown environment. In Fig. 1, we show an aerial multi-robot system operating in a turbulent wind flow. This figure illustrates a key observation: _sensory information sharing_ can provide valuable information to improve the robots' turbulence compensation in the absence
of predictive wind-flow maps. For example, the approach of a new wind front could be detected by a robot, which can then relay pertinent information to other robots to better compensate the wind. This essentially occurs because the rapid fluctuations in wind velocity and direction inherent to turbulent winds are spatio-temporally correlated across the region. The primary contribution of this paper is a novel method for trajectory tracking in turbulent flows using multiple aerial vehicles equipped with sensors to measure wind pressure and relative distance to other robots. Specifically, our method leverages structured information sharing over a graph where robots represent nodes and communication between robots represents edges. To ensure generality over qualitatively different turbulent flows, we develop a deep reinforcement learning approach, implemented via a Graph Convolutional Neural Network (GCNN). Our approach learns to fuse and transform sensory information received from neighbors [6] in order to compensate for wind forces. Crucially, our method does not need to learn to map between a specific location and the wind flow. Instead, it leverages spatio-temporal correlations (as described by the Navier-Stokes equations [7]) in wind flow between team members. Our method ensures that the learned information will not be associated with a specific training environment or trajectory. Furthermore, this ensures a decoupling between the nominal trajectory tracking controller and the controller for turbulence compensation. Our approach is scalable due to the use of the GCNN because each robot only uses the information from its on-board sensors and the information of its neighbors in the communication graph. Our experiments demonstrate this scalability as well as the efficacy of the proposed approach. These experiments also offer insights into how the learned models leverage shared information among the robots for effective turbulence compensation. **Related Work:** The original robotic navigation problem in windy environments was proposed by Zermelo in 1931 [8]. When modeling the flow as a vector field, some works assume that the flow is known and quasi-static, i.e., does not change in time and space. These works focus on developing planning methods for static vector fields [1, 2], and spatio-temporal dynamic fields [3]. However, those methods rely on knowing the vector field at the planning stage, which is unpredictable for turbulent flows. For unknown static flows, the works in [9, 10] design robot navigation strategies that drive the robot to sweep the environment and create a map of the flow. In [11], the authors designed an adaptive controller for a quadrotor that models the flow as two parts: 1) a time-varying vector field that can be estimated and 2) an unknown speed-bounded flow that is assumed as noise. Flow prediction is also studied and implemented in realistic settings [12, 13]. Similar to the aforementioned works, however, they involve a large number of samples of the environment. For unknown dynamics of the flow, learning approaches have shown promising results. A safe learning approach for a quadrotor is presented in [14]. Assuming the flow is static, the robot starts in a safe region that can be expanded as the learning process evolves. The work in [4] presents an adaptive flight control that learns how to track a given trajectory on a static flow. A reinforcement learning approach to navigate a static wind field is presented in [5]. 
As discussed in the introduction, our method does not need to create associations between locations in the environment and the wind flow. Towards this end, we leverage Graph Convolutional Neural Networks (GCNNs) [6, 15]. They are effective at modeling associations within a graph and have been applied in a wide range of fields, including multi-robot coordination and decision making, e.g., [16, 17].

## II Problem Statement

**Robot Team:** Consider a team of \(n\) aerial robots, denoted by the set \(\mathcal{V}=\{1,...,n\}\). Assuming that all robots are at the same height, we analyze their location and motion on the plane. The position of each robot \(i\in\mathcal{V}\) is denoted by \(\boldsymbol{r}_{i}\in\mathbb{R}^{2}\). We define the state vector by the position and the velocity of the robot, i.e., \(\boldsymbol{x}_{i}=[\boldsymbol{r}_{i}^{\top},\dot{\boldsymbol{r}}_{i}^{\top}]^{\top}\). We assume all robots are homogeneous and have the same mass \(m\). Each robot \(i\) can use its local sensors to estimate its state as well as select variables of the environment. Each robot can generate a force vector \(\boldsymbol{f}_{i}\in\mathbb{R}^{2}\) as control input, i.e., \[\boldsymbol{f}_{i}=\boldsymbol{u}_{i}. \tag{1}\] For this formulation, our aerial vehicles can be a fully actuated hexarotor [18] or an under-actuated quadrotor that tilts to generate a force in any direction [19]. Each robot \(i\) can exchange messages with its \(k\) nearest robots, denoted by \(\mathcal{N}_{i}\). At every time step, each robot communicates its state and information from on-board sensors.

**Wind field:** Our robot team operates in a windy environment \(\mathcal{W}\subset\mathbb{R}^{2}\). We represent the wind's velocity at a time \(t\) and a location \(\boldsymbol{r}_{i}\in\mathcal{W}\) as a vector-valued function \(\boldsymbol{w}:\mathbb{R}_{\geq 0}\times\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\). The vector field follows the dynamics of a fluid, described by the incompressible Navier-Stokes equations [7] \[\nabla\cdot\boldsymbol{w}=0,\qquad\dot{\boldsymbol{w}}+\boldsymbol{w}\cdot\nabla\boldsymbol{w}=-\nabla p+\frac{1}{Re}\nabla^{2}\boldsymbol{w}, \tag{2}\] where \(Re\) is the flow's Reynolds number, and \(p\) is the scalar pressure field. The Reynolds number measures the ratio between inertial and viscous forces. It characterizes flow patterns in a fluid, e.g., at low \(Re\), flows tend to be laminar, while at high \(Re\) flows tend to be turbulent. In this work, we focus on turbulent environments with high Reynolds numbers [20], \(Re\geq 4\times 10^{3}\), in the flow dynamics (2). Note that this type of turbulent environment has not been explored in the mobile robotics literature. As a robot moves through the air, the wind exerts a drag force on the robot in the fluid's direction [21]. We compute the drag force as \[\mathbf{f}^{drag}=\frac{1}{2}\rho\|\boldsymbol{w}\|^{2}C_{d}\:A\:\hat{\boldsymbol{w}}, \tag{3}\] where \(\rho\) is the air density, the operator \(\|\cdot\|\) is the 2-norm, \(C_{d}\) is the robot's drag coefficient, \(A\) is the cross-sectional area, and \(\hat{\boldsymbol{w}}\) is a unit vector in the direction of \(\boldsymbol{w}\). In this context, the reference area is the orthogonally projected frontal area, i.e., the object's visible area as seen from a point on its line of travel. We assume that the drag coefficient and the air density are constant.
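A minimal sketch of Eq. (3) in Python; note that, in the method, the robots never know \(\rho\), \(C_{d}\) or \(A\), so the numeric defaults below are placeholders useful only for simulation:

```python
import numpy as np

def drag_force(w, rho=1.225, Cd=1.0, A=0.05):
    """Drag force of Eq. (3): 0.5 * rho * ||w||^2 * Cd * A * w_hat,
    where w is the 2-D wind velocity at the robot's location."""
    speed = np.linalg.norm(w)
    if speed == 0.0:
        return np.zeros(2)
    return 0.5 * rho * speed**2 * Cd * A * (w / speed)
```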
**Sensors:** The robots in our team do not know the wind field nor any of the coefficients in (3). However, they can use their equipped sensors and noisy measurements to gather information about their surroundings. Each robot is equipped with a pressure sensor, a location sensor, and an inertial measurement unit (IMU). The IMU estimates the robot's linear velocity. Each robot can measure the relative distance to its \(k\) nearest neighbors using any relative location system, e.g., a camera, LIDAR, sonar, or time-of-flight (ToF) sensor.

**Robot dynamics:** The robot's actuation and the turbulent wind generate linear forces that determine the robot's motion. We model the dynamics of the \(i\)th robot using Newton's equation, \[m\ddot{\mathbf{r}}_{i}=\mathbf{u}_{i}+\mathbf{f}_{i}^{drag}. \tag{4}\]

**Trajectory tracking:** The goal for robot \(i\) is to follow a given trajectory \(\mathbf{x}_{i}^{d}(t)=[\mathbf{r}_{i}^{d\top}(t),\dot{\mathbf{r}}_{i}^{d\top}(t)]^{\top}\), specified by a desired location \(\mathbf{r}_{i}^{d}\) and a desired velocity \(\dot{\mathbf{r}}_{i}^{d}\) in a time interval \([0,T_{f}]\) [22]. Assuming an environment without wind, i.e., \(\mathbf{f}_{i}^{drag}=\mathbf{0}\) in (4), we can use a classical trajectory-tracking approach that provides exponential stability [23], based on a feed-forward controller, \[\mathbf{u}_{i}^{tt}=\mathbf{K}_{p}(\mathbf{r}_{i}^{d}-\mathbf{r}_{i})+\mathbf{K}_{d}(\dot{\mathbf{r}}_{i}^{d}-\dot{\mathbf{r}}_{i})+\ddot{\mathbf{r}}_{i}^{d}, \tag{5}\] where \(\mathbf{K}_{p}\) and \(\mathbf{K}_{d}\) are diagonal gain matrices. The main challenge here is that \(\mathbf{f}_{i}^{drag}\) is not negligible and can drive the robot far away from the given trajectory, thereby making the dynamical system in (4) unstable.
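A minimal sketch of the feed-forward controller in (5); the gain values are placeholders, and all quantities are 2-D numpy vectors:

```python
import numpy as np

Kp = np.diag([4.0, 4.0])  # placeholder diagonal proportional gains
Kd = np.diag([2.5, 2.5])  # placeholder diagonal derivative gains

def u_tt(r, r_dot, r_d, r_d_dot, r_d_ddot):
    """Trajectory-tracking control of Eq. (5): PD feedback on position and
    velocity errors plus the desired acceleration as a feed-forward term."""
    return Kp @ (r_d - r) + Kd @ (r_d_dot - r_dot) + r_d_ddot
```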
**Objective:** Our objective is to allow the robot team to track a trajectory while operating in a dynamic, turbulent wind field. We, therefore, need to solve the following problem:

**Problem 1**: _Given a set of \(n\) robots and a trajectory that can be tracked with a control policy \(\mathbf{u}_{i}^{tt}\), which does not consider turbulence, find a control input \(\mathbf{u}_{i}\) such that the robots can perform the given task in a turbulent environment._

Our key insight is that although the robots do not know the wind field, each can share its state and sensor measurements with neighboring robots. Sharing information allows each robot to increase its knowledge about the working environment, leading to an action policy that effectively compensates for the wind's drag force. Note that our approach is independent of the trajectory-tracking controller, because we aim to learn the wind patterns separately from it.

## III Deep Reinforcement Learning Method

**Control Strategy:** The key to our control strategy is decoupling the trajectory-tracking controller from the wind compensation. Trajectory-tracking controllers already show exponential convergence [19, 23]. However, convergence is not guaranteed when an external force from the wind is added to the dynamics, as modeled in (4). To overcome this limitation, we leverage Reinforcement Learning (RL) to design a second controller that compensates for wind disturbances. This new controller forms an inner control loop, as seen in Fig. 2, and assists the trajectory-tracking controller by helping it converge as if operating in a disturbance-free setting.

Fig. 2: Control diagram of our proposed method.

The force generated by a robot is the combination of an RL-based wind compensation force \(\mathbf{f}_{i}^{rl}\) and a trajectory-tracking force \(\mathbf{f}_{i}^{tt}\). So the total force generated by the robot is \(\mathbf{u}_{i}=\mathbf{f}_{i}^{rl}+\mathbf{f}_{i}^{tt}\). Substituting the total force in (4), we obtain \[m\ddot{\mathbf{r}}_{i}=\mathbf{f}_{i}^{rl}+\mathbf{f}_{i}^{tt}+\mathbf{f}_{i}^{drag}. \tag{6}\] We set the trajectory-tracking force to be the control action from (5), such that \(\mathbf{f}^{tt}=\mathbf{u}_{i}^{tt}\). The purpose of \(\mathbf{f}_{i}^{rl}\) is to compensate for the effect of the wind flow, thereby allowing the robots to track their desired trajectory. To this end, let \(\mathcal{A}\) be the action space and \(\mathcal{S}\) the state space in the RL context. We use a Deep-RL policy \(\mathbf{\pi}_{i}^{\mathbf{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i})\) to compute a wind compensation action for each robot. We model the policy with a deep neural network with parameters \(\mathbf{\theta}\), conditioned on a set of observed variables \(\mathbf{s}_{i}\in\mathcal{S}\). Then, we set \(\mathbf{f}^{rl}=\mathbf{a}_{i}\) where \(\mathbf{a}_{i}\sim\mathbf{\pi}_{i}^{\mathbf{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i})\). We set the action space \(\mathcal{A}\) to be \([-f_{rl}^{max},f_{rl}^{max}]^{2}\subset\mathbb{R}^{2}\), representing a two-dimensional bounded force. Unlike classical RL methods, the \(i\)th robot's policy depends on all the states in the robotic team rather than just \(\mathbf{s}_{i}\). This allows our method to use information across robots. Later in this section, we will offer a precise definition of \(\mathcal{S}\) and the information-sharing architecture of our RL method.

**Soft Actor-Critic:** We learn \(\mathbf{\pi}^{\mathbf{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i})\) using the Soft Actor-Critic (SAC) algorithm. SAC is an off-policy Deep Reinforcement Learning (DRL) algorithm based on entropy regularization to trade off exploitation and exploration. SAC has demonstrated stability, sample-efficient learning, and optimal policy convergence [24]. The SAC method optimizes \(\mathbf{\pi}_{i}^{\mathbf{\theta}}\) by jointly maximizing its expected reward and its entropy [24, 25]. Incorporating the entropy term into the RL framework casts an optimization problem of the form \[\pi_{i}^{*}=\arg\max_{\pi}\mathop{\mathbb{E}}_{\tau\sim\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\bigg{(}r(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\alpha H\left(\pi(\cdot|\mathbf{s}_{i})\right)\bigg{)}\right], \tag{7}\] where \(\mathbf{s}_{i}^{\prime}\) is the state in the next time step after applying the action \(\mathbf{a}_{i}\), \(\alpha\) is the trade-off coefficient, \(r\) is the reward signal, \(\gamma\) is the discount factor, and \(H\) is the policy's entropy. The value of \(\alpha\) controls the trade-off between the expected reward and the entropy of the policy, balancing exploration and exploitation. Appropriate values of \(\alpha\) accelerate the learning process towards the optimal policy and prevent convergence to local minima [24]. Following (7), SAC uses a Deep Q-Learning strategy that incorporates \(H\) into a slightly modified version of the Bellman equation for the value function \[V(\mathbf{s}_{i})=\mathop{\mathbb{E}}_{\mathbf{a}_{i}\sim\pi}\left[Q(\mathbf{s}_{i},\mathbf{a}_{i})\right]+\alpha H\left(\pi(\cdot|\mathbf{s}_{i})\right) \tag{8}\] and the Bellman equation for the Q-function \[Q(\mathbf{s}_{i},\mathbf{a}_{i})=\mathop{\mathbb{E}}_{\mathbf{s}_{i}^{\prime}\sim P}\left[r(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\gamma V(\mathbf{s}_{i}^{\prime})\right], \tag{9}\]
In practice, SAC estimates three functions: the policy (actor) and two Q-functions (critics). First, it approximates the policy as a Gaussian distribution \(\mathbf{\pi}^{\mathbf{\theta}}\sim\mathcal{N}(\mu_{\mathbf{\theta}},\Sigma_{\mathbf{\theta}})\). Both \(\mu_{\mathbf{\theta}}\) and \(\Sigma_{\mathbf{\theta}}\) are the outputs of a neural network parametrized with \(\mathbf{\theta}\) and optimized through gradient descent using the re-parametrization trick [26]. Similarly, SAC estimates two Q-functions \(Q_{\mathbf{\theta}_{1}}\) and \(Q_{\mathbf{\theta}_{2}}\) as neural networks with parameters \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\), respectively. The Q-function networks train by minimizing the objective \(J_{Q}(\mathbf{\theta}_{j})\) \[\mathop{\mathbb{E}}_{(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})\sim\mathcal{D}}\left[\left(Q_{\mathbf{\theta}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i})-(r(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime})+\gamma V_{\mathbf{\theta}_{1},\mathbf{\theta}_{2}}(\mathbf{s}_{i}^{\prime}))\right)^{2}\right] \tag{10}\] over samples taken from a replay buffer \(\mathcal{D}\subset\mathcal{S}\times\mathcal{A}\times\mathcal{S}\) of experience gathered during multiple episodes in the training process. The value function \(V_{\mathbf{\theta}_{1},\mathbf{\theta}_{2}}\) is implicitly defined through the Q-functions and the policy, as stated in [25]. Similarly, the objective for the Gaussian policy is given by \[J_{\pi}(\mathbf{\theta})=\mathop{\mathbb{E}}_{\mathbf{s}_{i}\sim\mathcal{D},\mathbf{a}_{i}\sim\mathbf{\pi}^{\mathbf{\theta}}}\left[\alpha\log\mathbf{\pi}^{\mathbf{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i})-\min_{j\in\{1,2\}}Q_{\mathbf{\theta}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i})\right]. \tag{11}\] Note that minimizing (10) is equivalent to finding the Q-function that best approximates the value function \(V\). Analogously, minimizing (11) is equivalent to jointly maximizing the expected reward and the policy's entropy. In this work, we adapt the SAC method to optimize the \(i\)th robot's policy conditioned on all the robot states in the team, as opposed to a single agent's state.

**State space:** Our approach does not focus on tracking the trajectory but on learning how to directly compensate for the disturbance experienced by the robots, such that the trajectory-tracking controller can operate freely. For this purpose, we integrate the dynamics in (6) to simulate the robot's dynamics under perfect conditions. In these conditions, there is no drag force and hence no need for wind compensation. Therefore, our RL approach's state \(\mathbf{s}_{i}\in\mathcal{S}\) relates to how much the trajectory-tracking state \(\mathbf{x}_{i}\) differs from a simulated state \(\mathbf{x}_{i}^{sim}\). Note that \(\mathbf{x}_{i}^{sim}\) does not consider the wind effect. In our method, we perform sampling and actuation periodically. Consequently, we assume that time is discrete, i.e., we use the variable \(\tau=0,1,2,\ldots\) to represent discrete time steps. We use a constant step size \(\Delta\tau\), small enough to integrate the dynamics in (6) accurately. Let us denote the trajectory-tracking state of the \(i\)th robot at a time step \(\tau\) by \(\mathbf{x}_{i}[\tau]\), and its simulated state by \(\mathbf{x}_{i}^{sim}[\tau]\).
Using Euler integration, we can predict the disturbance-free state \(\mathbf{x}_{i}^{sim}[\tau]\) using the past state \(\mathbf{x}_{i}[\tau-1]\) and a trajectory-tracking action \(\mathbf{u}_{i}^{tt}[\tau-1]\). We can write the discrete-time dynamics from (6) in matrix form, assuming \(\mathbf{f}_{i}^{drag}=\mathbf{f}_{i}^{rl}=\mathbf{0}\), to compute the simulated state at \(\tau\), \[\mathbf{x}_{i}^{sim}[\tau]=\mathbf{A}\mathbf{x}_{i}[\tau-1]+\mathbf{B}\mathbf{u}_{i}^{tt}[\tau-1], \tag{12}\] where \[\mathbf{A}=\begin{bmatrix}\mathbf{1}&\Delta\tau\,\mathbf{1}\\ \mathbf{0}&\mathbf{1}\end{bmatrix},\quad\mathbf{B}=\begin{bmatrix}\mathbf{0}\\ \frac{\Delta\tau}{m}\mathbf{1}\end{bmatrix}.\] Then, the wind disturbance displacement vector is the difference between the current state \(\mathbf{x}_{i}[\tau]\) and the simulated state \(\mathbf{x}_{i}^{sim}[\tau]\), \[\mathbf{e}_{i}[\tau]=\mathbf{x}_{i}^{sim}[\tau]-\mathbf{x}_{i}[\tau]. \tag{13}\] As described in Sec. II, the wind applies a drag force \(\mathbf{f}_{i}^{drag}\) on the robots. This force results from the pressure field gradient plus the friction forces due to air particles, as described by (2). Each robot takes noisy measurements of the pressure field \(p_{i}\) at its location to account for the effect of these forces. Finally, we define the state vector \(\mathbf{s}_{i}\) for our RL method at each robot \(i\) by concatenating the displacement vector \(\mathbf{e}_{i}\), the robot's velocity \(\dot{\mathbf{r}}_{i}\), and the pressure field value \(p_{i}\), such that \[\mathbf{s}_{i}=\mathbf{e}_{i}\parallel\dot{\mathbf{r}}_{i}\parallel p_{i}, \tag{14}\] where \(\cdot\parallel\cdot\) is the concatenation operator. We include the robot's velocity because the drag force directly affects this quantity. During training, we add Gaussian noise to \(\mathbf{s}_{i}\) to simulate real-world sensory noise, as discussed in Sec. IV.
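A minimal sketch of this state construction, assuming the 2-D double-integrator discretization above; the step size, mass, and argument names are illustrative:

```python
import numpy as np

def rl_state(x_prev, u_tt_prev, x_meas, v_meas, p_meas, dt=0.02, m=1.0):
    """Build the per-robot RL state of (14): e_i || r_dot_i || p_i.

    x_prev: previous state [r (2,), r_dot (2,)] stacked as shape (4,)
    u_tt_prev: previous trajectory-tracking force, shape (2,)
    x_meas: current measured state, shape (4,)
    v_meas: measured velocity (2,); p_meas: scalar pressure reading.
    dt and m are illustrative values for the step size and robot mass.
    """
    # Disturbance-free Euler prediction, eq. (12): double-integrator step.
    A = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])
    B = np.vstack([np.zeros((2, 2)), (dt / m) * np.eye(2)])
    x_sim = A @ x_prev + B @ u_tt_prev
    # Wind-disturbance displacement, eq. (13).
    e = x_sim - x_meas
    # Concatenated state, eq. (14).
    return np.concatenate([e, v_meas, [p_meas]])
```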
**Graph Convolutional Neural Network Architecture:** The wind flow dynamics in (2) reveal a spatio-temporal correlation for \(\mathbf{w}\), i.e., the wind velocity at a given location correlates with the wind velocities at nearby areas. Our proposed method takes advantage of this spatial correlation by enabling information sharing between the robotic team members. When we use multiple robots spatially distributed in \(\mathcal{W}\), we form a sensing network that indirectly samples information about the effects of the wind on the robots. Consequently, we use this sensing network to improve the action that compensates for the drag force exerted on a robot \(i\) with the help of its neighbors \(\mathcal{N}_{i}\). Since SAC was designed for a single agent, its actor's architecture is a multi-layer perceptron (MLP). An MLP acts only on the individual robot's state \(\mathbf{s}_{i}\) to compute the robot's action \(\mathbf{a}_{i}\); hence, the MLP architecture does not use information from other robotic team members. To model this information exchange explicitly, we design the actor and the two critics as Graph Convolutional Neural Networks (GCNN) [27]. An \(L\)-layered GCNN is a type of neural network that can process data represented as a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E},\mathbf{H})\) with nodes \(\mathcal{N}\), edges \(\mathcal{E}\), and a feature set \(\mathbf{H}=\{\mathbf{H}^{0},\ldots,\mathbf{H}^{L}\}\). In the context of this paper, the nodes represent robots, and the edges represent the information exchange between them.

We present an overview of the full architecture for our GCNN-based actor and critics in Fig. 3. At a given layer \(l\in\{0,\ldots,L\}\), the network computes a feature vector for each robot \(i\), denoted by \(\mathbf{h}_{i}^{l}\), and organizes them into an \(n\times c_{l}\) matrix \[\mathbf{H}^{l}=[\mathbf{h}_{1}^{l},..,\mathbf{h}_{n}^{l}]^{\top}. \tag{15}\] We compute \(\mathbf{H}^{l}\) from the previous layer's features following \[\mathbf{H}^{l+1}=\sigma\left(\mathbf{H}^{l}\mathbf{\Theta}_{1}^{l}+\mathbf{A}_{adj}\mathbf{H}^{l}\mathbf{\Theta}_{2}^{l}\right), \tag{16}\] where \(\mathbf{A}_{adj}\) is the graph's adjacency matrix, \(\mathbf{\Theta}_{1}^{l}\) and \(\mathbf{\Theta}_{2}^{l}\) are learnable weight matrices of size \(c_{l}\times c_{l+1}\), and \(\sigma(\cdot)\) is an element-wise non-linear activation function. We set the input features of the network to be a matrix containing all the robot states defined in (14), such that \[\mathbf{H}^{0}=[\mathbf{s}_{1},...,\mathbf{s}_{n}]^{\top}. \tag{17}\] The operation in (16) is a graph convolution where a robot's features are updated using information from its neighbors in the graph. However, this operation does not include information about the relative position \(\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}\) between robot \(i\) and its neighbor \(j\). Without the relative position, the robots do not know where the neighboring robots are located, which makes it difficult to approximate vector quantities such as the pressure gradient in (2). To overcome this limitation, we incorporate the relative position into the convolution operator by concatenating \(\mathbf{r}_{ij}\) to the features at each layer, right before the weighting and the neighbor aggregation. For simplicity, we will use the per-node notation of (16) to denote the convolution at each robot \(i\). We define the layer's features at each robot as \[\mathbf{h}_{i}^{l+1}=\sigma\left(\mathbf{\Theta}_{1}^{l}\mathbf{h}_{i}^{l}+\mathbf{\Theta}_{2}^{l}\sum_{j\in\mathcal{N}_{i}}\left(\mathbf{h}_{j}^{l}\parallel\mathbf{r}_{ij}\right)\right), \tag{18}\] where \(\mathbf{\Theta}_{2}^{l}\) is now a \(c_{l+1}\times(c_{l}+2)\) matrix. The actor's GCNN takes \(\mathbf{H}^{0}\) and \(\mathbf{A}_{adj}\) as inputs and computes a latent vector representation \(\mathbf{h}_{i}^{L}\) at the last layer \(L\). To decode \(\mathbf{h}_{i}^{L}\) into the robot's action, we pass \(\mathbf{h}_{i}^{L}\) through a small MLP network. We split the MLP's output into \(\mathbf{\mu}_{i}^{\mathbf{\theta}}\) and \(\mathbf{\Sigma}_{i}^{\mathbf{\theta}}\), and we use them to parameterize \(\mathbf{\pi}^{\mathbf{\theta}}\) as a normal distribution. Following [24], we set \(\mathbf{\Sigma}_{i}^{\mathbf{\theta}}\) to be a diagonal matrix. Finally, we use the policy to obtain the action \(\mathbf{a}_{i}\). Each critic's architecture follows a similar design, with two small modifications, since the critic is a function \(Q:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\). First, the critic's output is a single scalar value instead of a probability distribution; to model its output properly, we modify the critic's MLP decoder to have a single output neuron rather than \(\mathbf{\mu}_{i}^{\mathbf{\theta}}\) and \(\mathbf{\Sigma}_{i}^{\mathbf{\theta}}\). Second, the input space of the critic architecture consists of the robot's action in addition to the state.
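For illustration, a plain NumPy sketch of the position-aware convolution in (18); the explicit loops favor clarity over efficiency, and the weights are random stand-ins for the learned \(\mathbf{\Theta}_{1}^{l}\) and \(\mathbf{\Theta}_{2}^{l}\):

```python
import numpy as np

def position_aware_conv(H, R, neighbors, W1, W2):
    """One layer of the position-aware graph convolution in (18).

    H: (n, c) node features, R: (n, 2) robot positions,
    neighbors: list of neighbor-index lists (k-nearest neighbors),
    W1: (c_out, c) self-weight, W2: (c_out, c + 2) neighbor weight.
    """
    n, c = H.shape
    H_out = np.zeros((n, W1.shape[0]))
    for i in range(n):
        agg = np.zeros(W2.shape[1])
        for j in neighbors[i]:
            # Concatenate the neighbor's features with the relative position r_j - r_i.
            agg += np.concatenate([H[j], R[j] - R[i]])
        H_out[i] = W1 @ H[i] + W2 @ agg
    return np.maximum(H_out, 0.0)  # ReLU activation

# Toy usage: 4 robots, 3 input features, 5 output features.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3)); R = rng.standard_normal((4, 2))
neighbors = [[1, 2], [0, 3], [0, 3], [1, 2]]
W1 = rng.standard_normal((5, 3)); W2 = rng.standard_normal((5, 5))
print(position_aware_conv(H, R, neighbors, W1, W2).shape)  # (4, 5)
```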
Consequently, the input to the critic's GCNN is a feature vector \[\mathbf{H}^{0^{\prime}}=\left[(\mathbf{s}_{1}\parallel\mathbf{a}_{1}),...,(\mathbf{s}_{n}\parallel\mathbf{a}_{n})\right]^{\top}. \tag{19}\] In each architecture, we use a two-layer GCNN with \(\mathrm{ReLU}\) as the non-linear activation function and \(64\) neurons per hidden layer. The MLP decoders are two-layer networks of size \(64\) and \(16\), respectively. We add an extra output layer to the decoders to re-shape the network's output to the appropriate size for the actor or critics. The MLP layers use \(\mathrm{ReLU}\) as the activation function in the inner layers and a linear activation function for the output layer. Finally, the actor's output is squashed into the range \([-1,1]\) using a \(\tanh\) function, as described in the SAC formulation. In practice, we scale \(\mathbf{a}_{i}\) by a preset factor of \(\sqrt{2}f_{rl}^{max}\), representing the maximum force that the robots can generate, as discussed after (6). Note that the depth of the GCNN is directly related to the robot network's bandwidth load. At a layer \(l\), a robot has to communicate with its neighbors to compute \(\mathbf{h}_{i}^{l}\). Since our architecture has only two layers, robot communication only needs to reach up to the 2-ring neighborhood. This property allows our model to scale to large formations, since robot communication farther than the 2-ring is not required.

**Reward Signal:** The final component of our proposed method is the reward signal. The reward signal at each step tells the SAC how well the robots compensate for the wind's drag force at a given step in the training process. Recall that we expect the robot team to learn to operate the same as when there is no turbulence. Because turbulence affects the acceleration of the robots, the divergence between the expected simulated velocity at a robot \(i\) and its actual velocity is an appropriate quantity to incorporate into our reward signal. We can do a similar analysis on the divergence between the simulated position and the actual position measured with the robot's instruments. These divergence quantities are captured in the displacement vector \(\mathbf{e}_{i}\) in (13). Hence, we define the reward function for our RL method as the negative L1-norm of the weighted displacement, \[r[\tau]=-\|\mathbf{\beta}\odot\mathbf{e}_{i}[\tau]\|_{1}, \tag{20}\] with \(\odot\) the Hadamard product and \(\mathbf{\beta}\) a weight vector rating the importance of each component of \(\mathbf{e}_{i}\) in the reward signal. As the displacement vector between the simulated state and the actual state approaches the zero vector, the reward signal becomes less negative. Therefore, learning a policy that maximizes (20) is equivalent to learning an action policy that compensates for the effect of the wind on the robots.
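The reward in (20) amounts to a single line of code. A small sketch, with default weights matching the \(\mathbf{\beta}=[1,1,10,10]\) used later in the experiments:

```python
import numpy as np

def reward(e, beta=np.array([1.0, 1.0, 10.0, 10.0])):
    """Reward of (20): negative L1-norm of the weighted displacement e_i[tau].

    beta weights position (first two) vs. velocity (last two) components;
    the default matches the values used in the experiments.
    """
    return -np.sum(np.abs(beta * e))

print(reward(np.array([0.1, -0.2, 0.05, 0.0])))  # -0.8
```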
## IV Experiments

We design three experiments to evaluate our method's performance. First, we show that our method allows robots to navigate turbulent wind regimes by compensating for the wind and tracking the target's trajectory independently. Second, we show that our method is robust to changes in the robot team's configuration, such as neighborhood size and formation size. Third, we demonstrate that the advantages of our method arise from our GCNN-based RL strategy by ablating the GCNN and replacing it with an MLP.

**Experimental Setup:** We conduct all our experiments in a 2-dimensional square simulation space \(\mathcal{W}\) of size \(10\times 10\) m. We simulate \(M=60\) wind fields \(\mathbf{w}\) by solving the Navier-Stokes equations inside \(\mathcal{W}\) with random initial conditions. Each \(\mathbf{w}\) is guaranteed to be in a turbulent regime at \(Re\geq 4\times 10^{3}\). The turbulence intensifies with time in all of our \(\mathbf{w}\), increasing the \(Re\) value as shown in Fig. 4. We control the maximum possible wind speed in each wind simulation and bound it to a value of \(15\) m/s. We generate the wind flows using publicly available Computational Fluid Dynamics (CFD) software [28, 29, 30]. We provide a script to compute simulations along with the project's source code1. For each robot, we compute the drag force exerted by the wind as per (3). We set the air density to \(\rho=1.184\ \mathrm{kg}\,\mathrm{m}^{-3}\) and the drag coefficient to \(C_{d}=0.47\). Additionally, we assume all of the robots are small spheres of radius \(r=0.1\ \mathrm{m}\) with a cross-sectional area of \(A=\pi r^{2}\). We use lattice formations of different sizes in all of our experiments and choose the lattices' initial locations to fit entirely into \(\mathcal{W}\).

Footnote 1: [https://github.com/dipaco/robot_wind_navigation](https://github.com/dipaco/robot_wind_navigation)

We train all our models on only \(50\) of the wind simulations and reserve the remaining \(10\) for testing. We train each RL model for \(5\times 10^{6}\) steps using a replay buffer of size \(2\times 10^{5}\). This replay buffer size ensures the RL model focuses more on recent experiences, where the reward is expected to be better. We optimize the SAC loss functions from (10) and (11) using the Adam optimizer [31] with a fixed learning rate of \(1\times 10^{-3}\). During training, all the episodes have a fixed duration of \(T=60\ \mathrm{s}\). We set the weights in the reward to \(\mathbf{\beta}=[1,1,10,10]\). We use the \(k\)-nearest neighbor algorithm (knn) to define the graph's adjacency matrix at each time step. In all of our experiments, we start the robot formation at random locations within \(\mathcal{W}\). We report average absolute errors over \(20\) episodes with corresponding \(95\%\) confidence intervals.

**Experiment 1: Wind compensation.** In this experiment, we explore the benefits of assisting the trajectory-tracking control from (5) with our RL method to compensate for the force that a turbulent wind field exerts on a robot. To this end, we compute the position and velocity errors at each time \(\tau\) of the trajectory-tracking control with and without the RL wind-compensation strategy. We use a formation size of \(n=25\) robots and a neighborhood size of \(k=12\). We report average errors over \(20\) episodes and all \(n\) robots in the swarm and summarize the results in Fig. 5. The noise in all our sensors follows a zero-mean Gaussian distribution with \(\sigma=0.001\) for the position and velocity sensors and \(\sigma=0.1\) for the pressure sensor. Our method (blue curve) shows a statistically significant improvement compared to trajectory tracking only (green curve). Note that our method maintains the position and velocity errors at relatively stable values despite the increase in the turbulence regime described in Fig. 4. From this result, we conclude that our proposed method can capture and compensate for the wind effects that affect the robots, regardless of the intensity and complexity of the wind. Additionally, we report in Fig. 6 the magnitude of the total control signal of each robot, in Newtons, averaged over all the robots in the formation. Recall that the total control signal from our proposed method is the sum of the trajectory-tracking control and the RL action, as per (6). The magnitude of the control signal is associated with the amount of energy the robots use to complete their task, e.g., tracking a trajectory. By comparing the curves in Fig. 6, we conclude that our method achieves significantly lower errors with approximately the same control signal magnitude. Hence, our method preserves the amount of energy the robots use to fulfill their tasks while achieving better performance. Moreover, we report the trajectory-tracking component of our method (dotted blue) and highlight the smoothness of the curve compared to trajectory tracking alone. We conclude that this occurs because the robots can track a target free of perturbations when the RL compensates for the wind's effect.

Fig. 4: Reynolds number (\(Re\)) evolution. In all of our wind simulations, the value of \(Re\) increases as the wind becomes more turbulent.

Fig. 5: Our method's performance compared to only the trajectory-tracking control. The curves show the mean error across \(20\) episodes with corresponding \(95\%\) confidence interval.

Fig. 6: Magnitude of the control signal. Solid lines show the total action signal for our method (blue) and only the trajectory-tracking control (green). Additionally, we show the isolated trajectory-tracking component of our method (dotted blue).
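The next experiment varies the neighborhood size; for reference, a minimal sketch of the knn adjacency construction described in the setup (an illustrative implementation, not the authors' code):

```python
import numpy as np

def knn_adjacency(R, k):
    """Directed k-nearest-neighbor adjacency from robot positions R (n, 2)."""
    n = len(R)
    # Pairwise Euclidean distances between robots.
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # a robot is not its own neighbor
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(D[i])[:k]] = 1.0  # connect i to its k closest robots
    return A

R = np.random.default_rng(2).uniform(0, 10, size=(25, 2))
A_adj = knn_adjacency(R, k=12)
print(A_adj.sum(axis=1))  # each robot has exactly 12 neighbors
```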
**Experiment 2: Sensitivity Analysis.** In this experiment, we test our method's sensitivity to two key parameters of our model: the robot's neighborhood size, \(k\), and the number of robots in the team, \(n\). We investigate the effect of the neighborhood size on our method's ability to learn a wind compensation action. To this end, we train five models, varying the neighborhood size at increasing values of \(k\), such that \(k\in\{2,4,8,12,16\}\), and maintain the formation size constant at \(n=25\). We report the average position error of each of these models in Fig. 7. Our results show a decrease in the error as \(k\) increases. Note that the error gap between curves with lower values of \(k\) and curves with larger \(k\) increases with the turbulence intensity (see Fig. 4). We did not observe a significant improvement in performance for models trained with \(k>12\). We train eight of our RL-based models, varying the training and testing formation size, to test our method's sensitivity to the training formation. We use \(n^{\text{train}},n^{\text{test}}\in\{3^{2},...,10^{2}\}\) while maintaining the neighborhood size constant at \(k=12\). We report the average position error of each test in Fig. 8. Note that our method scales well to large formations when trained with enough robots (e.g., \(n\geq 25\)), without retraining. The performance decrease in the first two columns results from testing on formations that cannot satisfy the neighborhood size used when training the models (\(k=12\)). Similarly, the first two rows in Fig. 8 show a decrease in performance due to training with insufficient robots; in this last scenario, the neighborhood cannot meet the requirements to capture the wind dynamics.

Fig. 7: Sensitivity to the neighborhood size.

Fig. 8: Sensitivity to the formation size.

**Experiment 3: Ablation Study.** We conduct an ablation study to investigate the contribution of our proposed architecture to the overall system. We compare our model with five baselines to highlight the advantages of information sharing in our model.
In the first baseline, we replace the GCNN with an MLP shared across all robots in the team. The MLP has the same number of hidden layers and neurons but does not share information with its neighbors; it can only access the features of the node on which it operates. The second baseline is a deeper MLP with four hidden layers; the increase in depth approximately doubles the number of weights. Similarly, the third baseline is a wider MLP with a layer width of \(128\) neurons; doubling the layer width increases the number of weights in the base MLP by approximately a factor of four. We include a fourth baseline to test the ability of our model to learn spatially distributed information from a robot's neighbors. In this baseline, we ablate the inclusion of the relative position \(\mathbf{r}_{ij}\) in the convolution definition of (18). By removing the relative position, our GCNN can still share information between a robot \(i\) and its neighbors; however, the robot cannot identify where those neighbors are located relative to itself. Finally, the last baseline is the trajectory-tracking controller without our RL wind compensation. Our experiments show that our approach achieves the lowest position error among all methods in the ablation study. We summarize all the ablation experiments in Tab. I and Fig. 9. We report the average position error of each method alongside the corresponding \(Re\) values over an episode. Note that all the MLP-based baselines have similar error curves, despite the significant increase in capacity of the deeper and wider MLPs. These results demonstrate that the advantages of our method arise from our GCNN-based RL strategy and not from the neural network's size.

Fig. 9: Ablation study results. Our method (blue curve) shows a statistically significant improvement compared to all baselines.

**Discussion:** Navigation in flows with high levels of turbulence, \(Re>4\times 10^{6}\), is a challenging problem. However, these high turbulence levels have not been studied in the state of the art. This scenario is especially challenging for a single robot, since its perception of the flow is limited. In this paper, we leveraged multiple robots to navigate high-turbulence flows and evaluated different factors that help understand the difficulties of operating in this type of aggressive environment. Although the spherical robots we presented only exist in simulation, our method can be implemented on actual robots.

## V Conclusion and Future Work

In this paper, we introduced a novel RL-based method to control a team of aerial robots to track a trajectory while working together in a dynamic, turbulent wind field. Our method's strategy decouples the trajectory-tracking controller and the wind compensation, so our method can learn to compensate for the wind turbulence independently of the motion controller. Our RL approach allowed us to find an optimal policy to compensate for the wind force via a graph neural network designed to share information among the robotic team members. Our method shows that sharing sensor measurements between nearby robots provides valuable information to improve the robots' turbulence compensation and learn spatially distributed wind patterns.
We demonstrate the advantages of our strategy through several simulations strategically designed to test our method's performance for wind compensation, its scalability to large robot formations, and its parameter sensitivity. In future work, we want to design and implement a lab testbed to generate air flows with high turbulence levels like the ones presented in this paper. Although this type of testbed has a high cost and complexity, it would allow us to test and extend methods for navigation in high turbulence. Another direction of future work is to test our model against increasing sensor noise, as may arise from more turbulent winds. Additionally, we want to model the temporal dependencies of turbulent vector fields through recurrent neural network architectures such as GRUs or LSTMs.
2310.08885
InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems
Large language models (LLMs) have been used for diverse tasks in natural language processing (NLP), yet remain under-explored for task-oriented dialogue systems (TODS), especially for end-to-end TODS. We present InstructTODS, a novel off-the-shelf framework for zero-shot end-to-end task-oriented dialogue systems that can adapt to diverse domains without fine-tuning. By leveraging LLMs, InstructTODS generates a proxy belief state that seamlessly translates user intentions into dynamic queries for efficient interaction with any KB. Our extensive experiments demonstrate that InstructTODS achieves comparable performance to fully fine-tuned TODS in guiding dialogues to successful completion without prior knowledge or task-specific data. Furthermore, a rigorous human evaluation of end-to-end TODS shows that InstructTODS produces dialogue responses that notably outperform both the gold responses and the state-of-the-art TODS in terms of helpfulness, informativeness, and humanness. Moreover, the effectiveness of LLMs in TODS is further supported by our comprehensive evaluations on TODS subtasks: dialogue state tracking, intent classification, and response generation. Code and implementations could be found here https://github.com/WillyHC22/InstructTODS/
Willy Chung, Samuel Cahyawijaya, Bryan Wilie, Holy Lovenia, Pascale Fung
2023-10-13T06:36:26Z
http://arxiv.org/abs/2310.08885v1
# InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems

###### Abstract

Large language models (LLMs) have been used for diverse tasks in natural language processing (NLP), yet remain under-explored for task-oriented dialogue systems (TODS), especially for end-to-end TODS. We present InstructTODS, a novel off-the-shelf framework for zero-shot end-to-end task-oriented dialogue systems that can adapt to diverse domains without fine-tuning. By leveraging LLMs, InstructTODS generates a proxy belief state that seamlessly translates user intentions into dynamic queries for efficient interaction with any KB. Our extensive experiments demonstrate that InstructTODS achieves comparable performance to fully fine-tuned TODS in guiding dialogues to successful completion without prior knowledge or task-specific data. Furthermore, a rigorous human evaluation of end-to-end TODS shows that InstructTODS produces dialogue responses that notably outperform both the gold responses and the state-of-the-art TODS in terms of helpfulness, informativeness, and humanness. Moreover, the effectiveness of LLMs in TODS is further supported by our comprehensive evaluations on TODS subtasks: dialogue state tracking, intent classification, and response generation. Code and implementations could be found here1.

Footnote 1: [https://github.com/WillyHC22/InstructTODS/](https://github.com/WillyHC22/InstructTODS/)

## 1 Introduction

LLMs have consistently pushed new frontiers in natural language processing (NLP) in terms of performance across a variety of benchmarks, such as MMLU Hendrycks et al. (2020), BIG-Bench Lewkowycz et al. (2022) and HELM Bommasani et al. (2022), achieving state-of-the-art results in both natural language understanding (NLU) and generation (NLG) tasks Bang et al. (2023). Various applications of LLMs have also been adopted in the industry, most prominently ChatGPT\({}^{2}\) and GPT-4\({}^{3}\), which can provide a natural answer to a diverse range of questions fluently and coherently.

Footnote 2: [http://chatgpt.openai.com/](http://chatgpt.openai.com/)

Among the manifold tasks in NLP, task-oriented dialogue systems (TODS) represent a crucial domain. In general, TODS can be categorized into the pipelined approach Ham et al. (2020); Hosseini-Asl et al. (2020); Ohashi and Higashinaka (2022), relying on multiple sequential modules and heavy annotations for dialogue states and system actions, and the end-to-end approach Banerjee and Khapra (2019); Qin et al. (2020); He et al. (2022), where the systems generate responses directly from the user input and the KB. Both approaches lack adaptability to unseen domains: achieving this adaptability often requires domain-specific structures (ontology), and data for TODS is notoriously expensive to collect and annotate Eric et al. (2020). In this regard, LLMs present great potential thanks to their extensive pre-trained knowledge, enabling them to adapt to contextual information without any parameter updates or additional task-specific data. However, utilizing LLMs for tasks requiring knowledge grounding, such as TODS, poses a critical challenge that calls for thorough investigation and exploration.

Figure 1: InstructTODS is the first zero-shot end-to-end task-oriented dialogue system that requires no task-specific annotations or ontology, while generating more human-preferred responses.
TODS requires dialogue systems to adeptly complete a specific goal by interacting with a user in natural language according to a certain set of bounded functions, ontology, and knowledge within the corresponding domain. Nevertheless, naively feeding all of the knowledge to the LLMs in TODS could lead to the generation of misleading and unfaithful information, i.e., hallucination Ji et al. (2023); Azamfirei et al. (2023).

In this work, we first investigate the capability of LLMs to perform three key TODS objectives in zero-shot settings, specifically dialogue state tracking (DST), intent classification (IC), and response generation (RG). While LLMs demonstrate impressive capabilities and understanding of these tasks individually, a closer examination of their shortcomings reveals that the modular approach is not the most suitable for effectively using LLMs in TODS due to its restrictiveness. Rather than confining interactions within predefined elements like slots, values, or system actions, it is more advantageous to harness the emergent abilities of LLMs to process unstructured information, which also enables the system to easily adapt to new domains. From these observations, we propose InstructTODS, a fully off-the-shelf framework to perform end-to-end unified TODS in a zero-shot setting using LLMs. InstructTODS is adaptable to any KB and does not require any ontologies or task-specific data. Instead of using predefined slot values, InstructTODS generates an unstructured proxy belief state from the dialogue context. Then, an action thought is generated to query the KB dynamically in natural language using an LLM. The retrieved information is then used to generate the response. In summary, our contributions are as follows:

* We provide an extensive evaluation and comprehensive analysis of LLMs' zero-shot performance in several TODS subtasks, notably intent classification, dialogue state tracking, and response generation.
* We introduce InstructTODS, a fully off-the-shelf framework to leverage instruction-tuned LLMs in a zero-shot setting for end-to-end unified task-oriented dialogue, with the benefit of being effectively adaptable to any knowledge base (KB) while alleviating the need for any additional form of task-relevant data, such as intent, belief state, system action, etc.
* We provide valuable insights from the TODS experiments on the more general advantages and failure cases of LLMs in performing complex zero-shot NLP tasks.

## 2 Evaluating LLMs on Zero-Shot Task-Oriented Dialogue Subtasks

As an intermediary step in exploring the potential of end-to-end TODS solutions, we first investigate how well state-of-the-art LLMs, i.e., GPT-3.5 and GPT-4, perform various modular task-oriented objectives in their respective settings (we present a comparison of different LLMs over multiple tasks in Appendix A).

### TODS Subtasks

Let us define a dialogue set \(\mathcal{D}_{n}=\{u_{1},r_{1},u_{2},r_{2},...,u_{n},r_{n}\}\) where \(u_{i}\) and \(r_{i}\) denote the user utterance and the system reply at turn \(i\), respectively.
**Intent Classification (IC).** For IC, we have the set of labels \(C=\{c_{1},c_{2},...,c_{t}\}\), from which we build the input for the LLM as \(x_{i}^{ic}=\mathbb{P}^{ic}(\mathcal{I}^{ic},Concat(c_{j})_{j=0}^{t},u_{i})\), where \(\mathbb{P}^{ic}(.)\) is the IC input template, \(\mathcal{I}^{ic}\) refers to the natural language instruction for IC, and \(Concat(c_{j})\) is the concatenation of all labels. We evaluate two generation settings: a single-output setting where we query the model for the inferred intent, and a multi-output setting where we query the model for the top-3 intents given the user query, by simply changing the instruction \(\mathcal{I}^{ic}\). As such, we recast the classification task in a text-generation manner and compare our results with state-of-the-art IC baselines.

\begin{table} \begin{tabular}{l|l|c|c c c|c c c|c} \hline \hline \multirow{3}{*}{**Setting**} & \multirow{3}{*}{**Model**} & \multicolumn{4}{c|}{**Banking77**} & \multicolumn{4}{c|}{**CLINC150**} \\ \cline{3-10} & & \multirow{3}{*}{**Single**} & \multicolumn{3}{c|}{**Multi**} & \multirow{3}{*}{**Single**} & \multicolumn{3}{c|}{**Multi**} & \multirow{3}{*}{**OOS***} \\ & & top-1 & top-2 & top-3 & & & top-1 & top-2 & top-3 \\ \hline \multirow{3}{*}{**Few-Shot**} & RoBERTa\({}_{large}\) & 78.99 & – & – & – & 89.89 & – & – & – \\ & ICDA\({}_{XL}\) & 83.90 & – & – & – & 92.62 & – & – & – \\ & DNNC & 80.40 & – & – & 91.02 & – & – & – & – \\ & CPFT & 80.86 & – & – & 92.34 & – & – & – & – \\ \hline \multirow{3}{*}{**Zero-Shot**} & BART\({}_{large}\)-MNLI & 35.91 & 35.91 & 49.09 & 56.14 & 26.44 & 26.44 & 36.46 & 43.13 & 0.2 \\ & BART\({}_{large}\)-MNLI\({}_{large}\) & 42.24 & 42.24 & 55.19 & 62.2 & 40.16 & 40.16 & 51.88 & 57.66 & 0.9 \\ \hline \multirow{3}{*}{**Zero-Shot**} & Modular (GPT-3.5) & 65.45 & **64.51** & 77.69 & 83.18 & 64.91 & 63.22 & 72.20 & 82.42 & 10.9 \\ & Modular (GPT-4) & **74.09** & 64.06 & **80.75** & **86.33** & **73.91** & **69.90** & **81.88** & **90.33** & **62.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison on **intent classification**. LLMs outperform most baselines in our benchmark. Best performances in each section are in **bold**. *Out-of-scope intent of CLINC150.

**Dialogue State Tracking (DST).** For DST, we define the total set of slots \(S=\{s_{1,D_{1}},s_{2,D_{1}},...,s_{k,D_{l}}\}\) where \(s_{i,D_{j}}\) is the i-th slot associated with domain \(D_{j}\). We give a singular hand-crafted exemplar, distinct from the dataset, to guide the generation format directly as JSON. We build the input \(x_{i}^{dst}=\mathbb{P}^{dst}(\mathcal{I}^{dst},f^{dst}(S),\mathcal{D}_{i})\) by providing the entire dialogue context, where \(\mathbb{P}^{dst}(.)\) is the DST input template, \(\mathcal{I}^{dst}\) denotes the instruction for DST, and \(f^{dst}(S)\) refers to a textual transformation of the set of slots. We evaluate two settings with different slot transformations: one by providing all slots and another with only the active domain slots.

**Response Generation (RG).** For RG, given a dialogue \(\mathcal{D}\), we define the set of oracle system actions \(A=\{a_{1,1},a_{1,2},...,a_{n,m}\}\) where \(a_{i,j}\) denotes the j-th system action of turn i. We construct the input \(x_{i}^{rg}=\mathbb{P}^{rg}(\mathcal{I}^{rg},f^{rg}(a_{i,1},a_{i,2},...,a_{i,m}),\mathcal{D}_{i})\), where \(\mathbb{P}^{rg}(.)\) is the RG input template, \(\mathcal{I}^{rg}\) denotes the instruction for RG, and \(f^{rg}(.)\) refers to a textual transformation of the set of system actions.
We evaluate the capability of LLMs to leverage a structured system action while addressing the dialogue context to generate a response to the user. ### Experiment Settings DatasetFor the dialogue state tracking, we evaluate the LLMs' capability on MultiWOZ 2.1 (MWOZ) (Eric et al., 2020). For intent classification, we evaluate two datasets: Banking77 Casanueva et al. (2020), a fine-grained intent dataset in the banking domain, and CLINC150 Larson et al. (2019), coarse-grained intents classification datasets covering over 10 different domains. The main challenge of the CLINC150 dataset is on inferring out-of-scope intent, which is particularly challenging without any model training. EvaluationWe evaluate dialogue state tracking with Joint Goal Accuracy (JGA) and Slot-F1. We compute JGA using exact matching instead of fuzzy matching, with minor typo fixes in MWOZ following prior works Hosseini-Asl et al. (2020); Su et al. (2022). For intent classification, we evaluate the accuracy when predicting only one intent (single) and the top-3 intents (multi) in a text generation setting. BLEU Papineni et al. (2002), Inform, and Success Eric et al. (2020) are used for response generation. In addition to these metrics, we also compare lexical diversity Shen (2022), i.e., HDD McCarthy and Jarvis (2010), MATTR Coverington and McFall (2010), MTLD McCarthy (2005), and VOCD McCarthy and Jarvis (2007), fluency through perplexity, and human-likability using USL-H Phy et al. (2020). BaselineFor intent classification, we compare with various few-shot fine-tuned baselines: RoBERTa Liu et al. (2019), ICDA Lin et al. (2023), DNNC Zhang et al. (2020), and CPFT Zhang et al. (2021). While for zero-shot baseline, we employ MNLI Williams et al. (2018) fine-tuned BART\({}_{large}\) models Lewis et al. (2020) by framing intent classification as an NLI task. For dialogue state tracking, we compare with multiple strong zero-shot baselines in the single-domain setting: TRADE Wu et al. (2019), MA-DST Kumar et al. (2020), TransferQA Lin et al. (2021) and T5Dep Wang et al. (2022). \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{Attraction} & \multicolumn{2}{c|}{Hotel} & \multicolumn{2}{c|}{Restaurant} & \multicolumn{2}{c|}{Taxi} & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c}{**Average**} \\ & **JGA** & **Slot-F1** & **JGA** & **Slot-F1** & **JGA** & **Slot-F1** & **JGA** & **Slot-F1** & **JGA** & **Slot-F1** & **JGA** & **Slot-F1** \\ \hline TRADE & 20.06 & – & 14.20 & – & 12.59 & – & 59.21 & – & 22.39 & – & 25.69 & – \\ MA-DST & 22.46 & – & 16.28 & – & 13.56 & – & 59.27 & – & 22.76 & – & 26.87 & – \\ TransferQA & 31.25 & – & 22.72 & – & 26.28 & – & 61.87 & – & 36.72 & – & 35.77 & – \\ T5Dep & 37.83 & – & 26.50 & – & 27.05 & – & **69.23** & – & 40.27 & – & 40.18 & – \\ \hline Modular (GPT-3.5, w/ all slots) & 30.23 & 65.38 & 26.77 & 76.28 & 48.28 & 82.90 & 56.22 & 75.33 & 53.75 & 83.64 & 42.02 & 78.60 \\ Modular (GPT-3.5, w/ domain slot) & 39.53 & 74.89 & 27.03 & 79.78 & 51.72 & 85.06 & 63.24 & 83.98 & 52.50 & 84.84 & 44.48 & 82.53 \\ Modular (GPT-4, w/ all slots) & 39.53 & 78.99 & 31.23 & **84.07** & 55.86 & 88.23 & 63.24 & 82.71 & **59.83** & 89.72 & 48.16 & 85.62 \\ Modular (GPT-4, w/ domain slot) & **46.51** & **81.13** & **31.76** & 83.42 & **56.90** & **88.47** & **65.96** & **84.33** & 52.50 & **89.73** & **48.35** & **85.82** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison on **zero-shot DST benchmark**. 
LLMs outperform all baselines in our benchmark. Baseline results are directly taken from their respective works. The best performances in each section are in **bold**.

For the response generation, we compare with modular and non-unified end-to-end TODS (e.g., systems having split decoder modules for DST and response generation), including SFN Mehri et al. (2019), LAVA Lubis et al. (2020), DAMD Zhang et al. (2020), MARCO Wang et al. (2020), MinTL Lin et al. (2020), HDSA Santra et al. (2021), RSTOD Cholakov and Kolev (2022), and BORT Sun et al. (2022).

### Key Takeaways

The evaluation results for intent classification, DST, and response generation are shown in Table 1, Table 2, and Table 3, respectively. We summarize the key insights as follows:

**LLMs outperform most baselines.** LLMs show significant improvements in intent classification and DST tasks compared to other zero-shot baselines and perform almost comparably to few-shot models in the intent classification task.

**LLMs offer better generalization and adaptable solutions to TOD.** Unlike fine-tuned models, LLMs approach all tasks in an autoregressive generation manner, allowing greater flexibility and scalability to adapt to other tasks and domains.

**LLMs generate responses that better reflect human preference.** Unlike other fine-tuned approaches, LLMs generate responses that are distinct from the gold responses, resulting in lower BLEU scores. Nevertheless, the responses from modular LLMs are more fluent, diverse, and human-like compared to baselines, while having competitive Inform and Success rates.

**LLMs do not solve multi-domain DST problems.** Despite their strong performance, LLMs often over-predict active slots, leading to errors in the _all slots_ setting. LLMs tend to mix up parallel slots over different domains, especially for either temporal or spatial information, e.g., destination and departure, leave time and arrival time, etc.

\begin{table} \begin{tabular}{l|c c c|c|c c c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Reference-based} & \multicolumn{2}{c|}{Fluency} & \multicolumn{3}{c|}{Lexical diversity} & \multicolumn{1}{c}{Human pref.} \\ & **BLEU** & **Inf.** & **Succ.** & **PPL** & **HDD** & **MATTR** & **MTLD** & **VOCD** & **USL-H** \\ \hline LAVA & 11.33 & 95.8 & **94.9** & **25.45** & 65.35 & 74.84 & 30.72 & 25.84 & 59.68 \\ SFN & 14.11 & **97.7** & 91.6 & 51.97 & 70.68 & 78.67 & 34.25 & 36.02 & 65.41 \\ DAMD & 14.94 & 78 & 68.7 & 58.41 & 71.45 & 78.09 & 29.08 & 37.57 & 62.62 \\ MARCO & 16.5 & 95.3 & 91.1 & 36.00 & 73.40 & 83.39 & 44.48 & 42.78 & 70.35 \\ MinTL & 18.39 & 85 & 80.8 & 49.77 & 71.31 & 82.76 & 38.99 & 37.26 & 65.36 \\ BORT & 16.75 & 91.1 & 88.3 & 53.45 & 70.94 & 81.82 & 38.41 & 36.28 & 66.00 \\ HDSA & **20.02** & 95.8 & 90.2 & 43.37 & 71.71 & 82.95 & 42.04 & 38.02 & 68.36 \\ RSTOD & 15.98 & 91.6 & 86.9 & 76.05 & 73.11 & 82.41 & 42.08 & 41.88 & 68.54 \\ \hline Modular (GPT-4) & 6.12 & 86.42 & 78.48 & 36.63 & **80.59** & **89.56** & **66.64** & **70.13** & **89.66** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison on **response generation**. Although lower in BLEU, responses by the LLM-powered modular TODS are more human-preferred. The reported results for the baselines are taken from their respective work. The best performances in each group are in **bold**.

## 3 InstructTODS: An Instruction-Based Zero-shot End-to-End TODS

By leveraging the insights from solving TODS subtasks in SS2.3, we develop the first zero-shot end-to-end framework that operates without any domain information (ontology) and requires no task-specific annotations such as dialogue state, system act, intent, etc. This method is not only cost-efficient but also alleviates the ontology constraint of LLMs in the modular DST task and promotes the strength of LLMs in generating better and more human-preferred responses.

Let us define a structured knowledge base (KB) as a set of tuples \(\mathcal{K}=\{(v_{1}^{a_{1}},...,v_{1}^{a_{k}}),...,(v_{p}^{a_{1}},...,v_{p}^{a_{k}})\}\) where \((a_{i})_{i=0}^{k}\) are the attributes of the KB, and \((v_{j}^{a_{i}})_{j=0}^{p}\) are all the values associated with the attribute \(a_{i}\).

We first define a naive modular LLM response generation approach that serves as a baseline, denoted as \(\mathbf{RG}_{naive}\)\({}^{4}\). \(\mathbf{RG}_{naive}\) generates the user response by taking the entire KB along with the dialogue context as input. In this approach, we rely on the ability of the LLM to parse the entire KB during inference while processing the dialogue context, in order to perform in-context retrieval and response generation at the same time. As such, we build the input \(x_{i}^{RG}=\mathbb{P}^{RG}(\mathcal{I}^{RG},f^{RG}(\mathcal{K}),\mathcal{D}_{i})\) where \(\mathbb{P}^{RG}(.)\) is the response generation input template, \(\mathcal{I}^{RG}\) denotes the instruction for response generation, and \(f^{RG}(\mathcal{K})\) refers to a textual transformation
As such, we build the input \(x_{i}^{RG}=\mathbb{P}^{RG}(\mathcal{I}^{RG},f^{RG}(\mathcal{K}),\mathcal{D}_{i})\) where \(\mathbb{P}^{RG}(.)\) is the response generation input template, \(\mathcal{I}^{RG}\) denotes the instruction for response generation and \(f^{RG}(\mathcal{K})\) refers to a textual transformation \begin{table} \begin{tabular}{l|c c c|c|c c c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Reference-based} & \multicolumn{2}{c|}{Fluency} & \multicolumn{3}{c|}{Lexical diversity} & \multicolumn{1}{c}{Human pref.} \\ & **BLEU** & **Inf.** & **Succ.** & **PPL** & **HDD** & **MATTR** & **MTLD** & **VOCD** & **USL-H** \\ \hline LAVA & 11.33 & 95.8 & **94.9** & **25.45** & 65.35 & 74.84 & 30.72 & 25.84 & 59.68 \\ SFN & 14.11 & **97.7** & 91.6 & 51.97 & 70.68 & 78.67 & 34.25 & 36.02 & 65.41 \\ DAMD & 14.94 & 78 & 68.7 & 58.41 & 71.45 & 78.09 & 29.08 & 37.57 & 62.62 \\ MARCO & 16.5 & 95.3 & 91.1 & 36.00 & 73.40 & 83.39 & 44.48 & 42.78 & 70.35 \\ MinTL & 18.39 & 85 & 80.8 & 49.77 & 71.31 & 82.76 & 38.99 & 37.26 & 65.36 \\ BORT & 16.75 & 91.1 & 88.3 & 53.45 & 70.94 & 81.82 & 38.41 & 36.28 & 66.00 \\ HDSA & **20.02** & 95.8 & 90.2 & 43.37 & 71.71 & 82.95 & 42.04 & 38.02 & 68.36 \\ RSTOD & 15.98 & 91.6 & 86.9 & 76.05 & 73.11 & 82.41 & 42.08 & 41.88 & 68.54 \\ \hline Modular (GPT-4) & 6.12 & 86.42 & 78.48 & 36.63 & **80.59** & **89.56** & **66.64** & **70.13** & **89.66** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison on **response generation**. Although lower in BLEU, responses by the LLM-powered modular TODS are more human-preferred. The reported results for the baselines are taken from their respective work. The best performances in each group are in **bold**. of the KB where we filter unnecessary information and values that are too long as they are not needed to accomplish the user goal. In this approach, the bottleneck resides in the context window limit of the LLMs. Unlike other approaches, InstructTODS aims to make the best use of the LLM abilities to perform end-to-end tasks in zero-shot settings without the need for additional modular NLU and DST models, allowing zero-cost adaptation to various domains with no parameter update. In general, in order to process the dialogue history and interact with the KB, InstructTODS introduces two concepts, i.e., proxy belief state and action thought. The results from KB and the dialogue history are then fed as a context to the LLM for generating the user response. In the following paragraphs, we describe each component of InstructTODS in more detail. Proxy Belief StateWe generate a proxy belief state \(\tilde{B}_{i}=\mathbb{P}^{BS}(\mathcal{D}_{i})\) from the dialogue history where \(\mathbb{P}^{BS}(.)\) denotes the prompt template and \(\mathcal{D}_{i}\) the dialogue context. \(\tilde{B}_{i}\) encapsulates everything that the user is looking for in natural language at this point of the dialogue. Note that, the proxy belief state does not need any prior knowledge about the domain nor any ontology to operate (e.g. domain, trackable slots, values, types of information, etc.). The proxy belief state is directly used to interact with the KB in a multi-turn fashion. KB InteractionTo interact with the KB, we generate an Action thought \(A=\mathbb{P}^{act}(\tilde{B}_{i},(a_{i})_{i=0}^{k})\) where \(\mathbb{P}^{act}(.)\) is the template for action generation and \((a_{i})_{i=0}^{k}\) the attributes of the KB. 
By providing the existing attributes of the KB at this step, we ground the LLM to accurately translate the belief state into information that can be queried from the KB, while filtering out unnecessary data. The action thought serves as an intermediary to leverage the code generation ability of the LLM by generating a query \(Q=\mathbb{P}^{KB}(A,\mathcal{K})\), where \(\mathbb{P}^{KB}(.)\) is the template for code generation. The output from the KB is then parsed by the LLM to extract relevant information, denoted as \(I\), presented in natural language, which provides a summary of the KB interaction. It also determines whether the current action thought has been fulfilled. If it remains unanswered, a new action thought is generated based on the extracted information, and the process repeats until a stopping criterion is reached, indicating that no relevant knowledge is found in the KB.

**Response Generation.** Once the KB interaction concludes, the final information, together with the original dialogue context, is passed to the model to generate the response \(Y=\mathbb{P}^{RG}(I,\mathcal{D}_{i})\), where \(\mathbb{P}^{RG}\) represents the response generation template and \(I\) the final information from the KB interaction. In the case where no knowledge is found in the KB, the LLM prompts the user to provide additional information. We provide the prompt templates in Appendix D. The depiction of how the InstructTODS framework works is presented in Figure 2.

Figure 2: Overview of **InstructTODS**, a framework to utilize LLMs for zero-shot end-to-end task-oriented dialogue.

## 4 Experiment settings

**Baselines.** Our framework is compared to other end-to-end unified TODS approaches that perform end-to-end TODS using a unified text-to-text paradigm through a single generalized text generation model, i.e., SimpleTOD (Hosseini-Asl et al., 2020), PPTOD (Su et al., 2022), Soloist (Peng et al., 2021), UBAR (Yang et al., 2021), AuGPT (Kulhanek et al., 2021), and Galaxy (He et al., 2022). In addition, as described in SS3, we add the naive version of the LLM response generation approach, which is fed the full KB (\(\mathbf{RG}_{naive}\)), as an additional baseline to better evaluate the effectiveness of our framework.

**Datasets.** We evaluate the end-to-end zero-shot capability on MultiWOZ 2.1 (MWOZ) (Eric et al., 2020; Lewkowycz et al., 2022). We split the evaluation into two settings, i.e., single-domain and multi-domain evaluation settings, where we show the capability of LLMs to tackle more complex TODS tasks in zero-shot end-to-end settings.

**Automatic Evaluation.** For evaluating the end-to-end framework, we measure the per-domain Inform and Success rates, and the BLEU (Papineni et al., 2002), Inform rate, and Success rate (Eric et al., 2020) for all domains. The evaluation metrics are computed on the delexicalized responses to avoid favoring models that provide more information than others and to focus solely on the vocabulary used for the response generation. Additionally, we also incorporate an automatic human-likability score, namely USL-H (Phy et al., 2020).

**Human Evaluation.** We conduct an extensive human evaluation to measure the capability of LLMs in conducting zero-shot end-to-end unified TOD. Specifically, we conduct two human evaluations, which measure: 1) the informativeness, helpfulness, and humanness of the generated responses, and 2) the information correctness and hallucination rate of our InstructTODS.
For evaluating informativeness, helpfulness, and humanness, we ask 3 annotators to rate the quality of the response using a 4-point Likert scale (see Appendix B). The system is helpful if it answers the user's request while pushing the conversation towards goal completion, informative if the system provides enough related information while answering the user, and human if the generated answer is fluent and human-preferred. For measuring the incorrectness and the hallucination rate, the metrics are evaluated by a single TOD expert. The incorrectness and hallucination rate are measured by manually checking the ratio of correct, incorrect, and hallucinated entities provided in the generated responses. We conduct the human evaluation by taking 50 generated responses from all the models and the gold responses.

## 5 Results and Analysis

### Automatic Evaluation

Our automatic evaluation is shown in Table 4. In general, we find a similar trend with the modular LLMs, where LLMs produce lower BLEU scores (\(\sim\)4 BLEU against \(\sim\)15 BLEU) with competitive Inform and Success rates compared to other end-to-end unified TODS baselines. Note that, as mentioned in SS2.3, LLMs often generate responses that are completely different from the gold responses, hence producing low automatic evaluation scores. Nevertheless, the low automatic evaluation scores do not sufficiently reflect the capability of InstructTODS. We will further elaborate on this in SS5.2, raising a question of the sufficiency of evaluating TODS quality using only a single gold response. Some comparative generation samples between the different models can be found in Appendix C.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c|c c c} \hline \hline & \multicolumn{2}{c|}{Attraction} & \multicolumn{2}{c|}{Hotel} & \multicolumn{2}{c|}{Restaurant} & \multicolumn{2}{c|}{Taxi} & \multicolumn{2}{c|}{Train} & \multicolumn{3}{c}{**All**} \\ **Model** & **Inf.** & **Succ.** & **Inf.** & **Succ.** & **Inf.** & **Succ.** & **Inf.** & **Succ.** & **Inf.** & **Succ.** & **BLEU** & **Inf.** & **Succ.** \\ \hline SOLOIST & **100** & **90.90** & **90.00** & **85.00** & 78.30 & 70.00 & **100** & **100** & 81.80 & 78.80 & 13.58 & 88.80 & 84.30 \\ UBAR & **100** & **90.90** & 85.00 & 70.00 & 91.70 & 83.30 & **100** & 90.00 & 90.90 & 84.80 & 15.05 & 91.90 & 82.10 \\ AUGPT & 90.90 & 81.80 & 71.70 & 60.00 & 81.70 & 73.30 & **100** & 84.00 & **97.00** & **93.90** & 15.56 & 86.10 & 76.20 \\ GALAXY & 90.90 & 72.70 & 81.70 & 76.70 & 91.70 & 83.80 & **100** & **100** & 93.90 & **93.90** & **18.10** & 91.00 & **86.10** \\ PPTOD & 81.80 & 81.80 & 71.70 & 71.70 & 86.70 & **86.70** & **100** & **100** & 89.20 & 84.80 & 16.44 & 89.20 & 84.80 \\ \hline RG\({}_{naive}\) & 81.80 & 36.37 & **90.00** & 83.33 & **96.70** & 83.33 & **100** & 89.80 & **97.00** & 63.67 & 3.95 & **94.90** & 82.16 \\ \hline InstructTODS & 72.70 & 54.55 & 85.00 & 75.00 & 91.70 & 73.33 & **100** & 89.80 & 90.90 & 72.73 & 3.94 & 90.70 & 76.20 \\ \hline \hline \end{tabular} \end{table} Table 4: **Task completion performance comparison. InstructTODS has competitive Inform and Success rates compared to other end-to-end fine-tuned TODS baselines. Bold represents the highest score in each column.**

### Informativeness, Helpfulness, and Humanness of InstructTODS

The results for our human evaluation are shown in Figure 3 for InstructTODS in comparison with the naive approach, the gold responses, and the two best-performing baselines in task completion (i.e., Galaxy and PPTOD).
From the results, we show that InstructTODS is more informative, helpful, and human-like than the two fine-tuned end-to-end baselines by a noticeable margin. For both helpfulness and humanness, InstructTODS also outperforms \(\text{RG}_{naive}\) and the gold response. Aligning with the human evaluation results, the generated responses by our framework also have higher human preference scores, as shown in Figure 4, even higher than the gold responses. \(\text{RG}_{naive}\) is the most informative, which is expected as the model processes the entire KB for information; however, the quality of the information differs greatly, as shown in SS5.3.

Figure 3: Human evaluation comparison on informativeness (**left**), helpfulness (**center**), and humanness (**right**).

Figure 4: InstructTODS has higher human preference scores than the gold responses and baselines.

### Incorrectness and Hallucination

We show the results for incorrectness and hallucination of the LLM-generated responses in Figure 5. While a sample can be incorrect, e.g., if the LLM database interaction fails, the LLMs do not necessarily generate unfaithful information. InstructTODS is more robust than naively employing the LLMs, improving the correctness by 15% and showing 11% hallucination, half the amount of \(\text{RG}_{naive}\). We observe that some types of information are more prone to hallucination, notably time and address. This bias towards temporal and spatial information aligns with our observation of LLMs' performance in DST (SS2.3).

Figure 5: Human evaluation on correctness, incorrectness, and hallucination for \(\text{RG}_{naive}\) and InstructTODS.

### LLMs on Multi-Domain TOD

While it is possible to use InstructTODS in multi-domain settings with distinct KBs per domain, as we see in Figure 6, the performance degrades quickly for Success, and slightly less for Inform, as the number of domains increases. While fine-tuned end-to-end baselines operate with only one KB at a given turn by tracking the active domain through either state changes Peng et al. (2021); Yang et al. (2021) or slot names Kulhanek et al. (2021), our zero-shot framework does not assume any external knowledge or ontology information. As such, all KBs are provided at each turn, and because the attributes of different KBs overlap in MWOZ, InstructTODS often queries incompatible information from the proxy belief state (e.g., "food" and "destination" at the same time) that belongs to different KBs. Hence, the multi-domain degradation is largely due to KB interaction failure.

Figure 6: End-to-end TODS performance degrades as the number of active domains in the dialogue increases.
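Before turning to related work, a highly simplified sketch of how one InstructTODS turn puts the components of SS3 together; the `llm` callable and the abbreviated prompts are hypothetical stand-ins for the actual templates in Appendix D:

```python
def instruct_tods_turn(dialogue, kb, llm, max_steps=3):
    """One dialogue turn of the InstructTODS loop (Section 3), as a sketch.

    dialogue: the dialogue context as a string.
    kb: a list of row dicts sharing the same attribute keys.
    llm: a hypothetical text-in/text-out callable; the prompts below
    abbreviate the templates P^BS, P^act, P^KB, and P^RG.
    """
    # Proxy belief state: an unstructured summary of what the user wants.
    belief = llm(f"Summarize what the user is looking for:\n{dialogue}")
    info = "No relevant knowledge found."
    for _ in range(max_steps):
        # Action thought grounded on the KB attributes.
        action = llm(f"Attributes: {list(kb[0])}. Belief: {belief}. "
                     "What should be queried from the KB?")
        # LLM-generated query over the KB; the paper generates code for
        # this step, which is simplified here to a natural-language query.
        result = llm(f"KB rows: {kb}. Query: {action}. Return matching rows.")
        info = llm(f"Action: {action}. Result: {result}. "
                   "Summarize the relevant information, or say UNANSWERED.")
        if "UNANSWERED" not in info:
            break  # stopping criterion: the action thought is fulfilled
    return llm(f"Context: {dialogue}\nKnowledge: {info}\nReply to the user.")
```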
End-to-end TODSEarly approaches for end-to-end TODS employ template responses in a retrieval or generation setting (Zhao et al., 2017; Eric et al., 2017; Wu et al., 2019). While other approaches inject KBs directly into the model to perform end-to-end generation (Madotto et al., 2018, 2020). A more recent end-to-end TODS tackles end-to-end response generation in a single sequence prediction problem (Hosseini-Asl et al., 2020; Yang et al., 2021; Peng et al., 2021) with an autoregressive model. These approaches still mostly leverage TOD data (belief states, system acts, etc.) during generation. As general pre-trained LMs were shown to be effective for TODS (Mehri et al., 2019; Lubis et al., 2020; Lin et al., 2020), several subsequent works have explored pre-training approaches directly tailored towards TODS (Zhang et al., 2020; Su et al., 2022; He et al., 2022). To the best of our knowledge, prior works require a structured format of dialogue states, system acts, and/or template responses, whereas InstructTODS alleviates such needs by incorporating an unstructured proxy belief state, which requires no domain-specific knowledge nor ontology to operate, allowing zero-shot adaptation to various TOD domains. ### Zero-Shot Generalization of LLMs LLMs have shown remarkable zero-shot generalization capabilities in various NLP tasks (Brown et al., 2020; Scao et al., 2022; Chowdhery et al., 2022; Thoppilan et al., 2022). This is further improved through instruction tuning (Wei et al., 2021; Sanh et al., 2021; Wei et al., 2022; Chung et al., 2022; Longpre et al., 2023; Cahyawijaya et al., 2023), which enables a better generalization to unseen tasks, and reinforcement learning with human feedback (Christiano et al., 2017; Ouyang et al., 2022; Bai et al., 2022), which enables a better alignment of human preferences. The zero-shot generalization ability of LLMs has also been explored in more specific cases, e.g., multiple choice question answering (Robinson and Wingate, 2023), biomedical NLP (Fries et al., 2022), reasoning (Bang et al., 2023), low-resource languages (Cahyawijaya et al., 2023; Asai et al., 2023), code-switching (Yong et al., 2023; Zhang et al., 2023). LLMs for TODSRecent works explore the applicability of LLMs in solving modular TOD tasks (Bang et al., 2023; Hudecek and Dusek, 2023) and a pipeline manner (Hosseini-Asl et al., 2020; Su et al., 2022; Peng et al., 2021; Yang et al., 2021; Kulhanek et al., 2021; He et al., 2022). Additionally, Bang et al. (2023) inspect ChatGPT's capability for zero-shot end-to-end TODS, however, it is limited to only \(\sim\)1% of the test set available. Therefore, to the best of our knowledge, our work is the first to comprehensively study the utilization of LLMs for zero-shot end-to-end TODS. ## 7 Conclusion In this paper, we introduce InstructTODS, an off-the-shelf framework to effectively perform end-to-end TODS in zero-shot utilizing LLMs. We compare InstructTODS to several state-of-the-art fully fine-tuned end-to-end TODS and show that InstructTODS manages to guide the conversation towards goal completion similarly to the fine-tuned systems on MWOZ while generating answers that are more informative, helpful, and human-like than previous approaches. Furthermore, we investigate the capability of LLMs in performing various TOD Figure 6: End-to-end TODS performance degrades as the number of active domains in the dialogue increases. 
subtasks in zero-shot settings, demonstrating better diversity and human preference on response generation, and state-of-the-art zero-shot results on dialogue state tracking and intent classification. ## 8 Limitation Generalization to Other DatasetsIn our work, we only assess the effectiveness of InstructTODS on MultiWoZ 2.1 dataset, whose size is a magnitude higher than other TODS datasets Eric et al. (2020). We conjecture that the generalization to other datasets will follow the same trend as described in SS5, where it excels in the single-domain setting while still struggling in the multi-domain setting. We expect future work to extend the assessment on InstructTODS to other datasets and domains. Generalization to Other LanguagesIn recent years, various task-oriented dialogue systems in languages other than English have been introduced, such as CrossWoZ Zhu et al. (2020), BiTOD Lin et al. (2021), GlobalWoZ Ding et al. (2022), and COD Majewska et al. (2023). As suggested in prior works evaluating LLMs in low-resource languages Bang et al. (2023); Asai et al. (2023); Cahyawijaya et al. (2023); Workshop et al. (2023); Kabra et al. (2023); Zhang et al. (2023), we conjecture that the performance in other languages follow the general trend in LLMs where the performance in low-resource languages will be lower compared to the high-resource languages. Future work might explore and further extend methods for improving the generalization of InstructTODS to other languages. Generalization to Other LLMsIn this work, we only explore two proprietary LLMs which display strong performance on various NLP tasks, i.e., GPT-3.5 and GPT-4. Despite the lack of transparency of these models, we expect that when other publicly available LLMs achieve the same performance as these proprietary LLMs, a similar capability of zero-shot end-to-end TODS will emerge. We expect future work to explore the generalization of InstructTODS and its improvement in other LLMs. ## 9 Ethics Statement Our research endeavors to develop an off-the-shelf framework for zero-shot end-to-end Task-Oriented Dialogue Systems (TODS) using Large Language Models (LLMs). It is important to note that this study does not involve the use of any sensitive data and the experimental evaluation is conducted on publicly available datasets. To ensure the quality of our results, we have employed crowdsourcing for the human evaluation of the generated dialogue responses. While our study does not raise any ethical concerns regarding privacy, confidentiality, or bias, we recognize that the use of LLMs in dialogue systems may have ethical implications related to potential biases in the training data and the generated responses. Therefore, we emphasize the importance of ongoing research toward developing ethical guidelines and best practices for the use of LLMs in dialogue systems. In line with our commitment to transparency and reproducibility, we will be releasing our code publicly. We believe that this will encourage open and collaborative research towards the development of more ethical and effective dialogue systems.
2310.02416
Bag of Tricks for Fully Test-Time Adaptation
Fully Test-Time Adaptation (TTA), which aims at adapting models to data drifts, has recently attracted wide interest. Numerous tricks and techniques have been proposed to ensure robust learning on arbitrary streams of unlabeled data. However, assessing the true impact of each individual technique and obtaining a fair comparison still constitutes a significant challenge. To help consolidate the community's knowledge, we present a categorization of selected orthogonal TTA techniques, including small batch normalization, stream rebalancing, reliable sample selection, and network confidence calibration. We meticulously dissect the effect of each approach on different scenarios of interest. Through our analysis, we shed light on trade-offs induced by those techniques between accuracy, the computational power required, and model complexity. We also uncover the synergy that arises when combining techniques and are able to establish new state-of-the-art results.
Saypraseuth Mounsaveng, Florent Chiaroni, Malik Boudiaf, Marco Pedersoli, Ismail Ben Ayed
2023-10-03T20:28:09Z
http://arxiv.org/abs/2310.02416v2
# Bag of Tricks for Fully Test-Time Adaptation ###### Abstract Fully Test-Time Adaptation (TTA), which aims at adapting models to data drifts, has recently attracted wide interest. Numerous tricks and techniques have been proposed to ensure robust learning on arbitrary streams of unlabeled data. However, assessing the true impact of each individual technique and obtaining a fair comparison still constitutes a significant challenge. To help consolidate the community's knowledge, we present a categorization of selected orthogonal TTA techniques, including small batch normalization, stream rebalancing, reliable sample selection, and network confidence calibration. We meticulously dissect the effect of each approach on different scenarios of interest. Through our analysis, we shed light on trade-offs induced by those techniques between accuracy, the computational power required, and model complexity. We also uncover the synergy that arises when combining techniques and are able to establish new state-of-the-art results. ## 1 Introduction Deep neural networks perform well at inference time when test data comes from the same distribution as training data. However, they become inaccurate when there is a distribution shift [25]. This distribution shift can be caused by natural variations [14] or corruptions [9, 10]. Test-Time Adaptation (TTA) aims at addressing this problem by adapting a model pre-trained on source data to make better predictions on shifted target data [2, 13, 28]. In this work, we focus on the particular case of Fully Test-Time Adaptation (Fully TTA) [22, 30, 36]. In this setting, the adaptation is done source-free and relies only on: i) a model pre-trained on data from a source domain and ii) unlabeled test data from a shifted target domain. Separating the training phase from the adaptation phase is particularly relevant for privacy-oriented applications where the training data is not available or cannot be disclosed. Fully TTA is also online. Test data is received as a continuous stream and the model adaptation is done on-the-fly as data is received. This makes the setup more realistic and closer to real-world "in-the-wild" scenarios where information about potential distribution shifts or about the quantity of data to be received is not necessarily available. Most of the recent solutions proposed to address Fully TTA are follow-ups of the seminal work Tent [30] and aim at solving problems inherent to the online and unsupervised aspect of Fully TTA. For example, [32, 36] deal with the problem of class-imbalanced data streams, [22, 35] improve the quality of the predictions used to adapt a model by selecting samples with a low entropy or leveraging the predictions of augmented samples, and [18, 35, 36] investigate different normalization schemes to stabilize the adaptation process. Figure 1: **Classification Accuracy as a function of Batch Size for different methods and architectures on ImageNet-C.** In this work, we choose to focus on small batches (16 and below, white zone). As the batch size decreases, the model performances remain stable until a batch size of 32 and then drop significantly for methods running on ResNet50-BN. Results reported are averaged over 15 corruptions and 3 runs. Confidence intervals are too small to be displayed. However, most of the tricks and techniques are presented in combination with others, which makes it difficult to identify their impact on the final model performance.
Some techniques might already help when applied alone whereas others might only work or work better in combination with other tricks. As this area of research is very active and developing fast, we aim in this study at disentangling the impact of some techniques recently proposed and evaluate objectively their contribution to the performance of Fully TTA models. We also propose possible improvements in specific cases. **Contribution.** To address the Fully Test-Time Adaptation problem, we analyzed the following techniques: i) Usage of batch renormalization or batch-agnostic normalization ii) Class re-balancing iii) Entropy-based sample selection iv) Temperature scaling. Those analyses were made considering small batch sizes (16 and below), which are closer to the potentially uncontrollable batch sizes of real-world scenarios. Our experimental results show that those techniques are already boosting the performance at test time when used alone, but that combining all of them leads to the best classification accuracy compared to a vanilla Tent method and 2 recent state-of-the-art methods on 4 different datasets. Additionally, to the accuracy improvement, the selected techniques also bring other interesting benefits like higher and more stable performance with small batch sizes and a reduced computational load by adapting the model with a reduced set of selected data. The remainder of the paper is structured as follows. We conduct a literature review in Section 2. Then we analyze each trick separately in a different section: architecture design in Sec. 4, class rebalancing in Sec 5, sample selection in Sec. 6 and network calibration in Sec. 7 before showing results on combinations of tricks in Sec. 8 and results on other datasets in Sec. 9. Finally, we conclude about the presented work in Sec. 10. ## 2 Related Work **Test-time adaptation (TTA).** Test-time adaptation assumes access to a pre-trained model and aims at leveraging unlabeled test instances from a (shifted) target distribution to make better predictions. Proposed methods usually employ one or a combination of the following techniques: _self-training_ to reinforce the model's own predictions through entropy minimization [30] or Pseudo-Labelling schemes [15], _manifold regularization_ to enforce smoother decision boundaries through data augmentation [35] or clustering [4], _feature alignment_ to mitigate covariate shift by batch norm statistic adaptation [16, 27], and _meta-learning_ methods [6] that try to meta-learn the best adaptation loss. **TTA in the broader literature.** Although recently introduced [30], TTA shares important motivations and similarities with earlier or concurrent settings that are source-free domain adaptation (SFDA) [3, 17, 34] and test-time training (TTT) [23, 28]. In SFDA, methods also leverage samples from the target distribution of interest but have no access to source data, and the evaluation is still done on held-out test data. In other words, TTA is the transductive counterpart of SFDA. On the other hand, TTT works by constructing an auxiliary task that can be solved both at training and adaptation time and therefore, unlike TTA, is not agnostic to the training procedure or to the model architecture. **Fully TTA.** TTA is of particular interest for online applications, in which the model receives samples as a stream. Operational requirements for online applications break crucial properties of the vanilla TTA setting e.g. large batch size or class balance. 
Under such operational requirements, standard TTA methods degrade, underperforming the non-adapted baseline and even degenerating to random performance in some cases [22, 4]. Multiple regularization procedures have been proposed to address such shortcomings. Among them: (i) improved feature alignment procedures that interpolate between source and target statistics [18, 20, 36], thereby improving overall estimation and decreasing reliance upon specific test batches; (ii) sample re-weighting [21, 36] to alleviate the influence of class biases; (iii) improving the loss' intrinsic robustness to noisy samples, either encouraging convergence towards local minima [22] or preventing large deviations from the base model's predictions [4, 21]. Recently, [29] explored the update of the model weights using Hebbian learning instead of just updating the BatchNorm layers. As this line of work grows, the current study provides an objective evaluation of how recently proposed ingredients translate into actual robustness for Fully TTA and quantifies the progress made so far, as well as pinpoints possible areas of improvement. A detailed comparison of the Fully TTA setting with the other TTA settings is available in the supplementary material. ## 3 Experimental Setup In this section, we present the details of our experimental setup. Firstly, we introduce the datasets used, then the different methods we want to compare and the different models, and finally, we explain the evaluation metric and protocol. For reproducibility purposes, the links to the code and model weights used in our experiments are provided in the supplementary material. ### Datasets We evaluate the different methods on several datasets used by prior SFDA or TTA studies: (i) ImageNet-C [10] is a variant of ImageNet [26] where 19 corruption types and 5 levels of severity were applied. For our experiments, we report results using 15 corruption types at the most severe level of corruption (level 5) and keep the 4 remaining extra ones (speckle noise, Gaussian blur, spatter, and saturate) as "validation" corruptions to select hyperparameters following [36] and [22]. (ii) ImageNet-Rendition [9] consists of 30,000 images distributed in 200 ImageNet classes obtained by the rendition of ImageNet images like art, cartoons, tattoos, or video games. (iii) ImageNet-Sketch [31] is a dataset of 50,000 images distributed in all ImageNet classes and obtained by querying Google Images with "sketch of __" where __ is the name of an original ImageNet class. Images are in the black and white color scheme. (iv) Finally, VisDA2017 [24] is a dataset of over 72K images distributed in 12 ImageNet classes and containing a mix of synthetic and real domain images. In the sections where we analyze tricks (Class rebalancing Sec. 5, Sample Selection Sec. 6, Calibration Sec. 7, and Tricks combination Sec. 8), all experiments are done using ImageNet-C. ### Methods In this work, we chose to analyze the following tricks and methods: (i) Tent [30] is a seminal work in Fully Test-Time Adaptation and is the first work to use an entropy-based loss in the adaptation process (a minimal sketch of this update is given below). (ii) SAR [22] is a state-of-the-art method in Fully TTA and proposes a method to select the most useful samples based on their entropy. (iii) Delta [36] is also a state-of-the-art method in Fully TTA and focuses on addressing the problem of online class rebalancing. (iv) In our experimental setup, we call BoT the model combining the best tricks selected in the different experiments.
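For reference, the core update shared by all these methods is Tent's entropy minimization restricted to the affine parameters of the normalization layers [30]. The following PyTorch sketch is a minimal, hedged rendition of that update; the optimizer choice and learning rate are placeholders, not the exact experimental configuration:

```python
import torch
import torch.nn as nn

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax prediction, H(p) = -sum_i p_i log p_i."""
    log_p = logits.log_softmax(dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def collect_norm_affine_params(model: nn.Module):
    """Affine (scale/shift) parameters of the normalization layers,
    the only parameters Tent updates."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            for p in (m.weight, m.bias):
                if p is not None:
                    params.append(p)
    return params

def tent_step(model: nn.Module, batch: torch.Tensor, optimizer) -> torch.Tensor:
    """One online adaptation step: predict, minimize mean entropy, update."""
    logits = model(batch)
    loss = softmax_entropy(logits).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()  # online predictions accumulated for evaluation

# Usage (placeholder learning rate):
# optimizer = torch.optim.SGD(collect_norm_affine_params(model), lr=2.5e-4)
```

SAR, Delta, and BoT can be viewed as this loop with additional ingredients (sample selection, class reweighting, temperature scaling) plugged into the loss.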
### Models In our experiments, we use different architectures depending on the datasets tested. In experiments with ImageNet-C, we follow [22] and use two variants of the ResNet50 architecture [8] and a ViT-Base/16 transformer architecture. The first ResNet50 variant (ResNet50-BN) uses batch normalization layers [12] whereas the second one (ResNet50-GN) uses group normalization layers [33]. The ViT-Base/16 transformer uses layer normalization [1] and will be referred to as VitBase-LN. In experiments with VisDA2017, we follow [34] and [3] and use a ResNet101 architecture. The number of parameters of each architecture is available in the supplementary material. ### Evaluation metrics To evaluate the different approaches, we use the classification accuracy metric. To compute this metric, we follow [22] and [36] and consider the accumulated predictions of the test samples after each model update. In other words, we do not compute the classification accuracy on the whole test set after the model has seen all test samples, but online after each batch. Results reported are averaged over 3 runs. ## 4 Architecture and Normalization In this section, we investigate the influence of different architectures and normalizations on the model performance. Normalization in particular has been an active area of research in the TTA literature. [36] shows that in the case of a distribution shift, normalization statistics are inaccurate within test mini-batches and the gradient of the loss can show strong fluctuations that are potentially destructive for the model. To address this issue, [18] proposes to linearly combine the statistics learned during training with the statistics computed at test time to reduce the gap between the source domain and the target domain. However, this method is not applicable in Fully TTA as it requires access to labeled source data to learn the linear combination in a post-training phase before using it at test time. [19, 35] also use a linear combination of the training statistics and the test statistics to handle the distribution shift. [36] adapts batch renormalization [11] to test-time adaptation. Batch normalization parameters are updated using a combination of the mini-batch statistics and moving averages of these statistics as in the original paper, but in the TTA context, statistics and moving averages are computed using test batches. Another way to address the issues inherent to batch normalization is to use group or layer normalization instead, as investigated in [22]. As the normalization differs a lot between works, this study aims at disentangling its effect from the other techniques used. In our experiments, we follow [22] and use the following architectures: i) a ResNet50 with BatchNorm layers (ResNet50-BN); ii) a ResNet50 with GroupNorm layers (ResNet50-GN); iii) a ViT-Base/16 with LayerNorm layers (VitBase-LN); iv) to complete our pool of models to compare, we also include a variant of ResNet50-BN where batch normalization is replaced by batch renormalization (ResNet50-BReN). **Experimental results.** In Fig. 2, we observe that the performance of the Tent method on a ResNet50-BN architecture drops when the batch size becomes small, with a particularly low performance when the batch size is 2 (\(5.53\%\) accuracy) or 1 (\(0.14\%\) accuracy). Intuitively, those results can be explained by the fact that batch normalization layers normalize the activations based on the statistics of the current batch.
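To make this sensitivity concrete, the toy computation below (our illustration, not from the paper) measures how the spread of the per-batch mean estimates that BatchNorm relies on grows as the batch shrinks:

```python
import torch

torch.manual_seed(0)
# Activations of one channel: true mean 1, true std 2.
stream = torch.randn(10_000) * 2.0 + 1.0

for batch_size in (64, 16, 4, 2):
    n = (len(stream) // batch_size) * batch_size
    batches = stream[:n].view(-1, batch_size)
    means = batches.mean(dim=1)
    # Spread of the per-batch mean estimates used by BatchNorm at test time.
    print(f"batch={batch_size:3d}  std of batch means = {means.std().item():.3f}")
# The spread grows roughly like 1/sqrt(batch_size), so very small batches
# feed the normalization layers noisy, unrepresentative statistics.
```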
When the batch becomes too small, the computed statistics have a high variance, are no longer representative of the test distribution, and are not informative enough about the domain shift. However, we see that using batch renormalization instead of standard batch normalization improves the performance of a ResNet50 model and avoids a complete collapse of the model when the batch size is 1. Also in Fig. 2, we observe that Tent performance on architectures with batch-agnostic normalization layers such as GroupNorm or LayerNorm is more stable and less impacted by a reduction of the batch size. ## 5 Class rebalancing In this section, we explore the problem of online class imbalance in the context of Fully TTA. This problem is strongly relevant in this setting as data is received as a continuous stream. In this case, there is no guarantee that classes will appear in a balanced way or that different classes will appear in a given batch, especially when the batch size becomes much smaller than the total number of classes in the dataset. Imbalanced data can be particularly detrimental to the model performance as shown in [22, 36, 32] and can lead in extreme cases to a model collapse to trivial solutions like assigning all samples to the dominant class. To evaluate methods in regard to this problem, we consider two approaches. In the first one, we follow the setup proposed in [22]. In this setup, the online imbalanced label distribution shift is simulated by controlling the order of the input samples using a dataset generated with the following sampling strategy: a probability vector \(Q_{t}(y)=[q_{1},q_{2},\ldots,q_{K}]\) is defined, where \(t\) is a time step and \(T\) is the total number of steps and is equal to \(K\), the total number of classes, and \(q_{k}=q_{max}\) if \(k=t\) and \(q_{k}=q_{min}\triangleq(1-q_{max})/(K-1)\) if \(k\neq t\). The ratio \(q_{max}/q_{min}\) represents the imbalance ratio. For ImageNet-C, at each time step \(t\in\{1,2,\ldots,T=K\}\), 100 images are sampled using \(Q_{t}(y)\), so in total the dataset contains \(100\times 1000\) images. An imbalance factor of 500000 is represented in Fig. 3 as \(\infty\) and represents a setup very close to adapting the model one class after the other. Then, in a second approach, we investigate the evolution of the classification accuracy of different models simply as a function of the batch size. We consider small batch sizes already as a factor of online class imbalance, as not all classes can be present in the same batch. We compare three methods: i) Tent without any class rebalancing method is used as the baseline. ii) SAR [22] is not a class rebalancing method per se, but the sample selection method introduced in this work is presented by the authors as a way to address the class imbalance problem. iii) DOT is the class-wise reweighting method proposed in [5], adapted to the context of test-time adaptation in [36]. The idea of DOT is to estimate the class frequencies in the test set by maintaining a momentum-based class-frequency vector \(z\in\mathbb{R}^{K}\), where \(K\) is the total number of classes, based on the prediction of the model for each sample seen previously. At inference time, each new sample receives a weight as a function of its pseudo label and the current \(z\) vector. A sample belonging to a rare class will receive a higher weight than a sample from a class seen more often. The DOT algorithm is detailed in the supplementary material; a minimal sketch is also given below. **Experimental results.** In Fig.
3, we can observe the following: i) On the ResNet50-BN architecture, the performance of all methods, for all batch sizes, drops as the imbalance factor increases. Batch normalization does not seem to be a suitable normalization method when the test set is unbalanced. ii) The performances of Tent and SAR are more stable when the imbalance factor varies on the ResNet50-GN architecture. On this architecture, DOT is the best-performing method as long as the batch size is still high and the imbalance factor is still low. However, DOT performance drops drastically when the batch size becomes very small or the imbalance factor is very high. iii) The best performances are obtained by the VitBase-LN architecture. Performances are stable for all methods when the imbalance factor increases for a batch size of 16 or 8, but decrease when the imbalance factor increases for lower batch sizes. Our main takeaways from Fig. 3 are that group normalization and layer normalization are less sensitive than batch normalization to imbalanced classes and that, even if DOT and SAR both perform better than Tent, the sample selection of SAR yields more stable performances in the case of small batch sizes and stronger class imbalance factors. In Fig. 4, we observe that the performance of all methods on ResNet50-BN drops when the batch size decreases. On ResNet50-GN and VitBase-LN, the classification accuracy remains stable when the batch size decreases for all models, DOT yielding the best results except when the batch size is 1. This particular case is explained in the next paragraph. Our main takeaways from Fig. 4 are that architectures with group or layer normalization are more suitable to handle small batch sizes and that the class rebalancing method DOT performs better than the sample selection method SAR for small batch sizes greater than 1. Figure 2: **Impact of Normalization, Architecture, and Batch Size on classification accuracy of the Tent method on ImageNet-C.** Using a batch renormalization layer leads to better performance than using vanilla batch normalization. Tent performance is more stable on architectures with batch-agnostic normalization like group or layer normalization. **Single point learning for the DOT method.** In Fig. 4, we observe that in the specific case of batch size 1, the performance of DOT drops to the level of Tent. This is because in DOT, the weight of each sample in a batch is normalized by the sum of all weights of this batch. So, when the batch size is 1, the sum of the weights of the batch is equal to the weight of the single sample of the batch. Thus, the normalization of the weight of this single sample by the sum of all weights of the batch gives a weight of 1 and brings us back to the same loss formulation as Tent. To address this issue, we propose to approximate the weight of a single sample in this particular case as if it was part of a bigger batch of size N. This approach does not require any additional processing time, as we can still infer the class of an input test sample immediately, and it is very cheap in terms of memory, as we do not need to save any sample in a queue but just the weights of the N previous samples, which are only scalars. In Tab. 1, we analyze the impact of buffers of different sizes on DOT performance on the different architectures when the batch size is 1. We can see that an additional buffer of size 2 yields a significant performance improvement.
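The following sketch renders the DOT-style reweighting and the single-sample buffer just described; the momentum constant and the exact weight definition are our assumptions, since the full algorithm is only given in the supplementary material of [36]:

```python
import torch

class DOTReweighter:
    """Sketch of DOT-style class reweighting with a single-sample buffer.
    The momentum value and weight definition are illustrative assumptions."""

    def __init__(self, num_classes: int, momentum: float = 0.95, buffer_size: int = 2):
        self.z = torch.full((num_classes,), 1.0 / num_classes)  # class-frequency vector
        self.momentum = momentum
        self.buffer = []          # weights of previous samples (scalars only)
        self.buffer_size = buffer_size

    def weights(self, probs: torch.Tensor) -> torch.Tensor:
        """Per-sample weights for a batch of softmax predictions (shape [B, K])."""
        pseudo = probs.argmax(dim=1)
        raw = 1.0 / (self.z[pseudo] + 1e-8)    # rare classes get larger weights
        if raw.numel() == 1:
            # Batch size 1: normalize as if the sample belonged to a batch of
            # size `buffer_size`, using the buffered weights of previous samples.
            self.buffer = (self.buffer + [raw.item()])[-self.buffer_size:]
            w = raw * len(self.buffer) / sum(self.buffer)
        else:
            w = raw * raw.numel() / raw.sum()  # normalize within the batch
        # Momentum update of the class-frequency estimate from the predictions.
        self.z = self.momentum * self.z + (1 - self.momentum) * probs.mean(dim=0)
        return w
```

The returned weights multiply the per-sample entropies before they are averaged into the adaptation loss.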
Higher buffers yield no additional improvement on ResNet50-BN and a performance decrease on ResNet50-GN and VitBase-LN. We assume that they lead to sample weights that are too noisy. ## 6 Sample selection In the previous sections, we explored standard mechanisms to address covariate shift (through normalization) and label shift (through class rebalancing). In this section, we go one step further and explore mechanisms that cast TTA as a noisy learning problem. In particular, we explore the sample selection method first proposed in [21] and analyzed more thoroughly afterwards in [22]. The main idea of this method is to select only reliable samples for the model adaptation. \begin{table} \begin{tabular}{l|l|l|l|l|l} BatchSize=1 & DOT & DOT+buff=2 & DOT+buff=4 & DOT+buff=8 & DOT+buff=16 \\ \hline ResNet50-BN & 0.14\(\pm\)0.00 & **20.31\(\pm\)0.02** & 20.31\(\pm\)0.02 & 20.31\(\pm\)0.02 & 20.31\(\pm\)0.02 \\ ResNet50-GN & 23.91\(\pm\)0.00 & **38.94\(\pm\)0.03** & 38.32\(\pm\)0.06 & 36.23\(\pm\)0.03 & 34.13\(\pm\)0.02 \\ VitBase-LN & 50.89\(\pm\)0.00 & **54.15\(\pm\)0.00** & 50.56\(\pm\)0.04 & 46.39\(\pm\)0.01 & 42.13\(\pm\)0.06 \\ \end{tabular} \end{table} Table 1: **Impact of an Additional Buffer on DOT performance on the different architectures on ImageNet-C in the single point learning scenario.** An additional buffer of size 2 yields a significant performance improvement. Higher buffer sizes can lead to noisy sample weights and yield no additional improvement on ResNet50-BN or a performance decrease on ResNet50-GN and VitBase-LN. Figure 4: **Impact of Architecture and Batch Size on the classification accuracy of different methods on ImageNet-C.** Batch-agnostic normalizations like group or layer normalization are more suitable to handle small batch sizes. Moreover, in this scenario, the class rebalancing method DOT performs better than the sample selection method of SAR. Figure 3: **Impact of Imbalance Factor, Architecture, and Batch Size on classification accuracy of different methods on ImageNet-C.** On ResNet50-BN, the performance of all models decreases when the imbalance factor increases. On ResNet50-GN, DOT and SAR are more efficient than Tent, but SAR is more stable with very small batch sizes and stronger imbalance factors. On VitBase-LN, Tent performs lower than DOT and SAR with a batch size of 4 and a moderate imbalance factor. However, DOT and SAR performance drops significantly for small batch sizes and strong imbalance factors. The number after the architecture in the legend is the batch size. Indeed, in [22], the authors show that samples with high entropy are more likely to have a strong and noisy gradient that is potentially harmful to the model performance. Furthermore, low-entropy samples contribute more to the model adaptation than high-entropy ones. However, there is no easy way to directly filter out samples with a strong gradient from the optimization process. So, instead, an entropy-based filtering method was proposed. More precisely, a threshold entropy \(E_{0}\) is defined as the maximum entropy \(\log K\) multiplied by a factor \(F\), which is a scalar with a value between 0 and 1, 1 meaning no selection at all. All samples with an entropy below this threshold \(F\log K\) are kept, whereas the others are discarded when computing the loss value used to update the model.
Formally, this filtering method can be expressed as a sample selection function \(S\): \[S(x)=\mathbb{1}_{\{E(x;\Theta)<E_{0}\}}(x) \tag{1}\] where \(\mathbb{1}_{\{\cdot\}}(\cdot)\) is an indicator function, \(E(x;\Theta)\) is the entropy of sample \(x\), and \(E_{0}\) is a threshold predefined as: \[E_{0}=F\log K \tag{2}\] where \(K\) is the total number of classes in the dataset and \(F\) is a real number in \([0,1]\). **Experimental results.** In Fig. 5, we can see that fine-tuning the selection threshold via the factor \(F\) can lead to a significant increase in the performances in all cases. We also observe that in the case of smaller batch sizes, the optimal value for \(F\) is smaller than the value of 0.5 recommended in [22] for a batch size of 64. Moreover, as mentioned in [22], another advantage of this method is that it requires less computational power to perform the adaptation, as fewer samples are used in the optimization. E.g., for the Gaussian noise corruption at severity level 5, on ResNet50-GN and with an entropy factor \(F\) of 0.4, the model forward passes 50K samples but keeps less than 13K after selection for the backward pass, which is only 26% of the whole dataset. ## 7 Calibration In this section, we investigate the problem of network calibration in the context of Fully TTA. The calibration of classification networks is a measure of the confidence of the predictions. It is of utmost importance in the context of Fully TTA, as it directly impacts the entropy of the predictions. Temperature scaling is a technique introduced in [7] to improve the calibration of under- or overconfident neural networks by correcting the logits in the softmax function. Formally, it is expressed as: \[\operatorname{softmax}_{\tau}(z)_{i}=\frac{e^{z_{i}/\tau}}{\sum_{j=1}^{K}e^{z_{j}/\tau}} \tag{3}\] where \(\tau\) is the temperature scaling factor, \(z\) is the logits vector of an input sample, \(i\) is a class index, and \(K\) is the total number of classes. A \(\tau\) value above 1 will lead to a higher entropy with a flattened distribution of the model predictions, whereas a \(\tau\) value smaller than 1 will lead to a low entropy with a more peaked prediction distribution. In the context of test-time adaptation, [6] shows that using temperature scaling improves the model accuracy after adaptation when using an entropy minimization-based method. [15] also shows that when meta-learning the optimal loss for test-time adaptation, the result is an entropy minimization loss with a temperature scaling factor. To determine the temperature scaling factor in our experiments, we follow [36] in the way hyperparameters are selected, using the 4 ImageNet-C validation corruptions. For each network architecture, we select the temperature scaling factor \(\tau\) for each validation corruption using a grid search on values between 0.5 and 1.5 with a step of 0.1 and keep the average of the 4 values. For the 3 network architectures considered, we obtain a temperature scaling factor of 1.2, which means that without correction, the models are too confident in their predictions. **Experimental results.** In Tab. 2, we observe that applying temperature scaling during adaptation leads to an increase in Tent performance on ResNet50-BN and VitBase-LN. On ResNet50-GN, the mean is slightly lower, but the standard deviation is significantly reduced, which means overall a better performance in terms of statistical significance. The performance increase is not very high when using temperature alone. However, as we will see in Sec. 8, it leads to higher performance when combined with other tricks.
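The selection rule of Eqs. (1)-(2) and the temperature scaling of Eq. (3) combine in a few lines. The sketch below (our own arrangement, with the \(F\) and \(\tau\) values discussed in the text) computes the entropy on temperature-scaled predictions before filtering; whether selection should use scaled or unscaled entropies is our assumption:

```python
import math
import torch

def selected_entropy_loss(logits: torch.Tensor, num_classes: int,
                          F: float = 0.4, tau: float = 1.2) -> torch.Tensor:
    """Entropy loss with the selection rule of Eqs. (1)-(2) and the
    temperature scaling of Eq. (3); a sketch, not the exact implementation."""
    log_p = (logits / tau).log_softmax(dim=1)    # Eq. (3): temperature-scaled predictions
    entropy = -(log_p.exp() * log_p).sum(dim=1)  # per-sample entropy E(x; Theta)
    e0 = F * math.log(num_classes)               # Eq. (2): threshold E_0 = F log K
    mask = entropy < e0                          # Eq. (1): keep only reliable samples
    if not mask.any():
        return entropy.sum() * 0.0               # no reliable sample: no update
    return entropy[mask].mean()                  # backward pass on selected samples only
```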
Figure 5: **Impact of Sample Selection and Architecture on classification accuracy of different methods on ImageNet-C. The best results are circled in red. The optimal threshold varies as a function of the architecture and the batch size and is lower for the smaller batch sizes than the values 0.5 or 0.4 recommended in [21] for a batch size of 64.** ## 8 Tricks combinations In this section, we investigate the performance of Tent using different combinations of the tricks presented in the previous sections. For ResNet50-BN, we consider the usage of batch renormalization as an essential trick when dealing with very small batch sizes, as presented in Sec. 4, and always integrate it in the different trick combinations tested. In the ResNet50-BN section of Tab. 3, we first report the results already presented in Fig. 2 to see the performance improvement with batch renormalization. Then we consider all the possible combinations of 2 of the tricks presented and finally, we consider the combination of all the tricks. For ResNet50-GN and VitBase-LN, we also present results considering all the possible combinations of 2 of the tricks presented previously and then combining all the tricks. **Experimental results.** In Tab. 3, we observe that when using a ResNet50-BN network, the best pair of tricks is the class rebalancing method DOT combined with the entropy-based sample selection. The best results overall are obtained when using this pair with a temperature scaling factor, in other words, when using all tricks together. In this case, compared to Tent, we obtain an average improvement of +17.08% accuracy over all batch sizes. In the case of a ResNet50-GN architecture, the best pair of tricks is class rebalancing combined with the temperature scaling factor. Surprisingly, combining temperature scaling with sample selection performs better than vanilla Tent but much lower than the other pairs of tricks. We assume that, as the temperature scaling changes the entropy of the test samples, a finer tuning of the sample selection margin should be done to ensure that samples useful for the model adaptation are not discarded. The best performances are obtained using all tricks. In this case, we obtain an average improvement of +19.92% accuracy over all batch sizes compared to Tent. When considering the VitBase-LN architecture, we can see that the two pairs of tricks (class rebalancing with temperature, and class rebalancing with sample selection) are close over all the batch sizes and yield the best results among the pairs of tricks. The overall best results are obtained when combining all tricks. Doing this leads to an average improvement compared to Tent of +7.66% over all batch sizes. Our main takeaway from this series of experiments is that the best results are obtained when combining all tricks (class rebalancing, sample selection, and temperature scaling), and this for the 3 architectures and the different batch sizes considered. Among the different architectures, VitBase-LN has the best classification accuracy when combining all the tricks and on all the batch sizes tested. ## 9 Comparison to other methods and on other datasets In this final experimental section, we compare the performance of BoT (i.e. Tent with all the tricks presented in this article) to a vanilla Tent and 2 recent state-of-the-art methods, SAR [22] and Delta [36].
This comparison is performed on different network architectures and different datasets: ResNet50-BN, ResNet50-GN, VitBase-LN for ImageNet-C, ImageNet-Rendition and ImageNet-Sketch, and ResNet101 for VisDA2017. Experimental resultsIn Tab. 4, we can see that on the ImageNet-C dataset, BoT obtains better results than a vanilla Tent, and the two state-of-the-art methods for all the batch sizes considered. Interesting to see is the collapse of SAR performance for very small batch sizes (2 and 1) on ResNet50-BN that we do not observe with Delta due to the usage of batch renormalization. If the performance increase by using all the tricks is not significant on ResNet50-BN (+0.78% accuracy on average versus Delta), it is much more noticeable on ResNet50-GN (+4.31% ac \begin{table} \begin{tabular}{l|c|c|c|c|c|c} & 16 & 8 & 4 & 2 & 1 \\ \hline ResNet50-BN & 39,43\(\pm\)0.13 & 33.30\(\pm\)0.04 & 20.81\(\pm\)0.08 & 5.53\(\pm\)0.01 & 0.14\(\pm\)0.00 \\ ResNet50-BN+ temp & **39,45\(\pm\)0.05** & **33,86\(\pm\)0.04** & **20,84\(\pm\)0.07** & **6,11\(\pm\)0.01** & **0,15\(\pm\)0.00** \\ \hline ResNet50-GN & **24,15\(\pm\)0.05** & **24,00\(\pm\)0.05** & **23,99\(\pm\)0.05** & **23,92\(\pm\)0.05** & **23,90\(\pm\)0.05** \\ ResNet50-GN+ temp & 24,01\(\pm\)0.07** & **23,87\(\pm\)0.07** & 23,82\(\pm\)0.05** & 23,76\(\pm\)0.09** & 23,74\(\pm\)**0.19** \\ \hline VitBase-LN & 50,97\(\pm\)0.07 & 50,90\(\pm\)0.04 & 50,91\(\pm\)0.07 & 50,89\(\pm\)0.08 & 50,89\(\pm\)0.04 \\ \hline VitBase-LN + temp & **52,84\(\pm\)0.07** & **52,81\(\pm\)0.05** & **25,76\(\pm\)0.06** & **25,76\(\pm\)0.05** & **52,77\(\pm\)0.02** \\ \end{tabular} \end{table} Table 2: **Impact of Temperature on classification accuracy of Tent method performance on different architecture on ImageNet-C. Using a temperature scaling factor increases the mean accuracy on ResNet50-BN and VitBase-LN. 
On ResNet50-GN, using temperature decreases slightly the mean classification accuracy but decreases also the standard deviation, which means that the model is better with respect to statistical significance.** \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} & \multicolumn{2}{c|}{Tent +} & \multicolumn{4}{c}{Batch Size} \\ \cline{3-8} \multicolumn{1}{c|}{} & BR CR CR SS T & 16 & 8 & 4 & 2 & 1 \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & & & 39.40\(\pm\)0.13 & 33.30\(\pm\)0.04 & 20.81\(\pm\)0.08 & 5.53\(\pm\)0.01 & 0.14\(\pm\)0.00 \\ & ✓ & & 43.26\(\pm\)0.01 & 41.39\(\pm\)0.06 & 37.72\(\pm\)0.05 & 30.84\(\pm\)0.04 & 20.25\(\pm\)0.01 \\ & ✓ & ✓ & 45.89\(\pm\)0.04 & 43.70\(\pm\)0.05 & 39.17\(\pm\)0.05 & 31.44\(\pm\)0.04 & **20.31\(\pm\)0.02** \\ & ✓ & ✓ & 45.17\(\pm\)0.26 & 43.03\(\pm\)0.01 & 39.02\(\pm\)0.02 & 31.60\(\pm\)0.02 & 20.26\(\pm\)0.01 \\ & ✓ & ✓ & 46.57\(\pm\)0.04 & 44.46\(\pm\)0.01 & 39.95\(\pm\)0.02 & 31.65\(\pm\)0.02 & 20.30\(\pm\)0.02 \\ & ✓ & ✓ & **46.90\(\pm\)0.12** & **44.90\(\pm\)0.00** & **40.42\(\pm\)0.14** & **32.03\(\pm\)0.03** & **20.31\(\pm\)0.02** \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & & 24.15\(\pm\)0.55 & 24.06\(\pm\)0.05 & 23.99\(\pm\)0.07 & 23.92\(\pm\)0.05 & 23.90\(\pm\)0.05 \\ & ✓ & ✓ & 46.35\(\pm\)0.07 & 45.89\(\pm\)0.04 & 44.77\(\pm\)0.02 & 42.07\(\pm\)0.03 & 39.91\(\pm\)0.04 \\ & ✓ & ✓ & 26.85\(\pm\)0.07 & 27.34\(\pm\)0.05 & 29.03\(\pm\)0.09 & 30.19\(\pm\)0.20 & 27.20\(\pm\)0.04 \\ & ✓ & ✓ & 45.78\(\pm\)0.09 & 45.31\(\pm\)0.11 & 44.21\(\pm\)0.01 & 43.33\(\pm\)0.03 & 38.94\(\pm\)0.03 \\ & ✓ & ✓ & **46.50\(\pm\)0.05** & **46.07\(\pm\)0.05** & **45.02\(\pm\)0.01** & **43.22\(\pm\)0.01** & **37.90\(\pm\)0.04** \\ \hline \multirow{8}{*}{ \begin{tabular}{} \end{tabular} } & & 50.97\(\pm\)0.07 & 50.90\(\pm\)0.04 & 50.91\(\pm\)0.07 & 50.89\(\pm\)0.06 & 50.89\(\pm\)0.04 \\ & ✓ & & 59.26\(\pm\)0.05 & 59.20\(\pm\)0.02 & 58.97\(\pm\)0.05 & 58.52\(\pm\)0.03 \\ \cline{1-1} & ✓ & ✓ & 57.59\(\pm\)0.04 & 58.11\(\pm\)0.14 & 57.88\(\pm\)0.07 & 57.02\(\pm\)0.10 & 55.10\(\pm\)0.07 \\ \cline{1-1} & ✓ & ✓ & 59.31\(\pm\)0.00 & 59.22\(\pm\)0.04 & 59.96\(\pm\)0.07 & 57.51\(\pm\)0.78 & 54.15\(\pm\)0.03 \\ \cline{1-1} & ✓ & ✓ & **59.80\(\pm\)0.07** & **59.77\(\pm\)0.04** & **59.99\(\pm\)0.03** & **59.04\(\pm\)0.05** & **56.15\(\pm\)0.03** \\ \hline \end{tabular} \end{table} Table 3: **Effect of Tricks Combination on model performance. Best results are obtained when combining all tricks and this for the 3 architectures and the different batch sizes considered. Among the different architectures, VitBase-LN has the best classification accuracy in all the different setups.** curacy on average versus Delta) and VitBase-LN (+1.53% accuracy in average versus Delta). In Tab. 5, we also observe that BoT performs the best in all cases. Interesting to note is that results are more stable over the different batch sizes with ResNet50-GN compared to ResNet50-BN, which is in line with observations from previous experiments. Delta performs better than SAR but worse than BoT. The performance increase of BoT compared to Delta is similar on ResNet50-BN and ResNet50-GN (respectively +0.85% and +0.87% accuracy) but reaches +1.23% accuracy on VitBase-LN. In Tab. 6, we make the same observations on ImageNet-Sketch as on the other ImageNet variants. ResNet50-BN performance drops when the batch size becomes small. In all cases, Delta performs better than SAR but not as good as BoT. BoT performs best in all cases. 
The performance increase of BoT versus Delta is +0.72% accuracy on ResNet50-BN, +1.32% accuracy on ResNet50-GN, and +1.03% accuracy on VitBase-LN. In Tab. 7, we observe that also for the VisDA2017 dataset, results are in line with previous experiments. Delta performs better than Tent and SAR but not as well as BoT. The performance improvement of BoT versus Delta is +0.36% accuracy on ResNet101. ## 10 Conclusion In this work, we addressed the Fully Test-Time Adaptation problem when dealing with small batch sizes by analyzing the following tricks and methods: i) Usage of Batch renormalization or batch-agnostic normalization ii) Class re-balancing iii) Entropy-based sample selection iv) Temperature scaling. Our experimental results show that if those tricks used alone already yield an improved classification accuracy, using them in pairs is even better, and the best results are obtained by combining them all. By doing that, we significantly improve the current state-of-the-art across 4 different image datasets in terms of prediction performances. Furthermore, the selected tricks bring additional benefits concerning the computational load: i) Using group normalization instead of batch normalization in ResNet50 yields more stable results for the same number of total parameters ii) using the entropy-based sample selection improves the adapted model performance by using fewer samples. We hope that this study will be useful for the community and that the presented tricks and techniques will be integrated into future baselines and benchmarks. ## 11 Acknowledgment This research was supported by the National Science and Engineering Research Council of Canada (NSERC), via its Discovery Grant program, and enabled in part by support provided by Calcul Quebec and the Digital Research Alliance of Canada. 
\begin{table} \begin{tabular}{l|l|c|c|c|c|c} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{Batch Size} \\ \cline{3-6} & & 16 & 8 & 4 & 2 & 1 \\ \hline \multirow{4}{*}{\begin{tabular}{c} \(\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}}\)} & \multirow{2}{*}{27.8\(\pm\)0.30} & 22.47\(\pm\)0.40 & 10.71\(\pm\)0.42 & 2.94\(\pm\)0.08 & 0.13\(\pm\)0.00 \\ & SAR & 1.05\(\pm\)0.29 & 26.73\(\pm\)0.76 & 16.80\(\pm\)0.07 & 6.72\(\pm\)0.05 & 0.13\(\pm\)0.00 \\ & Delta & 31.92\(\pm\)0.11 & 30.36\(\pm\)0.16 & 27.32\(\pm\)0.12 & 22.56\(\pm\)0.16 & **15.85\(\pm\)**0.04 \\ & BoT & **33.24\(\pm\)**0.13 & **31.50\(\pm\)**0.21 & **28.16\(\pm\)**0.12 & **22.86\(\pm\)**0.16 & **15.85\(\pm\)**0.04 \\ \hline \multirow{4}{*}{\begin{tabular}{c} \(\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\mathbf{\mathbf{\mathbf{ }}}}}}}}}}}}}\) \\ \end{tabular} } & Test & 23.04\(\pm\)0.20 & 22.95\(\pm\)0.38 & 22.92\(\pm\)0.38 & 22.92\(\pm\)0.35 \\ & SAR & 32.11\(\pm\)0.30 & 32.26\(\pm\)0.07 & 31.89\(\pm\)0.16 & 31.16\(\pm\)0.20 & 31.64\(\pm\)0.22 \\ & Delta & 34.50\(\pm\)0.20 & 34.26\(\pm\)0.09 & 33.57\(\pm\)0.18 & 31.56\(\pm\)0.08 & 30.93\(\pm\)0.07 \\ & BoT & **35.77\(\pm\)**0.35 & **35.49\(\pm\)**0.19 & **34.91\(\pm\)**0.15 & **33.19\(\pm\)**0.10 & **32.07\(\pm\)**0.09 \\ \hline \multirow{4}{*}{ \begin{tabular}{c} \(\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{ 5}}}}}}}}}}}\) \\ \end{tabular} } & Test & 5.83\(\pm\)0.32 & 5.69\(\pm\)0.20 & 5.99\(\pm\)0.44 & 5.38\(\pm\)0.25 & 5.51\(\pm\)0.49 \\ & SAR & 25.40\(\pm\)0.65 & 25.80\(\pm\)0.64 & 27.87\(\pm\)0.08 & 32.89\(\pm\)0.57 & 30.68\(\pm\)0.09 \\ & Delta & 38.67\(\pm\)0.08 & 38.50\(\pm\)0.08 & 38.18\(\pm\)0.11 & 37.18\(\pm\)0.14 & 33.90\(\pm\)0.08 \\ & BoT & **39.69\(\pm\)**0.08 & **39.68\(\pm\)**0.06 & **39.50\(\pm\)**0.09 & **38.64\(\pm\)**0.03 & **34.09\(\pm\)**0.10 \\ \end{tabular} \end{table} Table 6: **Results on ImageNet-Sketch.** BoT performs best in all case. The performance increase of BoT versus Delta is +0.72% accuracy on ResNet50-BN, +1.32% accuracy on ResNet50-GN and +1.03% accuracy on VitBase-LN. \begin{table} \begin{tabular}{l|l|c|c|c|c|c} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{Batch Size} \\ \cline{3-6} & & 16 & 8 & 4 & 2 & 1 \\ \hline \multirow{4}{*}{\begin{tabular}{c} \(\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbfmathbf{ }}}}}}}}}}}\)} \\ \end{tabular} } & Tent & 40.80\(\pm\)0.11 & 37.75\(\pm\)0.21 & 29.70\(\pm\)0.21 & 14.24\(\pm\)0.05 & 0.56\(\pm\)0.00 \\ & SAR & 42.11\(\pm\)0.10 & 38.95\(\pm\)0.21 & 30.07\(\pm\)0.08 & 16.13\(\pm\)0.12 & 0.57\(\pm\)0.00 \\ & Delta & 43.11\(\pm\)0.15 & 41.80\(\pm\)0.20 & 39.64\(\pm\)0.16 & 35.17\(\pm\)0.08 & 26.75\(\pm\)0.01 \\ & BoT & **46.48\(\pm\)**0.14 & **43.12\(\pm\)**0.11 & **40.61\(\pm\)**0.21 & **35.55\(\pm\)**0.00 & **26.75\(\pm\)**0.00 \\ & BoT & **46.80\(\pm\)**0.14 & **43.21\(\pm\)**0.12 & **40.61\(\pm\)**0.22 & **35.55\(\pm\)**0.00 & **26.75\(\pm\)**0.00 \\ \hline \multirow{4}{*}{ \begin{tabular}{c} \(\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{p
2306.01821
Missing levels in intermediate spectra
We derive an expression for the nearest-neighbor spacing distribution $P(s)$ of the energy levels of quantum systems with intermediate dynamics between regularity and chaos and missing levels due to random experimental errors. The expression is based on the Brody distribution, the most widely used for fitting mixed spectra as a function of one parameter. By using Monte Carlo simulations of intermediate spectra based on the $\beta$-Hermite ensemble of Random Matrix Theory, we evaluate the quality of the formula and its suitability for fitting purposes. Estimations of the Brody parameter and the fraction of missing levels can be obtained by a least-square two-parameter fitting of the experimental $P(s)$. The results should be important to distinguish the origins of deviations from RMT in experimental spectra.
María Hita-Pérez, Laura Muñoz, Rafael A. Molina
2023-06-02T13:12:12Z
http://arxiv.org/abs/2306.01821v2
# Missing levels in intermediate spectra ###### Abstract We derive an expression for the nearest-neighbor spacing distribution \(P(s)\) of the energy levels of quantum systems with intermediate dynamics between regularity and chaos and missing levels due to random experimental errors. The expression is based on the Brody distribution, the most widely used for fitting mixed spectra as a function of one parameter. By using Monte Carlo simulations of intermediate spectra based on the \(\beta\)-Hermite ensemble of Random Matrix Theory, we evaluate the quality of the formula and its suitability for fitting purposes. Estimations of the Brody parameter and the fraction of missing levels can be obtained by a least-square two-parameter fitting of the experimental \(P(s)\). The results should be important to distinguish the origins of deviations from RMT in experimental spectra. ## 1 Introduction Statistical analysis of spectra is a very important tool for understanding the dynamics of quantum and wave systems [1, 2, 3]. Quantifying the level repulsion, for example, it is possible to study the transition between integrable and chaotic quantum or wave systems [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], between localized and extended states in disordered systems [15, 16, 17, 18] and between ergodic and many-body localized phases in many body strongly correlated systems [19, 20, 21, 22, 23]. This relationship is based on the identification of complex wave or quantum spectra with Random Matrix Theory (RMT) [24]. These ideas initially came from nuclear physics but are now backed up with a semiclassical justification and a great body of numerical evidence behind them [3, 25]. Experimental studies of the spectral statistics of different quantum and wave systems from this perspective are numerous [2, 26, 27, 28, 13]. However, there are intrinsic limitations that have prevented a more wide use of these tools in experimental spectra. For a meaningful statistical analysis one needs complete sequences with no missing levels, no mixing of symmetries and long enough to have sufficient statistics. When the spectrum to be analyzed does not fulfill these conditions the reliability of the statistical analysis can be compromised, leading to incorrect conclusions, as explained next. In chaotic systems with time-reversal symmetry the appropriate RMT ensemble is the GOE (Gaussian Orthogonal Ensemble) whereas regular systems show spectral fluctuations described by Poisson statistics, corresponding to uncorrelated spectra [29]. Thus a transition from chaos to regularity manifests in the spectral fluctuations as a transition from GOE to Poisson statistics. But this kind of transition can also be induced in a GOE spectrum by the loss of correlations caused by missing levels or mixed symmetries, taking its spectral statistics also towards Poisson. Thus, when statistical analysis of a spectrum throws an intermediate result between GOE and Poisson it is quite difficult and requires a complex analysis probably taking into account different statistics [30] to distinguish the actual origin of the intermediate behavior, whether it is due to a true mixed dynamics between chaos and regularity or to missing levels and/or mixed symmetries in an actual GOE spectrum. 
Moreover, if it is due to both reasons it is even more difficult (if not impossible, unless one has some previous theoretical or experimental information on the dynamics of the system or the completeness of the spectrum) to find out which is the main one or estimate the weight of each, that is, to estimate the degree of chaos and the number of missing levels or mixed symmetries independently. There has been a line of research that tries to circumvent some of these limitations and even take advantage of them in order to estimate the number of missing levels and the number of mixed symmetries in a particular experimental level sequence [5, 31, 32, 33, 34]. These approaches are based on RMT, and the key point in order to be able to extract reliable information about missing levels or mixed symmetries is to assume that the spectral statistics coincide with the GOE (or the corresponding RMT ensemble for each symmetry class), that is, that the actual experimental spectrum is chaotic. For example, by analyzing the spectral statistics of chaotic nuclei it is possible to estimate how isospin symmetry is broken [35]. The number of missing levels in experimental sequences can also be estimated using these techniques. It is possible, then, to correct the value of experimentally obtained level densities [36, 37, 38, 39]. To sum up, there are tools available to determine the degree of chaos assuming that the spectrum is complete (no missing levels) and tools to estimate the number of missing levels assuming that the spectrum is chaotic (GOE). But to the best of our knowledge, there are no tools with which one can obtain both parameters at the same time: there has been no attempt to study the effect of missing levels on the spectral statistics of a mixed system between chaos and regularity. It is the purpose of this paper to fill the gap. We have been able to derive a two-parameter formula for the nearest-neighbor spacing distribution \(P(s)\) that allows an independent estimation of the degree of chaos and the fraction of experimentally observed levels. Through Monte Carlo simulations of mixed spectra based on the \(\beta\)-Hermite ensemble we study the accuracy of our formula and perform some tests to prove its usefulness by fitting the \(P(s)\) of these simulated spectra. We then show that our formula is useful for estimating at the same time the chaoticity and the fraction of missing levels of an experimental level sequence when this fraction is not very large. ## 2 \(P(s)\) for missing levels in intermediate systems The two extremes (regular-Poisson and chaotic-GOE) in the nearest-neighbor spacing distribution (NNSD) are universal. However, there is not a universal transition for the intermediate dynamics. Two general behaviors in this transition can be distinguished in terms of level repulsion: fractional level repulsion and level repulsion of only a fraction of levels [40, 41]. Fractional level repulsion, where the NNSD behaves as \(P(s)\sim s^{q}\) for small values of \(s\), is the most common. It can be described phenomenologically quite well by the Brody [4] and the Izrailev [6] distributions. Both describe this kind of repulsion at small spacings but are slightly different at large spacings. They are phenomenological distributions, although the Brody distribution can be derived from a power-law ansatz for the level repulsion function [42]. We choose the Brody distribution in this paper as it is the most extensively used for fitting results in intermediate systems.
Brody's formula is given by:

\[P_{B}(s)=as^{q}\exp(-bs^{q+1}), \tag{1}\]

where \(a=(q+1)b\), \(b=\left[\Gamma(\frac{q+2}{q+1})\right]^{q+1}\), and \(q\) is the so-called Brody parameter or mixing parameter, which interpolates between the Poisson distribution for \(q=0\) (regularity):

\[P_{P}(s)=\exp(-s), \tag{2}\]

and the Wigner distribution of the GOE for \(q=1\) (chaos):

\[P_{W}(s)=\frac{\pi}{2}s\exp\left(-\frac{\pi}{4}s^{2}\right). \tag{3}\]

The NNSD is the first of a series of spacing distributions between neighbors of order \(n\), \(p(n,s)\), with \(n=0\) corresponding to the NNSD. These distributions are defined after the unfolding of the spectrum, where the level density is locally normalized to unity [3]. A proper unfolding is not an easy task [43] but, for our purposes, we assume that the spectrum has been properly unfolded. Moreover, for our calculations here with \(\beta\)-ensemble spectra the unfolding is straightforward, as the level densities are known from RMT. For unfolded levels these distributions need to be normalized and centered with the following conditions:

\[\int_{0}^{\infty}p(n,s)ds=1, \tag{4}\]

\[\langle s\rangle=\int_{0}^{\infty}sp(n,s)ds=n+1. \tag{5}\]

When levels are missing from a sequence, the higher-order spacing distributions start to play a role. Using simple statistical considerations, Bohigas and Pato derived the general formula for the NNSD of a sequence with only a fraction \(f\) of observed levels (the fraction of missing levels being then \(1-f\)) as a function of the \(p(n,s)\) distributions, assuming that the missing levels are randomly distributed [31]:

\[p(s)=\sum_{k=0}^{\infty}(1-f)^{k}p(k,s/f). \tag{6}\]

The interpretation of this formula is easy, as the prefactor \((1-f)^{k}\) for \(k\geq 1\) is the probability that there are \(k\) missing levels between two given levels. An observed nearest-neighbor spacing (left-hand side) can really be a 2nd-neighbor spacing with 1 missing level in between, a 3rd-neighbor spacing with 2 missing levels in between, ..., a \((k+1)\)th-neighbor spacing with \(k\) missing levels in between, ..., in the original spectrum (right-hand side). One then needs to use \(s/f\) instead of \(s\), given that the level sequence with a fraction \(f\) of observed levels is unfolded to \(\langle s\rangle=1\), so spacings measured in the units of the original complete sequence are a factor \(1/f\) larger. From Eq. (6) and the form of the higher-order spacing distributions for the GOE, one can see that the small-spacing behavior of the transition from GOE to Poisson when we have fractional level repulsion (\(P(s)\sim s^{q}\)) is very different from the transition to Poisson due to missing levels, where we still have \(P(s)\sim s\). This can be seen in Fig. 1, where we show the evolution with the type of dynamics of Eq. (1) for different values of \(q\) compared to the evolution with missing levels of Eq. (6) for different values of \(f\). This gives us hope that we can derive a two-parameter distribution that can distinguish the origin of the transition through a fit to experimental spectra. Given Brody's nearest-neighbor spacing distribution (1), Abul-Magd and Simbel [44] derived a general expression for the level-spacing distributions for higher-order neighbors \(p(n,s)\) using a statistical treatment proposed by Engel, Main and Wunner [42]. They obtained the following generalization of Brody's formula:

\[p_{B}(n,s)=a_{n}s^{(n+1)q}\exp{(-b_{n}s^{(n+1)q+1})}\int_{0}^{s}p(n-1,x)\exp(b_{n}x^{(n+1)q+1})dx, \tag{7}\]

for \(n\geq 1\).
Here \(a_{n}=[(n+1)q+1]b_{n}\), and both constants, \(a_{n}\) and \(b_{n}\), are determined by the normalization conditions (4) and (5) for \(p(n,s)\). This expression presents many complications when trying to calculate high-order spacings, because an exact solution for \(b_{n}\) can only be found when the system is regular (\(q=0\)), where \(b_{n}=1\); in all other cases it has to be calculated by solving the integrals numerically. Those calculations are too heavy for practical purposes, but we have found that Gaussian approximations for \(n\geq 3\) work very well. This trick was already used in the original work of Bohigas and Pato for missing levels in the GOE [31]. We have therefore proceeded as follows. For \(n=1\), a generalization of the Brody formula for the next-nearest-neighbor distribution is obtained by substituting equation (1) for \(p_{B}(0,s)\) into equation (7):

\[p_{B}(1,s)=aa_{1}\int_{0}^{s}s^{2q}x^{q}\exp\left[-bx^{q+1}-b_{1}(s^{2q+1}-x^{2q+1})\right]dx, \tag{8}\]

where \(a_{1}=(2q+1)b_{1}\). The parameter \(b_{1}\) was parametrized by Abul-Magd and Simbel and has the form:

\[b_{1}(q)=\frac{1}{1+2.7q+3.5q^{2}}. \tag{9}\]

For \(n=2\), following the same steps and substituting equation (8) into equation (7), we find that

\[p_{B}(2,s)=aa_{1}a_{2}\int_{0}^{s}\int_{0}^{x}s^{3q}x^{2q}y^{q}\exp\left[-by^{q+1}-b_{1}(x^{2q+1}-y^{2q+1})-b_{2}(s^{3q+1}-x^{3q+1})\right]dydx. \tag{10}\]

Here \(a_{2}=(3q+1)b_{2}\), and \(b_{2}\) has to be obtained numerically from the normalization conditions. As Abul-Magd and Simbel did for \(b_{1}\), we have calculated the values of \(b_{2}\) using Monte Carlo simulations for a scan of values of \(q\) (\(q=0.1,0.2,\ldots 0.9,1\)) and parametrized the result in the form:

\[b_{2}(q)=\frac{1}{1+6.7q+1.3q^{2}+51q^{3}}. \tag{11}\]

Figure 1: Comparison of the evolution of \(P(s)\) from GOE to Poisson with the mixing parameter \(q\) of Eq. (1) for the transition with the type of dynamics (a) and with the fraction \(f\) of observed levels of Eq. (6) (b).

For \(n\geq 3\), we found that the \(n\)th-order level-spacing distribution mostly follows a Gaussian distribution, making the calculation of \(b_{n}\) not worth the time it takes, especially taking into account that one should keep many terms to obtain a good approximation to the infinite sum in expression (6); in this work we have kept up to 150 terms. Thus, instead of using expression (7) to describe the \(p(n,s)\) distribution, we now use a Gaussian approximation given by

\[p(n,s)=\frac{1}{\sqrt{2\pi}\sigma(n,q)}\exp\left[-\frac{(s-\mu)^{2}}{2\sigma(n,q)^{2}}\right],\qquad n\geq 3, \tag{12}\]

where \(\mu\) is the mean of the distribution, \(\mu=\langle s\rangle=n+1\), and \(\sigma(n,q)\) is its standard deviation. From now on, \(\sigma(n,q)\) will be calculated using spline interpolation from a battery of \(\sigma(n,q)\) values previously obtained from spectra with different \(q\) values (\(q=0.1,0.2,\ldots 0.9,1\)). Once we have an expression for \(p(n,s)\), we are ready to propose an expression to describe the nearest-neighbor spacing distribution, \(P(s)\), associated with incomplete spectra of intermediate and chaotic systems.
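For concreteness, these building blocks can be evaluated numerically. The following Python sketch implements \(p(n,s)\) for \(n=0,1,2\) from Eqs. (1), (8) and (10), with the parametrizations (9) and (11), and the Gaussian approximation (12) for \(n\geq 3\). This is a minimal sketch assuming NumPy/SciPy; the function names are ours, not the paper's, and the standard deviation \(\sigma(n,q)\) must be supplied externally (in the paper it comes from a spline-interpolated table of simulated values).

```python
# Building blocks p(n, s) of the missing-levels NNSD; a sketch, not the
# authors' code. Requires NumPy and SciPy.
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.special import gamma

def brody_constants(q):
    """Constants of Eq. (1): b = Gamma((q+2)/(q+1))^(q+1), a = (q+1) b."""
    b = gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)
    return (q + 1.0) * b, b

def p0(s, q):
    """Eq. (1): the Brody NNSD."""
    a, b = brody_constants(q)
    return a * s**q * np.exp(-b * s**(q + 1.0))

def p1(s, q):
    """Eq. (8) with the parametrization (9) for b1."""
    a, b = brody_constants(q)
    b1 = 1.0 / (1.0 + 2.7 * q + 3.5 * q**2)
    a1 = (2.0 * q + 1.0) * b1
    integrand = lambda x: x**q * np.exp(
        -b * x**(q + 1.0) - b1 * (s**(2 * q + 1) - x**(2 * q + 1)))
    return a * a1 * s**(2 * q) * quad(integrand, 0.0, s)[0]

def p2(s, q):
    """Eq. (10) with the parametrization (11) for b2."""
    a, b = brody_constants(q)
    b1 = 1.0 / (1.0 + 2.7 * q + 3.5 * q**2)
    a1 = (2.0 * q + 1.0) * b1
    b2 = 1.0 / (1.0 + 6.7 * q + 1.3 * q**2 + 51.0 * q**3)
    a2 = (3.0 * q + 1.0) * b2
    integrand = lambda y, x: x**(2 * q) * y**q * np.exp(
        -b * y**(q + 1.0)
        - b1 * (x**(2 * q + 1) - y**(2 * q + 1))
        - b2 * (s**(3 * q + 1) - x**(3 * q + 1)))
    val, _ = dblquad(integrand, 0.0, s, lambda x: 0.0, lambda x: x)
    return a * a1 * a2 * s**(3 * q) * val

def p_gauss(n, s, sigma_nq):
    """Eq. (12), n >= 3; sigma_nq comes from the interpolated table in the text."""
    return np.exp(-(s - (n + 1.0))**2 / (2.0 * sigma_nq**2)) / (
        np.sqrt(2.0 * np.pi) * sigma_nq)
```

These scalar building blocks are exactly what gets summed, with argument \(s/f\) and weights \((1-f)^{n}\), in the formula below; the series is truncated after enough terms (the paper keeps up to 150).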
Substituting equations (1), (8), (10) and (12) into equation (6), we obtain:

\[\begin{split} P(s,f,q)&=a\left(\frac{s}{f}\right)^{q}\exp\left[-b\left(\frac{s}{f}\right)^{q+1}\right]\\ &+(1-f)aa_{1}\int_{0}^{\frac{s}{f}}\left(\frac{s}{f}\right)^{2q}x^{q}\exp\left\{-bx^{q+1}-b_{1}\left[\left(\frac{s}{f}\right)^{2q+1}-x^{2q+1}\right]\right\}dx\\ &+(1-f)^{2}aa_{1}a_{2}\int_{0}^{\frac{s}{f}}\int_{0}^{x}\left(\frac{s}{f}\right)^{3q}x^{2q}y^{q}\exp\Biggl\{-by^{q+1}-b_{1}\left[x^{2q+1}-y^{2q+1}\right]\\ &-b_{2}\left[\left(\frac{s}{f}\right)^{3q+1}-x^{3q+1}\right]\Biggr\}dydx\\ &+\sum_{n\geq 3}\frac{(1-f)^{n}}{\sqrt{2\pi}\sigma(n,q)}\exp\left\{-\frac{\left[\left(\frac{s}{f}\right)-n-1\right]^{2}}{2\sigma(n,q)^{2}}\right\},\end{split} \tag{13}\]

where all the parameters have already been defined. This is the central result of this work. Despite its complicated appearance, it is suitable for calculations and fitting purposes, as shown in the next sections. But besides its practical interest, the two-parameter formula is interesting in its own right as a first step towards unraveling the puzzle of the actual origin of the intermediate behavior of the fluctuations in a given spectrum.

## 3 Comparison with \(\beta\)-ensemble results

The \(\beta\)-Hermite random matrix ensemble was proposed as a continuous generalization, for all values of the repulsion parameter \(\beta>0\), of the classical random matrix ensembles corresponding to the integer values \(\beta=1\) (GOE), 2 (GUE) and 4 (GSE) [45]. It allows a transition between the integrable Poisson results corresponding to \(\beta=0\) and the chaotic GOE results at \(\beta=1\), with fractional level repulsion in between. Moreover, due to its simple form, the use of the \(\beta\)-Hermite ensemble results in an unrivalled speed-up of numerical simulations. These features make the \(\beta\)-Hermite ensemble the best choice for checking our two-parameter formula for \(P(s)\) with a huge number of matrices of different sizes and a fine scan of values of \(\beta\) and fractions \(f\) of observed levels. The precise \(P(s)\) of \(\beta\)-Hermite spectra does not follow exactly the Brody distribution but a more complicated two-parameter formula, though the Brody formula is a good description except for low values of \(\beta\) [46]. However, this fact also fits our purpose, as we want a practical distribution that can be used to obtain a good estimation of the fractional level repulsion and the fraction of missing levels in any type of system, without worrying about whether it follows exactly the Brody distribution. The matrices of the \(\beta\)-Hermite ensemble are real symmetric tridiagonal matrices whose elements are constructed as:

\[H_{\beta}=\frac{1}{\sqrt{2}}\left(\begin{array}{cccccccc}N(0,2)&\chi_{(N-1)\beta}&&&&\\ \chi_{(N-1)\beta}&N(0,2)&\chi_{(N-2)\beta}&&\\ &\ddots&\ddots&\ddots&\\ &&&\chi_{2\beta}&N(0,2)&\chi_{\beta}\\ &&&&\chi_{\beta}&N(0,2)\end{array}\right), \tag{14}\]

That is, the diagonal matrix elements are random variables with a Gaussian distribution \(N(\mu,\sigma^{2})\) of zero mean and variance \(\sigma^{2}=2\), while the off-diagonal matrix elements come from a \(\chi_{\nu}\) distribution with \(\nu=\beta(N-n)\), where \(N\) is the dimension of the matrix and \(n\) the row index. The level density of the \(\beta\)-Hermite ensemble is the semicircle law, as in the case of the GOE, so the unfolding is easy in this case:

\[\bar{\rho}(E)=\frac{1}{\pi\beta}\sqrt{2N\beta-E^{2}},\qquad|E|<\sqrt{2N\beta}. \tag{15}\]
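As an illustration, the sampling of Eq. (14) and the unfolding through the integrated density of Eq. (15) can be written in a few lines. This is a minimal sketch assuming NumPy/SciPy and \(\beta>0\); the function names are ours.

```python
# Sample a beta-Hermite spectrum (Eq. (14)) and unfold it with the
# semicircle law (Eq. (15)); a sketch, not the authors' code.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def beta_hermite_spectrum(N, beta, rng):
    """One realization of the tridiagonal model, including the 1/sqrt(2) factor."""
    diag = rng.normal(0.0, np.sqrt(2.0), size=N) / np.sqrt(2.0)   # N(0,2) entries
    nu = beta * np.arange(N - 1, 0, -1)                           # chi_{(N-n) beta}
    off = np.sqrt(rng.chisquare(nu)) / np.sqrt(2.0)
    return eigh_tridiagonal(diag, off, eigvals_only=True)         # sorted eigenvalues

def unfold_semicircle(E, N, beta):
    """Map eigenvalues to unit mean spacing: eps(E) = integral of Eq. (15)."""
    R = np.sqrt(2.0 * N * beta)
    x = np.clip(E / R, -1.0, 1.0)
    # Antiderivative of sqrt(R^2 - E^2), shifted so that eps(-R) = 0 and eps(R) = N.
    F = 0.5 * (E * np.sqrt(np.maximum(R * R - E * E, 0.0)) + R * R * np.arcsin(x))
    return (F + np.pi * R * R / 4.0) / (np.pi * beta)

rng = np.random.default_rng(1)
levels = unfold_semicircle(beta_hermite_spectrum(1000, 0.6, rng), 1000, 0.6)
```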
In order to check the accuracy of the formula for \(P(s,f,q)\), Eq. (13), we need spectra of as high a dimension as possible. Thus, we have generated ensembles of 1000 spectra from \(\beta\)-Hermite matrices of size \(N=10000\) (equivalent to a total spectrum of dimension \(10^{7}\)) and values of the repulsion parameter \(\beta=0,0.1,0.2,\ldots,0.9,1\), from which we take out \(N(1-f)\) levels at random in order to keep a fraction \(f\) of observed levels. In Fig. 2 we show the comparison of our formula \(P(s,f,q)\) (with \(q=\beta\)) with the \(P(s)\) obtained from these ensemble averages in two extreme cases (very low and very high values of \(\beta\) and \(f\)) and four cases with intermediate values, which are the ones of practical interest.

Figure 2: Comparison of Eq. (13) (black line) with ensemble averages of 1000 matrices of dimension \(N=10000\) from the \(\beta\)-Hermite ensemble with \(\beta=q\) and a fraction \(f\) of observed levels (that is, the theoretical formula, not a fit). The green line is Eq. (3) (GOE) and the red line is Eq. (2) (Poisson). The values of (\(f\), \(q\)) in each panel are: top-left (0.3, 0.2), top-middle (0.95, 0.9), top-right (0.6, 0.5), bottom-left (0.8, 0.6), bottom-middle (0.6, 0.7), bottom-right (0.7, 0.6).

In Table 1 we show the values of \(\chi^{2}\) for all the distributions we have calculated. We have found that the agreement is excellent in most cases. We make additional comments on two cases below. First, in the region of very low values of \(\beta\) and \(f\) the \(P(s)\) distributions are very close to Poisson statistics; however, since the \(\beta\)-ensembles display fractional level repulsion, the transition from \(\beta>0\) to \(\beta=0\) cannot be smooth, and that is why, even though the values of \(\chi^{2}\) shown in Table 1 in this region are small, the curves are not as similar to the Poisson one as might be expected. In any case, we have scanned all these values of \(\beta\) and \(f\) for the sake of completeness, but this region is of little practical interest: one cannot expect reasonable estimations with very low fractions of observed levels in very uncorrelated spectra. Second, in Table 1 there is another region of values strikingly higher than the rest, \(f\geq 0.9\), \(q\leq 0.3\), where \(\chi^{2}\) is of the order of tenths whereas the rest of the values are of the order of hundredths or less. However, this is also expected, as this region corresponds to the fitting of Brody's formula for low values of \(\beta\) in a practically complete spectrum (\(f\simeq 1\)), and, as we have mentioned before, the \(P(s)\) of the \(\beta\)-ensembles deviates from the Brody distribution in this case [46]. In summary, in this section we made use of the \(\beta\)-Hermite ensemble to show that the derivation of the two-parameter formula is correct by checking its accuracy with spectra of very high dimension. In the next section we explore whether the formula can be used on realistic spectra to gain reliable information about the chaoticity and the number of missing levels simultaneously by fitting data with our expression.

## 4 Distributions of fitted results

In this section we again take advantage of the \(\beta\)-ensemble to show that the two-parameter formula is suitable for fitting purposes and to point out the possible limitations one should be careful with. We generate ensembles of matrices with certain values of the dimension \(N\) and the parameters \(\beta\) and \(f\) as in the previous section, but now we perform fits of these data with our two-parameter formula and represent in a two-dimensional histogram the distribution of the results for \(q\) and \(f\), so we can analyze the probability of obtaining the correct values of the parameters when analyzing a single spectrum of interest.
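The depletion and NNSD construction used throughout these simulations can be sketched as follows, assuming NumPy; the names are ours, and the Poisson stand-in spectrum in the demo lines only makes the snippet run on its own (in practice one would plug in unfolded \(\beta\)-Hermite levels as in the previous sketch).

```python
# Remove a fraction 1-f of levels at random and build the observed NNSD,
# re-unfolded to unit mean spacing; a sketch, not the authors' code.
import numpy as np

def observed_nnsd(levels, f, rng):
    n_keep = int(round(f * levels.size))
    kept = np.sort(rng.choice(levels, size=n_keep, replace=False))
    s = np.diff(kept)
    return s / s.mean()          # observed sequence re-unfolded to <s> = 1

rng = np.random.default_rng(2)
demo_levels = np.cumsum(rng.exponential(1.0, size=10_000))   # Poisson stand-in
spacings = observed_nnsd(demo_levels, f=0.7, rng=rng)
```

The histogram of these rescaled spacings is what Eq. (13) is compared against; the \(s/f\) rescaling back to the complete sequence is already built into the formula.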
Thus, the uncertainty in the estimation of the parameters is not only related to errors in the fit but also to the variance of the parameter distribution in the proper ensemble [34]. This variance increases when the matrix size is reduced. We want to stress that we do not intend to give a general recipe on how to use the formula. We have used the \(\beta\)-ensemble as a simple RMT model to simulate the transition from regularity to chaos. However, the transition is not universal and can therefore be different in actual experimental spectra. So we will not present here results for a fine scan of both parameters as in the previous section, but only show some representative cases to explain important aspects to take into account and to give a general recommendation. That is, we make the formula available and await experimental results to evaluate its real applicability and usefulness when approaching each case with its particular features. For few missing levels and dynamics close to chaos (high values of \(f\) and \(\beta\)), the spectra are still very correlated and we can obtain reliable estimations of both parameters. We show an example in Fig. 3, where we have represented the joint distribution of fitted parameters for an ensemble of 10000 spectra of dimension \(N=1000\) with \(f=0.8\) and \(\beta=0.9\). Here we find a narrow peak of values of the parameters concentrated around the correct estimations. The ensemble variance is small in this case, so the probability of obtaining the correct values of the parameters \(q\) and \(f\) from a fit of the data is high. However, it is obvious that when spectra are very uncorrelated (very close to Poisson statistics) little information can be gained. The loss of correlations can occur when there are many missing levels (low values of \(f\)) or when the dynamics is very close to regularity (low values of \(\beta\)). This intuition should be reflected in a very high ensemble variance of the parameters. For example, in Fig. 4 we show the distribution of the fitted parameters of the two-parameter formula for \(P(s)\) in an ensemble of 10000 spectra of dimension \(N=1000\) with \(f=0.6\) and \(\beta=0.3\). As can be seen, the results for \((q,f)\) are spread over a wide region.
\begin{table} \begin{tabular}{|r|r|r|r|r|r|r|r|r|} \hline \(q\,\backslash\,f\) & 0.1 & 0.3 & 0.6 & 0.7 & 0.8 & 0.9 & 0.95 & 1 \\ \hline 0.1 & 0.021 & 0.040 & 0.052 & 0.056 & 0.063 & 0.073 & 0.078 & 0.082 \\ 0.2 & 0.017 & 0.035 & 0.054 & 0.063 & 0.076 & 0.091 & 0.10 & 0.11 \\ 0.3 & 0.013 & 0.029 & 0.048 & 0.054 & 0.066 & 0.082 & 0.092 & 0.10 \\ 0.4 & 0.013 & 0.022 & 0.035 & 0.041 & 0.049 & 0.061 & 0.071 & 0.083 \\ 0.5 & 0.013 & 0.016 & 0.023 & 0.027 & 0.031 & 0.039 & 0.050 & 0.057 \\ 0.6 & 0.013 & 0.0096 & 0.012 & 0.014 & 0.016 & 0.020 & 0.024 & 0.032 \\ 0.7 & 0.012 & 0.0062 & 0.0059 & 0.0065 & 0.0075 & 0.0076 & 0.0089 & 0.013 \\ 0.8 & 0.015 & 0.0079 & 0.0087 & 0.0083 & 0.0076 & 0.0054 & 0.0034 & 0.0022 \\ 0.9 & 0.019 & 0.018 & 0.023 & 0.022 & 0.020 & 0.015 & 0.0096 & 0.0014 \\ 1 & 0.030 & 0.047 & 0.054 & 0.051 & 0.045 & 0.038 & 0.029 & 0.012 \\ \hline \end{tabular} \end{table} Table 1: Values of \(\chi^{2}\) for the NNSD of \(\beta\)-ensembles of 1000 matrices of dimension \(N=10000\), generated with mixing parameter \(q=\beta\) and fraction of observed levels \(f\), with respect to the formula of Eq. (13). The first column gives \(q\) and the first row gives \(f\).

One obtains with a high probability values around (\(f\simeq 1\), \(q\simeq 0.3\)) and, as expected, there are many different combinations of values (\(q,f\)) that reproduce such uncorrelated spectra, as any NNSD in which either of the two parameters is very low will be very close to Poisson.

Figure 3: Histogram of the joint distribution of fitted values of the parameters (\(q,f\)) of an ensemble of 10000 \(\beta\)-Hermite matrices with \(f=0.8\) and \(\beta=0.9\). The matrix sizes are \(N=1000\).

Figure 4: Histogram of the joint distribution of fitted values of the parameters (\(q,f\)) of an ensemble of 10000 \(\beta\)-Hermite matrices with \(f=0.6\) and \(\beta=0.3\). The matrix sizes are \(N=1000\).

Now let us analyze in more detail a case of more practical interest, with intermediate statistics between the two extremes. Let us think of an experimental spectrum with 30% of missing levels and dynamics between chaos and regularity with \(\beta=0.6\), which we can represent here by one individual member of a \(\beta\)-ensemble generated with \(f=0.7\) and \(\beta=0.6\). We can perform a fit to the two-parameter formula to obtain a first estimation of the parameters \(q\) and \(f\). But in view of Figs. 3 and 4 it seems clear that the best estimation of the uncertainty of the parameters comes from simulating an ensemble of spectra of the same dimension as the experimental one and analyzing the dispersion of the parameter values, rather than from any error estimation of the fit itself. We represent the results for this ensemble, for \(N=1000\), in the left panel of Fig. 5, and we can see that the spread of values is wider than the one for the higher values of \(f\) and \(\beta\) shown in Fig. 3; one can obtain a first estimation of the uncertainties of the parameters from the widths of this distribution. Moreover, for lower dimensions one obtains even wider distributions of the parameters, as shown in the central panel of Fig. 5 (\(N=500\)) and the right panel of Fig. 5 (\(N=200\)). That is, the probability of obtaining the correct values of the parameters from a single fit decreases, as expected, confirming that a safe estimation of the parameters and their errors cannot be obtained from a single fit but by analyzing the distributions from several simulated ensembles. In these two panels we can also observe the same effect as in Fig. 4
for the extreme of very uncorrelated spectra, where in some realizations of the ensemble the \(P(s)\) can be well fitted with a combination of \(f\) and \(q\) such that one of them is close to unity and the other one is representative of the actual intermediate behavior of the NNSD. That is, the conclusion from the fit in these realizations would be that the intermediate statistics is due to just one reason, missing levels or intermediate dynamics. Thus, in view of these examples, our conclusion and our recommendation when analyzing a spectrum of interest would be not only to perform the fit but also to simulate several \(\beta\)-ensembles of the same dimension and observe the histograms of the distribution of the fitted results, in order to obtain the most reliable estimation of the parameters and their uncertainties. For example, when having a spectrum like the ones in the left panel of Fig. 5, one would most probably obtain parameters near the center of the distribution, and could then start simulating ensembles with similar values of \(f\) and \(\beta\) and estimate the uncertainties from the widths of the distributions. On the other hand, for a spectrum of lower dimension like the ones in the right panel of Fig. 5, one could obtain parameters around the correctly centered peak or parameters nearer the two extreme peaks around \(f\simeq 1\) or \(q\simeq 1\). Then, for lower dimensions, when obtaining a value close to unity for either of the two parameters, one should take this result with more caution and perform more simulations and checks to be sure of its reliability and to make the best error estimation. Apart from the intrinsic limitations due to low values of the parameters or low dimension, here we have only used simulated spectra and have assumed that no previous information about them is known; in practical cases, if one has some previous information about the type of dynamics of the system or the estimated fraction of missing levels, the limitations are reduced. In order to have the safest estimations of the uncertainties of the parameters we recommend performing simulations of several ensembles in any case, but if the values can be restricted to certain ranges from previously known information, much narrower distributions will be obtained, with only one peak instead of two or three, even for low dimensions.

Figure 5: Histogram of the joint distribution of fitted values of the parameters (\(q,f\)) of ensembles of 10000 \(\beta\)-Hermite matrices with \(f=0.7\) and \(\beta=0.6\). The matrix sizes are, from left to right, \(N=1000\), \(N=500\), and \(N=200\).
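In practice, the fit itself is a standard bounded least-squares problem. The following sketch, assuming NumPy/SciPy, fits an empirical NNSD histogram with Eq. (13); `P_model(s, f, q)` stands for an assumed, vectorized implementation of Eq. (13) (for instance, the scalar \(p(n,s)\) sketches above combined and wrapped with `np.vectorize`), not a routine from the paper.

```python
# Two-parameter least-squares fit of Eq. (13) to an observed NNSD; a sketch.
import numpy as np
from scipy.optimize import curve_fit

def fit_f_q(spacings, P_model, smax=4.0, nbins=40):
    hist, edges = np.histogram(spacings, bins=nbins, range=(0.0, smax), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # P_model must accept an array of s values and return P(s, f, q).
    popt, pcov = curve_fit(P_model, centers, hist,
                           p0=(0.8, 0.5),                    # initial guess (f, q)
                           bounds=([0.05, 0.0], [1.0, 1.0]))
    f_hat, q_hat = popt
    return f_hat, q_hat, np.sqrt(np.diag(pcov))
```

As stressed in this section, the diagonal errors returned by the fit underestimate the real uncertainty; the spread of refitted values over simulated ensembles of the same dimension is the quantity to report.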
## 5 Conclusions

Quantum spectra with statistical properties intermediate between the GOE result of RMT and the Poisson uncorrelated spectra can be due to genuine intermediate properties between chaos and integrability, or to missing levels destroying the correlations in the level fluctuations induced by chaotic properties. But what happens when both things occur at the same time? Is it possible to distinguish the different causes of intermediate behavior and to quantify them? In this work, we have tried to answer this question. We have developed a two-parameter formula for fitting short-range spectral correlations, with one of the parameters, based on the Brody distribution, accounting for the chaoticity of the spectra, \(q\), and the other for the fraction of observed levels, \(f\). The formula works perfectly well when compared with RMT results of the \(\beta\)-Hermite ensemble. This proves the correctness of the formula for describing fractional level repulsion and missing levels in the spectra at the same time. The theoretical interest of the formula is clear, as a first step towards unraveling the puzzle of the actual origin of the intermediate behavior of the fluctuations in a spectrum. Now, the next step would be to judge its applicability to actual experimental individual spectra. In this work we have shown some examples by simulating individual spectra from matrices of \(\beta\)-Hermite ensembles (\(q=\beta\)) and analyzed the accuracy and precision of the fit by computing the full distribution of the fitted results of the whole ensemble. From these distributions one can evaluate the probability of obtaining reliable results depending on the values of \(\beta\) and \(f\) and the size of the spectrum, and estimate the uncertainties of the parameters. This is our recommendation on how to proceed when using the two-parameter formula. In view of the distributions of fitted results, we conclude that reliable results for both parameters at the same time can be obtained when the fraction of missing levels is not very large (\(f>0.6\)) and the total number of levels in the analyzed sequence is large enough (\(N>1000\)), assuming no previous information on the spectrum of interest. However, in each particular case one can have some previous information on the values of \(q\) and \(f\) of the spectrum, which can help to narrow their ranges and improve the results even for lower dimensions. The formula would certainly be applicable to estimate the fraction of missing levels if we know in advance the value of the repulsion parameter \(q\), or vice versa. A procedure similar to the one we describe here for obtaining the two-parameter formula could be implemented for the Izrailev distribution, as it is also suitable to describe fractional level repulsion. We expect very similar results to the ones we present here. It should also be possible to start with the Berry-Robnik distribution, which is able to fit the spectral statistics in the so-called 'far-semiclassical regime'; these systems sometimes show a different behavior for small spacings, with \(P(0)\neq 0\) and its value given by the fraction of the classical regular phase space. Thus, in these systems there is no fractional level repulsion but full level repulsion for only a fraction of the levels of the spectrum. In order to obtain a suitable two-parameter formula for the \(P(s)\) in this case, one should consider the Berry-Robnik distribution as a starting point [40, 41]. We have taken advantage of the \(\beta\)-Hermite ensemble for testing our formula as it is a simple way to scan the transition from chaos (\(\beta=1\)) to regularity (\(\beta=0\)), and its simple form results in an unrivalled speed-up of numerical simulations, but actual experimental spectra can be different from the \(\beta\)-ensemble, as the chaos-regularity transition is not universal. Thus, we make this two-parameter formula available and await results from experimental spectra to evaluate its real applicability and usefulness. The difficulties we have found in the estimation, specifically when the number of levels is limited, also point to the fact that, whenever possible, one should analyze together all the available information, including short- and long-range statistics and statistics of the widths, in order to reach more reliable conclusions regarding the chaoticity and the completeness of experimental spectra.

_Acknowledgments:_ This research has been supported by CSIC Research Platform on Quantum Technologies PTI-001.
We also acknowledge financial support from the European Union's Horizon 2020 FET-Open project AVAQus, Grant No. 899561, and from Projects No. PGC2018-094180-B-I00, PID2019-106820RBC21, RTI2018-098868-B-I00 and PID2021-126998OB-I00, funded by MCIN/AEI/10.13039/501100011033 and FEDER "A way of making Europe".
2308.08417
Porting Batched Iterative Solvers onto Intel GPUs with SYCL
Batched linear solvers play a vital role in computational sciences, especially in the fields of plasma physics and combustion simulations. With the imminent deployment of the Aurora Supercomputer and other upcoming systems equipped with Intel GPUs, there is a compelling demand to expand the capabilities of these solvers for Intel GPU architectures. In this paper, we present our efforts in porting and optimizing the batched iterative solvers on Intel GPUs using the SYCL programming model. These new solvers achieve impressive performance on the Intel GPU Max 1550s (Ponte Vecchio GPUs) which surpass our previous CUDA implementation on NVIDIA H100 GPUs by an average of 2.4x for the PeleLM application inputs. The batched solvers are ready for production use in real-world scientific applications through the Ginkgo library, complementing the performance portability of the batched functionality of Ginkgo.
Phuong Nguyen, Pratik Nayak, Hartwig Anzt
2023-08-16T15:05:13Z
http://arxiv.org/abs/2308.08417v3
# Porting Batched Iterative Solvers onto Intel GPUs with SYCL

###### Abstract.

Batched linear solvers play a vital role in computational sciences, especially in the fields of plasma physics and combustion simulations. With the imminent deployment of the Aurora Supercomputer and other upcoming systems equipped with Intel GPUs, there is a compelling demand to expand the capabilities of these solvers for Intel GPU architectures. In this paper, we present our efforts in porting and optimizing the batched iterative solvers on Intel GPUs using the SYCL programming model. These new solvers achieve impressive performance on the Intel GPU Max 1550s (Ponte Vecchio GPUs) which surpass our previous CUDA implementation on NVIDIA H100 GPUs by an average of 2.4x for the PeleLM application inputs. The batched solvers are ready for production use in real-world scientific applications through the Ginkgo library, complementing the performance portability of the batched functionality of Ginkgo.

SYCL, Performance Portability, Batched Linear Solvers, Intel GPUs

Footnote †: journal: Computer Physics Communications